Intel promotion allegedly reveals Core i5-based MacBook Pro


Comments

  • Reply 101 of 108
    Quote:
    Originally Posted by foljs View Post


    With TWICE (or more, depending on disk count) the failure rate of a single physical disk.



    Maybe that's the RAID0 argument you could never understand.



    Nope. I still will never get it. Hence the "never." If you are interested in a performance gain such as RAID sets, then you are smart enough to keep redundant sets of data and understand the risks involved. So the chances of losing your drives do not equal the chances of losing your data, because you are smart enough not to rely on just your RAID array. Just as you're intelligent enough to back up a single disk.



    The chances of losing your data in a two-disk RAID0 setup like the OP suggests aren't nuts at all, IMO; it's not as if you are talking about failure in days instead of years. Even if you halve the warranty period of most drives, you're talking about 2-3 years.
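
    To put rough numbers on that, here's a quick back-of-the-envelope sketch. The independence assumption and the 3% annual failure rate are purely illustrative, not specs for any real drive:

    ```python
    # Chance of losing the ARRAY vs. losing your DATA, assuming independent
    # drive failures and a made-up 3% annual failure rate per drive.
    def array_failure_prob(p_drive: float, n_drives: int) -> float:
        """RAID0 is lost if ANY member fails: 1 - P(all survive)."""
        return 1 - (1 - p_drive) ** n_drives

    p = 0.03                                    # assumed annual failure rate
    print(array_failure_prob(p, 1))             # ~0.030  single disk
    print(array_failure_prob(p, 2))             # ~0.059  two-disk RAID0 (~2x)

    # With a current backup, losing data requires the array AND the backup to
    # fail in the same window, which is a much smaller number:
    p_backup = 0.03
    print(array_failure_prob(p, 2) * p_backup)  # ~0.0018
    ```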
  • Reply 102 of 108
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by technohermit View Post


    Nope. I still will never get it. Hence the "never." If you are interested in a performance gain such as RAID sets, then you are smart enough to keep redundant sets of data and understand the risks involved. So the chances of losing your drives do not equal the chances of losing your data, because you are smart enough not to rely on just your RAID array. Just as you're intelligent enough to back up a single disk.



    The chances of losing your data in a two-disk RAID0 setup like the OP suggests aren't nuts at all, IMO; it's not as if you are talking about failure in days instead of years. Even if you halve the warranty period of most drives, you're talking about 2-3 years.



    I wouldn't trust a drive past two years. Not that I haven't had drives last that long, but they become shaky around that time.



    There are better RAID levels, such as 5, 6, and 10, but they require at least three drives (four for RAID 6 or 10). At today's prices, that's not such a big deal. Often the enclosure for a four-drive RAID costs as much as the drives themselves, because the drives are so cheap.
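
    For a rough sense of what those levels buy you, here's a little sketch; the four-drive, 2TB figures are just example numbers:

    ```python
    # Usable capacity and guaranteed fault tolerance for common RAID levels,
    # for n drives of s TB each.  n=4, s=2 are illustrative numbers only.
    def raid_summary(level, n, s):
        table = {
            "RAID0":  (n * s,       0),      # striping: all capacity, no redundancy
            "RAID1":  (s,           n - 1),  # n-way mirror
            "RAID5":  ((n - 1) * s, 1),      # needs at least 3 drives
            "RAID6":  ((n - 2) * s, 2),      # needs at least 4 drives
            "RAID10": (n * s // 2,  1),      # survives at least 1, more if lucky
        }
        usable, tolerated = table[level]
        return f"{level}: {usable} TB usable, survives {tolerated} failure(s)"

    for lvl in ("RAID0", "RAID1", "RAID5", "RAID6", "RAID10"):
        print(raid_summary(lvl, n=4, s=2))
    ```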
  • Reply 103 of 108
    Quote:
    Originally Posted by melgross View Post


    I wouldn't trust a drive past two years. Not that I haven't had drives last that long, but they become shaky around that time.



    There are better RAID levels, such as 5, 6, and 10, but they require at least three drives (four for RAID 6 or 10). At today's prices, that's not such a big deal. Often the enclosure for a four-drive RAID costs as much as the drives themselves, because the drives are so cheap.



    Drives becoming shaky, as in wipe-and-start-over shaky, or head-scraping-on-the-platter shaky? Two different problems altogether. Two years isn't asking much out of a drive mechanically; I've got 6GB drives from 1999 that still function perfectly well.



    What we were talking about here was using two in a laptop on RAID0 anyhow. I understand the redundancy in other RAID setups, but two SSDs on RAID0 in a notebook is a different animal, and you don't need to worry about moving-part failure either. Performance gains are something to seriously consider in that circumstance.



    Here is a quote from benchmarkreviews.com:



    Compared to a single Vertex SSD configuration, the RAID-0 SSDs don't plateau in performance until the 256 KB file chunks. A single Vertex offered 249 MBps maximum read performance, while the RAID-0 Vertex setup recorded a 438 MBps top speed. That's not quite a 100% improvement over a single Vertex SSD, but 76% isn't bad considering overhead and throughput management. Moving on to the read-from performance, a single Vertex SSD gave a best speed of 137 MBps while the RAID-0 Vertex SSDs offered an impressive 358 MBps, a 161% improvement!
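
    Just to show where those percentages come from, using the numbers in the quote above:

    ```python
    # Deriving the quoted improvement figures from the quoted throughput numbers.
    def improvement(single_mbps, raid0_mbps):
        return (raid0_mbps / single_mbps - 1) * 100

    print(round(improvement(249, 438)))  # ~76  -> the "76%" figure
    print(round(improvement(137, 358)))  # ~161 -> the "161% improvement"
    ```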
  • Reply 104 of 108
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by technohermit View Post


    Drives becoming shaky, as in wipe-and-start-over shaky, or head-scraping-on-the-platter shaky? Two different problems altogether. Two years isn't asking much out of a drive mechanically; I've got 6GB drives from 1999 that still function perfectly well.



    What we were talking about here was using two in a laptop on RAID0 anyhow. I understand the redundancy in other RAID setups, but two SSDs on RAID0 in a notebook is a different animal, and you don't need to worry about moving-part failure either. Performance gains are something to seriously consider in that circumstance.



    Here is a quote from benchmarkreviews.com:



    Compared to a single Vertex SSD configuration, the RAID-0 SSDs don't plateau in performance until the 256 KB file chunks. A single Vertex offered 249 MBps maximum read performance, while the RAID-0 Vertex setup recorded a 438 MBps top speed. That's not quite a 100% improvement over a single Vertex SSD, but 76% isn't bad considering overhead and throughput management. Moving on to the read-from performance, a single Vertex SSD gave a best speed of 137 MBps while the RAID-0 Vertex SSDs offered an impressive 358 MBps, a 161% improvement!



    Shaky as in mechanically and electrically unreliable. Shaky in that my SMART software is warning me of impending failure, which is a drive problem, not a software one. I've got over a dozen failed drives in a pile in my computer room here.
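
    If anyone wants to check their own drives, something like this works on systems with smartmontools installed (the device path is just an example and will differ per machine):

    ```python
    # Ask a drive for its overall SMART health verdict via smartctl.
    # Assumes the smartmontools package is installed; adjust the device path.
    import subprocess

    result = subprocess.run(
        ["smartctl", "-H", "/dev/sda"],   # -H = overall health self-assessment
        capture_output=True, text=True
    )
    print(result.stdout)                  # look for "PASSED" or "FAILED!"
    ```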



    There was a study done a couple of years ago showing that brand-new drives, still in the box and stored properly, had the same failure rates as drives used every day. This surprised a lot of people, but it makes sense.



    I know people who think they can back up, take the drive out, and they're safe. They're not.



    Right now, it's known that SSDs are LESS reliable than HDDs. Again, it doesn't seem to make sense, but it's true. In a few years that shouldn't be the case, but it is now.
  • Reply 105 of 108
    seek3r Posts: 179 member
    Quote:
    Originally Posted by melgross View Post


    Shaky as in mechanically and electrically unreliable. Shaky in that my SMART software is warning me of impending failure, which is a drive problem, not a software one. I've got over a dozen failed drives in a pile in my computer room here.



    There was a study done a couple of years ago showing that brand-new drives, still in the box and stored properly, had the same failure rates as drives used every day. This surprised a lot of people, but it makes sense.



    I know people who think they can back up, take the drive out, and they're safe. They're not.



    Right now, it's known that SSDs are LESS reliable than HDDs. Again, it doesn't seem to make sense, but it's true. In a few years that shouldn't be the case, but it is now.



    Do you pay attention to the reviews and expected failure rates of the drives you buy? Do you wait for second revisions? Do you use proper power protection (surge protection, UPS, etc.)?



    I used to manage a compute cluster at my uni, including our primary storage arrays. I handled enough drives for my data to be statistically significant, and I generally saw large numbers of failures at the 3-4 year mark, on drives under heavy usage. At home... I've had very few drives fail on me personally, honestly; I have drives that are nearly 20 years old (not much younger than me :-P) and still working!
  • Reply 106 of 108
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by seek3r View Post


    Do you pay attention to the reviews and expected failure rates of the drives you buy? Do you wait for second revisions? Do you use proper power protection (surge protection, UPS, etc.)?



    I used to manage a compute cluster at my uni, including our primary storage arrays. I handled enough drives for my data to be statistically significant, and I generally saw large numbers of failures at the 3-4 year mark, on drives under heavy usage. At home... I've had very few drives fail on me personally, honestly; I have drives that are nearly 20 years old (not much younger than me :-P) and still working!



    I've been doing this for many years. I'm not a neophyte.



    You're very unusual to have drives work for you after 20 years. Usually they won't even start up after 10 or so.



    Yes, most failures occur after three years, which is why the study used the three-year point. But two years is when failure rates begin to rise.



    I've got a lot of drives. They fail. That's what drives do.
  • Reply 107 of 108
    seek3r Posts: 179 member
    Quote:
    Originally Posted by melgross View Post


    I've been doing this for many years. I'm not a neophyte.



    You're very unusual to have drives work for you after 20 years. Usually they won't even start up after 10 or so.



    Yes, most failures occur after three years, which is why the study used the three-year point. But two years is when failure rates begin to rise.



    I've got a lot of drives. They fail. That's what drives do.



    Agreed in general, which is why I mostly use RAID 10, 5, 6, ZFS zpools, or other redundancy, plus backups. I just took issue with the idea that drives are quite as volatile as the picture you're painting. If you take proper precautions, they should at least make it to the three-year mark with no problems, and many will last quite a bit longer.



    Point is, technohermit is right: RAID0 as a primary drive in a machine is not particularly worrisome within that three-year mark, or not much more so than a single drive without RAID, as long as you have appropriate backups. For that matter, RAID5 on a large array isn't much safer. With large drives from around the same era (if you haven't been slowly upgrading the array, which for *large* arrays can get expensive), you run the risk of a second failure during the rebuild. RAID6 helps a bit by tolerating two failures, but that only pushes the problem out to larger drive sets. For example, a Dell MD1000 15-drive array full of 2TB drives isn't particularly safe in RAID5 if a drive fails; there had better be backups.
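
    To put a rough number on that rebuild risk, here's the usual back-of-envelope calculation; the 10^-14 unrecoverable-read-error spec is the typical consumer-drive figure, not a measurement from that particular array:

    ```python
    # Expected unrecoverable read errors (UREs) while rebuilding a degraded
    # 15-drive RAID5 of 2TB disks.  The 1e-14 per-bit URE rate is the usual
    # consumer-drive spec sheet figure and is only an illustration.
    drives_to_read = 14            # surviving members must be read in full
    bits_per_drive = 2e12 * 8      # 2 TB expressed in bits
    ure_rate = 1e-14               # unrecoverable read errors per bit read

    expected_ures = drives_to_read * bits_per_drive * ure_rate
    print(expected_ures)           # ~2.2 -> a clean rebuild is unlikely on paper
    ```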



    Anyway, as a safe RAID0 example: for speed reasons I've been considering switching my MP to RAID0 for the primary OS drive. My data is backed up on a RAID5 NAS with a lot more storage than the local system, and the truly important stuff is on DVD and at an offsite colo as well. I can easily image the install once I've got everything the way I like it, in case I need to restore after a failure.
  • Reply 108 of 108
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by seek3r View Post


    Agreed in general, which is why I mostly use RAID 10, 5, 6, ZFS zpools, or other redundancy, plus backups. I just took issue with the idea that drives are quite as volatile as the picture you're painting. If you take proper precautions, they should at least make it to the three-year mark with no problems, and many will last quite a bit longer.



    Point is, technohermit is right: RAID0 as a primary drive in a machine is not particularly worrisome within that three-year mark, or not much more so than a single drive without RAID, as long as you have appropriate backups. For that matter, RAID5 on a large array isn't much safer. With large drives from around the same era (if you haven't been slowly upgrading the array, which for *large* arrays can get expensive), you run the risk of a second failure during the rebuild. RAID6 helps a bit by tolerating two failures, but that only pushes the problem out to larger drive sets. For example, a Dell MD1000 15-drive array full of 2TB drives isn't particularly safe in RAID5 if a drive fails; there had better be backups.



    Anyway, as a safe RAID0 example: for speed reasons I've been considering switching my MP to RAID0 for the primary OS drive. My data is backed up on a RAID5 NAS with a lot more storage than the local system, and the truly important stuff is on DVD and at an offsite colo as well. I can easily image the install once I've got everything the way I like it, in case I need to restore after a failure.



    I just like to make people aware that a drive can fail at any time. Drives are rated like lightbulbs: MTBF doesn't mean that a drive is reliable up until whatever time that number works out to.



    Remember that these days, even the cheap junk, such as ATA drives, is often rated above 500,000 hours MTBF. That's over 50 years of continuous use! Who trusts those numbers?
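
    To see why that number doesn't mean what it sounds like, here's the usual conversion from MTBF to an annualized failure rate, using the 500,000-hour figure above:

    ```python
    # MTBF is a fleet statistic, not a per-drive lifetime promise.
    mtbf_hours = 500_000
    hours_per_year = 24 * 365
    afr = hours_per_year / mtbf_hours   # expected fraction of a large fleet
    print(f"{afr:.1%} per year")        # ~1.8%: out of 100 drives, expect
                                        # roughly 2 failures a year; it says
                                        # nothing about how long any one
                                        # individual drive will last.
    ```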



    While the average drive might fail around the three-year point, that's just the center of the distribution of drive failures; drives fail both sooner and later than that. If a drive has made it past four years, it might last another ten, but that would be rare.



    It's also rare for a drive to fail after 500 hours, but it happens.



    I'm saying that people must be prepared for that. A two-drive RAID 0 gives roughly twice the chance of failure of one drive alone, and the risk grows with every drive you add. That's the way it goes. A two-drive RAID 1, by contrast, only loses data when both drives fail, so its risk is a small fraction of a single drive's.
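
    Rough comparison, using the same illustrative 3% annual failure rate as earlier in the thread and ignoring rebuild windows:

    ```python
    # Yearly data-loss odds: one drive vs. two-drive RAID0 vs. two-drive RAID1,
    # assuming independent failures and an illustrative 3% annual rate per drive.
    p = 0.03
    single = p                     # ~3.0%
    raid0  = 1 - (1 - p) ** 2      # lost if EITHER drive fails  -> ~5.9%
    raid1  = p ** 2                # lost only if BOTH fail      -> ~0.09%
    print(single, raid0, raid1)
    ```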



    You can figure out what's more important for you: reliability, or speed and size.



    As I said earlier, in the video business we never kept work on RAID 0 systems any longer than required.



    What's interesting is that drives aren't any more reliable than they were 10 years ago. Despite the longer MTBF ratings, they still fail at the same rates over the same spans of time.



    The article below is a good place to start learning about this, and it has good links from there.



    http://www.eweek.com/c/a/Data-Storag...Flap-or-Farce/