It all depends on what you're using your RAID for (fault tolerance, speed, or capacity). There's also RAID 10 which gives you the fault tolerance gains of RAID 1 plus the speed gains of RAID 0 (at the cost of more drives). And yes, I've lost RAID 5 arrays in that same way too -- I feel your pain.
How about RAID 6?
From what I understand, two drives can fail.
I've been looking high and low for something like this. Areca ARC-7050 was an eight-bay Thunderbolt unit.
I believe this is the first eight-bay from Promise.
Again, it depends on what you need from your RAID. For fault tolerance, you need to consider how many extra reads and writes a given RAID level adds to achieve redundancy. Increasing that I/O load shortens the lifespan of every drive involved (even if the RAID level allows for one or more drives to fail). And also, ensuring that you have drive diversity (different models and production runs) is important to decrease the probability of multiple drives failing at once.
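To make that read/write overhead concrete, here's a rough sketch of the per-write I/O cost of common RAID levels. This is the simplified textbook model with made-up logical write counts; real controllers cache and coalesce writes, so treat the multipliers as illustrative upper bounds, not measurements:

```python
# Rough device-write multipliers per logical write (simplified model).
# The classic RAID 5 small write costs 4 I/Os: read old data, read old
# parity, write new data, write new parity; RAID 6 adds a second parity.
RAID_WRITE_COST = {
    "RAID 0": 1,   # striping only, no redundancy
    "RAID 1": 2,   # every write goes to both mirrors
    "RAID 5": 4,   # read-modify-write of data + one parity
    "RAID 6": 6,   # read-modify-write of data + two parities
    "RAID 10": 2,  # striped mirrors: two copies per write
}

def relative_wear(level: str, logical_writes: int) -> int:
    """Total device writes generated by `logical_writes` logical writes."""
    return RAID_WRITE_COST[level] * logical_writes

for level in RAID_WRITE_COST:
    print(level, relative_wear(level, 1000))
```

The point of the sketch: parity levels multiply the write traffic every drive sees, which is one reason they can age a set of identical drives toward failure together.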
And also, ensuring that you have drive diversity (different models and production runs) is important to decrease the probability of multiple drives failing at once.
This is the single most common mistake I've seen on almost every RAID enclosure: they all come with disks with sequential serial numbers. I can't count how many RAID 5 arrays with two failed disks I've seen so far.
This is the single most common mistake I've seen on almost every RAID enclosure: they all come with disks with sequential serial numbers. I can't count how many RAID 5 arrays with two failed disks I've seen so far.
Indeed. This is why people who really understand RAID just want an empty enclosure.
No sane pro keeps important files on a drive with no redundancy or backup for any longer than they need to (if at all).
Many wrongly think that RAID is a way of securing data; it is not. Statistically, even a RAID 5 is more susceptible to failure than a single-drive volume, and no RAID level (1, 5, 10, 50) will ever protect the data from user error. RAIDs are made for 1) having a bigger volume than is possible with a single drive, and 2) getting greater disk access performance by accessing multiple drives at the same time.
You always need a backup, RAID or not.
The only risk of data loss in a reasonably engineered RAID 5 array is the failure of a second drive during a rebuild. I suppose someone may have statistics showing that a poorly timed two-drive failure is more likely than a single-drive failure, but I really doubt it. It would be a logical impossibility for RAID 10 not to be more reliable than a single drive. RAID is a way of securing data, so your first sentence is simply wrong. It is just not a way of completely securing data.
"RAIDs are made for 1) having a bigger volume than is possible with a single drive, and 2) getting greater disk access performance by accessing multiple drives at the same time."
RAID levels are made for different things, so your general statement is not true. RAID 0 fits your statement. RAID 5 does too (well, only for reads on point 2) but adds reliability over RAID 0. RAID 10 does both of those and enhances data reliability. RAID 1 does neither of the things in your statement but does increase reliability.
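The disagreement above mostly comes down to whether drive failures are independent. A toy model shows both sides can be right: with independent drives, RAID 5 beats a single drive; with same-batch drives that tend to die together during a rebuild, it can be worse. All failure rates below are invented for illustration:

```python
def p_single(p_fail: float) -> float:
    """Toy model: chance a lone drive loses its data in one year."""
    return p_fail

def p_raid5(n: int, p_fail: float, p_rebuild_fail: float) -> float:
    """Toy model: RAID 5 loses data only if some drive fails AND at
    least one of the remaining n-1 drives fails during the rebuild."""
    p_first = 1 - (1 - p_fail) ** n                 # any of n drives fails
    p_second = 1 - (1 - p_rebuild_fail) ** (n - 1)  # a survivor dies mid-rebuild
    return p_first * p_second

# Made-up rates: 5% annual failure per drive; per-survivor chance of
# dying during the rebuild window at 1% (independent, diverse drives)
# vs 20% (same-batch drives that wear out together).
print(p_single(0.05))          # 0.05
print(p_raid5(4, 0.05, 0.01))  # ~0.0055 -> safer than a single drive
print(p_raid5(4, 0.05, 0.20))  # ~0.09   -> riskier than a single drive
```

Under these assumptions, drive diversity is exactly what moves a RAID 5 from the "safer" column to the "riskier" one.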
It all depends on what you're using your RAID for (fault tolerance, speed, or capacity). There's also RAID 10 which gives you the fault tolerance gains of RAID 1 plus the speed gains of RAID 0 (at the cost of more drives). And yes, I've lost RAID 5 arrays in that same way too -- I feel your pain.
True, there are so many exotic configurations possible with RAID, at great expense. My own rule of thumb is to avoid RAID setups as much as possible on workstations; for unattended servers or a SAN it's another game.
That is truly absurd, unless you are confusing desktops and workstations. I could not imagine anyone in post-production, animation, drafting, or any other field really leveraging workstations accepting something without RAID 1 (or a similar configuration) for data storage in a workstation environment. Anyone getting paid to specify something like that is in way over their head.
BTW, RAID 10 is in no way "exotic"; it has been in wide production use for at least 10 years now...
And also, ensuring that you have drive diversity (different models and production runs) is important to decrease the probability of multiple drives failing at once.
This is the single most common mistake I've seen on almost every RAID enclosure: they all come with disks with sequential serial numbers. I can't count how many RAID 5 arrays with two failed disks I've seen so far.
For home or SMB use, people should source the drives themselves. They should also spend the extra $100 on real enterprise drives.
The only risk of data loss in a reasonably engineered RAID 5 array is the failure of a second drive during a rebuild. I suppose someone may have statistics showing that a poorly timed two-drive failure is more likely than a single-drive failure, but I really doubt it. It would be a logical impossibility for RAID 10 not to be more reliable than a single drive. RAID is a way of securing data, so your first sentence is simply wrong. It is just not a way of completely securing data.
"RAIDs are made for 1) having a bigger volume than is possible with a single drive, and 2) getting greater disk access performance by accessing multiple drives at the same time."
RAID levels are made for different things, so your general statement is not true. RAID 0 fits your statement. RAID 5 does too (well, only for reads on point 2) but adds reliability over RAID 0. RAID 10 does both of those and enhances data reliability. RAID 1 does neither of the things in your statement but does increase reliability.
As Auxio has pointed out before me, the most common problem with RAIDs is that they are sold with a bunch of hard drives coming from the same production run. If one disk fails from old age or manufacturing defects, you have every chance in the world of another drive failing within the same period.
I still reiterate: RAID was created in the first place to overcome the capacity limit of a mechanical disk by virtualizing the volume and spreading data across multiple disks. RAID 5, 6, 10, 10+1, 50, etc. came much later, to address disk reliability in arrays of more than two disks. A RAID 10 is not way more reliable than a RAID 5, and is less reliable than a RAID 6.
My preference would be for an 8-bay chassis, and I'd run RAID 10.
I like media apps like Logic and Final Cut Pro. I don't want parity striping and long rebuild times.
Next year I'll be looking at offerings from QNAP, Synology, Thecus, and more. I'm hoping Thunderbolt gets cheaper, but I still think we're a couple more revisions from it being just a slight premium over today's connectivity.
As Auxio has pointed out before me, the most common problem with RAIDs is that they are sold with a bunch of hard drives coming from the same production run. If one disk fails from old age or manufacturing defects, you have every chance in the world of another drive failing within the same period.
This is not a RAID problem at all. This is a problem with inexperienced people doing RAID deployments. Perhaps I have just spent too much time working with people that know what they are doing...
I still reiterate: RAID was created in the first place to overcome the capacity limit of a mechanical disk by virtualizing the volume and spreading data across multiple disks.
RAID 5, 6, 10, 10+1, 50, etc. came much later, to address disk reliability in arrays of more than two disks. A RAID 10 is not way more reliable than a RAID 5, and is less reliable than a RAID 6.
I think you are just showing a fundamental misunderstanding of RAID 10. It is more reliable than RAID 5. There is simply no way it isn't.
I think you are just showing a fundamental misunderstanding of RAID 10. It is more reliable than RAID 5. There is simply no way it isn't.
Agreed. RAID 10 with proper disk diversity is definitely more reliable than no RAID or RAID 5. It does the same number of reads and writes (or fewer) on each disk as a single drive, so there's no reduction in average disk lifespan and no shortening of the average time between multiple drive failures (as there is with RAID 5), while still allowing one or more drives to fail (same as RAID 5). It's all a matter of reducing the probability of catastrophic failure (whatever that is for a given RAID setup).
Whether choosing RAID 5, 6, or 10, one thing to keep in mind is to buy drives from different batches, since drives from the same batch tend to fail together. The biggest danger in RAID 10 is both drives in a mirror pair failing, and that usually happens due to same-batch drives. In a RAID 5 (or 6), it's best to have an online failover. I once set up two hardware RAID 5s (with online failovers) and mirrored them for mega data security.
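The "both drives in a pair" danger can be put in rough numbers. If exactly two drives fail at about the same time and every pair of drives is equally likely (an idealized assumption that same-batch drives violate), a 2-way RAID 10 only dies when both failures land in the same mirror pair, while RAID 5 dies on any two:

```python
from math import comb

def p_raid10_dies_given_two_failures(pairs: int) -> float:
    """With 2-way mirrors, two simultaneous drive failures kill the
    array only if both failed drives are in the same mirror pair."""
    n_drives = 2 * pairs
    return pairs / comb(n_drives, 2)  # fatal pairs / all possible pairs

def p_raid5_dies_given_two_failures() -> float:
    """RAID 5 has one drive's worth of redundancy: ANY two
    simultaneous failures lose the array."""
    return 1.0

print(p_raid10_dies_given_two_failures(4))  # 8 drives: 4/28, about 0.14
print(p_raid5_dies_given_two_failures())    # 1.0
```

Correlated same-batch failures concentrate the risk inside mirror pairs built from adjacent serial numbers, which is exactly why mixing batches matters most for RAID 10.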
As for the brand Promise (VTrak), I had nothing but bad experiences. The software is very problematic and I had multiple close calls of losing my entire RAID; their customer service is rubbish. Never again! Also, their RAIDs are obnoxiously loud; their fans do have variable speeds, but the slowest speed is insanely loud. I had the RAID in a vented soundproof rack, and I could still hear the darn thing from nearby rooms.
At our facility I used a Retrospect LTO4 system to automatically back up all my RAIDs -- whether Xserve RAIDs or internal RAIDs in the Mac Pros. It was very helpful when we had the inevitable drive failure. BTW, I did see a two-drive failure on RAID 5 systems a couple of times. Naturally, one two-drive failure was on the Autodesk Smoke system that was not compatible with the Retrospect backup. Figures.
Prosers? Though writers may find that insulting...
I think you are just showing a fundamental misunderstanding of RAID 10. It is more reliable than RAID 5. There is simply no way it isn't.
You've misunderstood me. I do agree a RAID 10 is more reliable than a RAID 5, but not, in some cases, more than a RAID 6, 50, or 60.
As for inexperienced people who deploy RAID, I blame the RAID manufacturers first; they should care more when they sell RAID enclosures pre-filled with disks.