Apple has offered the choice of formatting a drive using UFS for a while now, but not all applications are compatible with that file system. Will applications have to be updated to run on ZFS?
Does ZFS support resource forks?
It supports multiple streams, so this would be implementable.
Quote:
Is ZFS case sensitive, and will Apple allow users to choose whether they want case sensitivity?
They can always provide an option.
Quote:
Can a ZFS formatted hard drive be used as a Mac OS startup disk?
ZFS is non-bootable at this point, though bootstrap solutions exist.
Aside from bootability (for now), do you know of any advantages that HFS+ still has over ZFS? Maybe HFS+ still has better performance in certain situations?
Looks like Apple has quietly upgraded the Xserve RAID with 750GB drives, though they are still Ultra ATA. The 14x750GB configuration (10.5TB) costs an additional $1400 over the 7TB version.
ZFS is nice as well. Its checksum features go beyond what you get from a basic RAID-5 setup, which cannot catch silent corruption.
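The point about checksums can be sketched in a few lines. This is only a toy illustration of the idea, not ZFS's actual on-disk format: the checksum of each block is stored *outside* the block (ZFS keeps it in the parent block pointer), so a block that rots on disk is caught at read time, whereas plain RAID-5 parity is only consulted for reconstruction, not verified on every read.

```python
import hashlib

# Toy sketch of ZFS-style end-to-end checksums: each block's checksum lives
# in a separate "pointer table", not alongside the data, so silent on-disk
# corruption is detected on read. Names here are illustrative, not ZFS APIs.

def write_block(storage, pointers, block_id, data):
    storage[block_id] = bytearray(data)
    pointers[block_id] = hashlib.sha256(data).hexdigest()

def read_block(storage, pointers, block_id):
    data = bytes(storage[block_id])
    if hashlib.sha256(data).hexdigest() != pointers[block_id]:
        raise IOError(f"checksum mismatch on block {block_id}")
    return data

storage, pointers = {}, {}
write_block(storage, pointers, 0, b"important data")
assert read_block(storage, pointers, 0) == b"important data"

storage[0][0] ^= 0xFF          # simulate a silently flipped bit on disk
try:
    read_block(storage, pointers, 0)
    detected = False
except IOError:
    detected = True
```

With real ZFS mirrors or RAID-Z, a failed check additionally triggers repair from a good copy; the sketch only shows the detection half.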
I'd agree about adding a SAS option. It appears to be _the_ future for storage servers.
About ZFS... From my reading, RAID-Z simply looks like one vendor's implementation of RAID-5. Good RAID implementations have all kinds of error checking done right on-board. Some vendors like to appear special by explaining how their implementation is better than "standard" RAID, when in actuality no two vendors have the same definition of how each RAID level is accomplished.
We recently purchased a SnapServer, expandable to 44TB. If Apple had supported SAS and combining multiple chassis into a single storage server, we would have gone the Apple route.
I don't think there's a need to totally migrate to SAS, just to offer it in addition to SATA. The new Xserve shows that you can support SATA and SAS in the same bay: just slap in a new drive module with the other drive type. Making one that is SAS-only would make it exorbitantly expensive and less flexible than a dual-standard design.
Quote:
I'm with bbatsell. The Xserve RAID needs to migrate to SAS. I'd like to see two different models hit.
Deliver a SAS XR in the same casing with 14 drive bays. Then ship a 3U, 22-bay XR with 2.5" SFF drives. This allows companies to tier their storage right in the RAID box: use 15k SAS drives for your production data and SATA drives for your nearline storage. It works well with SAN file systems too, because you don't need basic stuff sitting on your expensive drives.
The 14-bay design would hold more data than a 22-bay SFF design. SFF SAS drives have half the capacity of the larger SAS drives of the same general speed. I don't know why this is, because the platters in 10k and 15k drives are very small in diameter anyway.
Quote:
RAID-Z has copy-on-write; RAID-5 does not.
I think that is where the rub is. In my opinion, "RAID-5" doesn't preclude copy-on-write.
Various RAID vendors tweak the textbook definition of a standard RAID level and then claim that it is a brand-new type of RAID, when in actuality all they've done is implement a nicer or slightly different version of a basic RAID type.
Each RAID level is, in my opinion, more like a category than an exact definition.
For instance, I'm running a SyncRAID card in my old G4 as a multi-TB media server. They initially tried to call it RAID-XL but eventually gave up and just admitted it was RAID-3.
Vendors like to give their RAID implementations a new name so that they have something to advertise. "RAID-5 with hardware ensured transaction integrity" just isn't as marketable as "RAID-Z".
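For reference, the textbook RAID-5 mechanism everyone above is arguing about is just XOR parity: the parity chunk of a stripe is the XOR of its data chunks, so any single missing chunk (one failed drive) can be rebuilt from the survivors. A minimal sketch, ignoring parity rotation across drives and the write hole that copy-on-write schemes like RAID-Z are designed to avoid:

```python
from functools import reduce

# Textbook RAID-5 parity for one stripe across three data drives.
# Chunk contents are illustrative.

def xor_chunks(chunks):
    """XOR equal-length byte chunks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

data_chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_chunks(data_chunks)            # written to the parity drive

# Drive 1 fails: rebuild its chunk by XOR-ing the survivors with parity.
rebuilt = xor_chunks([data_chunks[0], data_chunks[2], parity])
assert rebuilt == b"BBBB"
```

The "write hole" arises because updating a chunk and its parity are two separate writes; a crash between them leaves the stripe inconsistent, which is exactly what a copy-on-write layout sidesteps.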
I think they should switch from FC to SAS for the host ports, but continue using SATA drives.
That's an interesting idea.
Quote:
Originally Posted by macgyvr64
On a related note -- how is JBOD different from RAID 0? I thought RAID 0 was 'concatenation', and now we have JBOD, too?
No, they are different. Striping takes a block of data and splits it equally between the drives. This increases sequential read and write speed. Striping requires that the drives (or the partitions on all the drives) be the same size.
JBOD doesn't increase sequential read and write speeds for saving or retrieving large files. Maybe JBOD helps random reads and writes; I've never thought about that. JBOD allows drives of different sizes to be used together.
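The striping layout described above is just a round-robin mapping of fixed-size chunks onto drives, which is why a big sequential transfer hits all drives at once. A sketch, with an arbitrary illustrative chunk size and drive count:

```python
# RAID-0 striping: logical chunks are dealt round-robin across the drives.
# CHUNK and DRIVES are illustrative choices, not fixed by any standard.

CHUNK = 64 * 1024       # 64 KiB stripe chunk
DRIVES = 4

def stripe_location(logical_offset):
    """Map a logical byte offset to (drive, offset_on_that_drive)."""
    chunk_index = logical_offset // CHUNK
    drive = chunk_index % DRIVES
    offset = (chunk_index // DRIVES) * CHUNK + logical_offset % CHUNK
    return drive, offset

# Consecutive chunks land on consecutive drives, so a sequential read
# of 4 chunks keeps all 4 spindles busy at once:
assert [stripe_location(i * CHUNK)[0] for i in range(5)] == [0, 1, 2, 3, 0]
```

This is also why striping wants equal-sized drives: every drive must be able to hold the same number of chunks.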
JBOD is multiple disks with no RAID, so each physical disk appears as a separate volume. For example, if you attach a bunch of disks to your Mac and format each one individually with no RAID, you get a JBOD. With RAID 0, it looks like one big volume.
No. JBOD combines multiple disks, regardless of size, into a single volume.
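In that spanned/concatenated sense of JBOD (the sense Apple's Disk Utility uses), unequal drives are glued end-to-end, so each logical offset lives on exactly one drive and there is no interleaving, hence no sequential speedup. A sketch with illustrative drive sizes:

```python
import bisect

# Concatenation ("spanned" JBOD): drives of unequal size are appended
# end-to-end into one logical volume. Sizes below are illustrative (GB).

drive_sizes = [80, 120, 250]
boundaries = []                  # cumulative end offsets: 80, 200, 450
total = 0
for size in drive_sizes:
    total += size
    boundaries.append(total)

def concat_location(logical_offset):
    """Map a logical offset (GB) to (drive, offset_on_that_drive)."""
    drive = bisect.bisect_right(boundaries, logical_offset)
    start = boundaries[drive - 1] if drive > 0 else 0
    return drive, logical_offset - start

assert concat_location(10) == (0, 10)     # early data sits on the first drive
assert concat_location(200) == (2, 0)     # first byte past drive 1's end
```

Compare this with the striping map: here one access touches one spindle, so throughput never exceeds a single drive's.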
SAS shares the same backplane connections with SATA, so you actually have to willfully engineer SATA support out of a SAS backplane. You want a SAS backplane that is full-featured, with staggered spin-up and eventually a dual-ported design for high-availability connections, but that also maintains SATA support for nearline storage. I'm not sure how many SAS-only products exist, but I think you'll find that most SAS products now use SAS/SATA backplanes.
I think Apple needs to focus on two distinct needs. The 14-bay 3.5" Xserve RAID meets the storage needs of many people. However, the reason you will see 2.5" drives in 2U and 3U chassis is to cover the market that is less concerned about total storage and most concerned about IOPS per rack. More spindles = more IOPS, and that is often what matters in some vertical markets.
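Back-of-the-envelope, the "more spindles = more IOPS" point looks like this. The per-drive IOPS figures below are rough illustrative values for the drive classes mentioned in this thread, not measured numbers:

```python
# Random IOPS scale roughly with spindle count (each seek occupies one
# drive), so a 22-bay SFF chassis beats a 14-bay LFF chassis on IOPS per
# rack unit even before the faster 15k spindles are credited.
# Per-drive IOPS values are rough, illustrative assumptions.

lff_bays, lff_iops = 14, 75      # ~7200 rpm 3.5" SATA, ballpark
sff_bays, sff_iops = 22, 180     # ~15k rpm 2.5" SAS, ballpark

lff_total = lff_bays * lff_iops  # 1050 IOPS per chassis
sff_total = sff_bays * sff_iops  # 3960 IOPS per chassis
```

The capacity comparison points the other way, of course, which is why tiering the two drive types in one box is attractive.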
There is a 300GB 2.5" drive coming
http://www.wwpi.com/index.php?option...1692&Itemid=39
though it's only 4200 rpm. Seagate makes the highest-density SFF drive at 146GB and 10k speeds. I believe they just announced 73GB 15k SFF drives. Perpendicular recording will boost these numbers soon.
Comments
FC is still too expensive.
ADMs are still too expensive.
It appears to still use the "split-brain" controller design instead of the redundant active-active controller design that everyone else uses.