SATA-based Xserve RAID prototype escapes from Apple (photos)


Comments

  • Reply 21 of 35
    wmf Posts: 1,164 member
    Let's count the ways this sucks:



    FC is still too expensive.

    ADMs are still too expensive.

    It appears to still use the "split-brain" controller design instead of the redundant active-active controller design that everyone else uses.
  • Reply 22 of 35
    haggar Posts: 1,568 member
    Apple has offered the choice of formatting a drive using UFS for a while now, but not all applications are compatible with that file system. Will applications have to be updated to run on ZFS?



    Does ZFS support resource forks?



    Is ZFS case sensitive, and will Apple allow users to choose whether they want case sensitivity?



    Can a ZFS formatted hard drive be used as a Mac OS startup disk?
  • Reply 23 of 35
    chucker Posts: 5,089 member
    Quote:
    Originally Posted by Haggar


    Apple has offered the choice of formatting a drive using UFS for a while now, but not all applications are compatible with that file system. Will applications have to be updated to run on ZFS?



    Does ZFS support resource forks?



    It supports multiple streams, so this would be implementable.
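
    As a rough illustration (not anything Apple has announced), HFS+ today exposes the resource fork through the special "..namedfork/rsrc" path, so a ZFS port could map one of its named streams to the same convention. A minimal Python sketch, with a made-up file name:

        # Read a file's resource fork on Mac OS X via the special
        # "..namedfork/rsrc" path that HFS+ provides. A ZFS port would
        # need to surface one of its streams under the same convention
        # for existing apps to keep working. "example.rtf" is hypothetical.
        path = "example.rtf"
        with open(path + "/..namedfork/rsrc", "rb") as fork:
            data = fork.read()
        print("resource fork is %d bytes" % len(data))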



    Quote:

    Is ZFS case sensitive, and will Apple allow users to choose whether they want case sensitivity?



    They can always provide an option.



    Quote:

    Can a ZFS formatted hard drive be used as a Mac OS startup disk?



    ZFS is non-bootable at this point, though bootstrap solutions exist.
  • Reply 24 of 35
    sjk Posts: 603 member
    Re: ZFS supporting resource forks

    Quote:
    Originally Posted by Chucker


    It supports multiple streams, so this would be implementable.



    Aside from boot-ability (for now), do you know of any advantages that HFS+ still has over ZFS? Maybe HFS+ still has better performance for certain situations?
  • Reply 25 of 35
    Looks like Apple has quietly upgraded the Xserve RAID with 750GB drives, though they are still Ultra ATA. The 14x750GB configuration (10.5TB) costs an additional $1400 over the 7TB version.
  • Reply 26 of 35
    dfiler Posts: 3,420 member
    Quote:
    Originally Posted by hmurchison


    I'm with bbatsell



    The Xserve RAID needs to migrate to SAS.



    ...



    ZFS is nice as well. The checksum features go beyond what you get with a basic RAID 5 setup, which cannot prevent silent corruption.



    I'd agree about adding a SAS option. It appears to be _the_ future for storage servers.



    About ZFS... From my reading, RAID-Z simply looks like one vendor's implementation of RAID-5. Good RAID implementations have all kinds of error checking done right on-board. Some vendors like to appear special by explaining how their implementation is better than "standard" RAID, when in actuality no two vendors have the same definition of how each RAID level is accomplished.



    We recently purchased a SnapServer, expandable to 44TB. If Apple had supported SAS and allowed combining multiple chassis into a single storage server, we would have gone the Apple route.
  • Reply 27 of 35
    chucker Posts: 5,089 member
    Quote:
    Originally Posted by dfiler


    About ZFS... From my reading, RAID-Z simply looks like one vendor's implementation of RAID-5.



    RAID-Z has copy-on-write; RAID-5 does not.
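
    To make that concrete, here is a toy Python sketch (simplified far beyond what real controllers do) of why an in-place RAID-5 small write has a window where data and parity can disagree, while a copy-on-write scheme commits a whole new stripe with a single pointer flip:

        # Toy model: parity is the XOR of the data blocks.
        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        d0, d1 = b"\x01" * 4, b"\x02" * 4   # two data blocks
        parity = xor(d0, d1)                # their parity block

        # RAID-5 small write: recompute parity, then overwrite data and
        # parity IN PLACE. A crash between the two writes leaves data
        # and parity silently inconsistent (the "write hole").
        new_d0 = b"\x07" * 4
        parity = xor(xor(parity, d0), new_d0)  # write #1 (parity)
        d0 = new_d0                            # write #2 (data)

        # Copy-on-write full-stripe write: the new stripe lands in a
        # fresh location and one atomic pointer update makes it live,
        # so the old stripe stays valid until the commit.
        stripes = {0: (d0, d1, parity)}
        new_d0 = b"\x09" * 4
        stripes[1] = (new_d0, d1, xor(new_d0, d1))  # out-of-place write
        live = 1                                    # atomic flip commits
        assert xor(stripes[live][0], stripes[live][1]) == stripes[live][2]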
  • Reply 28 of 35
    jeffdm Posts: 12,951 member
    Quote:
    Originally Posted by hmurchison


    I'm with bbatsell



    The Xserve RAID needs to migrate to SAS.



    I'd like to see two different models hit the market.



    I don't think there's a need to totally migrate to SAS, but to offer it in addition to SATA. The new Xserve shows that you can support SATA and SAS in the same bay: just slap in a new drive module with the other drive type. Making one that is SAS-only would make it exorbitantly expensive and less flexible than a dual-standard design.



    Quote:

    Deliver a SAS XR in the same casing with 14 drive bays. Then ship a 3U, 22-bay XR with 2.5" SFF drives. This allows companies to tier their storage right in the RAID box: use 15k SAS drives for your production data and SATA drives for your nearline storage. It works well with SAN file systems as well, because you don't need basic stuff sitting on your expensive drives.



    The 14-bay design would hold more data than a 22-bay SFF design. The SFF SAS drives have half the capacity of the larger SAS drives of the same general speed. I don't know why this is, because the platters for the 10k and 15k drives are very small in diameter anyway.
  • Reply 29 of 35
    dfiler Posts: 3,420 member
    Quote:
    Originally Posted by Chucker


    RAID-Z has copy-on-write; RAID-5 does not.



    I think that is where the rub is. In my opinion, "RAID-5" doesn't preclude copy-on-write.



    Various RAID vendors tweak the textbook definition of a standard RAID level and then claim that it is a brand new type of RAID, when in actuality all they've done is implement a nicer or slightly different version of a basic RAID type.



    Each RAID level is, in my opinion, more like a category than an exact definition.



    For instance, I'm running a SyncRAID card in my old G4 as a multi-TB media server. They initially tried to call it RAID-XL but eventually gave up and just admitted it was RAID-3.



    Vendors like to give their RAID implementations a new name so that they have something to advertise. "RAID-5 with hardware-ensured transaction integrity" just isn't as marketable as "RAID-Z".
  • Reply 30 of 35
    wmf Posts: 1,164 member
    I think they should switch from FC to SAS for the host ports, but continue using SATA drives.
  • Reply 31 of 35
    macgyvr64
    On a related note -- how is JBOD different from RAID 0? I thought RAID 0 was 'concatenation', and now we have JBOD, too?
  • Reply 32 of 35
    jeffdm Posts: 12,951 member
    Quote:
    Originally Posted by wmf


    I think they should switch from FC to SAS for the host ports, but continue using SATA drives.



    That's an interesting idea.



    Quote:
    Originally Posted by macgyvr64


    On a related note -- how is JBOD different from RAID 0? I thought RAID 0 was 'concatenation', and now we have JBOD, too?



    No, they are different. Striping takes each block of data and splits it equally between the drives, which increases sequential read and write speed. Striping requires that the drives (or the partitions on all drives) be the same size to stripe across.



    JBOD doesn't increase sequential read and write speeds for saving or retrieving large files. Maybe JBOD helps random reads and writes; I've never thought about that. JBOD allows different sizes of drives to be used together.
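
    A quick Python sketch of the address mapping (disk counts and sizes are made up): striping rotates consecutive blocks across every spindle, while concatenation fills one disk before moving to the next.

        # Map a logical block to (disk, offset) under RAID 0 striping
        # versus JBOD-style concatenation. Purely illustrative.
        def stripe_map(block, num_disks):
            """RAID 0: consecutive blocks rotate across all disks,
            so a large sequential I/O hits every spindle at once."""
            return block % num_disks, block // num_disks

        def concat_map(block, disk_sizes):
            """Concatenation: fill disk 0, then disk 1, and so on.
            Disks may differ in size; sequential I/O hits one disk."""
            for disk, size in enumerate(disk_sizes):
                if block < size:
                    return disk, block
                block -= size
            raise ValueError("block past end of array")

        print(stripe_map(10, 4))           # (2, 2): spread across disks
        print(concat_map(10, [8, 16, 4]))  # (1, 2): still on one disk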
  • Reply 33 of 35
    wmf Posts: 1,164 member
    Quote:
    Originally Posted by macgyvr64


    On a related note -- how is JBOD different from RAID 0? I thought RAID 0 was 'concatenation', and now we have JBOD, too?



    JBOD is multiple disks with no RAID, so each physical disk appears as a separate volume. For example, if you attach a bunch of disks to your Mac and format each one individually with no RAID, you get a JBOD. With RAID 0, it looks like one big volume.
  • Reply 34 of 35
    chucker Posts: 5,089 member
    Quote:
    Originally Posted by wmf


    JBOD is multiple disks with no RAID, so each physical disk appears as a separate volume. For example, if you attach a bunch of disks to your Mac and format each one individually with no RAID, you get a JBOD. With RAID 0, it looks like one big volume.



    No. JBOD combines multiple disks, regardless of size, into a single volume.
  • Reply 35 of 35
    hmurchison Posts: 12,425 member
    Quote:
    Originally Posted by JeffDM


    I don't think there's a need to totally migrate to SAS, but to offer it in addition to SATA. The new Xserve shows that you can support SATA and SAS in the same bay: just slap in a new drive module with the other drive type. Making one that is SAS-only would make it exorbitantly expensive and less flexible than a dual-standard design.



    The 14-bay design would hold more data than a 22-bay SFF design. The SFF SAS drives have half the capacity of the larger SAS drives of the same general speed. I don't know why this is, because the platters for the 10k and 15k drives are very small in diameter anyway.



    SAS shares the same backplane connections with SATA, so you would actually have to willfully engineer SATA support out of a SAS backplane. You want a full-featured SAS backplane with staggered startup and, eventually, a dual-ported design for high-availability connections, while also maintaining SATA support for nearline storage. I'm not sure how many SAS-only products exist, but I think you'll find that most SAS products now use SAS/SATA backplanes.



    I think Apple needs to focus on two distinct needs. The 14-bay 3.5" Xserve RAID meets the storage needs of many people. However, the reason you will see 2.5" drives in 2U and 3U chassis is to cover the market that is less concerned about total storage and more concerned about IOPS per rack. More spindles = more IOPS, and that is often what matters most for some vertical markets.
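
    A rough back-of-envelope comparison in Python; the per-drive IOPS figures below are assumed ballpark numbers for 7200rpm SATA and 15k SAS drives, not measured specs:

        # Compare total random IOPS for a 14-bay 3.5" SATA chassis vs
        # a 22-bay 2.5" 15k SAS chassis, both assumed to occupy 3U.
        chassis = {
            '14 x 3.5" 7200rpm SATA': (14, 80),
            '22 x 2.5" 15k SAS':      (22, 180),
        }
        for name, (drives, iops_per_drive) in chassis.items():
            total = drives * iops_per_drive
            print("%s: ~%d IOPS (~%d per U)" % (name, total, total // 3))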



    There is a 300GB 2.5" drive coming



    http://www.wwpi.com/index.php?option...1692&Itemid=39



    though it's only 4200 rpm. Seagate makes the highest-density SFF drive at 146GB and 10k speeds. I believe they just announced 73GB 15k SFF drives. Perpendicular recording will boost these numbers soon.