Apple NAS. Why not?

Posted in Future Apple Hardware, edited January 2014
Why doesn't Apple just leverage its investment in the 1U Xserves and create an Apple Network Attached Storage device?



Simply use a low-cost G4 processor and the Apple Drive Modules currently available from the Xserve line. This way there is an option for those who want to buy G5s but have issues with internal expandability.



Hell, the future may have us booting our computers and running the OS from non-volatile RAM, forgoing the need for internal hard drives for certain applications.

Comments

  • Reply 1 of 23
    Placebo Posts: 5,767 member
    Those people will buy a Fibre Channel PCI card and an Xserve RAID.
  • Reply 2 of 23
    709 Posts: 2,016 member
    Um, no. Those people will not. What those people will do is max out any HD speed to a FW800 RAID, because that will be far cheaper in the short run. Apple's ADMs are WAY too expensive for one, and you can't buy individual trays without paying the *Apple Tax* for HDs.



    The results I've seen for FW800 RAIDs so far are impressive. Not competing bit-for-bit with SCSI yet, but not too far off.
  • Reply 3 of 23
    ionyz Posts: 491 member
    Some people put _WAY_ too much stock in the Xserve RAID. People mention it as RAID for the G5, or what have you, but there is a middle ground that gets overlooked. A single-drive FW800 setup is what, $200? Then I hear the Xserve RAID mentioned, which is $6,000 for four _ATA_ drives. Nah. The middle ground just isn't there, and I don't believe many are jumping on the Xserve RAID.



    4- and 8-bay FireWire 800 units, or how about SCSI speed? 709 is right: there is FW800 for the middle ground, and SCSI arrays will also get you more performance. I can't imagine 7200rpm Apple drives being very speedy even RAIDed. The Xserve RAID is extreme overkill for most of the situations where it gets mentioned.



    (sorry for the rant)
  • Reply 4 of 23
    hmurchison Posts: 12,441 member
    Quote:

    Originally posted by Placebo

    Those people will buy a Fibre Channel PCI card and an Xserve RAID.



    I knew someone would fall into the RAID trap, Placebo. Remember, a RAID is connected to ONE computer. The benefit of a NAS is that it sits on the network and does file sharing within the box itself, hence it must have a processor of its own. The two are very different paradigms.



    RAID has its place, I will not dispute that, but NAS benefits users with multiple computers more. ADMs are too expensive and will stay that way unless they can be used in other areas. However, many NAS devices I've seen don't have swappable drives either.



    Simply put, there has to be a way to run a file server without spending $5k on a RAID box. Backing up a file server is easier than backing up 4 client machines.
  • Reply 5 of 23
    Aphelion Posts: 736 member
    If you are willing to entertain the possibility of an Expansion Chassis developed by Apple, how might it fill the need for those who want a RAID array or a network storage device?



    Would this be the same box as the G5, or a smaller version like a mini-me as one of the posters here depicted? What would likely be in it? Optical drive or drives? PCI-X slots? Room for four hard drives?
  • Reply 6 of 23
    ionyz Posts: 491 member
    Quote:

    Originally posted by Aphelion

    ... Would this be the same box as the G5, or a smaller version like a mini-me as one of the posters here depicted? What would likely be in it? Optical drive or drives? PCI-X slots? Room for four hard drives?



    Something that is versatile, although that is not a common Apple trait IMO. This would make an ideal candidate:







    The box on the right. Put in hard drives, opticals, what have you. I'm not using this as a design cue, but it would be a welcome product for the G5 crowd.
  • Reply 7 of 23
    This is absolutely what Apple should be doing, for the simple reason that the NAS/SAN Gateway concept is the only truly logical way forward for storage.



    Firstly, FC-AL/FC-SW is expensive to implement and expensive to scale. The host cards are expensive. The switches are expensive. The expertise is expensive.



    Secondly, to implement remote mirroring, you need to implement FC over ATM over a dedicated connection that cannot be shared with any other traffic.



    Thirdly, you have to implement an additional and completely different cabling infrastructure.



    Fourthly, whilst FC-AL bandwidth rates tend to double, Ethernet bandwidth tends to step up by an order of magnitude.



    NAS, or to be more accurate SoIP, delivers a routable storage architecture using TCP/IP and QoS protocols, which allows a mirror for a small company to be implemented over a T1 link shared with other traffic.



    It allows the infrastructure to take advantage of advances in Ethernet performance, which means that you can have multiple Gigabit-attached NAS devices to share a 10-Gbit Ethernet backbone.



    And the biggest advantage of all is that a single NAS gateway device can then interact on a private basis with a dedicated Fibre Channel SAN, restricting the need for FC wiring to a single contiguous rack space and rationalising the need for expensive host adapters.



    SAN controllers tend to be quite limited in the number of hosts they will support (64 seems to be the top end in the HP/IBM mainstream), whilst NAS gateways can typically support hundreds or even thousands of host connections using multiple "fatpipe" Gigabit connections.



    NAS is the only way to go as blade computing becomes more prevalent, a process that is inevitable given the onset of the 4-way blade running either 2.8GHz Xeons or 2.5GHz 970s. If I wanted a render farm or a compute farm or even a clever way of delivering ASP-type services to the SME market, blade servers are the way forward, and with that level of density NAS is the only choice for storage.
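

    To make the SoIP point concrete, here is a deliberately toy sketch (plain Python, entirely hypothetical - a real iSCSI/SoIP stack has sessions, command descriptors, error recovery and much more): a "target" exports a disk image as numbered blocks over an ordinary TCP socket, which is exactly why this kind of traffic can be routed, switched and QoS-managed on the same IP plumbing as everything else.

        import socket, struct

        BLOCK = 4096  # bytes per block in this toy protocol

        def recv_exact(sock, n):
            # Pull exactly n bytes off the socket (returns b"" if the peer hangs up).
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    return b""
                buf += chunk
            return buf

        def serve(image_path, port=3260):  # 3260 is the port real iSCSI targets use
            # The "target": export a disk image file as numbered 4K blocks over TCP.
            srv = socket.socket()
            srv.bind(("", port))
            srv.listen(1)
            conn, _ = srv.accept()
            with open(image_path, "r+b") as disk:
                while True:
                    hdr = recv_exact(conn, 9)      # 1-byte opcode + 8-byte block number
                    if not hdr:
                        break
                    op, blockno = hdr[:1], struct.unpack(">Q", hdr[1:])[0]
                    disk.seek(blockno * BLOCK)
                    if op == b"R":                 # read: ship one block back
                        conn.sendall(disk.read(BLOCK).ljust(BLOCK, b"\0"))
                    elif op == b"W":               # write: take one block off the wire
                        disk.write(recv_exact(conn, BLOCK))

        def read_block(host, blockno, port=3260):
            # The "initiator" side: any machine that can route to the target can do this.
            c = socket.socket()
            c.connect((host, port))
            c.sendall(b"R" + struct.pack(">Q", blockno))
            data = recv_exact(c, BLOCK)
            c.close()
            return data

    No host bus adapters and no separate cabling plant - just sockets, which is also why the same traffic can be mirrored over a shared T1 with QoS instead of a dedicated FC-over-ATM link.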
  • Reply 8 of 23
    hmurchison Posts: 12,441 member
    Mark- That's an excellent breakdown.



    Quote:

    It allows the infrastructure to take advantage of advances in Ethernet performance, which means that you can have multiple Gigabit-attached NAS devices to share a 10-Gbit Ethernet backbone



    Yes, you'd think with Gigabit fast becoming a commodity that SAN/NAS manufacturers would be taking advantage of this.



    I was wondering if Airport Extreme would be feasible for a NAS as well. I'm unaware of a NAS with wireless capability, but someone has to have something like this.
  • Reply 9 of 23
    Aphelion Posts: 736 member
    So Mark, it would seem that Gigabit Ethernet would be the transport method of choice for an expansion chassis from Apple. What would this mean specifically to the backplane for such a device?



    Also the suggestion that Airport Extreme might be useful in such a device makes sense for an outbound sales staff when they are officebound.



    Do the above suppositions point to a mini-server type of device? With a G3 from IBM as "adequate" to handle the traffic? Not a "headless" mac but a Mac-less PPC "node" running Darwin maybe?



    I guess what I'm getting at is how cheap could Apple make this device? You may note that I have more questions than answers here, I just would like an idea of what is possible.



    Ever since finding info on the development of this product I've been trying to figure out what would make it insanely great. Affordability would go a long way here. Is something like this possible from Apple?
  • Reply 10 of 23
    hmurchison Posts: 12,441 member
    http://www.teac.com/DSPD/Microservers.html



    Wireless NAS Server. Supports Mac too.
  • Reply 11 of 23
    Quote:

    Originally posted by Aphelion

    So Mark, it would seem that Gigabit Ethernet would be the transport method of choice for an expansion chassis from Apple. What would this mean specifically to the backplane for such a device?



    Also the suggestion that Airport Extreme might be useful in such a device makes sense for an outbound sales staff when they are officebound.



    Do the above suppositions point to a mini-server type of device? With a G3 from IBM as "adequate" to handle the traffic? Not a "headless" mac but a Mac-less PPC "node" running Darwin maybe?



    I guess what I'm getting at is how cheap could Apple make this device? You may note that I have more questions than answers here, I just would like an idea of what is possible.



    Ever since finding info on the development of this product I've been trying to figure out what would make it insanely great. Affordability would go a long way here. Is something like this possible from Apple?




    I'm not quite sure that's what I'm saying: I believe that technologies like HyperTransport and Infiniband are the route for developing flexible shared I/O platforms such as a putative Apple PCI/PCI-X expansion chassis.



    However, I do firmly believe that Gigabit and 10-Gea are the future of flexible storage, simply because using Storage-over-IP (SoIP) enables an enterprise - like BP - or a service provider like an ASP to maintain redundant storage pools in dispersed locations using standard wide-area technology and skills.



    One likely datacenter/server farm scenario is that a local cluster of servers will share an n+1 arrangement of IB-type I/O expansion chassis delivering multiple gigabit+ connections to a shared 10Gea backbone.



    These compute + I/O clusters can be replicated remotely several times at different locations, thus providing both local fault tolerance and redundancy, and remote business-continuity provisioning.



    These meta-clusters then share a pool of SAN-type mass storage - probably FC-SW driven - which is front-ended by TCP/IP-based NAS gateways, which are nothing more than highly tuned servers running tightly optimised implementations of Windows/Linux/MacOS/Darwin.



    Just to prove the point, HP's NAS gateway is effectively a locked-down DL380 running a stripped Win2K Server implementation.



    I can foresee the hybrid NAS/SAN concept ultimately developing into multiple Xeon/970 blades driving either a limited quantity (a couple of terabytes) of locally attached storage (facilitated through HT, IB and whatever version of SCSI you like), mid-range quantities (2-8TB) through FC-AL/SCSI, or massive quantities (8TB+) through FC-SW/SCSI.



    Back it all up using Sony's new S-AIT (first generation out now: 500GB native/1.3TB compressed) delivered through an ADIC library and you're away.
  • Reply 12 of 23
    gizzmonicgizzmonic Posts: 511member
    *sniff sniff*



    what's that smell?



    is that coming from...this thread?



    Eww...(checks shoes)
  • Reply 13 of 23
    Quote:

    Originally posted by Gizzmonic

    *sniff sniff*



    what's that smell?



    is that coming from...this thread?



    Eww...(checks shoes)




    Got a valid point, make it or bug out
  • Reply 14 of 23
    Looks like this is where SANs are headed these days: iSCSI ... especially with gig-E and 10 gig-E just around the corner. Looks to be much cheaper than Fibre Channel too.



    SANs are definitely the direction shared storage is headed ... makes much more sense for clusters, backups and redundancy than NAS.
  • Reply 15 of 23
    Quote:

    Originally posted by Colby2000

    Looks like this is where SANs are headed these days: iSCSI ... especially with gig-E and 10 gig-E just around the corner. Looks to be much cheaper than Fibre Channel too.



    SANs are definitely the direction shared storage is headed ... makes much more sense for clusters, backups and redundancy than NAS.




    I think it's just a question of semantics at this point. iSCSI (or SCSI over IP) is pretty much the same thing as SoIP (storage over IP).



    IBM use the terms NAS and iSCSI in their pages to describe their iSCSI product. And some use iSCSI and SoIP in pages to describe obscure little - but expensive - products that bridge SANs to IP-based networks.



    NAS is a concept that everyone understands, and it makes sense to say to customers that you have a full range of NAS products, from the entry-level units (for the 10-seat company) that we currently term NAS to the full-blown NAS/SAN gateway products (for the 2,500-seat enterprise campus).
  • Reply 16 of 23
    Yeah, I just tend to look at SAN and NAS a little differently: NAS as pretty much just hard drives hanging off the network, and a SAN sitting on its own private data network (Fibre Channel or Ethernet) for shared server storage/clustering. iSCSI/SoIP could really save some cash in those applications (as opposed to Fibre Channel ... ever look at the price of FC switches? Ouch.)



    [edit: hard to type holding a baby!]
  • Reply 17 of 23
    RolandG Posts: 632 member
    Just to keep up with the discussion: what exactly are the differences between NAS and SAN?
  • Reply 18 of 23
    Quote:

    Originally posted by RolandG

    Just to keep up with discussion: What exactly are the differences between NAS and SAN?



    NAS = Network Attached Storage - Basically, it's a minimally-configurable box with hard drives and a network card. A lot of them run a special locked-down version of Win2K. It lets you add storage to your network without adding hard drives to your server.



    Pro - Cheap.

    Con - Lots of file access floods your LAN.

    Think 'small network', or, buy NAS instead of a new server for the office.



    SAN = Storage Area Network - You attach your servers to a rack of hard drives (like Xserve RAID). Usually these are connected using Fibre Channel cards and switches (though iSCSI is up-and-coming), creating a private storage network (iSCSI lets the servers 'speak SCSI' to the hard drives over IP). You can share the storage between a number of servers and do things like clustering and application fail-over, and you can divide up the SAN drives into multiple RAID sets - very flexible. In short, it lets you share a bunch of hard drives between many servers.



    Pro - Keeps server to data access off your LAN, FAST, fault tolerance...

    Con - fibre channel switches are wicked expensive (the last SAN we spec'd out from Dell was around $100,000)

    Think 'big'...like datacenter big.



    Feel free to fill in/correct if I missed anything.
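

    To put the file-level vs block-level distinction in concrete (and completely made-up) terms, here's a toy Python model - not any real product's API, just the client's-eye view. The NAS keeps the filesystem inside the box and hands out whole files by name; the SAN only hands out numbered blocks, and each attached server keeps its own idea of which blocks make up which file.

        BLOCK = 512

        class ToyNAS:
            # File-level: the storage box itself runs the filesystem and serves files.
            def __init__(self):
                self.files = {}                     # path -> bytes, kept on the NAS

            def put_file(self, path, data):
                self.files[path] = data

            def get_file(self, path):
                return self.files[path]             # clients just ask by path

        class ToySAN:
            # Block-level: the storage box only knows about numbered raw blocks.
            def __init__(self, blocks=1024):
                self.disk = bytearray(blocks * BLOCK)

            def write_block(self, n, data):
                self.disk[n * BLOCK:(n + 1) * BLOCK] = data.ljust(BLOCK, b"\0")

            def read_block(self, n):
                return bytes(self.disk[n * BLOCK:(n + 1) * BLOCK])

        # NAS client: one call, the box does all the file bookkeeping.
        nas = ToyNAS()
        nas.put_file("/shared/q3-report.doc", b"quarterly numbers...")
        print(nas.get_file("/shared/q3-report.doc"))

        # SAN client: the *server* keeps its own filesystem (here just a dict of
        # extents) and turns a filename into raw block reads on the shared volume.
        san = ToySAN()
        san.write_block(7, b"quarterly numbers...")
        my_fs_table = {"/q3-report.doc": (7, 1)}    # path -> (first block, count)
        start, count = my_fs_table["/q3-report.doc"]
        print(b"".join(san.read_block(start + i) for i in range(count)).rstrip(b"\0"))

    That block-level view is also why a SAN is handy for clustering and fail-over: any server that can reach the blocks can take over, as long as something arbitrates who writes what.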

  • Reply 19 of 23
    hmurchison Posts: 12,441 member
    Colby2000 - Good breakdown. Not that I know enough to dispute anything. I do remember that when NAS and then SAN came out they were new concepts (NAS is pretty simple really: HDs, a small processor, a light OS). SANs, though, take a little bit to get your head around, but the thing that should remain true is that SANs scale very well. I remember reading that the initial costs of a SAN were high, but once you reached a certain number of users the cost premium was negligible and in some cases lower than other methods. I don't think the typical home user needs a SAN, but NAS works for a small network just fine.
  • Reply 20 of 23
    I think the argument about SANs being expensive until the break-even point is reached is absolutely fair.



    A fully loaded FastStor 900 with 32TB of 146GB disks will set you back around $800,000 list, which is a hell of a lot of cash but "only" $25/gigabyte. But the problem with SANs is that they are front-end loaded with the controller - dual-ported, N+1 power and cooling, and responsible for delivering the RAID functionality - effectively being the membership dues for the act of getting involved.



    Put two of those on the same local FC network with a dozen or more servers and one or more FC-attached tape libraries, and FC-AL ceases to perform optimally unless you implement the switches that Colby2000 mentions (Brocade is probably the best-known brand) - but you're talking high four figures for an entry point and sub-$20K for higher port densities.



    Add in the cost of host adapters for the servers ($1400 from IBM, which I believe is around par for the course) and the picture is complete.



    However, there are SANs and there are SANs: most manufacturers go for a hybrid compromise in which FC is used for the host/controller connection and SCSI for the controller/disk connection. Other manufacturers (notably EMC) have some products which are pure FC all the way through to the drive controller. However, FC drives are inevitably expensive and invariably behind the capacity curve even more than SCSI.



    One of the most desirable attributes of a SAN is that you can maintain a controller-to-controller mirror either across a campus using optical fibre or across a restricted distance using ATM. This mirror is maintained without any overhead on the host, so it's a nice way for larger companies to maintain a business continuity position, albeit at a cost.



    I agree with Colby2000 about NAS being a potential source of network flooding; however, my preferred route for creating a NAS/iSCSI/SoIP infrastructure is to create an independent LAN for mass storage only, i.e. a separate pair of Gigabit cards per server running on their own LAN to Gigabit switches that are dedicated to storage. However, I would also be wary of running an OLTP/data warehouse-type implementation on a NAS infrastructure.



    I think the point with all of these concepts is to protect investment: the NAS or iSCSI gateway is a great entry point to shared storage that facilitates a migration to FC technology as the business grows, but ensures that investment in the expensive overhead of FC technology (host adapters and switches) is restricted to a minimum.