Blade Runner - Modular PowerMac

Posted in Future Apple Hardware, edited January 2014
Apple's release of the Xserve shows an interest in rack-mountable products. Can a 3U blade server containing 3 to 20 CPU "blades" be far behind? If this architecture is in development (or already developed), then why not a Tower revision that puts the CPUs on these blades? With four slots for CPU blades, upgrading would be as simple as ordering another blade (or blades). Dual-CPU blades would make 8-CPU towers possible for the pro market.



Just as the new Xserve shows what the MWNY PowerMac will most likely be, a 3U rack blade server might point to a future desktop where adding blades from the server line could make a killer workstation.

Comments

  • Reply 1 of 25
What about a mix and match with the clustering idea: a high-speed GigaWire (pick your nickname) interface on the towers and PowerBooks. A new PowerMac case designed to let a PowerBook nest alongside and connect via a superfast connection, both to give it data access and to allow clustering. The same connection could be used to plug in a "blade" if needed; or perhaps the machine could have a more horizontal focus, and multiple additional "blades" or 'Books could nest on top; or you could build something on the NeXT cube scale and nest them internally. Obviously, in the first and second ideas the blades are not conventional ones, but would have some kind of small enclosure.



Who knows; it seems like a crazy idea, but it could be fun.
  • Reply 2 of 25
macronin Posts: 1,174 member
    Interesting idea, but the extra cost of the blade slots would cut back on Apple's margins, causing them to charge more for new PowerMacs...



And the idea of having external blade units that could daisy-chain together via GigaWire (theoretical, folks, and not my idea, so save the flames for another posting...!) is also interesting, but again, the extra cost of the external housing & power supply lowers the margins...



    Blades are a great idea for quick and easy expandability, but belong in the server room for now...



    Although, there was a fabled project in the Apple skunkwerks back in the day...



    SkyLab - a highly modified Quadra chassis housing up to 14 CPU cards... 68xxx days, folks... Not a real blade configuration, but interesting...!



    Cheers!



[Chilling] Maya Unlimited for Mac OS X [Chilling]
  • Reply 3 of 25
macronin Posts: 1,174 member
    Of course, as I click on the 'Post Reply' button...



I can see Apple doing a 3U/4U blade housing for the server/renderfarm market... Renderfarms using blades seem the most logical: easy to expand, and easy to replace fried units without the cost of replacing an entire server...



But here is the kicker...!



    For a commercial advertising the new Blade servers/renderfarm boxes...



    Ridley Scott doing a walkthrough of the newly installed renderfarm at Escape Studios VFX school in England... (sorry about that World Cup thing chaps!)



    Ridley Scott / Blade Runner / Blade Servers (Renderfarms)...



    Work with me here folks!



Just a thought!



    Cheers!



[Chilling] Maya Unlimited for Mac OS X [Chilling]
  • Reply 4 of 25
[quote]Originally posted by MacRonin:

For a commercial advertising the new Blade servers/renderfarm boxes...

Ridley Scott doing a walkthrough of the newly installed renderfarm at Escape Studios VFX school in England... (sorry about that World Cup thing chaps!)

Ridley Scott / Blade Runner / Blade Servers (Renderfarms)...[/quote]



Which is all great except for the fact that Escape Studios are using IBM, and that two thirds of their courses are for software that doesn't run on the Mac...
  • Reply 5 of 25
Actually, in the tradition of (greatly insane?) rumors, I figured Apple would do like ADC and include power in the connector. The blade would be, in essence, a laptop-sized motherboard with processor and RAM tightly coupled like a video card, along with an IC to manage memory and pass data back and forth to the "GigaWire" bus, running OS X via NetBoot. The base or "host" unit would, I think, at least need an ATA/133 RAID, and probably a higher-end storage product, to keep up with the mainboard processors and the "blades" all pumping and dumping data.
  • Reply 6 of 25
macronin Posts: 1,174 member
    The only reason I mentioned Escape was because of the Ridley Scott (Blade Runner) reference...



    I realize they don't use Macs, now...



    But they will in the future, I am sure... Check the site, they are even listing Shake as an Apple app now... Even though Nothing Real is still a company, just a subsidiary of Apple... I am sure, with the stuff coming through the pipe in the future, folks will recognize that Apple is THE choice for running Shake...



    Besides, it is a hypothetical commercial...!



[Chilling] Maya Unlimited for Mac OS X [Chilling]
  • Reply 7 of 25
macronin Posts: 1,174 member
    I don't know, that could add up to a lot of juice running through that GigaWire cable...



    But I like the idea for RenderBlades, especially if you tie it together with the built-in clustering hypothesized in another thread...



    If you go with the thought of these individually packaged blades from the perspective of a thin client, as opposed to a server/render node, toss in workstation class graphics, and NetBoot the whole mess...



I could see VFX schools using a room full of these for training purposes, all booting off of the teacher's machine, which would be a fully functional workstation (you know, with SuperDrive, PCI slots, etc.)...



    Would keep students from futzing with their workstations, and make admin easier for the staff...



After all, what is easier: updating the OS & apps on hundreds of units, or on a central server that propagates the changes out to the individual student stations...?!?



    Again, just hypothetical thoughts... Since some folks like to scoff at ideas that are not doable right here and right now!



    Cheers!



[Chilling] Maya Unlimited for Mac OS X [Chilling]
  • Reply 8 of 25
aphelion Posts: 736 member
[quote]Originally posted by MacRonin:

Interesting idea, but the extra cost of the blade slots would cut back on Apple's margins, causing them to charge more for new PowerMacs...

Blades are a great idea for quick and easy expandability, but belong in the server room for now...[/quote]



The idea is that the R&D cost of development would all be on the "server side", with Apple taking the mobo from the "Blade Runner" server and rotating it into a tower enclosure.



We need a hot new PowerMac, and what could be hotter than a modular tower based on the next Apple server product? And the blades themselves would be priced to compete in the server blade market, which would actually make them relatively inexpensive.
  • Reply 9 of 25
Let me say how much I love this idea. Yes, and let's keep it rather hypothetical, for argument's sake.



Here is some thought candy for you:

http://www.totalimpact.com/powerbox.html

Yes, it runs under Linux, but that makes it very interesting, to say the least. Imagine a driver port of this to OS X.



Also, the whole point of blade servers is that they are cost-effective, inexpensive solutions.

http://sss.lanl.gov/

This site gets into the down and dirty of it; that Green Destiny is one hell of a machine.



I doubt we'll see blades in desktop solutions anytime soon; the market for blades has cooled considerably.

http://www.theregister.co.uk/content/61/25908.html

Though blades could very well be the future of computing. I would be very interested in 4RU blade servers that ran under Darwin, not just some Beowulf clustering software like Linda.
  • Reply 10 of 25
aphelion Posts: 736 member
[quote]Originally posted by Da sinister:

Let me say how much I love this idea. Yes, and let's keep it rather hypothetical, for argument's sake... Also, the whole point of blade servers is that they are cost-effective, inexpensive solutions... Though blades could very well be the future of computing. I would be very interested in 4RU blade servers that ran under Darwin, not just some Beowulf clustering software like Linda.[/quote]



Great URLs, Da sinister - thanks for the links.



    I think we will be seeing a blade solution from Apple as the next offering in the server initiative they have already started. It just seemed logical to me that this technology could be applied to a workstation class tower. Maybe not a "PowerMac", and probably not at MWNY '02, but soon enough to coincide with the release of the high end software resulting from their recent purchases.
  • Reply 11 of 25
A blade concept would, however, mean clustering complete PCs, including chipsets, some in/out features, memory, etc. Rather expensive. (Though I would advocate providing such a clustering capability among machines, e.g. via FireWire 2.) A more attractive approach would be multiprocessing through crossbar switches. As you know, Apple is a cofounder of the HyperTransport consortium, which addresses exactly this. There is presently no standard for a HyperTransport connector, but it's rumored to be worked on. A CPU daughterboard could hence be equipped with the CPU(s), a HyperTransport bridge (like a RapidIO-HyperTransport bridge), a HyperTransport connector (e.g. similar to AGP or CompactPCI) and memory slots. It should not be too expensive to manufacture, and would also require less spacious boards. Such HyperTransport connectors would have the additional benefit of providing opportunities to plug in something else, like a Raycer graphics card or a 10GbE adapter.



    Thyl
  • Reply 12 of 25
I thought you would like those; the Mpower quad G4 card

http://www.totalimpact.com/G3_MP.html

has me in a cold sweat. I might go out and get some O'Reilly tome for Unix so I could port the drivers, or maybe I've had too much coffee. I've shown that to a few of the IT guys around the office; it's fun to see grown men giggle like schoolgirls. The HyperTransport bridge idea sounds like a rather nice solution, better than a 33 MHz PCI bus anyway. I would wager that a "Blade Runner" from Apple isn't such an out-there concept. Apple looks at the numbers from industry analysts like any other company. If the analysts say blades are a growth sector, then you at least stop and think about getting into it on your way to the water cooler.
  • Reply 13 of 25
aphelion Posts: 736 member
[quote]Originally posted by Thyl Engelhardt:

... A more attractive approach would be multiprocessing through crossbar switches. As you know, Apple is a cofounder of the HyperTransport consortium, which addresses exactly this. There is presently no standard for a HyperTransport connector, but it's rumored to be worked on. A CPU daughterboard could hence be equipped with the CPU(s), a HyperTransport bridge (like a RapidIO-HyperTransport bridge), a HyperTransport connector (e.g. similar to AGP or CompactPCI) and memory slots. It should not be too expensive to manufacture, and would also require less spacious boards. Such HyperTransport connectors would have the additional benefit of providing opportunities to plug in something else, like a Raycer graphics card or a 10GbE adapter.

Thyl[/quote]



The end of page 8 and page 9 of DorsalM's topic "MWNY '02 = Apple's Year" has a very interesting product from Marvell that might make my "Blade Runner" possible even sooner than I thought.



[quote]Originally posted by sc_markt:

Not sure where to post this link, or even if it's relevant, but here it is:

http://www.marvell.com/Internet/News/Show_News_File/1,2410,387,00.html

It's a press release about controllers that incorporate an advanced, high-performance 100 Gbps crossbar switch architecture for G3, G4, and MIPS processors.

- Mark[/quote]





    [quote]Originally posted by wormboy:

My God... this is great news!



    quote:



    "Enabling even more applications to take advantage of Motorola's high-performance PowerPC ISA-based host processors, the new Marvell Discovery II devices provide support for our advanced MPX bus protocol," mentioned Bill Dunnigan, Vice President and General Manager of Motorola's Computing Platform Division. "The combination of Motorola's award-winning MPC74XX processors and these new controllers delivers a high performance, high bandwidth solution with compelling price and power dissipation advantages."





This seems to solve the MPX bus incompatibility with DDR, by offering solutions for both the MPX bus and a DDR 183 MHz memory controller... wow! I am stoked about this one. Great find!...



So current dual-processor machines do in fact use this Marvell controller (the GT-64260 is a Discovery I part number).



    The Discovery II series does in fact offer a controller designed for a single processor PPC based system, as well as dual processor systems. I think we will be getting these controllers on new machines announced at Macworld NY.



Given this, what implications does it have for the overall system specification?



[/quote]



So, does this $99 part make a "Blade Runner" seem more possible?
  • Reply 14 of 25
$49.00 to $99.00... I think Apple could be thought of as a high-enough-volume customer to get that price down to 49 simoleons, or less if they teamed up with IBM to buy in BIG volume. Who knows, though, other than SJ.
  • Reply 15 of 25
aphelion Posts: 736 member
[quote]Originally posted by Da sinister:

$49.00 to $99.00... I think Apple could be thought of as a high-enough-volume customer to get that price down to 49 simoleons, or less if they teamed up with IBM to buy in BIG volume. Who knows, though, other than SJ.[/quote]



Hey, you read that link in its entirety! I figured the $99 price point would be for the one Apple would need to use, but hey, if they went to a UMA II and started making blades out the ying-yang for a ModularMac as well as the BladeRunner, they could bring the price down.
  • Reply 16 of 25
Pic of the New Blade Server: http://www.cleansweepsupply.com/pages/skugroup1405.html



I think Apple is going to black for the pro machines.
  • Reply 17 of 25
othello Posts: 1,054 member
[Laughing]
  • Reply 18 of 25
I thought I'd just copy these in so that we all have a frame of reference as to the relative pros and cons:



[quote]From www.hypertransport.org/doc_faq.htm

Question 11:

At what clock speeds does HyperTransport technology operate?

Answer:

HyperTransport technology devices are designed to operate at multiple clock speeds from 200MHz up to 800MHz, and utilize double data rate technology, transferring two bits of data per clock cycle, for an effective transfer rate of up to 1,600Mb/sec in each direction. Since transfers can occur in both directions simultaneously, an aggregate transfer rate of 6.4 Gigabytes per second in a 16-bit HyperTransport I/O Link and an aggregate transfer rate of 12.8 Gigabytes per second in a 32-bit HyperTransport I/O Link can be achieved. To allow for system design optimization, the clocks of the receive and transmit links may be set at different rates.

Question 25:

What are the differences and similarities between InfiniBand and HyperTransport technology?

Answer:

HyperTransport technology is a chip-to-chip interconnect primarily intended for use on a system board within distances of up to 24 inches. InfiniBand is primarily a box-to-box link and can cover distances of up to 17 meters. InfiniBand can deliver data at rates up to 4 gigabytes per second into a system, and HyperTransport technology can easily transport this data within the system, unlike traditional busses that cannot handle data rates this fast. InfiniBand is not a technology alternative to HyperTransport.[/quote]
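For anyone who wants to sanity-check those HyperTransport figures, here's the arithmetic spelled out - just my own back-of-the-envelope restatement of the FAQ's 800MHz/DDR numbers, nothing official:

[code]
# Back-of-the-envelope check of the HyperTransport FAQ figures quoted above.
# Assumptions (all from the quote): 800 MHz clock, double data rate
# (two transfers per clock), and simultaneous transfers in both directions.

CLOCK_HZ = 800e6
TRANSFERS_PER_CLOCK = 2  # DDR: data moves on both clock edges

def aggregate_gbytes_per_sec(link_width_bits):
    """Aggregate bandwidth (both directions) of one HyperTransport link, in GB/sec."""
    transfers_per_sec = CLOCK_HZ * TRANSFERS_PER_CLOCK        # 1,600M transfers/sec per pin
    one_way_bytes = transfers_per_sec * link_width_bits / 8   # bits -> bytes
    return 2 * one_way_bytes / 1e9                            # both directions

print(aggregate_gbytes_per_sec(16))  # 6.4  -- matches the FAQ's 16-bit figure
print(aggregate_gbytes_per_sec(32))  # 12.8 -- matches the 32-bit figure
[/code]

So the quoted 6.4 and 12.8 GB/sec aggregates check out.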



[quote]From www.rapidio.org/faq

Q: How does the RapidIO technology compare to InfiniBand?

A: InfiniBand is optimized for links between server chassis within an enterprise cluster to form a SAN. In contrast, the RapidIO technology replaces the traditional microprocessor and peripheral bus and is optimized for use between chips on a circuit board. You will likely find RapidIO interconnects co-existing inside InfiniBand subsystems. For example, the RapidIO technology might provide the bridge to PCI-X slots inside a single server. You may also find the RapidIO technology providing concurrency and bandwidth aggregation inside the storage subsystem. We are quite confident that you will find the RapidIO technology inside networking devices that attach to InfiniBand.

Both InfiniBand and the RapidIO technology reach into the card-to-card communications domain. In this domain where we overlap, InfiniBand provides a more abstracted interface to allow complete decoupling of the subsystems. To accomplish this abstraction, InfiniBand requires modification of legacy software, more transistors to implement, and specialized management software.[/quote]



    I'm trying to find some interesting (?) info on Infiniband, which I had but have mislaid.



    However, there are some interesting points here.



    1. If you are looking for a signal to travel more than 24 inches (but more likely 18-20 inches), HyperTransport is not your boy.



    2. I can't find any solid data, but the quote implies that RapidIO probably has a similar mission in life.



3. Neither RapidIO's nor HyperTransport's FAQ mentions the other, which implies a certain sensitivity.



My personal opinion is that, in a high-density blade environment, InfiniBand is probably the interconnect fabric of choice at this time. However, the current roadmap ends at a 32X 4 GByte/sec interconnect, which I feel may be a constraint as time goes on.



Also, I do feel that the 17-meter/55-foot limit will need to be addressed, either by repeater technology or by improving the bus technology; 17 meters can get used up quite quickly when you consider that a rack can be 42U tall and 22 inches wide, and that cables will have to go into false floors, etc.



    That said, using a switched fabric, you could still do something like the following.



Bay 9 racks together, side by side.



Put an InfiniBand fabric switch in the middle rack, alongside two FC-SW-based SAN/RAID controllers, each controlling some 24 shelves x 10 drives x 240GB, so that you have some 56TB hanging off each controller, or 112TB in all.



    24 shelves x 3 U = 72U, which split over 8 racks (4 each side of the central controllers) is 3 shelves/rack or 9U/rack. Leaves 33U in each rack. Take up 12U by putting in 4 x 3U of Blade Servers (Thus getting 16 shelves). Each shelf connected to central InfiniBand switch, but shelf has internal HyperTransport bus running at 12.8GB/sec. Leaves 21U



Each shelf has the ability to accommodate up to 8 x 1.75" blade modules, which could be 2-way low-voltage G5 blades (similar to the chip destined for the 2004 iteration of the PowerBook design, running at 1.4 GHz, with 1GB of memory, plus a small, fast internal HD [e.g. 10GB] to handle virtual memory and the booted system image), + an InfiniBand repeater/shelf controller (2") + 2 x 1.5" power supplies (3") = 19"



Via the configurable shelf controller, the CPU blades can be dynamically partitioned to act as SMP blocks of between 2 and 16 processors, whilst the InfiniBand repeaters will be used to join up to 4 blade shelves with a single 5U I/O expansion chassis, leaving 16U.



Internally, the expansion chassis will use a HyperTransport main bus bridging to up to 12 PCI-X (or the flavour of the day) slots, to provide FireWire, Ethernet (100Mbit/1Gbit/10Gbit), plus specialist cards for data acquisition, telephony interfaces, etc.



The repeater mechanism will be important in reducing latency, by removing the need for such I/O chassis systems to be directly connected to the core InfiniBand switch.



Use up the remaining 16U with 3 x 5U UPS units + 1 x 1U blanking plate.



    The result is an aggregate 256-processor farm, with 256GB of RAM, 96 slots of PCI-X I/O + 112TB of SAN main data storage. Throw in a mechanism for controlling a coherent single-memory image cache and you have a really neat solution that can be configured for any type of scientific, MIS, visualisation, streaming media or rendering problem.



    And the nice thing is you can do it all in an area of approximately 254 square ft in a room about 12 ft. high, excluding tape backup and network infrastructure of course.
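As a footnote, here is a quick sanity check of those aggregate figures, using only the assumptions already stated in this post (with one wrinkle: the 256GB of RAM only works out if the 1GB in the shelf spec is read as per processor rather than per blade):

[code]
# Sanity check of the aggregate figures in the configuration above.
# All counts are taken from the post: 16 blade shelves, 8 blades per shelf,
# 2 CPUs per blade, and two SAN controllers each fronting 24 x 10 x 240GB.

shelves, blades_per_shelf, cpus_per_blade = 16, 8, 2

cpus = shelves * blades_per_shelf * cpus_per_blade   # 256 processors
ram_gb = cpus * 1   # 256 GB, assuming 1GB per processor (1GB per blade would halve this)

gb_per_controller = 24 * 10 * 240                    # 57,600 GB per SAN controller
tb_total = 2 * gb_per_controller / 1024              # ~112 TB across both controllers

shelf_width_in = 8 * 1.75 + 2 + 2 * 1.5              # 19.0 inches: a standard rack width

print(cpus, ram_gb, round(tb_total), shelf_width_in)  # 256 256 112 19.0
[/code]

So the 256-processor, 112TB and 19" shelf figures all hang together.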



    There should also be some location in the set up where you can cook some food.



[ 07-04-2002: Message edited by: Mark- Card Carrying FanaticRealist ]
  • Reply 19 of 25
    aphelionaphelion Posts: 736member
Well, that covers the high end pretty well; the only thing I'd like to add would be some sort of solid-state boot device rather than a hard drive.



But how would these blades do in a workstation with a fast RAID hard drive array and a pro-level video subsystem?
  • Reply 20 of 25
[quote]Originally posted by Aphelion:

Well, that covers the high end pretty well; the only thing I'd like to add would be some sort of solid-state boot device rather than a hard drive.

But how would these blades do in a workstation with a fast RAID hard drive array and a pro-level video subsystem?[/quote]



    I don't think it is just a high-end solution.



The nice thing is the modularity: you could deliver a single 8-blade shelf (with between 8 and 16 processors) with a cut-down, non-expandable version of the I/O expansion chassis - simplistically put, dual InfiniBand interfaces, dual 2Gb/sec Fibre Channel, dual Gigabit Ethernet and dual power supplies, probably in a 2U chassis - and then use either SAN technology (Apple's forthcoming Xserve RAID or a third-party product) or some sort of Mac OS-optimised network-attached storage.



    As for the workstation concept, I'm not sure it works really.



    My middle-aged memory has a dim recollection of some prototype modular Macintosh in the Sculley era, immediately post-Jobs and I vaguely remember seeing a photo of it some years later. It was so ugly that only its mother could love it.



I can't help but feel that a better solution would be to leave the 1-4 processor setups to a standard desktop/tower engineering approach: the weakness of the product would be the level of engineering required for a blade setup, rendering it uncompetitive in cost terms in the real world; conversely, the strength of the product would be the ability to execute processor upgrades.



    Only my opinion, YMMV.