Why not embrace clustering?

in Future Apple Hardware edited January 2014
I know I must be missing something, because it seems so obvious that Apple could really score big by embracing clustering in their pro lines.

Presently, setting up a cluster is a fairly custom job. Apple has a distinct advantage over Wintel offerings in that they could support it in both hardware and in the OS, so threaded apps could automatically take advantage of it.

I would think a pro line with FireWire 800 and auto-clustering would be quite attractive to graphics shops, biotech, and other raw-power users. Want more power? Just buy another PowerMac, hook them together with FireWire, and your apps run almost twice as fast (well, for highly threaded, parallelizable apps).
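That "almost twice as fast" caveat is really Amdahl's law: the serial fraction of an app caps the speedup no matter how many machines you chain together. A quick back-of-envelope sketch (the 95% and 50% parallel fractions are made-up illustrations, not measurements):

```python
def amdahl_speedup(parallel_fraction: float, n_nodes: int) -> float:
    """Amdahl's law: best-case speedup when only part of a job parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_nodes)

# A 95%-parallel render job on two clustered machines: ~1.9x, "almost twice as fast".
print(round(amdahl_speedup(0.95, 2), 2))   # -> 1.9
# A 50%-parallel app barely benefits from a second machine.
print(round(amdahl_speedup(0.50, 2), 2))   # -> 1.33
```

So auto-clustering would pay off for render and compute jobs, and do nearly nothing for a word processor, which is consistent with pitching it at the pro line.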

Such a move would be in line with the switch to dual-processor configs to combat the MHz gap. Part of me hopes Apple continues to lag in processor speed so that they have to do something innovative like this. Then when the G5 hits, man, they would scream!

Is there a good reason not to do it?


  • Reply 1 of 19
    screedscreed Posts: 1,077member
    Who's to say that 10.3 won't allow FireWire 800-enabled PowerMacs to do just that?

  • Reply 2 of 19
    giantgiant Posts: 6,041member
    why not use gigabit ethernet like a normal person?

    [ 01-22-2003: Message edited by: giant ]
  • Reply 3 of 19
    blablablabla Posts: 185member
    Sorry for my ignorance, but what do you need FireWire 800 for when running a cluster? I mean, if you are doing a lot of string matching (or something) for a biotech problem, do you need FireWire 800? Do you need FireWire 400? IMO, no.

    Correct me if I'm wrong, but I don't think I am. Typically you don't sit in the same room as a 40++ node cluster.

    [ 01-22-2003: Message edited by: blabla ]
  • Reply 4 of 19
    airslufairsluf Posts: 1,861member
  • Reply 5 of 19
    [quote]Originally posted by giant:

    <strong>why not use gigabit ethernet like a normal person?</strong><hr></blockquote>

    Oh no, 10 gigabit ethernet!

    *Doing the MOSR dance*

  • Reply 6 of 19
    macroninmacronin Posts: 1,136member
    In line with this, I want to see Apple put out a REAL rendernode...

    Something along the lines of this:

    Eight 1.8GHz PPC970 CPUs

    64GB DDRII SDRAM (Thirty-two 2GB DIMMs)

    Dual 10Gigabit Ethernet ports



    Controlled from a PowerMac workstation (w/XRAID array & dual channel Fibre Channel PCI card); which feeds & collects frames via dual 10Gigabit Ethernet connections...

  • Reply 7 of 19
    So I'm not hearing any argument against embracing clustering. C'mon, there has to be some reason.

    As I understand it, you can cluster computers running at different speeds. Maybe Apple is worried that making OS-level auto-clustering available would slow hardware turnover, since you could milk more life out of old computers.

    I doubt that, though; they have made my dual G4 faster with each OS X update (thanks, Quartz Extreme).
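Mixed-speed clustering works as long as the scheduler weights each node by its relative speed, so the old machines help without becoming the bottleneck. A hypothetical sketch (the node names and MHz figures are invented for illustration):

```python
def split_frames(n_frames: int, node_speeds: dict) -> dict:
    """Divide n_frames among nodes in proportion to their relative speed.

    node_speeds maps a node name to a relative speed (e.g. clock in MHz).
    Frames left over from integer rounding go to the fastest nodes first.
    """
    total = sum(node_speeds.values())
    shares = {name: n_frames * s // total for name, s in node_speeds.items()}
    leftover = n_frames - sum(shares.values())
    for name in sorted(node_speeds, key=node_speeds.get, reverse=True):
        if leftover == 0:
            break
        shares[name] += 1
        leftover -= 1
    return shares

# An old machine still pulls its (small) weight next to a dual 1 GHz box:
print(split_frames(120, {"dual-g4": 2000, "imac": 700, "beige-g3": 300}))
# -> {'dual-g4': 80, 'imac': 28, 'beige-g3': 12}
```

With proportional shares, all nodes finish at roughly the same time, so adding an old G3 never slows the cluster down; it just contributes fewer frames.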
  • Reply 8 of 19
    amorphamorph Posts: 7,112member
    [quote]Originally posted by MacRonin:

    <strong>In line with this, I want to see Apple put out a REAL rendernode...

    Something along the lines of this:

    Eight 1.8GHz PPC970 CPUs

    64GB DDRII SDRAM (Thirty-two 2GB DIMMs)

    Dual 10Gigabit Ethernet ports</strong><hr></blockquote>




    You want that to be a node?

    This must be for real-time rendering of an accurate-to-the-quark simulation of the Big Bang as a QuickTime VR movie, right?

    As far as I'm concerned, the whole advantage to clustering is that you can get a bunch of DP machines and add them up to get something like that on a much lower budget.

    As for clustering support, it's been rumored that Apple has been hard at work on kernel-level clustering for OS X Server for some time now. There really aren't any arguments against it; there are arguments in favor, namely the AppleSeed project and the eminently clusterable Xserve.

    Not to even mention Rendezvous. Clusters, especially of the Beowulf variety, are a pain to set up. A cluster that built itself as you plugged in Ethernet jacks would be the be-all and end-all of Apple ease of use. Heck, you could even run a wireless cluster over AirPort Extreme. The bandwidth would be choked somewhat, but that's not a problem for some applications. Besides, the idea of a cluster that you could add to just by dropping a computer in the room and turning it on is just too cool.
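Rendezvous is multicast DNS under the hood; a toy stand-in using plain UDP on the loopback interface shows the shape of a cluster that discovers its own nodes. (The node name and the JSON announcement format here are invented for the sketch, not anything Apple ships.)

```python
import json
import socket

def announce(port: int, name: str, cpus: int) -> None:
    """A node announces itself (Rendezvous-style, but plain UDP for the sketch)."""
    msg = json.dumps({"name": name, "cpus": cpus}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, ("127.0.0.1", port))

def listen_for_one_node(sock: socket.socket) -> dict:
    """The master learns about a node with no manual configuration."""
    data, _addr = sock.recvfrom(1024)
    return json.loads(data)

# The master binds first; then a node drops onto the "network" and is discovered.
master = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
master.bind(("127.0.0.1", 0))        # OS picks a free port
port = master.getsockname()[1]
announce(port, "powermac-g4", cpus=2)
node = listen_for_one_node(master)
master.close()
print(node)                          # -> {'name': 'powermac-g4', 'cpus': 2}
```

A real implementation would browse mDNS service records instead of hand-rolling UDP, but the effect is the same: plug the machine in, and the master sees it.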
  • Reply 9 of 19
    costiquecostique Posts: 1,084member
    [quote]Originally posted by Nordstrodamus:

    <strong>So I'm not hearing any argument against embracing clustering. C'mon, there has to be some reason.</strong><hr></blockquote>

    So, you want a reason, don't you? While unquestionably possible, it may be difficult to find clients. People are either already using some clustering solution (so you have to make them switch and write off what they've already invested), don't really need it (so you have to convince them they do, and the RDF just may not prove to be enough), or cannot afford it (so you have to offer something cheap and powerful). And how many clients are there in the world? How large is the market in USD? Is it all worth the effort?
  • Reply 10 of 19
    henriokhenriok Posts: 537member
    I work in an advertising firm with about 80 Macs in all flavours, from a beige desktop G3 through dual 1 GHz machines. We have two guys doing 3D rendering all day, and with 90% of the workforce just typing in AppleWorks, mailing, or surfing, we have _a_lot_ of unused computing power for our 3D guys to use... if they could.

    We're talking about buying a "cheap" home-built AMD-based render farm, but it's not really what we want to do. If we could use our Macs as transparent render slaves, we could also justify buying new Macs for people who don't really need one. But our 3D guys just can't get enough.
  • Reply 11 of 19
    macroninmacronin Posts: 1,136member
    Get them a rack loaded with my RenderNodes...

    Amorph, I don't like the idea of using Xserves as a render farm; too much waste...

    I don't need USB/FireWire/serial ports, I don't need PCI slots, I don't want a graphics connection... I don't need HDDs on every node. I just need boxes that NetBoot from a control workstation (RenderWrangler), take frame assignments from the same machine, pull them into RAM, process on a bunch of 970s, and dump the finished frame back to the RenderWrangler machine (which has an XRAID attached for all that data)...

    So, CPUs, RAM, controller chips, Ethernet...

    The control workstation could just as easily be the same workstation I use for Maya/Shake/FCP...

    Would add a whole new definition of 'rendering in the background'...!

  • Reply 12 of 19
    costiquecostique Posts: 1,084member
    We won't have a clustering solution from Apple until their marketing staff figures out how to explain the concept.
  • Reply 13 of 19
    ed m.ed m. Posts: 222member
    Amorph mentioned Project: AppleSeed. VERY nice setup.

    My friend Dean Dauger offers this product:

    <a href="http://www.daugerresearch.com/" target="_blank">http://www.daugerresearch.com/</a>

    it's what AppleSeed is built on.
  • Reply 14 of 19
    JPL uses 33 XServes for clustering <a href="http://daugerresearch.com/pr/JPLXServeCluster.html" target="_blank">linky</a>

    - excerpty -

    [quote] The Applied Cluster Computing Group (formerly known as the High-Performance Computing Group) at NASA's Jet Propulsion Laboratory (JPL) recently acquired 33 XServes for the purpose of using them as a parallel computing cluster. Using Pooch, provided by Dauger Research, the JPL group has begun running parallel computing code on their new XServe cluster.<hr></blockquote>


    Pooch, their clustering software, is available for download and your own clustering demo <a href="http://daugerresearch.com/pooch/whatis.html" target="_blank">here</a>
  • Reply 15 of 19
    Yeah, it will be all about clustering... in time.

    Think Zilla

    For those who don't know Zilla, I'll give you a brief rundown of what it was on the NeXT/OpenSTEP machines:

    Zilla is an OS-level clustering application: it lets any application use the Zilla framework to create and manage remote nodes [machines in a cluster], rather than home-brew its own distributed code.

    With Zilla coming out in a near-future OS release [don't think 10.3... too soon], coupled with Rendezvous, we will see some pretty interesting situations where all potential nodes show themselves on a network, without the master node having to link the remote nodes manually.
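The master/worker pattern a Zilla-style framework would hide from the application can be sketched in miniature. Here local threads stand in for remote nodes, and `render_frame` is a made-up placeholder for real per-frame work:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame: int) -> str:
    """Stand-in for the per-frame work a remote node would perform."""
    return f"frame-{frame:04d}.tiff"

def render_scene(n_frames: int, n_nodes: int) -> list:
    """The master farms frames out to nodes and collects results in order.

    In a real Zilla/Rendezvous setup, each frame would be shipped to a
    discovered node; the framework, not the application, would handle the
    distribution and collection.
    """
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        return list(pool.map(render_frame, range(n_frames)))

print(render_scene(3, n_nodes=2))
# -> ['frame-0000.tiff', 'frame-0001.tiff', 'frame-0002.tiff']
```

The point of an OS-level framework is exactly this shape: the application only writes `render_frame`, and swapping two local threads for forty discovered machines changes nothing in the application code.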

    Couple this with RenderMan 11, and you start getting some pretty cool stuff. Remember, folks, the latest and greatest graphics cards are programmable. This will allow for hardware-accelerated RenderMan... With a few nodes and GF3-or-better GPUs, we could see Toy Story in realtime.

    The future's so bright, I gotta wear shades. :cool:
  • Reply 16 of 19
    cliveclive Posts: 720member
    [quote]Originally posted by AirSluf:

    <strong>With higher data throughput you can do cooler things....FW800 is daisy-chainable...</strong><hr></blockquote>

    Err, but Gbit ethernet is actually 25% faster, and available on a much wider range of machines.
  • Reply 17 of 19
    cliveclive Posts: 720member
    [quote]Originally posted by Henriok:

    <strong>If we could use our Macs as transparent render slaves...</strong><hr></blockquote>

    It's been a while since I've done any of that stuff in anger, but doesn't the 3D software have built-in "clustering" ability?

    I'm sure that was in things like Strata years and years ago.
  • Reply 18 of 19
    henriokhenriok Posts: 537member
    [quote]Originally posted by Clive:

    <strong>It's been a while since I've done any of that stuff in anger, but doesn't the 3D software have built-in "clustering" ability?</strong><hr></blockquote>They all do, but they're a pain to set up and manage. We do set up quite a large farm on weekends, but every node must have the render client installed, then be initialized and configured manually, so it's not quite that transparent..
  • Reply 19 of 19
    airslufairsluf Posts: 1,861member