Apple XWorkstation: a scalable desktop supercomputer.

Posted in Future Apple Hardware, edited January 2014
This is a bit of conjecture.



Traditional PC-style hardware design, the sort that gives the user some options to upgrade, has been based on a common theme: a large metal box holding a motherboard, with upgrades made by plugging cards into slots.



Imagine Apple thinking a bit differently: taking advantage of the XGrid software, a radical approach to engineering, and perhaps the Cell processor...



Imagine a pile of Mac Minis networked together. They would make a fairly powerful little desktop render farm. Good, but not ideally suited to such work: the disk drives are too slow, the network interface isn't good enough, and the CPUs aren't quite there yet.



Now imagine a PRO version of the same hardware, with a mini-rackmount approach. The base module would need a USB keyboard, a boot drive, etc.

Expansion of processing capacity would simply involve stacking on more CPU modules. GPU modules might be different. Each unit would hook up over a fast micro-network - perhaps Gigabit Ethernet, perhaps something faster.



For scientific, 3D graphics, rendering, and compositing applications, such a machine would be unique and perfect. Individuals heading into crunch time could be loaned a few more CPU modules.



Such a machine would arguably be the fastest workstation in the world.



It would also change the way users buy and upgrade, as new CPUs could be added and older ones retired, all without cracking open a case.



The status of a user might be indicated by the height of his stack :-).

The base price of the system would be lower than that of current PowerMacs, but there would be no upper limit on the power (or the price) of the system.



Opinions?



Carni

Comments

  • Reply 1 of 40
    PM Aphelion on this... we had a thread back a few weeks with stuff like this... Aph? Where are you????
  • Reply 2 of 40
    Quote:

    Originally posted by sunilraman

    PM Aphelion on this... we had a thread back a few weeks with stuff like this... Aph? Where are you????



    Yes, found the thread - definitely a bit of overlap - although where I see this going is replacing, or eclipsing, the PowerMac line, rather than being something at the consumer end.



    I can't see a 10xG5 stack coming in for less than $5000



    Carni
  • Reply 3 of 40
    beatlejuice ... beatlejuice ... beatlejuice ...



    Blade Runner ~ Modular Powermac



    Perhaps the time and technology has come for this concept.



    IBM has opened up the specifications for their BladeCenter chassis. IBM already makes a JS20 970-based blade for use in the BladeCenter.



    All Apple needs to do is create a blade design which incorporates the open standards for the IBM BladeCenter chassis. Since IBM manufactures and provides the chassis, no further R&D is needed to make a place to put them.



    On its own, an Apple blade makes perfect sense for high-density applications. But what if this same design would also fit in a vertical Apple chassis, a modular PowerMac? With PCI-X slots and space for a RAID array of SATA drives, this could be the Xstation, scalable by simply adding more blades.





  • Reply 4 of 40
    cubist Posts: 954
    The Mac Mini is designed to be a cheap standalone. A stackable blade will be designed differently and will cost substantially more - not as much as an Xserve, but not much less.
  • Reply 5 of 40
    slughead Posts: 1,169
    Why not just use a powermac's internals more efficiently?



    If the PowerMac used fans effectively, and allowed you to use more than 2 hard drives, at least 2 optical drives, and at least 4 PCI slots (even if only two 12" cards), it would be a world-class workstation.



    I think of it as a workstation as it is now, as there are plenty of workstations that have laughable expandability... hell, I've been using a Sun as a footrest for a year now.
  • Reply 6 of 40
    onlooker Posts: 5,252
    I have to somewhat agree with the last poster. Putting a bunch of "Mac Mini like" pieces of hardware together, #1, does not sound like an Apple solution. It sounds more like a mess. Better use of space in the PowerMac should make it the machine to accommodate workstation use. Anything over that for typical single-workstation use would/should use cluster nodes, with the PM utilizing Fibre Channel cards.

    The ultra-high-end blade system is out of reach for me, and everybody in this forum I imagine, but it could be used by scientists, render farms, and the like, I guess.

    Speculation: a revision to the Xserve could be a special-order blade Mac in the future, but I can't imagine them selling too many of them. They will probably continue to sell buttloads of Xserve RAIDs, as they are freaking awesome and excellently priced.
  • Reply 7 of 40
    Quote:

    Originally posted by onlooker

    I have to somewhat agree with the last poster. Putting a bunch of "Mac Mini like" pieces of hardware together, #1, does not sound like an Apple solution. It sounds more like a mess. Better use of space in the PowerMac should make it the machine to accommodate workstation use. Anything over that for typical single-workstation use would/should use cluster nodes, with the PM utilizing Fibre Channel cards.

    The ultra-high-end blade system is out of reach for me, and everybody in this forum I imagine, but it could be used by scientists, render farms, and the like, I guess.

    Speculation: a revision to the Xserve could be a special-order blade Mac in the future, but I can't imagine them selling too many of them. They will probably continue to sell buttloads of Xserve RAIDs, as they are freaking awesome and excellently priced.




    The problem with the big tin-box approach is that the slots do not provide real expansion potential any more. They don't future-proof the system.



    You can't add internal processor power to your system.

    You can only add so much ram.

    You can't upgrade to a new class of processor without scrapping one machine and buying another.



    Imagine how attractive it would be to buy a single 2.5GHz now - add another 2.5GHz in six months and then plug in a 3.0GHz six months after that. Such flexibility is not feasible with the tin-box approach, but with a desktop cluster it would be.



    If a base system with this stackable configuration were priced similarly to a same-spec tin-box PowerMac, which would you buy?



    I disagree about this not being an Apple-like solution. With a beautiful module design and clever connectors, this machine would make Intel workstations look stone-aged. Moreover, it is important to note that XP as an OS simply couldn't take advantage of adding CPUs like this.



    There is one fly in the ointment, which is the prodigious cooling requirement of the 970. Until a processor of that class can be shoehorned into something an inch or two high, this isn't going to happen.



    Carni
  • Reply 8 of 40
    I think some of you are missing the point of a blade. A blade is a complete computer that fits in a slot in a chassis. IBM already makes such a chassis, and they have published the specifications for third parties to create blades that conform to its form factor. It was a MAJOR thing for IBM to open up their BladeCenter chassis.



    IBM already makes the JS20, a 970-based blade. Apple could just buy them from IBM and load OS X on them.



    My idea is to use that blade design and build an xStation, which would be a vertical case that accepts these blades. Say you buy it with two 2 GHz 970FX blades, but then Apple releases a 3 GHz 980 (next-generation) blade. Just buy one of those and add it to your modular tower. Then say IBM develops a Cell-based blade. Buy one of those and add it to the two blades you have in your modular Mac. Repeat as needed.



    That's the "Blade Runner ~ Modular Mac"
  • Reply 9 of 40
    I think what Carniphage is talking about is a system like the hard drives in the Xserve, but with processors and graphics cards instead. Is that what you are thinking?



    Macaddict16
  • Reply 10 of 40
    onlooker Posts: 5,252
    Quote:

    Originally posted by Macaddict16

    I think what Carniphage is talking about is a system like the hard drives in the Xserve, but with processors and graphics cards instead. Is that what you are thinking?



    Macaddict16




    I'm pretty sure that's what he's thinking, but it would cannibalize just about everything Apple has. That's why I think it sounds more like something you see PC startups do straight out of school. It's a good idea, and I've seen it before, but Apple wouldn't do it.
  • Reply 11 of 40
    slughead Posts: 1,169
    Quote:

    Originally posted by Carniphage

    Imagine how attractive it would be to buy a single 2.5GHz now - add another 2.5GHz in six months and then plug in a 3.0GHz six months after that. Such flexibility is not feasible with the tin-box approach, but with a desktop cluster it would be.



    OK, now imagine how great it would be to just upgrade the processor... oh wait, Apple doesn't let you.



    I hardly see the efficiency of adding more computers instead of upgrading the one you have.



    You also know that adding another computer to a cluster doesn't add a linear amount of power, right?



    The more computers you have in a cluster, the less efficient the cluster becomes. (note that efficiency and efficacy are DIFFERENT)



    A much smarter way of doing things is the SLI method for GPUs and standard sockets for CPUs (like the PC world uses).



    My friend bought an Opteron system that supports dual processors, but he only bought one processor. Next year, if he wants to upgrade, he can just add another, or exchange the proc he has for something faster.



    He can keep his case, mobo, RAM, PSU, drives, AGP/PCI cards, etc. That's what efficiency is, not buying a whole damn new computer and putting it in some slot.
  • Reply 12 of 40
    Quote:

    You also know that adding another computer to a cluster doesn't add a linear amount of power, right?



    The more computers you have in a cluster, the less efficient the cluster becomes. (note that efficiency and efficacy are DIFFERENT)




    (tell that to Virginia Tech)



    It depends on what you are doing.



    If you are sitting in Pixar, rendering hair with Renderman - then actually the task scales well onto multiple CPUs.

    If you are sitting at Weta, compositing in Shake, then that scales pretty much linearly too.

    In fact almost *all* power hungry applications benefit splendidly from multiple CPUs. These applications actually prefer it if the processors *do not* share memory.



    There are things that don't scale well - straight-line applications like games.

    And yes a lot of productivity applications don't scale that well either. But frankly, a single 1.5GHz G4 is adequate for most productivity apps (apart from Word)



    My interest is in workstation performance: cramming as much usable CPU & GPU power as possible into a compact format. In case no one has noticed, the megahertz number on the CPUs is not going up as fast as it used to. So the only way to realize more power is more processors. Wintel machines cannot go down this path as easily as Apple. XGrid and Tiger pave the path for Apple.



    Rumour has it that Apple is working on a workstation class machine. I could be all wrong about this. So what do others see as the likely configuration of an X-Station?



    Carni
  • Reply 13 of 40
    dobby Posts: 797
    I would say the Dual 2.5 G5 is a perfect example of a workstation-class machine. Very powerful yet still scalable (memory/HD/better GPU).

    Could you better this without increasing the cost and complexity?



    You don't want to better the G5 cos you can't without incurring huge cost.

    You need more apps for XGrid and the network to handle it.

    10 or 100 Gigabit from your G5 to XGrid would allow for much faster data transfer and allow HUGE apps to run over it.

    This would allow multiple people to benefit from the performance.



    Dobby.
  • Reply 14 of 40
    onlooker Posts: 5,252
    Quote:

    Originally posted by dobby

    I would say the Dual 2.5 G5 is a perfect example of a workstation-class machine. Very powerful yet still scalable (memory/HD/better GPU).

    Could you better this without increasing the cost and complexity?



    You don't want to better the G5 cos you can't without incurring huge cost.

    You need more apps for XGrid and the network to handle it.

    10 or 100 Gigabit from your G5 to XGrid would allow for much faster data transfer and allow HUGE apps to run over it.

    This would allow multiple people to benefit from the performance.



    Dobby.




    I guess it could be, depending on what you do, but it's not made for, or capable of, doing everything with respectable performance like a BOXX or an Alienware workstation-class computer.
  • Reply 15 of 40
    dobby Posts: 797
    Is a dual Intel or AMD BOXX really that much faster than a dual G5? Depends on the software, I suppose. But I doubt it would be that much faster (they seem very well priced, though).



    Dobby.
  • Reply 16 of 40
    slughead Posts: 1,169
    Quote:

    Originally posted by Carniphage

    It depends on what you are doing.



    If you are sitting in Pixar, rendering hair with Renderman - then actually the task scales well onto multiple CPUs.

    If you are sitting at Weta, compositing in Shake, then that scales pretty much linearly too.

    In fact almost *all* power hungry applications benefit splendidly from multiple CPUs. These applications actually prefer it if the processors *do not* share memory.




    You're missing the point.



    I have a 10-node cluster on my left and a 1000-node cluster on my right. Is the 1000-node cluster 100 times faster at ANY application? The answer is no. In fact, it's probably only 50-75 times faster at best.



    Yes, some tasks were built for clusters, but no, clusters are not efficient.



    Yes, clusters are necessary for many tasks, but no, they aren't better than a single, multithreaded CPU with the same theoretical FLOPS, unless, of course, your program is designed to handicap them or the mainboard can't handle it.



    Generally speaking, in multiple processor systems, the farther apart the processors are physically, the more inefficient the system is. That's not to say that they aren't effective or necessary, just not efficient.



    That's why dual cores and cells are so nifty.
  • Reply 17 of 40
    hobbit Posts: 532
    Quote:

    Originally posted by dobby

    Is a dual Intel or AMD BOXX really that much faster than a dual G5? Depends on the software, I suppose. But I doubt it would be that much faster (they seem very well priced, though).



    I can answer this question from Alias Maya's point of view.





    The problem is not that a G5 in itself is slower. If anything it is likely a bit faster in some calculations.



    However, the real problem is that code can be (and is) highly optimized in certain applications, where it makes sense to eke out every ounce of performance. And a renderer/raytracer is such an application.



    And it is here where the Mac loses out.



    Tweaking code takes a lot of time.



    And time = money.



    Hence tweaking is only done where it makes most business sense.



    And in Maya's terms, that means Intel and Opteron processors. While the Mac has a respectable 20-25% market share of new Maya licenses sold, it's not 75-80%...





    I've been told by Alias sales people time and again that Alias (and Mental Images) invest a lot of money tweaking performance on Intel/AMD CPUs (actually more on Opteron than Pentium/Xeon) - while not nearly as much on G4/G5 processors.



    This is the only reason why Maya is slower on Macs. And unfortunately the difference is quite a bit. From my personal timings we're talking 20-40% slower when dealing with high-end G5s vs. high-end Opterons. And that is a lot when waiting for renders to finish. While the Mac is a very nice platform for Maya as the OS goes - in regards to render time it's the wrong choice. (Also the lack of a pro graphics card doesn't help.)



    But to be fair, Alias does tweak Maya on the Mac constantly, at least a bit, and so far every version of Maya for the Mac has been faster than the previous one.
  • Reply 18 of 40
    onlooker Posts: 5,252
    Quote:

    Originally posted by hobBIT

    I can answer this question from Alias Maya's point of view.









    It has more to do with the GPU than the CPU. Also, the chassis itself is hardly a match comparatively. All things 3D are sitting peacefully in PC land. As far as 3D workstations are concerned, the PC world sets the standards. Apple has not yet tried to meet them, and I doubt they will.
  • Reply 19 of 40
    dobby Posts: 797
    So Apple making an XWorkstation (as per this thread) is pretty pointless if software developers aren't even taking the time to write/optimize existing software for the G5 (with or without AltiVec), although it is superior hardware.

    Quite sad really.



    Dobby.
  • Reply 20 of 40
    Quote:

    Originally posted by onlooker

    It has more to do with the GPU than the CPU.



    Depends on what you're using for a renderer. Most people use Mental Ray because it's the most accurate, and that's done entirely on the CPU. The GPU's pretty much only useful in preview mode (unless you're Pixar and can afford the technical people who can coordinate movie-quality hardware rasterization).



    Quote:

    Originally posted by dobby

    if software developers aren't even taking the time to write/optimize existing software for the G5 (with or without AltiVec), although it is superior hardware.



    Apple has been working to put some auto-vectorization into the GCC compiler, so that developers wouldn't *have* to do this. I have no idea how effective this is in reality, though.