Mac clusters?

Posted in Future Apple Hardware, edited January 2014
I was wondering if a consumer-level, consumer-priced Mac cluster would be technically feasible. Let me explain: imagine a cluster that would be as easy to build and maintain as a LEGO set (or even easier). Should you need to increase the speed of your system, just buy a new node for, say, $400. No need to buy a whole new Mac, back up your data, sell the old computer, etc. Should you temporarily need to use two computers instead of one at home, just unplug some of the nodes of your cluster to create two smaller clusters.



I don't see Apple going for that kind of thing, even though the Virginia Tech cluster is very publicized and technologies such as xGrid exist at Apple. Anyway, what do you think of this idea?

Comments

  • Reply 1 of 19
    beigeuser Posts: 371 member
    I believe that Apple has already announced that most of its pro apps will have clustering support, which means that consumer clustering is not that far behind.



    But don't expect your processing speed to double just because you doubled the number of computers. It doesn't work that way.
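
    A rough sketch of why, assuming a hypothetical job where only a fraction of the work can be split across nodes (Amdahl's law):

        # Amdahl's law: speedup from n nodes when only a fraction p
        # of the job can run in parallel (p and n are assumptions).
        def speedup(p, n):
            return 1.0 / ((1.0 - p) + p / n)

        print(speedup(0.8, 2))  # ~1.67x with 2 nodes, not 2x
        print(speedup(0.8, 8))  # ~3.33x: each extra node pays off less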
  • Reply 2 of 19
    Quote:

    Originally posted by BeigeUser

    But don't expect your processing speed to double just because you doubled the number of computers. It doesn't work that way.



    Double? Of course not. But in certain areas (image processing, algorithms running on 3D voxel spaces, etc.), clusters offer a very nice speedup over single machines.
  • Reply 3 of 19
    beigeuser Posts: 371 member
    Hey buddy. I just realized that you are writing from the same country that I'm in.



    Anyway, the key limiting factor for consumer clustering is network speed. Virginia Tech uses fiber optics to connect all the computers. Power Macs use Gigabit ethernet. Consumer Macs only have 100Mbps ethernet (or 10Mbps ethernet for older Macs). If you have a wireless 802.11b network, the speed can be less than 4Mbps.



    You've probably read how the G4's memory bandwidth is too narrow and how it limits performance. Well, ethernet is much, much slower than the G4's bandwidth. So imagine how slow it would be to process anything over an ethernet network (see the quick numbers at the end of this post).



    I'm not saying that clustering is useless for the home. As you can tell by the success of distributed computing (which works even with dial-up connections), some calculations can still benefit from clustering regardless of connection speed.



    Clustering will benefit only certain types of operations. Most consumer apps will see no improvement. But pro-apps will run 10-160% faster depending on the app.



    BTW, you'd probably want to cluster only those computers with Gigabit ethernet or faster (when xGrid goes mainstream).
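
    To put numbers on it, here's a back-of-the-envelope sketch (the 100 MB payload is an assumption, and it ignores protocol overhead):

        # Idealized time to move a 100 MB working set over each link.
        payload_megabits = 100 * 8  # 100 MB = 800 megabits
        for name, mbps in [("10Mbps ethernet", 10),
                           ("100Mbps ethernet", 100),
                           ("Gigabit ethernet", 1000),
                           ("802.11b in practice", 4)]:
            print(f"{name}: {payload_megabits / mbps:.1f} s")
        # 80 s, 8 s, 0.8 s, 200 s -- versus roughly 0.1 s over a
        # ~1 GB/s memory bus, which is why the network is the bottleneck.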
  • Reply 4 of 19
    Quote:

    Originally posted by BeigeUser

    Hey buddy. I just realized that you are writing from the same country that I'm in.





    Indeed! I just realized that too! I'm French, currently an intern at Kyudai.

    Anyway, I have no idea how much Gigabit ethernet chips cost, but I'm sure they're not that expensive, right? And I'm sure Gigabit ethernet should be okay for home clustering. For example, for grid-based image processing, imagine you have 4 cluster nodes and one huge image. Just split the image into 4 quarters, then send each quarter to one node, and there's no need to send much data during the processing phase (see the sketch at the end of this post)...

    For games, each node could render one quarter of the screen, etc...
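
    For the image case, something like this, with a made-up send_to_node() standing in for whatever xGrid actually exposes (the node list is hypothetical too):

        # Split one big image (a list of pixel rows) into four tiles,
        # one per node. Only the tiles cross the network, once each.
        def split_into_quarters(image, width, height):
            half_w, half_h = width // 2, height // 2
            tiles = []
            for row in (0, half_h):
                for col in (0, half_w):
                    tiles.append([image[y][col:col + half_w]
                                  for y in range(row, row + half_h)])
            return tiles

        # for node, tile in zip(nodes, split_into_quarters(img, w, h)):
        #     send_to_node(node, tile)  # each node filters its quarter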
  • Reply 5 of 19
    tedndi Posts: 1,921 member
    What about using an S/PDIF cable to link the blocks in the cluster? If you look at the AirPort Express, there is an optical out. Could this be used with some version of 802.11x to link the nodes? Or could the optical out be used to stream video into the rumored monitors?



    Apple has always done things that are way ahead of the curve. This is the 20th year of the Mac.



    Remember why 1984 was not like 1984? Steve and the gang at Apple will not let this year pass without changing the way we compute! There are too many bragging rights at stake for the next 20 years.



    Photoshop, Word, and DVD authoring were not even applications, or even possible, when the original Mac came out. The reason that Mac fans are cultish and loyal is that we have seen the possibilities and have witnessed how easily a powerful computer with the right software can change the way you work or think. Windows XP is still just about what Mac OS 6.x was.



    Whatever comes out may not be perfect (remember, it took 20 years for OS X to be released).



    Steve J. loves this company. It is in his DNA: he started it building kit computers "for the rest of us".



    The 20th year will not pass without a paradigm shift in perception and potential.



    Not faith, just experience.
  • Reply 6 of 19
    beigeuser Posts: 371 member
    Quote:

    Originally posted by TednDi

    What about using an S/PDIF cable to link the blocks in the cluster?



    I don't know if S/PDIF follows network protocols. I'm not an expert on that connection, but from what I can tell, S/PDIF is mainly a one-way connection to one particular device.
  • Reply 7 of 19
    beigeuser Posts: 371 member
    Quote:

    Originally posted by The One to Rescue

    For games, each node could render one quarter of the screen, etc...



    Since games need to render the screen at 30 frames per second, I'd assume that the network transfer needs to be really fast to accommodate the screen redraws. I'm not sure if even Gigabit ethernet is fast enough (see the quick math at the end of this post).



    But I suppose the second computer could compute the AI and other details while the main computer focuses on rendering. Of course, that will require a complete rewrite of the software, and until the Mac increases its market share, we will have to live with direct Windows ports that have no clustering support.



    Mac games cannot benefit from clustering until Windows supports game clustering, at which point Dell will claim that they invented it. So basically it's another Mac technology that will go unrecognized by the average consumer.
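
    The quick math I had in mind, assuming an uncompressed 1024x768 screen at 32-bit color (both assumptions):

        # Bandwidth to ship one quarter of the screen, raw, at 30 fps.
        width, height, bytes_per_pixel, fps = 1024, 768, 4, 30
        quarter_bytes = width * height * bytes_per_pixel // 4
        mbps = quarter_bytes * fps * 8 / 1e6
        print(f"{mbps:.0f} Mbps per node")  # ~189 Mbps; four nodes
        # sending at once is ~755 Mbps, which saturates Gigabit.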
  • Reply 8 of 19
    tedndi Posts: 1,921 member
    Quote:

    Originally posted by BeigeUser

    I don't know if S/PDIF follows network protocols. I'm not an expert on that connection, but from what I can tell, S/PDIF is mainly a one-way connection to one particular device.



    Since when has Apple followed the other guys? 802.11x can handle the network traffic.



    The optical port (perhaps not S/PDIF) could be a hidden fiber-optic port with two-way communication. The Big Mac cluster uses fiber-optic connections between the G5s; slow it down a bit and you might have a usable consumer cluster!!



    Imagine the applications that HAVEN'T been written yet for the advanced OS with clustering. GAMES, 3D, advanced AI, holography? Who knows? 1984 was the year of the Macintosh; in 2004, 20 years later, Apple must do something completely different.



    Apple has always been about being on or ahead of the cutting edge. Hence the lawsuits years ago against M$ for copying the Mac look and feel; in exchange, M$ agreed to provide Mac support in its applications. If Apple announces the clustering and files the patent on the technology on the same day, then it is light-years ahead of the Windows crowd. M$ will either have to license the technology or reverse-engineer it and face a prepared Apple legal team using clusters to do legal research!



    Change the paradigm. What couldn't you do with a supercomputer? Especially one that has the same look and feel as the computer that you are using.



    Every cutting-edge gamer will develop for the Mac platform, and as we all know, games drive the industry. Every gamer will go, or want to go, and get a Mac. Imagine the LAN parties where every node in the LAN is part of the cluster. Imagine the games then! Add VR headsets (yes, you will have the processing power then) and you have immersive gaming.



    ooh!!!



    cool as hell.



    Yes, there are no apps currently developed...



    hence WWDC!!!!!!



    Go forth and DEVELOP, says STEVE!



    NOTE: I know nothing, but I am willing to think big thoughts and evangelize if Apple is willing to pay me!! Hint hint!!
  • Reply 9 of 19
    gabid Posts: 477 member
    Great minds think alike. Here is a thread I started last year with a similar idea. Don't worry, I'm not trying to kill this thread. I just thought you might like to see what some people made of the idea last year.



    And all of this talk of Japan makes me miss it! I was only there for a couple of weeks last summer, but I really fell in love with the place.
  • Reply 10 of 19
    tedndi Posts: 1,921 member
    YES YES YES



    Now, after reading that thread (moderators, could you paste in the cited thread before this one so that we can keep this discussion going?)...



    OK, add Remote Desktop 2 and your PowerBook can be anywhere, reading e-mail or whatever. When you connect to the net using 802.11g, the AirPort Express, or a wired connection, you can access your cluster/grid and remotely run the HUGE processor- and memory-intensive processes, with the @home cluster serving the finished content to your (low-powered) PowerBook.



    Steve said 3GHz by summer; he didn't say how that was to be achieved (four 1GHz computers?).



    ooh chills are going down my spine!!



    Plug in any old Mac and get a boost!!



    eBay sales of old Macs will rocket, and Apple will be selling new units out the door.



    2 years from now....



    On a software box: minimum requirements, a 4-node cluster.



    So, you can buy two slower-processor units or the new kick-ass 4-processor unit, depending upon your requirements.



    The only problem is heat and electricity. But if the units sleep when not in direct use (i.e., intelligent energy usage), that's not difficult.



    Home automation, massive high-def content, iHologram???



    Remember, the way M$ made all that cash is by having the OS that business ran, not the best OS. If Apple can convince business (banks, financial houses, big business) that they don't need a new supercomputer, they can buy all new desktops and get MORE power.



    Virginia Tech built the #3 supercomputer on the planet for $5 million. That is cost savings!!!!



    Industry and IT pros will stand up and take notice!!



    Oh, and by the way, it can also run Windows using Virtual PC, so your old apps (Office, etc.) will still be available.



    You want a 21st-century computer and OS? Buy Apple.



    or you can stick with last century's technology....



    The GHz race is over: Mac wins!!



    kick ASS!
  • Reply 11 of 19
    ipodandimac Posts: 3,273 member
    A while back in my MUG we tested xGrid at Purdue, and we were crunching DNA sequences at 18GHz! It was awesome, and I meant to start a thread about it, but I didn't have time lol (it was gonna be long). Basically, this IS the way of the future. There's nothing cooler than standing in the middle of a room on AirPort with a PowerBook running at 18GHz.



    The potential is endless, and I think computing in the future will evolve into having a PDA-like device that is always tapped into some cluster, allowing tons of processing power no matter the location. All you need to do is plug in your huge display (for video editing or whatever) and use the PDA-like thing as the "tower."



    And you know what, in 10 years CNN will report this as "revolutionary" when some PC company STARTS thinking about how to use an xGrid-style app.
  • Reply 12 of 19
    tedndi Posts: 1,921 member
    I think that is why Virginia Tech got the first 1,100 dual G5s. Apple is on the verge of redefining the industry.

    Steve doesn't like to be last. Steve wants to be first: first to deliver and first in the industry.



    WWDC will have a ton of surprises. All of the hardware will be out of the way before the conference; that leaves his keynote ONLY for the software.



    Oh, and it would help if he had a really cool game up his sleeve to start clustering with. Perhaps the next version of Half-Life or Serious Sam, cluster-enabled.



    LAN parties will drive the sales.



    I wonder if they will later release the client for Windows.



    Use the crappy Intel chips only for adding more computational power? Prob not.



    I had another thought: could one dedicated cluster G5 handle the entire video process? Another could handle the program, etc.?



    please say yes!
  • Reply 13 of 19
    beigeuser Posts: 371 member
    Sorry, TednDi. Developers don't make games for a platform because the platform is cool or superior. They make games for machines that can make them more money. Macs have games now because there are enough Macs out there to justify the cost of porting. But developers don't think that there are enough units to justify developing a whole new game engine just for the Mac.



    For example, Macs have had dual processors and AltiVec for a long time now. But developers use that technology only as an afterthought to provide a teeny bit of optimization. Nobody reprograms their games to take full advantage of Apple-only technologies. The same will happen with clustering.



    As I mentioned above, clustering will benefit only certain apps, and almost all of them will be pro apps.
  • Reply 14 of 19
    I'm not sure that apps (games, for example) need to be rewritten entirely to take advantage of clustering. I have heard of an Alienware PC with two video cards where both cards were connected to the same screen and each of them rendered half of it, and it seemed that the games didn't have to be rewritten at all. 3dfx did the same back in the good old days. This can be called clustering to a certain extent: video-card clustering.

    Still, communication speed would be a bottleneck between the nodes, and the tons of cables behind the cluster would be rather un-Apple-ish, but hey... who knows what Apple engineers can do!



    Gabid, thanks for the link to your thread; nice pieces of info. By the way, Japan is awesome indeed, but working in a Japanese lab when you are not Japanese is quite painful (no offense meant, BeigeUser).
  • Reply 15 of 19
    beigeuser Posts: 371 member
    Quote:

    Originally posted by The One to Rescue

    I'm not sure that apps (games, for example) need to be rewritten entirely to take advantage of clustering. I have heard of an Alienware PC with two video cards where both cards were connected to the same screen and each of them rendered half of it...3dfx did the same back in the good old days...

    ...By the way, Japan is awesome indeed, but working in a Japanese lab when you are not Japanese is quite painful (no offense meant, BeigeUser).




    It goes back to the network connection again. The Voodoo5 had 2 (or 4) GPUs, and they rendered alternating frames in parallel. It was very functional, except that the competition had the same performance with only one GPU, less heat, and lower power consumption.



    Anyway, the Voodoo5 worked because all the GPUs were on the same card with a known high-speed bus. All the GPUs were running at the same speed, too.



    In a clustering environment, programmers need to deal with an unknown number of nodes running at various speeds, connected by links of unknown and varying speed. It takes precise timing to render alternating frames, and it is near impossible to get that precision from an uncontrolled cluster environment. Add the fact that the network connections are probably going to be too slow and the fact that this is initially a Mac-only technology, and I doubt that any game developers will take this on.



    As I mentioned before, clustering will probably be used as an afterthought optimization for AI or other details.



    BTW, I live in Japan, but I am an American.
  • Reply 16 of 19
    Quote:

    Originally posted by BeigeUser

    It goes back to the network connection again. The Voodoo5 had 2 (or 4) GPUs, and they rendered alternating frames in parallel. It was very functional, except that the competition had the same performance with only one GPU, less heat, and lower power consumption.



    Anyway, the Voodoo5 worked because all the GPUs were on the same card with a known high-speed bus. All the GPUs were running at the same speed, too.



    In a clustering environment, programmers need to deal with an unknown number of nodes running at various speeds, connected by links of unknown and varying speed. It takes precise timing to render alternating frames, and it is near impossible to get that precision from an uncontrolled cluster environment. Add the fact that the network connections are probably going to be too slow and the fact that this is initially a Mac-only technology, and I doubt that any game developers will take this on.



    As I mentioned before, clustering will probably be used as an afterthought optimization for AI or other details.



    BTW, I live in Japan, but I am an American.




    The Alienware solution, though, is based on two separate graphics cards rendering not alternate frames but alternate lines of the screen, which reduces the synchronization problem (even if there is a small lag between the cards, a line is very likely to be almost the same as it was in the previous frame). There's a toy sketch of the idea at the end of this post.

    Anyway, I know that back in France, at my engineering school, there's a pretty big cluster running several huge real-time algorithms, most of them requiring synchronization between the nodes. They claim that the synchronization issue was really solved easily (though I don't know whether it was solved by programming tricks or cluster-configuration tricks... I don't work in that department).
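
    The toy sketch of the alternate-lines idea (the per-card render functions are hypothetical stand-ins, not any real driver API):

        # Scan-line interleave: card A renders the even rows, card B
        # the odd rows, and the halves are zipped back into one frame.
        def interleave(even_rows, odd_rows):
            frame = []
            for even, odd in zip(even_rows, odd_rows):
                frame.append(even)  # rows 0, 2, 4, ...
                frame.append(odd)   # rows 1, 3, 5, ...
            return frame

        # frame = interleave(render_even(scene), render_odd(scene))
        # render_even/render_odd are hypothetical per-card renderers.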
  • Reply 17 of 19
    beigeuser Posts: 371 member
    In a controlled environment like a university, the system administrator can make sure that certain system requirements are met and fine-tune the details for optimum performance. Home networks don't have that luxury.



    The same goes for the Alienware computer. It is a lot easier to sync two GPUs than to sync a home network.



    We do know, however, that Apple wouldn't even think of making xGrid unless they knew that they could make it productive and trouble-free. I just don't think that it can handle games... yet. Games won't take off until Microsoft enters the market.
  • Reply 18 of 19
    ipodandimac Posts: 3,273 member
    Quote:

    Originally posted by BeigeUser

    In a controlled environment like a university, the system administrator can make sure that certain system requirements are met and fine-tune the details for optimum performance.



    If you're talking about one computer pulling processing power away from another, it's not really like that, as far as I know. In the demo I was talking about a few posts ago, xGrid showed all accessible computers (all of our labs), and then you could see which ones were idle. If they had been idle for 10 minutes or so, xGrid would see that and use them as part of the "cluster." It never took processing power away from an active computer.
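
    In rough terms, the policy behaved something like this (a sketch of what we observed, not xGrid's actual API; the node records are made up):

        # Only machines idle for 10+ minutes join the grid; busy ones
        # are left alone. The node dicts here are illustrative only.
        IDLE_THRESHOLD_MIN = 10

        def eligible_nodes(nodes):
            return [n for n in nodes
                    if n["idle_minutes"] >= IDLE_THRESHOLD_MIN]

        lab = [{"name": "imac-01", "idle_minutes": 42},
               {"name": "imac-02", "idle_minutes": 3}]
        print([n["name"] for n in eligible_nodes(lab)])  # ['imac-01']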
  • Reply 19 of 19
    beigeuser Posts: 371 member
    Quote:

    Originally posted by ipodandimac

    If you're talking about one computer pulling processing power away from another, it's not really like that, as far as I know. In the demo I was talking about a few posts ago, xGrid showed all accessible computers (all of our labs), and then you could see which ones were idle. If they had been idle for 10 minutes or so, xGrid would see that and use them as part of the "cluster." It never took processing power away from an active computer.



    I'm not sure that we are on the same wavelength here, but let me try to explain my thoughts.



    There is no doubt that xGrid will be effective at locating idle computers to access their unused cycles. There is also no doubt that xGrid will benefit a lot of pro apps. My point was that the xGrid-plus-home-network combo will not be very effective with games unless developers spend substantial time reprogramming the code.



    Games require consistent processing power, which will be difficult to extract from a home network consisting of everything from LC IIIs to G5s connected over a mixed ethernet/wireless network. You won't want to play games that randomly speed up and slow down depending on network conditions.



    Other stuff like Photoshop/Final Cut Pro/Maya doesn't need to be real-time. Clustering will speed up whatever it can, but it's not a big deal if it doesn't.



    Of course, some clever programming can probably work around that to some degree. But I just doubt that developers will put in that much effort for a feature that is only available on Macs.



    Maybe the fine-tuning can be done at the OS level, but version 1.0 of xGrid is not likely to be that mature. In the meantime, Microsoft will probably create their own clustering architecture, and all the developers will program for the Microsoft standard, which will probably be incompatible with xGrid.



    Macs can never beat PCs in terms of gaming unless the Mac's market share goes up significantly.