Clustering for the rest of us?

Posted in Future Apple Hardware, edited January 2014
I was just reading about the G5 supercomputer and thinking about the criticism that 2/3 of the current G5s are single-processor models, when an idea popped into my head: since it seems that the G5 can give you a nice amount of speed if you join a few of them together, might it not be possible for Apple to sell what would amount to essentially a G5 chip on a basic circuit board that plugs into a standard desktop, upping the number of processors the desktop has access to?



What I'm picturing is something, say, oh, just a bit bigger than the G5 heatsink that connects to the desktop via gigabit Ethernet. That's it. No other ports: just gigabit Ethernet, an external power supply (so it can stay small by avoiding yet another component to cool), a die-shrunk G5, whatever controller chips are needed, and a fan. A little processor box you can hold in your hand.



It wouldn't cut into sales of full computers, since it wouldn't be able to plug into anything (drives, keyboard, etc.) and would essentially give people a way to add processors to their existing machines. And think about it: you could make PowerBooks multi-processor beasts by selling these alongside them.



No technical or programming training here, but I just thought I'd put the idea out there and see what people think.

Comments

  • Reply 1 of 21
    I think it would be really neat if the 1.6 and 1.8 were dual-capable.



    Whenever I see the inside of a 1.6 or 1.8, I think it looks so lonely, but it also looks like it is just asking to have another processor plunked down there.



    It would be very neat if you could BTO the 1.6 or 1.8 as a dual, OR, after your single-processor purchase, buy a second processor module for maybe $250-$300 that would fit in that dead space.



    But the mobo doesn't support it, I guess.
  • Reply 2 of 21
    gabid Posts: 477 member
    Quote:

    Originally posted by Wrong Robot





    But the mobo doesn't support it, I guess.




    Exactly! That's why it would be great if you could somehow just add processors externally.
  • Reply 3 of 21
    an old Macworld story on multiprocessor daughtercards



    There is a company that gangs four PowerPC chips on a daughtercard for use (on a fabric backplane, IIRC).

    <rummages for links>



    Not as useful in systems with older, limited bus speeds.
  • Reply 4 of 21
    corbu Posts: 40 member
    I want to be able to use the processing power of my home computer when I am at work. It's just sitting at home unused, right? Why can't I put it to work on a big rendering job or something? Or, for that matter, why can't I grab some power from the guy in the next cube who called in sick? That is where I hope clustering will take us eventually.
  • Reply 5 of 21
    gabid Posts: 477 member
    Quote:

    Originally posted by curiousuburb

    an old Macworld story on multiprocessor daughtercards



    There is a company that gangs four PowerPC chips on a daughtercard for use (on a fabric backplane, IIRC).

    <rummages for links>



    Not as useful in systems with older, limited bus speeds.




    Again, good idea, but how would you add such a card to a G5? If they don't change the board, you can't.



    Actually, the more I think about it, the more I'm intrigued by this idea: something that looks like a portable hard drive but is actually an extra processor. Though I'm still very curious as to whether this is even technically possible.
  • Reply 6 of 21
    I had thought of something like this before, what I termed a "processor brick". There are several technologies floating around that I think could come together to change how we add processing power.



    The technologies I am thinking of are TCP/IP over FireWire, Rendezvous, and clustering/grid computing. It might be possible to design, in essence, a "desktop" blade server: just a processor, a small hard drive, and RAM. When you connect this "brick" to your main machine, the two would discover each other via Rendezvous and automatically set up distributed processor services. Even without the bricks, I suspect that Xgrid is basically part of this, a plug-and-play cluster. Just rack everything up, connect everything via FireWire on the fibre standard, sit back, and watch as the entire system configures itself.



    Now, I am no computer science major, and I may be missing some obvious hole in this theory. I guess I want to confirm my understanding of what clustering means. There seem to be several different ways to utilize large groups of small servers so they act as one large system. The first time I heard about this was render farms like Pixar's. My understanding was that each of those machines was a full unit, with hard drives, memory, processors, etc. When rendering, files would be sent out over the network with processing instructions. With blade servers, you basically lose the hard drive, right? Is this what is known as clustering? The processors are linked at a high enough speed to, in essence, act as one massive processor.



    Any basic corrections would be appreciated; I am trying to wrap my mind around these concepts.
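    The render-farm model described here is easy to sketch: each frame is independent, so a controller just splits the frame list across workers and collects the results. A toy Python illustration of that shape (the function names are made up; this is not Xgrid's actual API):

```python
from multiprocessing import Pool

def render_frame(frame):
    # Stand-in for real rendering work. Each frame is processed
    # independently, which is what makes render farms "embarrassingly
    # parallel": no communication between machines is needed mid-job.
    return f"frame-{frame:04d}.png"

def render_scene(frames, workers=4):
    # The controller splits the frame list across worker processes,
    # the same way a farm controller ships frame ranges to machines.
    with Pool(workers) as pool:
        return pool.map(render_frame, frames)

if __name__ == "__main__":
    print(render_scene(range(8)))
```

    Swap the worker pool for machines on a network and you have the render-farm picture; a blade setup just strips each worker down to processor and memory.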
  • Reply 7 of 21
    gabid Posts: 477 member
    Quote:

    Originally posted by blue2kdave

    It might be possible to design, in essence, a "desktop" blade server: just a processor, a small hard drive, and RAM. When you connect this "brick" to your main machine, the two would discover each other via Rendezvous and automatically set up distributed processor services.



    I'm glad to see that I'm not the only one who thinks this makes some kind of sense!



    But would the PowerBrick/xBrick even need the RAM or hard drive? Where are the techies when we need them?
  • Reply 8 of 21
    chagi Posts: 284 member
    Quote:

    Originally posted by Gabid

    I'm glad to see that I'm not the only one who thinks this makes some kind of sense!



    But would the PowerBrick/xBrick even need the RAM or hard drive? Where are the techies when we need them?




    Assuming that you're referring to PCI slot co-processor cards - RAM? Yes. Hard drive? No.



    Wired recently reported on a company planning on doing exactly what you're talking about, but with proprietary chips.



    http://forums.appleinsider.com/showt...threadid=32166



    It's worth mentioning that the PCI bus could be a real bottleneck to implementing this approach, as all of the PCI cards installed in a computer share the same bus (bandwidth).
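    The arithmetic behind that bottleneck is simple: classic 32-bit/33 MHz PCI moves 4 bytes per cycle, roughly 132 MB/s, and every card on the bus draws from that one pool. A back-of-the-envelope sketch (the card count is a made-up example):

```python
# Classic 32-bit / 33 MHz PCI: 4 bytes per cycle at 33 million cycles/s.
PCI_BUS_MB_S = 4 * 33  # ~132 MB/s, shared by every card on the bus

def per_card_share(num_cards, bus_mb_s=PCI_BUS_MB_S):
    # All cards contend for the same bus, so the best case under full
    # load is an even split of the total bandwidth.
    return bus_mb_s / num_cards

# Three hypothetical co-processor cards: 44.0 MB/s each at best, far
# below what a fast inter-processor link would need.
print(per_card_share(3))
```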
  • Reply 9 of 21
    johnq Posts: 2,763 member
    Quote:

    Originally posted by corbu

    I want to be able to use the processing power of my home computer when I am at work. It's just sitting at home unused, right? Why can't I put it to work on a big rendering job or something? Or, for that matter, why can't I grab some power from the guy in the next cube who called in sick? That is where I hope clustering will take us eventually.



    Well, clustering isn't going to be meaningful (1) between just two computers (it won't be twice as fast), and (2) going over a T1 from work, over the Internet, through your DSL connection at home (and possibly squeezed through a wireless base station) to your home computer, and then back up the chain to your office. And so on.



    The "guy in the next cube" situation would fare a lot better though.
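    johnq's point can be put in rough numbers: shipping a job's data through a home DSL link often costs more time than it saves. A sketch of the round-trip accounting (all link speeds and job sizes here are illustrative assumptions):

```python
def remote_job_time(data_mb, uplink_mbps, downlink_mbps, remote_compute_s):
    # Total time to push the input out, compute remotely, and pull the
    # results back. Link speeds are in megabits/s, data in megabytes.
    send_s = data_mb * 8 / uplink_mbps
    recv_s = data_mb * 8 / downlink_mbps
    return send_s + remote_compute_s + recv_s

# A 100 MB job over a 128 kbps DSL uplink spends ~6250 s just uploading,
# dwarfing any remote compute time saved; over gigabit Ethernet the same
# transfer takes well under a second each way.
print(remote_job_time(100, 0.128, 1.5, 60))
print(remote_job_time(100, 1000, 1000, 60))
```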
  • Reply 10 of 21
    dmband0026 Posts: 2,345 member
    Quote:

    Originally posted by Wrong Robot

    I think it would be really neat if the 1.6 and 1.8 were dual-capable.



    I think in the future all of the G5s will be dual, or at least dual-capable. I don't see any reason for them to build the case the way they did without future plans for a majority of the line to be dual. Right now only one of the models takes advantage of the two processor spaces. In short, that ain't right.

    In the future there will be a mini tower (after we see the .009 G5). The pro line (big tower) will go to all duals, with one dual config offered in the mini line. Till then, we'll see two of the G5s in the current line go dual while one remains a single. We're gonna see 1.8, 2.0, and 2.5 GHz before the .009 bumps it up to 3.0 in a dual.



    Just my predictions.
  • Reply 11 of 21
    wmf Posts: 1,164 member
    Not gonna happen. The market is too small, too few apps are threaded, transparent clustering is still in the research phase, etc.
  • Reply 12 of 21
    powerdoc Posts: 8,123 member
    Apple purposely chose to disallow upgrading the single to a dual: they removed the pin connector. Considering the price of a pin connector, less than $1, you will understand that Apple is the enemy of upgrading Apple computers. If you want a more powerful G5, buy another one; don't imagine that for $300 more you can have 80% more power...



    Perhaps it's a matter of survival for Apple to disallow this, but for us consumers it's sad.
  • Reply 13 of 21
    The company that produces Yellow Dog Linux sells, or at least did sell, boxes like this. They look like an external FireWire HD. They stack and have a couple of ports on the back, so you could probably use them as computers; not sure, though.



    I also think Xgrid will be heavily dynamic, to the point where even the choice of proper clustering and grid algorithms will be made by the computer, depending on whether you set up a cluster or a grid, latency times, collision reports, anything that will affect the speed or ROI of breaking up processes and sending them elsewhere.



    Clustering is dedicated machines, which is not what we want. We want the grid, the sharing from the cubicle next door. That's where cost-effectiveness comes in: you can use the machines for other stuff and only give up what you don't need, the leftover cycles.



    Though Workgroup Manager may let you group computers, say "Cluster," and be done. Computers in a workgroup can be restricted to just their workgroup. There are lots of variables to consider. Apple needs to make it easy to set up but highly user-configurable; they need easy administration of computers and groups while letting things be turned on and off at the local machine; it has to "just work" and still give you options, because everyone likes options.
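    The cluster-versus-grid trade-off described above boils down to a cost model: ship work to the cubicle next door only when remote compute plus data transfer plus round-trip latency beats doing it locally. A minimal sketch of that decision (all the numbers are illustrative assumptions):

```python
def should_offload(local_s, remote_s, transfer_s, latency_s):
    # A grid scheduler only wins by shipping work out when remote
    # compute time plus the cost of moving data there and back beats
    # simply doing the work on the local machine.
    return remote_s + transfer_s + 2 * latency_s < local_s

# Big job over a fast LAN: offloading wins.
print(should_offload(local_s=120, remote_s=40, transfer_s=10, latency_s=0.01))  # → True

# Tiny job: the round-trip overhead eats the gain.
print(should_offload(local_s=0.5, remote_s=0.2, transfer_s=1.0, latency_s=0.05))  # → False
```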
  • Reply 14 of 21
    Quote:

    Originally posted by corbu

    I want to be able to use the processing power of my home computer when I am at work. It's just sitting at home unused right? Why can't I put it to work on a big rendering or something. Or for that mater why can't I grab some power from the guy in the next cube that called in sick? That is where I hope clustering will take us eventually.



    Isn't this exactly what the NeXT computers did? I thought they were all about remotely sharing processor time.
  • Reply 15 of 21
    johnq Posts: 2,763 member
    Quote:

    Originally posted by israces

    Isn't this exactly what the NeXT computers did? I thought they were all about remotely sharing processor time.



    No, NeXT was merely a multiuser OS, same as we have now. Yes, you could log in remotely and share processor time with other users: multiple users could log onto the same machine via the command line and run processes simultaneously. That's what all Unix OSes can do, but it has nothing to do with clustering. (Perhaps someone dabbled with clustering using NeXT; I'm not saying that was impossible.)



    But I don't even think you could do a simultaneous logon in NeXT via the GUI. Not sure.



    But anyway, what you are thinking of is simple Unix multiuser capability; in fact, it's the opposite of clustering. Who needs 30 users on one box? You really want one user running an app over 30 boxes. That's true clustering.
  • Reply 16 of 21
    In the previous topics on Xgrid, when the mailing list initially came out, someone mentioned Dr. Crandall and the Apple Advanced Computation Group. Crandall came from NeXT, where he worked on Zilla.app. "A screensaver-like process-sharing app" is perhaps the best short description: run Zilla.app and set your computer up to run processes for other people. They connect to it and tell it what to run, and it does, ONLY when it's not in use or is designated as always available.



    Look here
  • Reply 17 of 21
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by johnq

    No, NeXT was merely a multiuser OS, same as we have now. Yes, you could log in remotely and share processor time with other users: multiple users could log onto the same machine via the command line and run processes simultaneously. That's what all Unix OSes can do, but it has nothing to do with clustering. (Perhaps someone dabbled with clustering using NeXT; I'm not saying that was impossible.)



    But I don't even think you could do a simultaneous logon in NeXT via the GUI. Not sure.



    But anyway, what you are thinking of is simple Unix multiuser capability; in fact, it's the opposite of clustering. Who needs 30 users on one box? You really want one user running an app over 30 boxes. That's true clustering.




    macserverX is right...



    And predating *that* was RenderMan. Yup, also written on a NeXT; it would scan for other copies of RM on a network and request that they do portions of the job.



    At the time, it was one of the revolutionary 'It just works' technologies that pushed NeXT into the limelight.



    Zilla was the same idea abstracted out for any application.



    And then there were Distributed Objects, which broke down tasks into remote object invocations, not just batch jobs...
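    The batch-job versus remote-invocation distinction is easy to make concrete: with Distributed Objects, one process called methods on objects living in another process as if they were local. Python's standard-library XML-RPC can sketch the same shape (the port number is arbitrary; this illustrates the idea, not NeXT's actual API):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

ready = threading.Event()

def serve():
    # The "remote object": a server vending a method, analogous to a
    # Distributed Object. Calls are dispatched individually as they
    # arrive, rather than being submitted as a batch job.
    server = SimpleXMLRPCServer(("127.0.0.1", 8731), logRequests=False)
    server.register_function(lambda x, y: x * y, "multiply")
    ready.set()              # socket is bound; safe for the client to call
    server.handle_request()  # serve exactly one invocation, then return

t = threading.Thread(target=serve)
t.start()
ready.wait()

# The client invokes the remote method as if it were local.
result = ServerProxy("http://127.0.0.1:8731").multiply(6, 7)
t.join()
print(result)  # → 42
```

    The client line reads like a local call; the network round-trip is hidden behind the proxy, which is the essence of remote object invocation.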



    *sigh* Ah, the forgotten technologies...
  • Reply 18 of 21
    amorph Posts: 7,112 member
    Quote:

    Originally posted by Kickaha

    *sigh* Ah, the forgotten technologies...



    Somehow I doubt that Avie's forgotten them, though.



    Apple clearly seems to be headed back this way, only this time with wireless networking, gigabit Ethernet, FireWire and Rendezvous all available to them. Add NeXT's 10-years-ahead technology to Apple's ease of use and price scale, and they'll be right back in the limelight again. Especially in higher ed.
  • Reply 19 of 21
    Someone in one of the G5 XServe threads said they didn't think PCI-X was coming with them.



    Amorph, you mentioned all those awesome out-of-the-box interconnects, but Virginia Tech used InfiniBand. Where'd the drivers come from, for one? And InfiniBand is an awesome technology: not really for grid services, but for clustering it provides excellent features. The reason I bring up PCI-X and InfiniBand is that IB depends on PCI-X, and losing it in a cluster solution would greatly diminish the potential benefits.



    That link in my previous post provides lots of good information on the kinds of things Apple is doing in this area.
  • Reply 20 of 21
    amorph Posts: 7,112 member
    There have been Fibre Channel cards and drivers for Macs for years now. Network bandwidth is not a recent requirement for a platform that's used to sling Photoshop files around. So it would not surprise me to hear that there are drivers for an InfiniBand card as well.



    That's great for a supercomputer, but not everyone needs or can afford a supercomputer. The great thing about the technologies I mentioned is that they're just about everywhere (on Macs, at least), and they perform well enough for small, simple clusters and distributed-computing networks made up of whatever happens to be lying around. The setup doesn't have to be optimal, it just has to be useful; in fact, the whole genius of it is that it would require $0 investment to harness a whole bunch of computational power that would otherwise just be depreciating.



    Apple could even have school projects use carts full of iBooks for rendering. No, of course it's not as good as a dedicated render farm. But you already have them, and they're just sitting there doing nothing.