Sub-$500 machine - xGrid

Posted in Future Apple Hardware, edited January 2014
I've been expecting Apple to do a rack box, very cheap, ever since I realized xGrid was going to be built into Tiger.



Think of schools, homes, everywhere. We can all start to do our own home clusters on a small scale.



Want more power? Just buy the new pizza-box xGrid booster



This will be interesting to watch......

Comments

  • Reply 1 of 16
    Quote:

    Originally posted by Zab The Fab

    I've been expecting Apple to do a rack box, very cheap, ever since I realized xGrid was going to be built into Tiger.



    Think of schools, homes, everywhere. We can all start to do our own home clusters on a small scale.



    Want more power? Just buy the new pizza-box xGrid booster



    This will be interesting to watch......




    bingo
  • Reply 2 of 16
    But a pizza box would take up too much space if you really needed a grid... An Xserve, on the other hand... but that is waaay too expensive.
  • Reply 3 of 16
    Quote:

    Originally posted by T'hain Esh Kelch

    But a pizza box would take up too much space if you really needed a grid... An Xserve, on the other hand... but that is waaay too expensive.



    Well exactly, I was referring to the rumor of the new $500 machine said to look just like the Xserve, which to me is also like a pizza box.



    Pizza's "served"



    Zab The Fab
  • Reply 4 of 16
    This idea is actually very old - look what Apple designed in the mid-1980s: a modular desktop Mac!



    link to modular mac
  • Reply 5 of 16
    wmf Posts: 1,164
    Quote:

    Originally posted by Zab The Fab

    Think of schools, homes, everywhere. We can all start to do our own home clusters on a small scale.



    The applications that are used in schools, homes, everywhere don't support clustering. And a dual 2.5 GHz G5 is cheaper than the equivalent rack of iCheap boxes. And the people who need more than a dual 2.5 can buy Xserves.
  • Reply 6 of 16
    tedndi Posts: 1,921
    Quote:

    Originally posted by wmf

    The applications that are used in schools, homes, everywhere don't support clustering. And a dual 2.5 GHz G5 is cheaper than the equivalent rack of iCheap boxes. And the people who need more than a dual 2.5 can buy Xserves.



    The applications could be built if the software capability and capacity were there. It would render the whole MHz speed debate moot.



    I like it though I don't know if it is at all possible.
  • Reply 7 of 16
    To further what wmf was saying, most of the applications people use are not only not built with clustering in mind (XGrid forms clusters), they would not benefit from clustering at all. The only general-use applications that would see any benefit from clustering are video encoding programs (iDVD, and to a lesser extent iMovie). And those programs are far more likely to have XGrid-like structures built into them than to actually use XGrid.



    XGrid is primarily a way of delivering the programs needed to execute a job to the client computers (and secondarily a way of managing those clients and coordinating data). The way codecs are licensed virtually prohibits distributing them in an XGrid-style manner.
  • Reply 8 of 16
    Well, that's sad news. Of course, that was said of multiple processors before OS X came out and made sure the operating system took advantage of both processors. I have no idea if this could be invented for xGrid as well, but Steve likes to do what everybody says can't be done...



    Here's hoping he has a go at this too, because it would be sooooo cool to show our PC friends this tech, wouldn't it? he he lol



    Zab The Fab
  • Reply 9 of 16
    You don't seem to get it... most applications would not benefit from clustering. Making them into cluster applications would slow them down. Would slowing things down make them cooler? Do you really think that is a good idea?
  • Reply 10 of 16
    Quote:

    Originally posted by Karl Kuehn

    You don't seem to get it... most applications would not benefit from clustering. Making them into cluster applications would slow them down. Would slowing things down make them cooler? Do you really think that is a good idea?



    This is true. But imagine a company of a thousand people. Probably 95% don't have an application that needs clustering. But the other 5% can now access the entire company as one big supercomputer, built right in.
  • Reply 11 of 16
    Except your numbers are way off... we are really talking about 1 person in 10,000 who will benefit directly from this. And other projects to do this sort of thing have been around for a while (Project Condor would be an example). Apple's XGrid is probably the most visually appealing of them, and a strong contender, but not something revolutionary.



    Most of the people who will be running XGrids will probably do it primarily on dedicated machines, with a little crossover to some computers that run in the department that needs the cluster. Remember, the computers not only need to have XGrid on them, but they need to have it configured so that they belong to a specific cluster (and the cluster has to recognize them). There are a lot of security risks otherwise.



    Oh... and the majority of the time just adding low-end computers is a waste of time. The time spent coordinating low-end computers hurts the efficiency of the better ones, so your overall speed is slower. Even if the rumored 1 GHz G4 computer comes out for only $500, it will still be more cost-effective (not to mention better on power, space, maintenance, and setup) to buy dual 2.3 GHz Xserves.
  • Reply 12 of 16
    Hum, I wonder how five or six of these would compare in Maya rendering to a single dual G5? Oh, I'm sorry, I was wondering out loud again.
  • Reply 13 of 16
    Dear Karl



    Let me try to explain what I was trying to say. If Apple could develop technology that would enable the following: you hook a FireWire cable (or whatever) to another machine and the two machines "melt" together into "one" processor at the lowest of levels. If they could really make several processors merge into a single processor at some deeeeeep system level, so deep that the machine in all respects would only "see" one processor, just a faster one.



    Now forgive me, I don't know what the hell I'm talking about here, it's just a theory from someone who doesn't know the first thing about programming and what have you, and I know how frustrating it can be to explain something to others who do not know a fraction of your own knowledge about a subject.

    It's like when people try to argue that it really was Bin Laden who attacked Manhattan when all they've ever done to research it is listen to the evening news (versus me spending a good part of 3 years investigating it). Why do people always assume they know everything about everything? It's not possible; you choose the areas you want to know about, and then you listen to those who know the things you don't. Easy.



    So, be patient with me



    Zab The Fab
  • Reply 14 of 16
    Zab: A lot of very smart people have had that idea, but the devil is in the details.



    The first detail is that you can't make two processors look like one, the same way that two people working together are not the same as one person who is twice as fast. Two people running the same race don't get to the finish line twice as fast... just twice as often.



    So let's start talking about a dual-processor machine. For most tasks in programming you have to follow an order of instructions. As an illustration: 1 + ( 2 * 3 ). The math part of this is actually two operations (I am ignoring the decode, load, and store steps, which make this point even stronger): first you have to do the multiplication, then the addition. There is no way to have one processor do the multiplication while, at the same time, another does the addition. One has to happen, and then the other. This over-simplified example accurately describes the vast majority of the tasks we ask of our computers.
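    That dependency chain can be written out as a couple of lines of code; the comments are the whole point (a minimal sketch, nothing XGrid-specific about it):

    ```python
    # 1 + (2 * 3): the addition consumes the multiplication's result,
    # so the two operations can never run at the same time, no matter
    # how many processors you have.
    def evaluate():
        product = 2 * 3        # must finish first
        total = 1 + product    # can only start once `product` exists
        return total

    print(evaluate())  # 7
    ```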



    So, some people may point out that we could do the multiplication on one processor and then the addition on the other. However, it turns out that moving the data and instructions to the other processor, and all of the extra work both processors have to do to arrange this (especially in a meaningful way), is many orders of magnitude more expensive (in computer resources) than simply doing the job on a single processor. I said I was simplifying things... but all of the details make it increasingly difficult.



    Now, let's add in the fact that the second processor is not in the same box: it does not share the same memory pool, the same hard disk, or the same data transfer busses. Instead, it shares an incredibly slow (to a processor) connection to the first processor. So for any job, all of the data has to move over a network connection, and all of the messages that keep the program in sync also have to travel that connection. That maintenance cost is greater than the total processing needed for many programs.
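    A rough back-of-envelope makes "incredibly slow (to a processor)" concrete. Both latency figures below are assumptions, era-typical orders of magnitude rather than measurements:

    ```python
    # Assumed, era-typical latencies (orders of magnitude only):
    MEMORY_LATENCY_S = 100e-9   # ~100 ns to fetch from local main memory
    NETWORK_RTT_S = 500e-6      # ~0.5 ms round trip on a local network

    # How many local memory accesses fit in one network round trip?
    ratio = NETWORK_RTT_S / MEMORY_LATENCY_S
    print(f"one network hop ~= {ratio:,.0f} memory accesses")
    ```

    Thousands of memory accesses per network hop is exactly why shipping individual operations to another box can never pay off.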



    So, the solution is to break programs into chunks, send a chunk to the other computer, have it manage everything about that chunk, and report back only the results. This only becomes cost-effective when it takes a while (usually minutes) to get those results. That does not describe most programs. It is also really difficult to do.
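    That break-even point can be modeled in a couple of lines. The per-chunk coordination cost and chunk sizes below are illustrative assumptions, not measurements of any real cluster:

    ```python
    def cluster_speedup(n_workers, chunk_s, overhead_s):
        """Speedup over one machine when every chunk of `chunk_s` seconds
        of work also costs `overhead_s` seconds of shipping/coordination."""
        return n_workers * chunk_s / (chunk_s + overhead_s)

    # Assumed: 2 s of coordination per chunk, 4 worker machines.
    print(cluster_speedup(4, 600, 2))   # minutes-long chunks: nearly 4x
    print(cluster_speedup(4, 0.1, 2))   # tiny chunks: below 1x -- slower!
    ```

    With minutes-long chunks the overhead disappears into the noise; with sub-second chunks the "cluster" is slower than a single machine, which is the point being made above.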



    Now, there is a lot of research going into this sort of idea right now... but it is very much in its infancy and not ready for use in a commercial OS. The biggest push in the research right now is to efficiently move data around so that it is available to the processor that needs it in a timely fashion. This research is being done under the name Non-Uniform Memory Architecture (NUMA), and has been going on for some time. It turns out to be a nasty problem, and most of the solutions have been tightly bound to a particular problem (that is, a general solution that works in most cases has not been found).



    Once you have that problem licked, there are a few other problems: efficient ways of determining the cost/benefit of moving a process to another computer (while load balancing), ensuring the proper software is available on the other computer (often handled through NUMA... but not always), handling computers that crash (how do you recover a job, since the crashed computer was probably not the one that requested it), handling network failures... etc.



    Executive Summary: look for this in 10-20 years.
  • Reply 15 of 16
    Perhaps my earlier post was not direct enough to solicit a response, so let me make it more direct. Does anyone think that several of these machines clustered together would be a cost-effective solution for higher-end 3D animation for artists on a budget, compared to the G5 options available? I would think yes, but there are certainly more knowledgeable people in these forums who could offer a more informed opinion.



    Thanks

    Tim
  • Reply 16 of 16
    Quote:

    Originally posted by timmy o'tool

    Perhaps my earlier post was not direct enough to solicit a response, so let me make it more direct. Does anyone think that several of these machines clustered together would be a cost-effective solution for higher-end 3D animation for artists on a budget, compared to the G5 options available? I would think yes, but there are certainly more knowledgeable people in these forums who could offer a more informed opinion.



    The Xserves will probably retain a much better cost/benefit ratio, specifically the Cluster Node version. The $3,000 dual 2.3 GHz G5 model will easily beat six Mac minis. There really isn't much doubt about that.
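    As a rough sanity check on that claim, here is a back-of-envelope comparison. The per-clock factor and the cluster efficiency are loudly assumed numbers (a G5 doing roughly twice the work per clock of a G4, and a mini cluster losing a big slice of throughput to network coordination), not benchmarks:

    ```python
    # Assumptions (illustrative only, not benchmarks):
    G5_PER_CLOCK = 2.0        # assumed: G5 does ~2x the work per clock of a G4
    CLUSTER_EFFICIENCY = 0.6  # assumed: minis lose ~40% to coordination

    # Six 1.25 GHz Mac minis vs one dual 2.3 GHz Xserve cluster node
    minis = 6 * 1.25 * 1.0 * CLUSTER_EFFICIENCY   # "work units"
    xserve = 2 * 2.3 * G5_PER_CLOCK               # "work units"
    print(minis, xserve)
    ```

    Under these assumptions the single Xserve comes out roughly twice as productive as the six-mini rack, before even counting power, space, and setup.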