Comments
Originally posted by Placebo
Panther will make this even faster!
As far as I know they are running Panther...
Originally posted by Krassy
As far as I know they are running Panther...
No, I think they are running some sort of Linux. Mac OS X is not ready for that kind of clustering.
Originally posted by Mac Force
No, I think they are running some sort of Linux. Mac OS X is not ready for that kind of clustering.
No, they are definitely running OS X (Chaosmint) and I would think it's Panther, but that's unconfirmed.
Originally posted by Henriok
No, they are definitely running OS X (Chaosmint) and I would think it's Panther, but that's unconfirmed.
I would have selected Darwin instead of the full OSX. Darwin is free, and I don't see why all the eye-candy layers in the full OSX would be useful on machines that don't even have monitors connected to them.
Originally posted by Tidris
I would have selected Darwin instead of the full OSX. Darwin is free, and I don't see why all the eye-candy layers in the full OSX would be useful on machines that don't even have monitors connected to them.
If you want to use anything besides the BSD subsystem, like, let's say, Cocoa, Carbon or some other nice APIs, you must use the entire Mac OS X. With help from Apple you'll probably be able to trim OS X down quite considerably.
Originally posted by fieldor
I know it sounds super duper, but as I see Xbench tests of a dual G4 and a dual G5, the G4 has 5.10 Gflops in the Altivec basic test and the G5 just 4 Gflops. So theoretically the G4 would have a maximum of 15.9324 Tflops, I guess, in the same Virginia Tech cluster config, so the possibility for a supercomputer was there all along.
But still, hurray for Apple.
The "altivec" unit (VMX, whatever) in the 970 is not as good as the one in the G4. It was supposed to be just as good, but I forget exactly why it's not. IIRC it has something to do with simultaneous reads and writes. Of course, the Integer and FPU capabilities of the G5 are insane. So what you'd see in a G4 cluster would be a mediocre average and a high peak. The G5 cluster should have a better average.
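For what it's worth, the headline Tflops numbers for the cluster are double-precision figures that come from the 970's FPUs, not from the AltiVec/VMX unit, so Xbench's Altivec score isn't really the right yardstick. Here is a rough back-of-the-envelope sketch, assuming 1,100 dual-CPU nodes at 2 GHz and 4 double-precision flops per cycle per CPU (two FPUs, each retiring a fused multiply-add per cycle, per the published 970 specs), just to show where a number like 17.6 Tflops can come from:

    # Rough sketch, not an official figure.
    # Assumptions: 1,100 dual-CPU nodes at 2 GHz, 4 flops/cycle per CPU.
    nodes = 1100
    cpus_per_node = 2
    clock_hz = 2.0e9
    flops_per_cycle = 4

    peak_per_cpu = clock_hz * flops_per_cycle            # 8 Gflops per CPU
    peak_cluster = peak_per_cpu * cpus_per_node * nodes  # theoretical peak
    print(peak_cluster / 1e12)                           # -> 17.6 (Tflops)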
Originally posted by Henriok
No, they are definitely running OS X (Chaosmint) and I would think it's Panther, but that's unconfirmed.
Q: Are there limitations on the level of complexity that the system can be used for?
A: Not yet fully considered, but no web servers and no hosting an IRC network
Originally posted by ast3r3x
Just a question, why wouldn't Pixar build a cheap cluster like this for video rendering? It's cheap and apparently pretty damn fast.
Pixar probably uses Xserves or some other form of Apple servers. I don't think that a cluster would be practical for the work they do there. They are able to render over the network there. That means that if they have an hour's worth of video to render and 60 computers on the network, they can send a minute of video to each computer for rendering and then have it sent back and put together to reach a final product. This is more practical for them than having "one" large computer do all the work.
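Just to make the splitting idea concrete, here is a purely illustrative sketch (not Pixar's actual pipeline; the machine names and counts are just the ones from the example above) of handing a one-hour job out in one-minute chunks to 60 machines:

    # Illustrative only: split an hour of footage into one-minute chunks,
    # one chunk per render machine, then stitch the results back in order.
    total_minutes = 60
    machines = [f"render{i:02d}" for i in range(total_minutes)]  # hypothetical names

    assignments = {machines[m]: (m, m + 1) for m in range(total_minutes)}
    for machine, (start, end) in assignments.items():
        print(f"{machine}: render minutes {start}-{end}")
    # each machine works independently; the finished pieces are concatenated
    # in minute order to produce the final video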
Originally posted by DMBand0026
Pixar probably uses Xserves or some other form of Apple servers. I don't think that a cluster would be practical for the work they do there. They are able to render over the network there. That means that if they have an hour's worth of video to render and 60 computers on the network, they can send a minute of video to each computer for rendering and then have it sent back and put together to reach a final product. This is more practical for them than having "one" large computer do all the work.
Actually, Pixar uses an IBM-supplied Xeon blade server running Linux for its rendering.
Dell cluster costs 30x more than Apple's
MacNN reader Dave Shroeder writes: "The University of Texas just rolled out a $38M Dell/Linux cluster that will achieve 3.7 Tflops, not even yet at full capacity. Compare that to Virginia Tech's $5.2M Apple/Mac OS X cluster that achieves 17.6 Tflops, constructed in 3 months. Cost per Tflop: Dell - $10.3 million; Apple - $295,000. It's almost shameful."
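The cost-per-Tflop arithmetic, using only the dollar and Tflops figures from that quote:

    # Cost per Tflop from the numbers quoted above.
    dell_cost, dell_tflops = 38_000_000, 3.7
    apple_cost, apple_tflops = 5_200_000, 17.6

    print(dell_cost / dell_tflops)    # ~10.3 million dollars per Tflop
    print(apple_cost / apple_tflops)  # ~295,000 dollars per Tflop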
Originally posted by Scott
This is interesting. I wonder if this simple analysis is true.
Dell cluster costs 30x more than Apple's
Ow. Ow. Ow.
Notice how even though $3M in funding came from Virginia, the Austin, TX rag linked in the above item scrupulously avoids mentioning Virginia Tech's own little number. Wouldn't want to piss off "one of the Austin area's largest corporate employers," now would you? Not in the same article where Local Hero Mike Dell crows - er, avers... or maybe mentions: "The same servers we sell one or two at a time to our business customers are now being used in a huge cluster to achieve high-performance results."
He sounds almost incredulous.
Another gem from the article: Unlike VT, UT had to enlist the help of Cray to get their cluster up and running. VT only had to enlist students.
Originally posted by Mac Force
No, I think they are running some sort of Linux. Mac OS X is not ready for that kind of clustering.
Not only did they not run Linux, they had to port over all their own homegrown tools that they originally wrote for Linux, and even with this extra effort it was cheaper and easier for them to use Mac OS X on the G5 than it was to use any of the competitors.
Originally posted by Scott
This is interesting. I wonder if this simple analysis is true.
Dell cluster costs 30x more than Apple's
That is shameful. ATAT mentioned that the current cluster king, the NEC machine, cost $350 million to achieve 35.9 trillion calculations a second with 5,104 processors. The Apple cluster cost $5.2 million to achieve 17.9 trillion calculations a second. That is about 70 times the cost to achieve double the performance. Think what they could achieve with $20 million! The other thing that people mentioned is that when the G5s are replaced with faster machines, the old machines can still be useful as regular desktop machines! Brilliant!
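Checking that comparison with the figures as given above:

    # NEC vs. Apple, using the figures mentioned in the post above.
    nec_cost, nec_tflops = 350_000_000, 35.9
    apple_cost, apple_tflops = 5_200_000, 17.9

    print(nec_cost / apple_cost)      # ~67x the cost (roughly "70 times")
    print(nec_tflops / apple_tflops)  # ~2x the performance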
Originally posted by Tidris
I would have selected Darwin instead of the full OSX. Darwin is free, and I don't see why all the eye-candy layers in the full OSX would be useful on machines that don't even have monitors connected to them.
While Darwin is free, Mac OS X came with each of the 1,100 G5s they purchased anyway.
Originally posted by Scott
This is interesting. I wonder if this simple analysis is true.
Dell cluster costs 30x more than Apple's
I was going to send this to Slashdot as an anonymous blurb but someone beat me to it... d'oh!
not just the machines.
Time to go looking...
Originally posted by Tidris
I would have selected Darwin instead of the full OSX. Darwin is free, and I don't see why all the eye-candy layers in the full OSX would be useful on machines that don't even have monitors connected to them.
Mac OS X has a headless mode, you know.
All that eye candy crap won't take any CPU if it's not being used.