Virginia cluster enters competition in 2 days [Merged]


Comments

  • Reply 21 of 57
Placebo · Posts: 5,767 · member
    Panther will make this even faster!
  • Reply 22 of 57
Krassy · Posts: 595 · member
    Quote:

    Originally posted by Placebo

    Panther will make this even faster!



As far as I know, they're running Panther...
  • Reply 23 of 57
    Quote:

    Originally posted by Krassy

As far as I know, they're running Panther...



No, I think they are running some sort of Linux. Mac OS X is not ready for that kind of clustering.
  • Reply 24 of 57
Henriok · Posts: 537 · member
    Quote:

    Originally posted by Mac Force

No, I think they are running some sort of Linux. Mac OS X is not ready for that kind of clustering.



No, they are definitely running OS X (Chaosmint), and I would think it's Panther, but that's unconfirmed.
  • Reply 25 of 57
Tidris · Posts: 214 · member
    Quote:

    Originally posted by Henriok

No, they are definitely running OS X (Chaosmint), and I would think it's Panther, but that's unconfirmed.



    I would have selected Darwin instead of the full OSX. Darwin is free, and I don't see why all the eye-candy layers in the full OSX would be useful on machines that don't even have monitors connected to them.
  • Reply 26 of 57
Henriok · Posts: 537 · member
    Quote:

    Originally posted by Tidris

    I would have selected Darwin instead of the full OSX. Darwin is free, and I don't see why all the eye-candy layers in the full OSX would be useful on machines that don't even have monitors connected to them.



If you want to use anything besides the BSD subsystem (let's say Cocoa, Carbon, or some other nice API) you must use the entire Mac OS X. With help from Apple you'll probably be able to trim OS X down quite considerably.
  • Reply 27 of 57
    Quote:

    Originally posted by fieldor

I know it sounds super duper, but as I see Xbench tests of dual G4s and dual G5s, the G4 has 5.10 Gflops in the AltiVec basic test and the G5 just 4 Gflops. So theoretically the G4 would have a maximum of about 15.93 Tflops in the same VT cluster config, so the possibility for a supercomputer was there all along.

But still, hooray for Apple.




The "AltiVec" unit (VMX, whatever) in the 970 is not as good as the one in the G4. It was supposed to be just as good, but I forget exactly why it's not; IIRC it has something to do with simultaneous reads and writes. Of course, the integer and FPU capabilities of the G5 are insane. So what you'd see in a G4 cluster would be a mediocre average and a high peak. The G5 cluster should have a better average.
  • Reply 28 of 57
ethar · Posts: 111 · member
    Quote:

    Originally posted by Henriok

No, they are definitely running OS X (Chaosmint), and I would think it's Panther, but that's unconfirmed.



    Q: Are there limitations on the level of complexity that the system can be used for?



A: Not yet fully considered, but no web servers and no hosting an IRC network.



  • Reply 29 of 57
DMBand0026 · Posts: 2,345 · member
    Quote:

    Originally posted by ast3r3x

    Just a question, why wouldn't Pixar build a cheap cluster like this for video rendering? It's cheap and apparently pretty damn fast.



Pixar probably uses Xserves or some other form of Apple servers. I don't think that a cluster would be practical for the work they do there; they are able to render over the network. That means that if they have an hour's worth of video to render and 60 computers on the network, they can send a minute of video to each computer for rendering and then have it sent back and put together into a final product. This is more practical for them than having "one" large computer do all the work.
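The divide-and-reassemble scheme described here (an hour of footage split one minute per machine across 60 nodes) is just contiguous chunking. A toy sketch, not Pixar's actual pipeline; `split_job` is an invented name:

```python
def split_job(total_minutes: int, machines: int) -> list[list[int]]:
    """Split `total_minutes` of footage into contiguous chunks,
    one chunk per machine, so each node renders one stretch of video."""
    chunk = total_minutes // machines
    extra = total_minutes % machines
    jobs, start = [], 0
    for i in range(machines):
        # The first `extra` machines each take one extra minute.
        size = chunk + (1 if i < extra else 0)
        jobs.append(list(range(start, start + size)))
        start += size
    return jobs

jobs = split_job(60, 60)            # an hour of video, 60 machines
assert all(len(j) == 1 for j in jobs)  # one minute per machine

# Reassembly: concatenate the rendered chunks back in order.
final = [minute for j in jobs for minute in j]
assert final == list(range(60))
```

The same arithmetic generalizes when the footage doesn't divide evenly: the leading machines simply absorb the remainder.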
  • Reply 30 of 57
telomar · Posts: 1,804 · member
    Quote:

    Originally posted by DMBand0026

Pixar probably uses Xserves or some other form of Apple servers. I don't think that a cluster would be practical for the work they do there; they are able to render over the network. That means that if they have an hour's worth of video to render and 60 computers on the network, they can send a minute of video to each computer for rendering and then have it sent back and put together into a final product. This is more practical for them than having "one" large computer do all the work.



Actually, Pixar uses an IBM-supplied Xeon blade server running Linux for its rendering.
  • Reply 31 of 57
Scott · Posts: 7,431 · member
Yes they do. The G5 Xserve may give it a run for its money though.
  • Reply 32 of 57
Scott · Posts: 7,431 · member
    This is interesting. I wonder if this simple analysis is true.



    Dell cluster costs 30x more than Apple's

    Quote:

    MacNN reader Dave Shroeder writes: "The University of Texas just rolled out a $38M Dell/Linux cluster that will achieve 3.7 Tflops, not even yet at full capacity. Compare that to Virginia Tech's $5.2M Apple/Mac OS X cluster that achieves 17.6 Tflops, constructed in 3 months. Cost per Tflop: Dell - $10.3 million; Apple - $295,000. It's almost shameful."
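The quoted figures are easy to check. A quick sketch, using the numbers as quoted in the MacNN item (the headline's "30x" actually works out closer to 35x):

```python
# Cost per teraflop for each cluster, figures as quoted above.
dell_cost, dell_tflops = 38_000_000, 3.7    # UT Dell/Linux cluster
apple_cost, apple_tflops = 5_200_000, 17.6  # Virginia Tech G5 cluster

dell_per_tflop = dell_cost / dell_tflops     # roughly $10.3 million
apple_per_tflop = apple_cost / apple_tflops  # roughly $295,000

ratio = dell_per_tflop / apple_per_tflop     # roughly 35x
```

So the "$10.3 million" and "$295,000" per-Tflop figures hold up; if anything, the 30x in the headline is conservative.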



  • Reply 33 of 57
amorph · Posts: 7,112 · member
    Quote:

    Originally posted by Scott

    This is interesting. I wonder if this simple analysis is true.



    Dell cluster costs 30x more than Apple's




    Ow. Ow. Ow.



Notice how, even though $3M in funding came from Virginia, the Austin, TX rag linked in the above item scrupulously avoids mentioning Virginia Tech's own little number. Wouldn't want to piss off "one of the Austin area's largest corporate employers," now would you? Not in the same article where Local Hero Mike Dell crows - er, avers... or maybe mentions: "The same servers we sell one or two at a time to our business customers are now being used in a huge cluster to achieve high-performance results."



    He sounds almost incredulous.



    Another gem from the article: Unlike VT, UT had to enlist the help of Cray to get their cluster up and running. VT only had to enlist students.
  • Reply 34 of 57
    Quote:

    Originally posted by Mac Force

    No, I think they are running a sort of Linux. Mac OS X is not ready for that kind of clustering.



Not only did they not run Linux, they had to port over all their own homegrown tools that they originally wrote for Linux, and even with this extra effort it was cheaper and easier for them to use Mac OS X on the G5 than it was to use any of the competitors.
  • Reply 35 of 57
anand · Posts: 285 · member
    Quote:

    Originally posted by Scott

    This is interesting. I wonder if this simple analysis is true.



    Dell cluster costs 30x more than Apple's






That is shameful. ATAT mentioned that the current cluster king, the NEC machine, cost $350 million to achieve 35.9 trillion calculations a second with 5,104 processors. The Apple cluster cost $5.2 million to achieve 17.9 trillion calculations. That is about 70 times the cost for double the performance. Think what they could achieve with $20 million! The other thing people mentioned is that when the G5s are replaced with faster machines, the old machines can still be useful as regular desktops. Brilliant!
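The ratios in this post check out, give or take. Sketching the arithmetic with the figures as posted:

```python
# NEC Earth Simulator vs. the Virginia Tech G5 cluster, figures as posted.
nec_cost, nec_tflops = 350e6, 35.9   # $350M, 35.9 Tflops, 5,104 processors
g5_cost, g5_tflops = 5.2e6, 17.9     # $5.2M, 17.9 Tflops

cost_ratio = nec_cost / g5_cost      # roughly 67x the cost...
perf_ratio = nec_tflops / g5_tflops  # ...for roughly 2x the performance
```

"70 times the cost" is closer to 67x with these numbers, but the point stands.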
  • Reply 36 of 57
rickag · Posts: 1,626 · member
    Quote:

    Originally posted by Tidris

    I would have selected Darwin instead of the full OSX. Darwin is free, and I don't see why all the eye-candy layers in the full OSX would be useful on machines that don't even have monitors connected to them.



While Darwin is free, Mac OS X would have come with each of the 1,100 G5s they purchased anyway.
  • Reply 37 of 57
    Quote:

    Originally posted by Scott

    This is interesting. I wonder if this simple analysis is true.



    Dell cluster costs 30x more than Apple's




    I was going to send this to slashdot as an anonymous blurb but someone beat me to it... doh!
  • Reply 38 of 57
alcimedes · Posts: 5,486 · member
IIRC, that also included the costs for a building and so on, not just the machines.
  • Reply 39 of 57
amorph · Posts: 7,112 · member
    Do we have a figure for how much the building cost at Va. Tech? I know they had to design and build a pretty sophisticated cooling system, racks, etc.



    Time to go looking...
  • Reply 40 of 57
applenut · Posts: 5,768 · member
    Quote:

    Originally posted by Tidris

    I would have selected Darwin instead of the full OSX. Darwin is free, and I don't see why all the eye-candy layers in the full OSX would be useful on machines that don't even have monitors connected to them.



Mac OS X has a headless mode, you know.



All that eye-candy crap won't take any CPU if it's not being used.