Apple XWorkstation: a scalable desktop supercomputer.

Comments

  • Reply 21 of 40
    vinney57 Posts: 1,162 member
    Quote:

    Originally posted by dobby

    So Apple making an XWorkstation (as per the thread) is pretty pointless if software developers aren't even taking the time to write/optimize existing software for the G5 (with or without AltiVec), even though it is superior hardware.

    Quite sad really.



    Dobby.






    Not at all. Remember that Apple have very respectable numbers in the pro markets, in the 25-60% region. Remember also that Apple make the most popular Video Editing software, the most popular Music Creation software, the most popular DVD Authoring software, the most popular Film Compositing software, and a bunch of other excellent stuff that is of course optimised for Macs. An ÜberPowerMac just makes the whole Apple Pro environment more appealing.



    My guess would be dual dual-core POWER5 variants in a 3/4U rack enclosure. There is just no 'Appleness' to be added to an off-the-shelf blade system, nor is there a big enough market for such a thing.
  • Reply 22 of 40
    Quote:

    Originally posted by slughead

    You're missing the point.



    I have a 10 node cluster to my left and a 1000 node cluster to my right... is the 1000 node cluster 100 times faster at ANY application? The answer is no. In fact, it's probably only 50-75 times faster at best.







    I agree that level of power is unnecessary in a workstation and would typically be massively under-utilized - but I think you're missing my point. A (mere) 8-node cluster is something that Apple should offer and, more importantly, could offer. And I am quite certain there would be demand.
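
    Slughead's scaling point holds up with rough numbers, by the way. A back-of-the-envelope Amdahl's-law sketch (the serial/communication fractions below are made up purely for illustration) shows why a 1000-node cluster is nowhere near 100x a 10-node one:

        # Amdahl's law: speedup = 1 / (serial + (1 - serial) / nodes)
        # The serial fractions here are invented for illustration only.
        def speedup(nodes, serial_fraction):
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

        for s in (0.001, 0.005, 0.02):
            print(f"serial={s:.3f}  10 nodes: {speedup(10, s):5.1f}x  "
                  f"1000 nodes: {speedup(1000, s):6.1f}x")
        # Even at serial=0.001 the 1000-node cluster comes out at roughly
        # 50x the 10-node one, not 100x.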



    Something like this could be offered in a conventional box - or a scalable format. But the scalable format would make it more attractive to purchasers and possibly more attractive to Apple's marketing machine.



    One more possibility for a slice/stack/pile format machine is an Intel/AMD slice - a "hardware version of VPC" with native Windows applications on the OS X desktop (shudder) might overcome the resistance of some Wintel-based users and allow them to make the switch.



    Carni
  • Reply 23 of 40
    Quote:

    Originally posted by onlooker

    I have to somewhat agree with the last poster. Putting a bunch of "Mac Mini Like" pieces of hardware together, #1, does not sound like an Apple solution.



    That's right. And another thing - what market are you going after? Businesses wouldn't want a cluster of minis - look at all the points of failure. Mess of wires. If you're going to get 10 of them, it would be cheaper and more efficient to go with blades.



    You are really only targeting geeks with this, who can afford a pro solution but still want to tell their buddies that they have a render farm at home.



    It would be great if Apple devised a solution letting you harness the power of extra computers you have, but I don't think they need a custom solution for this area.
  • Reply 24 of 40
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by gregmightdothat

    Depends on what you're using for a renderer. Most people use Mental Ray because it's the most accurate, and that's done entirely on the CPU. The GPU is pretty much only useful in preview mode (unless you're Pixar and can afford the technical people who can coordinate movie-quality hardware rasterization).



    I'm thinking more in terms of workspace rendering. That's what I see as the bigger benefit of having a Quadro (and SLI), besides an accurate preview render. Opening scene files with millions of polygons and trying to get around, manipulate, or get any kind of movement in a working environment is difficult, to say the least. Most of your scene can be handled by a pro 3D graphics card with lots of RAM, plus your system RAM, and virtual memory helps, but with what is available now it's not as efficient a setup as it could be.

    Once you get your scene opened and then start opening Hypershade and other windows, your system is likely to crash. Without it (depending on the number of polys in your file), just trying to get movement so you can throw everything on layers can wind up in a crash. I always put everything on layers, but more often than not when I re-open those scene files, the layers are there but nothing is on them, and they need re-assignment.

    But say you're talking about a single extremely high-poly model, or a NURBS or sub-d model. What layers?

    Anyway, I see your point. The CPU is the Mental Ray render engine, so to speak, and I'm very curious to see how RenderMan for Maya 1.0 turns out, but that's all in another thread, in another forum, for another time.
  • Reply 25 of 40
    None of the production quality rendering applications use GPU rendering *at all*.



    Not Mental Ray, not Lightwave, not EI, nor the toy renderers that ship with Max and Maya, and certainly not RenderMan. In many ways Pixar are *behind* the curve.



    The only product that currently does that is NVIDIA's Gelato, which is shipped only as an API. And as far as I know, no one has produced much with it.



    It's not the case with 2D, where hardware rendering of some kind is making its way into Shake, After Effects and others.



    Carni
  • Reply 26 of 40
    Quote:

    Originally posted by Carniphage

    None of the production quality rendering applications use GPU rendering *at all*.



    Not Mental Ray, not Lightwave, not EI, nor the toy renderers that ship with Max and Maya, and certainly not RenderMan. In many ways Pixar are *behind* the curve.



    The only product that currently does that is NVIDIA's Gelato, which is shipped only as an API. And as far as I know, no one has produced much with it.



    It's not the case with 2D, where hardware rendering of some kind is making its way into Shake, After Effects and others.



    Carni




    I've heard some people complain about *accuracy* issues with GPU-based rendering, even in preview mode...



    "toy renderers" that ship with Max and Maya... you didn't include Lightwave, obviously you like the LW renderer? hmm... i do so warm and soft and glowy and stuff



    ...I may not have made much sense, but this thread also does not make much sense; it suddenly jumped into GPU rendering from Mac Mini clustering??? ...
  • Reply 27 of 40
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by Carniphage

    None of the production quality rendering applications use GPU rendering *at all*.



    Not Mental Ray, not Lightwave, not EI, nor the toy renderers that ship with Max and Maya, and certainly not RenderMan. In many ways Pixar are *behind* the curve.



    The only product that currently does that is NVIDIA's Gelato, which is shipped only as an API. And as far as I know, no one has produced much with it.



    It's not the case with 2D, where hardware rendering of some kind is making its way into Shake, After Effects and others.



    Carni




    Who and what are you replying to? And what are you talking about? You can't render particles with software renderers.

    And there are cards that do complete hardware rendering. Probably the best pro 3D card available for the G5 PowerMac is a hardware render card from ARTVPS.

    http://www.artvps.com/news.ihtml?pag...cle&pressid=69





    The times in Maya and Mental Ray, even for mere previews, are 100x+ faster in some cases. http://www.artvps.com/comparisons.ihtml?groupid=3
  • Reply 28 of 40
    Quote:

    Originally posted by onlooker

    Who and what are you replying to? And what are you talking about? You can't render particles with software renderers.

    And there are cards that do complete hardware rendering. Probably the best pro 3D card available for the G5 PowerMac is a hardware render card from ARTVPS.



    You...



    You seem to be under the impression that the film & TV world use GPUs to do rendering. And you keep posting to that effect. Although there is some hardware out there, no one uses it yet. The technology is still in its infancy and not integrated into rendering pipelines. The most comprehensive solution is Gelato, and it's still very much in beta.



    I am surprised by your "you can't render particles with software renderers". I must have hallucinated all those Lightwave explosions.



    Carni
  • Reply 29 of 40
    You guys are missing Carni's point. Use your imaginations! Carni is not suggesting a pile of minis; that would suck. The idea is a base module (perhaps like the Xserve, but with real graphics) with modular (i.e. no wires, some kind of standard connector) processing and storage blocks. Adding power or storage would be like adding expansion-bay drives to a laptop. Apple could even lease them to you.

    Granted, the whole thing wouldn't be as fast as a one-chassis machine with the same components, but it would rock for someone like me who needs scalable power and a moderate entry fee.

    john
  • Reply 30 of 40
    Quote:

    Originally posted by the cool gut

    That's right. And another thing - what market are you going after? Businesses wouldn't want a cluster of minis - look at all the points of failure. Mess of wires. If you're going to get 10 of them, it would be cheaper and more efficient to go with blades.





    The cluster of minis is "an analogy" more than an actual description.

    This *is* a blade design, but a compact blade format intended for desktop rather than rackmount use - so for "mini-mac" read "compact blade" if that makes you happier. A dream format would be an 8"x8"x2" unit for each module. The G5's thermal characteristics may say otherwise.



    Any logical hardware design for a desktop cluster machine would require each element/slice/blade module to snap together with a fibre-channel connector - or something better. No spaghetti.



    I can't speak for all businesses. But from my business point of view, a machine in which the processor power can be expanded as easily as adding a FireWire disk drive is certainly a much safer investment than the current generation of Powermacs.



    Is there a market? Yes - and doubly so if the Powermac is also retired. The niche for this kind of product starts with the current Powermac market and heads upwards, and ends with something like a 6-CPU x dual-core cluster.



    The point of scalability is a simple one: everyone needs a different level of power. Who needs somewhere between a single 1.8GHz G5 and 12 times that power? Answer: almost everyone.



    The market is Pro Audio, Pro Edit, Pro Compositing, Pro Rendering. With FCP & Shake making big inroads into movie production, something faster than a Powermac would be very well received.
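
    Rendering in particular is the poster child for scalability here - frames are independent jobs with almost no serial overhead, so the work divides nearly perfectly over however many modules you snap on. A toy sketch (purely illustrative; "render_frame" is a stand-in, not any real renderer's API):

        # Toy frame-farm dispatcher: independent frames spread across N workers.
        # "render_frame" is a placeholder, not a real renderer API.
        from multiprocessing import Pool

        def render_frame(frame):
            return f"frame_{frame:04d}.exr"   # stand-in for an expensive render

        def render_sequence(frames, nodes):
            with Pool(processes=nodes) as pool:   # one worker per module/slice
                return pool.map(render_frame, frames)

        if __name__ == "__main__":
            print(render_sequence(range(1, 9), nodes=4))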



    Carni.
  • Reply 31 of 40
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by Carniphage

    You...



    You seem to be under the impression that the film & TV world use GPUs to do rendering.



    Carni




    At what point did I say that?



    Did you not read what my post said? I never even referred to TV or film; it was referring to workflow and screen renders - like the kind you see while you're typing, but from within a 3D application while moving large scenes with massive amounts of polygons. Start reading what people are writing before you go jumping in their face.



    Quote:

    Originally posted by Carniphage





    I am surprised by your "you can't render particles with software renderers". I must have hallucinated all those Lightwave explosions.



    Carni




    My mistake. I was referring to Maya because that was the application under discussion. I never bothered talking about Lightwave, because the hobbit from the Shire brought up Maya, not Lightwave. I guess that's my freaking problem too. I'm just trying to follow the application under discussion, and it also happens to be the one I use. So freaking back off!



    If you want some reference, here is what I was saying.



    Quote:

    Originally posted by onlooker

    I'm thinking more in terms of workspace rendering. That's what I see as the bigger benefit of having a Quadro (and SLI), besides an accurate preview render. Opening scene files with millions of polygons and trying to get around, manipulate, or get any kind of movement in a working environment is difficult, to say the least. Most of your scene can be handled by a pro 3D graphics card with lots of RAM, plus your system RAM, and virtual memory helps, but with what is available now it's not as efficient a setup as it could be.

    Once you get your scene opened and then start opening Hypershade and other windows, your system is likely to crash. Without it (depending on the number of polys in your file), just trying to get movement so you can throw everything on layers can wind up in a crash. I always put everything on layers, but more often than not when I re-open those scene files, the layers are there but nothing is on them, and they need re-assignment.

    But say you're talking about a single extremely high-poly model, or a NURBS or sub-d model. What layers?

    Anyway, I see your point. The CPU is the Mental Ray render engine, so to speak, and I'm very curious to see how RenderMan for Maya 1.0 turns out, but that's all in another thread, in another forum, for another time.




  • Reply 32 of 40
    Quote:

    Originally posted by onlooker

    At what point did I say that?



    Did you not read what my post said? I never even referred to TV or film; it was referring to workflow and screen renders - like the kind you see while you're typing, but from within a 3D application while moving large scenes with massive amounts of polygons. Start reading what people are writing before you go jumping in their face.







    Forgive me. I thought your previous post sounded very face jumpy.



    So you are referring to real-time display rather than production rendering?



    I must have gained the wrong impression when you suggested Pixar was using GPU rendering and then went on to post a link to a hardware ray-tracing card.



    Carni
  • Reply 33 of 40
    Pixar, I believe, has been slowly moving their renderer to hardware. For speed, their renderer uses the same technique that GPUs use. The advanced lighting and such is pre-calculated, and they do lots of compositing to enhance stuff.
  • Reply 34 of 40
    Quote:

    Originally posted by gregmightdothat

    Pixar, I believe, has been slowly moving their renderer to hardware. For speed, their renderer uses the same technique that GPUs use. The advanced lighting and such is pre-calculated, and they do lots of compositing to enhance stuff.



    That's news to me.

    Last time I was there, they eschewed compositing - and were still in two minds about inverse kinematics!



    They'll be having asset control next!



    Carni
  • Reply 35 of 40
    slughead Posts: 1,169 member
    Quote:

    Originally posted by Carniphage

    That's news to me.

    Last time I was there, they eschewed compositing - and were still in two minds about inverse kinematics!



    They'll be having asset control next!



    Carni




    Did you just make those words up?
  • Reply 36 of 40
    Quote:

    Originally posted by slughead

    Did you just make those words up?



    Um... what words? 'Eschewed'? 'Asset control'??
  • Reply 37 of 40
    Quote:

    Originally posted by Carniphage



    (pixar hardware rendering....)

    That's news to me.

    Last time I was there, they eschewed compositing - and were still in two minds about inverse kinematics!

    ...

    Carni




    Remember that Stevie J said that Pixar is the most technologically advanced CREATIVE company. So on a project-to-project basis, depending on the story, direction, characters, etc., they'll probably consider various options.



    There's also probably an R&D playhouse dept., some techniques from which may end up in the final cut, some may not...



    How was your Pixar visit/stay/gig?
  • Reply 38 of 40
    Forgive my rapid-fire posts,



    but maybe in 5 years' time I think we'll start to see some significant hardware-based rendering going on in production workflows... GPUs will be advanced enough at that stage that initial hardware renders can then be handed off to software for prettying up / final passes / etc...



    Remember that there should still be space for software renderers, because movies and animated films won't want to just look like a computer game.



    I mean, previsualization for movies, games, TV, etc... that's surely all hardware-based now...



    interesting times ahead....
  • Reply 39 of 40
    Quote:

    Originally posted by sunilraman

    Forgive my rapid-fire posts,



    but maybe in 5 years' time I think we'll start to see some significant hardware-based rendering going on in production workflows... GPUs will be advanced enough at that stage that initial hardware renders can then be handed off to software for prettying up / final passes / etc...



    Remember that there should still be space for software renderers, because movies and animated films won't want to just look like a computer game.



    interesting times ahead....




    Oh, I totally agree. I wrote a hardware-based radiosity solver a few years ago (before floating-point buffers), and I am astonished that, now that the hardware has caught up, it is not used extensively.



    I heard that DreamWorks are making inroads.



    There is not much possibility of hardware rendering looking like video games. The implementations are typically the self-same algorithms executed either partially or totally on a GPU rather than on a CPU. They should look the same.
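
    To make that concrete with a throwaway sketch of my own (not anyone's production code): the Lambert diffuse term a fragment shader evaluates per pixel is the identical arithmetic a software renderer evaluates on the CPU, so moving it to the GPU changes the speed, not the look.

        # Same Lambertian diffuse term whether it runs on a CPU or in a shader:
        #     diffuse = max(dot(N, L), 0) * albedo
        import math

        def normalize(v):
            length = math.sqrt(sum(c * c for c in v))
            return tuple(c / length for c in v)

        def lambert(normal, light_dir, albedo):
            n, l = normalize(normal), normalize(light_dir)
            n_dot_l = max(sum(a * b for a, b in zip(n, l)), 0.0)
            return tuple(n_dot_l * c for c in albedo)

        # A GLSL fragment shader would evaluate the same expression:
        #     vec3 diffuse = max(dot(N, L), 0.0) * albedo;
        print(lambert((0.0, 0.0, 1.0), (0.3, 0.4, 0.87), (0.8, 0.2, 0.2)))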



    This is the best example I could find (*warning* it's not all that impressive)



    http://www.tweakfilms.com/main/movies/walrus.mov





    Carni
  • Reply 40 of 40
    Quote:

    Originally posted by Carniphage

    Oh, I totally agree. I wrote a hardware-based radiosity solver a few years ago (before floating-point buffers), and I am astonished that, now that the hardware has caught up, it is not used extensively.



    I heard that DreamWorks are making inroads.



    There is not much possibility of hardware rendering looking like video games. The implementations are typically the self-same algorithms executed either partially or totally on a GPU rather than on a CPU. They should look the same.



    This is the best example I could find (*warning* it's not all that impressive)



    http://www.tweakfilms.com/main/movies/walrus.mov





    Carni




    Hey, cool. Just to clarify, I meant that, let's say 5-10 years from now, a video game (i.e. primarily real-time hardware rendering) could look pretty much like FINDING NEMO does today.



    But of course, if you are producing a CG-based movie 5-10 years from now, you will want to *set yourself apart* by going one up on video games, with whatever strategy makes things look even more impressive than your 'standard' video game...



    Whatever techniques are used, visual effects and CG-movie people will probably always have the upper hand, because they have the luxury of not having to deliver a fully fleshed-out environment in real time.