Nvidia, Apple, Comdex?

Posted in Future Apple Hardware · edited January 2014
Over the summer the CEO of Nvidia commented about them and Apple having something to announce around this time... well, nothing has been announced so far. The NV30 is due to be announced at Comdex. Could the NV30 have anything to do with a future Apple product? The only thing that comes to mind is an Xserve renderfarm which would contain more than one NV30, i.e. dual AGP slots... or perhaps whatever they had to announce got postponed or cancelled because of the economy.

Comments

  • Reply 1 of 15
    [quote]Originally posted by Producer:
    The only thing that comes to mind is an Xserve renderfarm which would contain more than one NV30, i.e. dual AGP slots...[/quote]

    Why would you want dual graphics cards in a renderfarm? :confused:
  • Reply 2 of 15
    I believe that with Nvidia's new NV30 you will actually be able to do your final rendering on the graphics card (or multiple graphics cards). My knowledge is not in this area, so hopefully someone will chime in... also take this into account: Exluna (http://www.exluna.com/)
  • Reply 3 of 15
    ryukyu Posts: 450 member
    Could be that Apple is going to use nVidia's mobo chipset.
  • Reply 4 of 15
    But in what product?
  • Reply 5 of 15
    henriok Posts: 537 member
    nVidia nForce in Macs?

    And why on earth hasn't anyone tried using GPUs for final rendering to disk before?
  • Reply 6 of 15
    programmer Posts: 3,467 member
    The Radeon 9700 and NV30 are the first GPUs (at the consumer level, at least) which support floating point frame buffer formats. This means they can do multi-pass rendering without losing precision between passes, just like a CPU-based renderer. They can also use much larger vertex and pixel shader programs, and the programs are now more flexible and expressive. I've been saying it for quite a while now: these new boards are monsters.
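    As a rough sketch of what a floating point render target buys you -- written against the framebuffer-object API that was standardized later, since in 2002 you'd be juggling pbuffers and vendor float extensions; draw_pass() here is just a hypothetical stand-in:

    [code]
    /* Rough sketch: accumulate several passes into a floating point
     * off-screen target so nothing is quantized to 8 bits between
     * passes. Assumes a current GL context and an extension loader
     * (e.g. GLEW); uses the framebuffer-object API that came later.
     * draw_pass() is a hypothetical stand-in for real draw calls. */
    #include <GL/glew.h>

    GLuint make_float_target(int w, int h)
    {
        GLuint fbo, tex;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* 32-bit float per channel: full precision between passes */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0,
                     GL_RGBA, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        return fbo;
    }

    void render_multipass(GLuint fbo, int npasses)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);     /* additive accumulation */
        for (int i = 0; i < npasses; i++) {
            /* draw_pass(i); */
        }
        /* read back with glReadPixels(..., GL_FLOAT, ...) if needed */
    }
    [/code]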



    The main question about the Xserve and NV30 is not whether it makes sense, but whether the NV30 runs too hot for the Xserve... even one of them. Two (or more) of them will really push the envelope.



    When the rendering programs are updated to use the new GPUs (and OpenGL 1.4 in Jaguar supports this) we ought to see a tremendous leap in Mac 3D rendering performance. The PC will leap too, but since both use the same GPUs this should benefit the Mac far more, largely making up for the G4's shortcomings. Even better, Apple is being aggressive about optimizing Mac OS X's OpenGL implementation in terms of performance and features.



    There'll be the usual bunch of naysayers complaining about the lack of double precision floating point in the GPUs but frankly, in the vast majority of cases, it's not actually needed -- they just haven't done the work to figure out why their code is squandering precision.
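    To make that concrete, here's a toy example in plain C (nothing GPU-specific): naive single-precision accumulation drifts badly, while Kahan's compensated summation recovers almost all of it without ever touching a double:

    [code]
    /* Precision being "squandered" and then recovered, all in float. */
    #include <stdio.h>

    int main(void)
    {
        const int   n    = 10000000;    /* ten million terms */
        const float term = 0.1f;
        float naive = 0.0f, sum = 0.0f, c = 0.0f;

        for (int i = 0; i < n; i++) {
            naive += term;              /* precision leaks away */

            float y = term - c;         /* Kahan: re-inject the lost bits */
            float t = sum + y;
            c = (t - sum) - y;
            sum = t;
        }
        /* naive drifts well past the mark; Kahan stays at ~1,000,000 */
        printf("naive: %f   kahan: %f   expected: %f\n",
               naive, sum, n * 0.1);
        return 0;
    }
    [/code]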
  • Reply 7 of 15
    programmer Posts: 3,467 member
    [quote]Originally posted by ryukyu:
    Could be that Apple is going to use nVidia's mobo chipset.[/quote]



    No.
  • Reply 8 of 15
    strobe Posts: 369 member
    [quote]Originally posted by Programmer:
    The Radeon 9700 and NV30 are the first GPUs (at the consumer level, at least) which support floating point frame buffer formats.[/quote]



    What about rendering Bézier paths?



    Also, what happened to OpenGL 2.0?
  • Reply 9 of 15
    [quote]Originally posted by Programmer:
    The main question about the Xserve and NV30 is not whether it makes sense, but whether the NV30 runs too hot for the Xserve... even one of them. Two (or more) of them will really push the envelope.[/quote]



    Well, I have an Xserve (dual gig) and the blowers are quite capable of cooling the system. The blower over the PCI/AGP slot is doing almost nothing on mine, judging by how stable the temperature is.



    Compared to the 1U Xeon systems, there's a *lot* less heat being moved from the Xserve. Assuming it can tolerate the same steady-state internal temps as the Xeons (not necessarily a safe assumption), I'd say the system could move about 3x the heat that it does now.



    The two 1GHz G4s push, what, about 50 watts? Would a single NV30 dissipate more than that, or is it one of those plug-into-the-wall beasts?
  • Reply 10 of 15
    Would it be that crazy for Apple to farm out chipset design and production to Nvidia, like for future iMacs and eMacs, if it saved them cash? If Nvidia will develop complete Athlon motherboards with GeForce4 MX, 10/100, DDR, USB 2.0, FireWire, etc. that sell for $110 retail, they might be able to do it profitably for Apple. As it is now, I think they farm out chipset production to smaller companies.
  • Reply 11 of 15
    matsu Posts: 6,558 member
    Programmer, you bring up an interesting point: GPU tech may do more to level the playing field than CPU tech. So while the PPC970 is needed, the future performance gap might be decided by those best positioned to take advantage of GPU enhancements, both in the OS and for more serious 'work' by other software (not games).



    Makes me wonder when we'll see a card that handles (or should I say, actually uses) 8x AGP bandwidth, and how long before mobo tech has to move on to the next new internal bus, PCI Express or whatever?
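    (Back-of-the-envelope: AGP 8x is the 66 MHz base clock × 8 transfers per clock × 4 bytes, so roughly 2.1 GB/s peak -- plenty of headroom for today's cards.)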
  • Reply 12 of 15
    1 more day [Chilling]
  • Reply 13 of 15
    ed m. Posts: 222 member
    [[[Even better, Apple is being aggressive about optimizing MacOS X's OpenGL implementation in terms of performance and features.]]]



    From what I've been told, they are only at the beginning of the optimizations. Supposedly there is far more they can do.



    [[[in the vast majority of cases, it's not actually needed -- they just haven't done the work to figure out why their code is squandering precision.]]]



    Ahhhh! I've posted this before on these forums. (Just do a search for my past posts)

    Essentially Programmer is right. Double precision is not NEEDED EVERYWHERE. It's possible it isn't even needed at all. Besides, we wouldn't want this feature in our SIMD unit UNLESS VMX/AltiVec was bumped to 256-bit. Again, this was also mentioned in one of my prior posts, the early G5 discussions perhaps. The point is that double precision is often used to cover up a lot of lazy/bad code that just happens to work, but only because double precision leaves more headroom for error. Most programmers are depending on the hardware to make up for the shortcomings of their code. It's likely that there are some excellent single-precision algorithms that produce results as good as double-precision calcs. However, the story might be a little different for CAD applications, where the precision actually is needed. It makes you stop and think, though.
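    Here's a tiny made-up illustration of what I mean: the textbook quadratic formula throws away the small root in single precision through cancellation, while an algebraically equivalent rewrite gets it back -- same float hardware, better algorithm:

    [code]
    /* Solving x^2 - 10000x + 1 = 0 in single precision.
     * The textbook formula cancels to zero for the small root;
     * an equivalent rewrite recovers it -- no doubles anywhere. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        float b = -10000.0f, c = 1.0f;         /* a == 1 */
        float s = sqrtf(b * b - 4.0f * c);     /* sqrt of discriminant */

        float bad  = 0.5f * (-b - s);          /* (-b - s) cancels: gives 0 */
        float big  = 0.5f * (-b + s);          /* large root: no cancellation */
        float good = c / big;                  /* product of the roots is c/a */

        printf("naive: %.9g   stable: %.9g   true: ~1.00000001e-4\n",
               bad, good);
        return 0;
    }
    [/code]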



    On another note... I wonder if Apple has found a way to harness the power of "multiple" graphics/video cards on the same system the same way multiple processors are currently utilized. Anyway, it's just another thought. Could something like this be done?



    --

    Ed M.
  • Reply 14 of 15
    [quote]Originally posted by Ed M.:
    On another note... I wonder if Apple has found a way to harness the power of "multiple" graphics/video cards on the same system the same way multiple processors are currently utilized. Anyway, it's just another thought. Could something like this be done?[/quote]



    This is definitely possible... Back in the day I had a Monster 3D II card in my PC. It was a 3D-only add-in card with something like 8MB RAM. It hooked into the regular video card with an external VGA cable. Anyway, point being, you could buy a second Monster card, tie them together with another VGA cable, and get effectively double the 3D performance... if only it could be done through an 8x AGP bus.
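    Conceptually the trick looked something like this toy sketch, with render_line() standing in for whatever one card produced (the real merge happened in hardware over that pass-through cable):

    [code]
    /* Toy model of Voodoo2-style scan-line interleaving: "card" 0
     * renders the even lines, "card" 1 the odd lines, and the two
     * halves are merged into one frame. render_line() is a stand-in
     * for whatever a real card would produce. */
    #include <stdio.h>
    #include <string.h>

    #define W 8
    #define H 8

    static void render_line(int card, char *line)
    {
        memset(line, card ? 'B' : 'A', W);   /* placeholder "pixels" */
    }

    int main(void)
    {
        char frame[H][W + 1];

        for (int y = 0; y < H; y++) {
            render_line(y & 1, frame[y]);    /* even rows: card 0, odd: card 1 */
            frame[y][W] = '\0';
        }
        for (int y = 0; y < H; y++)
            printf("%s\n", frame[y]);        /* merged output alternates A/B */
        return 0;
    }
    [/code]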



    :cool:
  • Reply 15 of 15
    [quote]Originally posted by Ed M.:
    ... I wonder if Apple has found a way to harness the power of "multiple" graphics/video cards on the same system the same way multiple processors are currently utilized. Anyway, it's just another thought. Could something like this be done?

    --
    Ed M.[/quote]



    I would imagine that it would be better to use one video card with multiple GPUs on board. That way you would only be using one slot's worth of bus bandwidth to communicate with the system (remember that you only get a single AGP slot), and the GPUs can communicate with each other on the same card.
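    Something like this, conceptually -- a toy sketch where threads stand in for the on-board GPUs and the per-band work is just placeholder shading; each GPU takes a band of the frame and only the finished image would ever cross the bus:

    [code]
    /* Toy model of the "one card, several GPUs" idea: each GPU takes
     * a horizontal band of the frame, they work in parallel, and only
     * the assembled frame would ever cross the bus. Threads stand in
     * for GPUs; the shading below is a hypothetical placeholder. */
    #include <pthread.h>
    #include <stdio.h>

    #define W    640
    #define H    480
    #define NGPU 2

    static float frame[H][W];

    static void *gpu_main(void *arg)
    {
        int gpu  = (int)(long)arg;
        int band = H / NGPU;

        for (int y = gpu * band; y < (gpu + 1) * band; y++)
            for (int x = 0; x < W; x++)
                frame[y][x] = (float)gpu;    /* placeholder shading */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NGPU];

        for (long g = 0; g < NGPU; g++)
            pthread_create(&t[g], NULL, gpu_main, (void *)g);
        for (int g = 0; g < NGPU; g++)
            pthread_join(t[g], NULL);

        printf("assembled %dx%d frame from %d bands\n", W, H, NGPU);
        return 0;
    }
    [/code]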