
NVIDIA unveils $1800 Quadro FX 4800 card for Apple's Mac Pro - Page 2

post #41 of 64
Quote:
Originally Posted by waffle911 View Post

I too am surprised that Nvidia is releasing the 4800 but not the 5800. Perhaps there is some business sense behind it, or perhaps the comment that more is to come is an indication that they're working on it.

But still. $1800? That's an Apple tax of $230 over the equivalent PC part at Newegg.

And no way was the 8800GT a "high-end" card. The 8800GT was to the 8800GTX what the Radeon HD 4870 512MB is to a factory overclocked 4870 1GB, or better yet, the new 4890. If I can fault Apple on anything, it's graphics options and prices.

AMD dropped the MSRP of the 512MB HD4870 to $149 before Apple released the new Mac Pro, making the 512MB HD4870 a decidedly mid-range GPU at release. The 1GB HD4870 was $199, and the HD4890 wasn't released yet. In comparison, I'm pretty sure that when Apple first introduced the 8800GT, the PC version was still retailing above $200, making it a performance-class card. At the very least, it would have been nice for Apple to have used the 1GB version of the HD4870, especially since the extra video memory is useful for OpenCL, seeing as nVidia's dedicated TESLA GPGPUs have anywhere from 1.5GB up to 6GB of video memory on a single card.
post #42 of 64
Quote:
Originally Posted by Dlux View Post

At least for now. Presumably Apple will update the 30" to use MDP, in which case NVIDIA may offer an equivalent version of this card with MDP ports instead. In theory it should even be less expensive to do so.

I'm guessing that this card was already in development while Apple was secretly making mDP. But given that the monitors you'd want to run with this card are 30" ACDs or third-party monitors with DL-DVI, this is a smart choice anyway. Can we expect both DL-DVI and mDP on cards like this in the future, or do you think Apple will update their 30" (or bigger) ACDs with both DL-DVI for legacy Mac Pros and mDP for new ones?
Dick Applebaum on whether the iPad is a personal computer: "BTW, I am posting this from my iPad pc while sitting on the throne... personal enough for you?"
post #43 of 64
Quote:
Originally Posted by marokero View Post

Lack of two full PCI Express 2.0 x16 slots? I thought those were already standard in the 2008 Mac Pro.

This is a good step forward, but as already mentioned, it would have been even better if Nvidia had released an FX 5800 Mac edition as well. If Nvidia can release a dual-platform-ready GPU, how hard would it be to make firmware available for some of the current GPUs so they can also be used in OS X, not just in Windows? I just put an EVGA GTX 295 in my MP; it's awesome and all, but if I want to run OS X I have to swap it for my old 8800GT.

The 2009 Mac Pro does indeed have two x16 electrical slots!

It also has two x4 slots.

But the first slot is double-wide, for those big cards, while all the others are single-width.
post #44 of 64
It would be useful to read this review from Ars Technica before dissing the 4870.

ON THE MAC this is a very good card for 3D programs. It even beats the FX 5600 in a number of areas, and sometimes by a good deal.

A lot of card performance is driver related. Hopefully, 10.5.7 will also give us updated drivers, as Apple often does with a point update. The newest drivers are much better.

Read the graphics and 3D sections.

http://arstechnica.com/apple/reviews...o-review.ars/4
post #45 of 64
Quote:
Originally Posted by melgross View Post

It would be useful to read this review from Ars Technica before dissing the 4870.

ON THE MAC this is a very good card for 3D programs. It even beats the FX 5600 in a number of areas, and sometimes by a good deal.

A lot of card performance is driver related. Hopefully, 10.5.7 will also give us updated drivers, as Apple often does with a point update. The newest drivers are much better.

Read the graphics and 3D sections.

http://arstechnica.com/apple/reviews...o-review.ars/4

I read the Ars article when it was first released and commented to the author about his claim that poor Mac game ports were responsible for the performance gap between OS X and Windows. In fact, Call of Duty 4, which he mentions, performs similarly between OS X and Windows on the HD3870 as tested by Barefeats, and Brad Oliver from Aspyr later supported this in his own comments to the author.

http://www.barefeats.com/harper22.html

Drivers, as you mention, play a crucial role, and nVidia drivers in particular often don't perform well in OS X, as witnessed by the large drop in Call of Duty 4 performance in OS X compared to Windows, while the HD3870 is fairly comparable between OSes.

http://www.barefeats.com/harper21.html

nVidia GPUs also have poor Core Image acceleration, with the HD3870 still beating the 8800GT on the current 10.5.6 drivers even though the 8800GT is theoretically more powerful. In fact, the HD2600XT was faster than the 8800GT in Core Image acceleration when they were first introduced together. Makes you wonder why Apple focused so much publicity on using nVidia GPUs.

All this just goes to show that some of the HD4870's lead over the FX 5600 may well be due to better AMD drivers.

http://arstechnica.com/apple/reviews...o-review.ars/6

And the Ars Technica article also supports my comments that Apple should have at least gone with the 1GB version of the HD4870. It's not just for games: 512MB is a limitation in current applications that use OpenGL acceleration, like Mudbox, which throws up a memory warning with the 512MB HD4870. Memory limitations will probably become more pronounced and common with high-end OpenCL applications. When Apple is charging $349 for a 512MB HD4870 that AMD set a $149 MSRP for, they could have doubled the video memory if they wanted to. Clearly, for whatever reason, they didn't, which isn't unreasonable since there are obviously lots of things they could have done. Still, that leaves the Mac Pro without a high-end consumer GPU, although it does have a professional GPU now.
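For anyone curious, this is exactly the sort of thing OpenCL lets an application check at runtime. Here's a minimal sketch against the OpenCL 1.0 spec Khronos ratified; it's untested, since Apple's implementation hasn't shipped yet, and the header path just assumes Apple's usual framework layout:

    #include <stdio.h>
    #include <OpenCL/opencl.h>   /* assumed Apple framework path; <CL/cl.h> elsewhere */

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id devices[4];
        cl_uint ndev = 0;

        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 4, devices, &ndev);

        for (cl_uint i = 0; i < ndev; ++i) {
            char name[128];
            cl_ulong vram = 0;
            clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[i], CL_DEVICE_GLOBAL_MEM_SIZE,
                            sizeof(vram), &vram, NULL);
            /* An app like Mudbox could warn here when a scene won't fit. */
            printf("%s: %llu MB of device memory\n",
                   name, (unsigned long long)(vram >> 20));
        }
        return 0;
    }

On a 512MB card that number is small enough that big textures and OpenCL working sets end up fighting over the same pool, which is the whole argument for the 1GB version.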
post #46 of 64
Quote:
Originally Posted by ltcommander.data View Post

I read the Ars article when it was first released and commented to the author about his claim that poor Mac game ports were responsible for the performance gap between OS X and Windows. In fact, Call of Duty 4, which he mentions, performs similarly between OS X and Windows on the HD3870 as tested by Barefeats, and Brad Oliver from Aspyr later supported this in his own comments to the author.

http://www.barefeats.com/harper22.html

Drivers, as you mention, play a crucial role, and nVidia drivers in particular often don't perform well in OS X, as witnessed by the large drop in Call of Duty 4 performance in OS X compared to Windows, while the HD3870 is fairly comparable between OSes.

http://www.barefeats.com/harper21.html

nVidia GPUs also have poor Core Image acceleration, with the HD3870 still beating the 8800GT on the current 10.5.6 drivers even though the 8800GT is theoretically more powerful. In fact, the HD2600XT was faster than the 8800GT in Core Image acceleration when they were first introduced together. Makes you wonder why Apple focused so much publicity on using nVidia GPUs.

All this just goes to show that some of the HD4870's lead over the FX 5600 may well be due to better AMD drivers.

http://arstechnica.com/apple/reviews...o-review.ars/6

And the Ars Technica article also supports my comments that Apple should have at least gone with the 1GB version of the HD4870. It's not just for games: 512MB is a limitation in current applications that use OpenGL acceleration, like Mudbox, which throws up a memory warning with the 512MB HD4870. Memory limitations will probably become more pronounced and common with high-end OpenCL applications. When Apple is charging $349 for a 512MB HD4870 that AMD set a $149 MSRP for, they could have doubled the video memory if they wanted to. Clearly, for whatever reason, they didn't, so the Mac Pro is left without a high-end consumer GPU, although it does have a professional GPU now.

The 4870 hardware is also more powerful than the hardware in the FX 4800, so that's something as well.

I'd like to see the new 4890 1GB card. That seems to give about another 15% increase in performance over the 4870, on average.

But I've noticed for several years that ATI's game cards tend to do better in video decoding quality and 3D pro app performance than Nvidia's game cards, while Nvidia's do better at games.

There is a difference in the way the companies approach the design of the cards, as well as the drivers.

If I had one card to use on both kinds of apps, I would choose the ATI. That's why I'm happy Apple chose the 4870 instead of the 285.

So far, the performance seems pretty good to me. Motion works well enough, and Archicad is smoother than I've ever had it run.

Of course, I'd really like to see a 2 GB 4890x2 card.

But, that's just a dream.

Some people are flashing some cards for use with the Mac Pro, and I hope to find out how well they work shortly.
post #47 of 64
Quote:
Originally Posted by melgross View Post

I'd like to see the new 4890 1GB card. That seems to give about another 15% increase in performance over the 4870, on average.

It's actually kind of interesting that the GTX 275 seems to be faster than the HD 4890 on average, so I think AMD might have shot too low by clocking the core at 850MHz and should have instead standardized on 900MHz, as they have on their official OC versions. It probably wouldn't have been too difficult, seeing as many RV790 cores seem to clock well to 1GHz.

On that note, it'd be great if AMD released a 1GB HD4890 Mac & PC Edition as an official stock super-overclocked version. They already support stock overclocking of the core to 900MHz. The Mac & PC Edition could be overclocked to a 1GHz core and 1.2GHz memory, up from 850MHz core and 975MHz memory, which is perfectly doable based on results from review websites. The advantage to AMD is that they could claim to have the fastest single-GPU graphics card, since this configuration would be faster on average than the GTX 285. At the same time, the higher price premium of Mac GPUs means AMD could still claim they aren't directly overlapping the HD4870 X2, which would still be faster and priced similarly in the PC sphere, but unavailable for Macs. And Apple would of course get a fast consumer GPU, with sufficient performance difference from the 512MB HD4870 to justify its existence, without having to support Crossfire. I guess this can be my dream for the next little while until DX11 GPUs arrive.

Quote:
Originally Posted by melgross View Post

Of course, I'd really like to see a 2 GB 4890x2 card.

But, that's just a dream.

Personally, I don't mind that Apple doesn't bother with SLI or Crossfire drivers. The fact is that SLI and Crossfire require custom profiles for each game or application; otherwise you are just as likely to see performance decrease compared to a single GPU. Seeing that Apple, AMD, and nVidia already seem to have trouble optimizing their drivers for one GPU, it's probably better that they don't spread their manpower even thinner creating SLI and Crossfire profiles for all the various OS X games and applications, and for each generation of GPU that arrives. It's just simpler to use the fastest single-GPU graphics card at the targeted price point rather than focusing on dual-or-more GPU options, and I'm sure most users prefer it this way as well.

With Grand Central and OpenCL, multiple GPUs will still have uses serving as separate processing targets. Four GT120s could each be dispatched a different task, rather than trying to SLI or Crossfire them together to act like a single mega-GPU given one task. Treating them as separate GPUs is probably more efficient for GPGPU usage, although not as beneficial to games. Then again, a game could still run the graphics on, say, the HD4870 and run physics on a GT120, so you get multi-GPU parallelism even if it isn't SLI or Crossfire.
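To make that concrete, here's roughly what per-card dispatch could look like under the ratified OpenCL 1.0 API. It's a sketch only, since Apple's implementation isn't out yet, and build_task() is a made-up placeholder for compiling whatever kernel you want each card to run:

    #include <OpenCL/opencl.h>   /* assumed Apple framework path; <CL/cl.h> elsewhere */

    /* Hypothetical helper: builds a different kernel for GPU number i. */
    extern cl_kernel build_task(cl_context ctx, cl_uint i);

    void dispatch_per_gpu(void)
    {
        cl_platform_id platform;
        cl_device_id gpus[4];      /* e.g. four GT120s */
        cl_uint ngpu = 0;

        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 4, gpus, &ngpu);

        /* One context spanning the GPUs, but one command queue per GPU:
           each card is a separate dispatch target, not half of a mega-GPU. */
        cl_context ctx = clCreateContext(NULL, ngpu, gpus, NULL, NULL, NULL);

        for (cl_uint i = 0; i < ngpu; ++i) {
            cl_command_queue q = clCreateCommandQueue(ctx, gpus[i], 0, NULL);
            cl_kernel task = build_task(ctx, i);   /* a different job per card */
            size_t global = 65536;
            clEnqueueNDRangeKernel(q, task, 1, NULL, &global, NULL,
                                   0, NULL, NULL);
            clFlush(q);   /* kick it off; all the cards crunch in parallel */
        }
    }

No Crossfire profile needed: the application decides what each GPU does, which fits the Grand Central way of thinking about work as independent tasks.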
post #48 of 64
The part that has me interested is the 192 parallel processing cores on this card and their use with CUDA for acceleration on Windows. Obviously I couldn't care less about Windows, but what I'm not experienced enough to know is: what kind of effect would that have on a Mac with Grand Central and OpenCL?

Am I off base in wanting to know how many MHz those cores run at, with the notion that it would be a major upgrade when it comes time to put most of their power toward a CPU-intensive task? I can imagine After Effects going much more quickly when rendering a final output. That would seem like more of a leap than a jump, so I have to be mistaken about something.
post #49 of 64
While the 192 cores are pretty nice to work with in CUDA, I just don't know how much Grand Central will actually use them. It seems odd to me that most of Apple's mid-to-high-end solutions are using ATI products, which will be much more difficult to support in Grand Central. ATI uses a competing standard to OpenCL, and as such it would seem a pain to integrate into a general-purpose computation library. This leads me to think the graphics cards will do little in Grand Central across the board. They wouldn't want the people who forked out the extra cash for the 4850 on the iMac and the 4870 on the Pro to actually have a slower user experience. At the same time, I can't see them doing the massive amount of extra work to support maybe 2% of the installed base.
post #50 of 64
Quote:
Originally Posted by markb View Post

While the 192 cores are pretty nice to work with in CUDA, I just don't know how much Grand Central will actually use them. It seems odd to me that most of Apple's mid-to-high-end solutions are using ATI products, which will be much more difficult to support in Grand Central. ATI uses a competing standard to OpenCL, and as such it would seem a pain to integrate into a general-purpose computation library. This leads me to think the graphics cards will do little in Grand Central across the board. They wouldn't want the people who forked out the extra cash for the 4850 on the iMac and the 4870 on the Pro to actually have a slower user experience. At the same time, I can't see them doing the massive amount of extra work to support maybe 2% of the installed base.

What are you talking about? You say the 192 cores in nVidia GPUs are good for CUDA, yet you claim ATI uses a competing standard to OpenCL?

ATI is working just as hard to support OpenCL as nVidia, if not harder. As a matter of fact, nVidia is still continuing to promote their own proprietary C for CUDA language, whereas ATI has made statements hoping the industry moves toward standards such as OpenCL. Both positions are sensible: C for CUDA is relatively successful, so nVidia might as well promote it, while ATI's Brook+ language hasn't really taken off, so it's in their best interest if developers move over to cross-compatible OpenCL and not C for CUDA. Based on positioning, ATI has more invested in OpenCL's success than nVidia.

http://www.amd.com/us-en/Corporate/V...130759,00.html

ATI just recently demonstrated their work with Havok, porting parts of the Havok physics engine to OpenCL. This of course means that Havok physics can be GPU accelerated not only on ATI GPUs but theoretically also on nVidia, Intel, PowerVR, or anyone else that supports OpenCL. In comparison, nVidia's PhysX engine is written in proprietary C for CUDA.
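That last point is the crux, so here's a sketch of why an OpenCL port is portable where a C for CUDA one isn't. The kernel source below is plain OpenCL C, handed to whatever driver is present and compiled at runtime for that device; again untested, since Apple's implementation hasn't shipped:

    #include <OpenCL/opencl.h>   /* assumed Apple framework path; <CL/cl.h> elsewhere */

    /* Plain OpenCL C source: nothing in here is vendor-specific. */
    static const char *src =
        "__kernel void scale(__global float *v, float k) {\n"
        "    size_t i = get_global_id(0);\n"
        "    v[i] *= k;\n"
        "}\n";

    /* Builds the same kernel for ANY conformant device, whether it's an
       HD4870, an 8800GT, or something else entirely. */
    cl_kernel build_scale(cl_context ctx, cl_device_id dev)
    {
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL); /* vendor's compiler runs here */
        return clCreateKernel(prog, "scale", NULL);
    }

The equivalent C for CUDA kernel gets compiled by nVidia's toolchain for nVidia hardware only, which is exactly why porting Havok to OpenCL matters.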
post #51 of 64
Quote:
What are you talking about?

I have several apps written to take advantage of CUDA on the Mac. It works. ATI/AMD might have done a good deal of catching up, but I was unimpressed with the Brook+ implementation, their hardware, and their forward vision for GPGPUs. Unless they have radically changed their tune in the last 6 months, nVidia should be miles ahead of them. They can pay all the lip service they want to it, and a port of a game engine is a nice demo, but only that.

If AMD delivers a great OpenCL driver I would be surprised, but good for them. Even then, I still see it being a massive support issue to deal with both nVidia and AMD in this incredibly new field.
post #52 of 64
Quote:
Originally Posted by markb View Post

If AMD delivers a great OpenCL driver I would be surprised, but good for them. Even then, I still see it being a massive support issue to deal with both nVidia and AMD in this incredibly new field.

I thought the entire point of Apple going to all the effort to develop OpenCL, submit it to Khronos for ratification, and push it through in record time was to deal with both nVidia and AMD GPUs? If Apple didn't want to deal with AMD, they could have just integrated C for CUDA support into Snow Leopard. It would certainly have saved Apple a lot of time and effort, since C for CUDA is already developed. Instead, C for CUDA isn't officially supported by Apple, although nVidia does provide their own driver, but OpenCL will be.
post #53 of 64
Interesting that Apple didn't feel the need to include mini DisplayPort. Apple obviously have a reason for that.

I suspect that there will be a lot of potential buyers out there who already have 30" Cinema Displays sitting in their studios. The problems associated with driving a Dual-Link 30" Cinema Display via mini DisplayPort and Apple's purpose-built adapter are well documented. In fact, I believe it's almost impossible to drive a non-Apple Dual-Link 30" display via mini DisplayPort.

So perhaps Apple are using Dual-Link DVI instead of mini DisplayPort simply for compatibility reasons?

So that might explain why Apple didn't go down a mini DisplayPort-only route, but it doesn't explain why they chose not to include any at all.

The fact that this card doesn't have any mini DisplayPorts whatsoever might suggest that we won't be seeing a new 30" display with mini DisplayPort from Apple any time soon... perhaps Apple have realised that a 30" version of the glossy 24" LED Cinema Display would be a step back in many respects for professional users, and are simply continuing production of the legacy model?
OK, can I have my matte Apple display, now?
post #54 of 64
Quote:
Originally Posted by Messiah View Post

Interesting that Apple didn't feel the need to include mini DisplayPort. Apple obviously have a reason for that.

I suspect that there will be a lot of potential buyers out there who already have 30" Cinema Displays sitting in their studios. The problems associated with driving a Dual-Link 30" Cinema Display via mini DisplayPort and Apple's purpose-built adapter are well documented. In fact, I believe it's almost impossible to drive a non-Apple Dual-Link 30" display via mini DisplayPort.

So perhaps Apple are using Dual-Link DVI instead of mini DisplayPort simply for compatibility reasons?

So that might explain why Apple didn't go down a mini DisplayPort-only route, but it doesn't explain why they chose not to include any at all.

The fact that this card doesn't have any mini DisplayPorts whatsoever might suggest that we won't be seeing a new 30" display with mini DisplayPort from Apple any time soon... perhaps Apple have realised that a 30" version of the glossy 24" LED Cinema Display would be a step back in many respects for professional users, and are simply continuing production of the legacy model?


This is not an Apple OEM card; it's a retail boxed NVIDIA card.
post #55 of 64
Is everyone posting here actually READING the article before posting? Because it doesn't look that way.

As we have noted several times now, this is NOT a card being produced by Apple.

This is a card being produced by NVIDIA!
post #56 of 64
Quote:
Originally Posted by ltcommander.data View Post

I thought the entire point of Apple going to all the effort to develop OpenCL, submit it to Khronos for ratification, and push it through in record time was to deal with both nVidia and AMD GPUs? If Apple didn't want to deal with AMD, they could have just integrated C for CUDA support into Snow Leopard. It would certainly have saved Apple a lot of time and effort, since C for CUDA is already developed. Instead, C for CUDA isn't officially supported by Apple, although nVidia does provide their own driver, but OpenCL will be.

Exactly.
post #57 of 64
Quote:
Originally Posted by melgross View Post

Is everyone posting here actually READING the article before posting? Because it doesn't look that way.

As we have noted several times now, this is NOT a card being produced by Apple.

This is a card being produced by NVIDIA!

Don't throw my ass in the mix.
post #58 of 64
Quote:
Originally Posted by markb View Post

I have several apps written to take advantage of CUDA on the Mac. It works. ATI/AMD might have done a good deal of catching up, but I was unimpressed with the Brook+ implementation, their hardware, and their forward vision for GPGPUs. Unless they have radically changed their tune in the last 6 months, nVidia should be miles ahead of them. They can pay all the lip service they want to it, and a port of a game engine is a nice demo, but only that.

If AMD delivers a great OpenCL driver I would be surprised, but good for them. Even then, I still see it being a massive support issue to deal with both nVidia and AMD in this incredibly new field.

You're out of the loop.

The latest offerings from AMD's Stream processing line:

http://ati.amd.com/technology/stream...ream_9270.html

Quote:
NEW! – AMD FireStream™ 9270
(Available Q1 of 2009)

Open Systems
  • Familiar 32 and 64 bit Linux and Microsoft® Windows® environments
  • OpenCL®-ready technology
  • High level tools from multiple 3rd party developers

Apple OS X will be on that list when Apple works with AMD/ATi to include FireStream GPUs as part of their BTO options.

It seems rational that an AMD/ATi Radeon HD 4890 driver with OpenCL 1.x support would be developed entirely by Apple, at least until AMD and Nvidia move to DisplayPort/DVI-I combos as standard.

Of course I could be wrong, but knowing Dean Reece and how he thinks, I imagine he's being very pragmatic as the head and originator of I/O Kit; he first developed it during the NeXT/Apple merger, when he got the chance to re-architect the model he'd first built at NeXT.

But I'll let AMD's blog speak for itself on what it thinks about CUDA vs. OpenCL.

http://blogs.amd.com/nigeldessau/tag/opencl/
post #59 of 64
Quote:
Originally Posted by mdriftmeyer View Post

Don't throw my ass in the mix.

Well, I was trying to not mention names, but I was referring to Messiah for one, though there have been a few others.
post #60 of 64
Quote:
Originally Posted by melgross View Post

Well, I was trying to not mention names, but I was referring to Messiah for one, though there have been a few others.

post #61 of 64
Quote:
Originally Posted by AppleInsider View Post

"With its sophisticated GPU architecture and industry leading features, the Quadro FX 4800 delivers a substantial boost in graphics performance and capabilities, allowing users to continue to push the boundaries of realism and performance in markets such as: architecture, content creation, science and medicine," NVIDIA said.

Architecture...science....medicine....bah! The real demand for this card will be driven by hard-core gamers!
post #62 of 64
Quote:
Originally Posted by s.metcalf View Post

Architecture...science....medicine....bah! The real demand for this card will be driven by hard-core gamers!

Like hell. Gamers aren't the driving force of the FX line; engineers [ME, ChemE, Materials, Geological, IEEE, ASCE, etc.], CAD designers, the bio-sciences, and many more are.

These aren't consumer-line cards. They are heavy on the OpenGL/OpenCL angle.
post #63 of 64
Most people have no idea how much frustration we 3D people suffer with screen redraws. All those textures and polygons are really taxing. And now that we have 64-bit hardware/software, we make the scenes more complicated again, with bigger textures and more polygons, which just blows away regular video cards. Having 3D artists sitting by and waiting for just the editor view to render because the graphics card is overloaded really sucks.

An $1800 graphics card for 3D Rendering and Animation will save hours of production time every week.

I agree with many: this card isn't going to speed up your Photoshop or InDesign noticeably, but for guys like me this is a HUGE plus.

Steve
post #64 of 64
Quote:
Originally Posted by Archiform 3D View Post

Most people have no idea how much frustration we 3D people suffer with screen redraws. All those textures and polygons are really taxing. And now that we have 64-bit hardware/software, we make the scenes more complicated again, with bigger textures and more polygons, which just blows away regular video cards. Having 3D artists sitting by and waiting for just the editor view to render because the graphics card is overloaded really sucks.

An $1800 graphics card for 3D Rendering and Animation will save hours of production time every week.

I agree with many: this card isn't going to speed up your Photoshop or InDesign noticeably, but for guys like me this is a HUGE plus.

Steve

I really wonder. In tests, the 4870 did about as well, and in many tests, better.

This is just one. I have to leave, so I don't have time to get the others, but they're about the same.

http://www.barefeats.com/nehal10.html