iMac GPU: 9400M vs. 2400XT
I'm trying to decide between the current 2.66GHz 20" iMac (NVIDIA Geforce 9400M for graphics), and the previous generation 2.4GHz 20" (ATI Radeon HD 2400XT). The difference in price here is quite substantial, so unless there's compelling reason to go for the newly introduced one I'll put my money on the previous version.
I've already checked out the one and only benchmark I've come across so far, but it doesn't take the GPU into consideration.
So I'm wondering if I would be losing out on much by buying the HD2400XT equipped iMac over the 9400M, or if there isn't much improvement anyway?
As far as I understand, Snow Leopard with its OpenCL will take advantage of the computer's graphics processor for things other than just graphics, meaning a more powerful GPU is in order. Would the two computers mentioned here make much difference in that respect?
Comments
I benchmarked my 2009 Mac mini (4GB RAM, 9400M) and got an average Cinebench graphics score of 3869.
This would suggest the 9400M is slightly slower than the 2400XT. It's not the first time Apple has downgraded the GPU in a new model.
As for OpenCL, the 2400XT is compatible with AMD's Stream SDK so I would assume that it's also OpenCL compatible. I can't say for certain though. The 9400M definitely is.
Apple, however, claims graphics performance gains in the new iMac systems compared to the old ones, even with the 9400M.
Given that they say how they tested, I doubt they're lying, because the results can be replicated:
Testing conducted by Apple in February 2009 using preproduction 2.66GHz Intel Core 2 Duo-based 20-inch iMac units with NVIDIA GeForce 9400M (256MB VRAM shared). 20-inch iMac systems with 2.4GHz Intel Core 2 Duo and ATI Radeon HD 2400 XT (128MB VRAM) were shipping units. Call of Duty 4 v1.7.1 tested using Timedemoambush, Timedemobog, and Timedemopipeline with standard graphics quality at 1280x800. Quake 4 v1.3 tested using Demo001.netdemo and high graphics quality at 1280x800. Performance tests are conducted using specific computer systems and reflect the approximate performance of iMac.
Cinebench is a different benchmark as is XBench. In many ways, using CoD, Quake and Motion is better than a synthetic benchmark.
Plus I got some oddball results with XBench that I wasn't sure were right.
The 9400M is definitely slower than the 2400XT.
My tests:
Cinebench OpenGL test
MacBook 2ghz 9400M
~4000
iMac 20" 2.4ghz 2400XT
~4900
Xbench
MacBook 2ghz 9400M
Quartz: 157 OpenGL: 138
iMac 20" 2.4ghz 2400XT
Quartz: 170 OpenGL: 180
As for OpenCL and Snow Leopard, I think it's another year before we really see the benefits of that... And we don't know if the 2000-series ATI will be supported or not, it could be, it may not be.
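To put some numbers on the gap, the relative difference between the two cards can be worked out from the approximate scores quoted above (higher is better in all three tests); a quick sketch:

```python
# Approximate scores quoted above, as (9400M, 2400XT) pairs.
scores = {
    "Cinebench OpenGL": (4000, 4900),
    "Xbench Quartz":    (157, 170),
    "Xbench OpenGL":    (138, 180),
}

for test, (gf9400m, hd2400xt) in scores.items():
    gain = (hd2400xt - gf9400m) / gf9400m * 100
    print(f"{test}: 2400XT ahead by {gain:.1f}%")
# 2400XT ahead by 22.5%, 8.3%, and 30.4% respectively
```

So on these synthetic tests the 2400XT's lead ranges from under 10% (Quartz) to around 30% (Xbench OpenGL).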
Apple has done a naughty hack here. They're testing at a 1280x800 resolution with only *TWO* gaming apps.
1280x800
The 20" runs at 1680x1050
The 24" runs at 1920x1200
Run the 9400M vs. the 2400XT at those resolutions, testing AA, DX10, DX9, and OpenGL on the Mac, with a wider range of games.
My gut feeling is the 2400XT will win out.
But I would love to see proper benchmarks. I agree that Cinebench and XBench alone are not ideal for testing.
But they ran 1280x800 on both units, so it's still an even comparison.
Not exactly. Firstly, a card can perform equally well or better than another card at lower resolutions, yet at higher resolutions one card tends to outpace the other.
Secondly, 1280x800 is nowhere near the native resolution of the 20" iMac (1680x1050) or the 24" iMac (1920x1200). When benchmarking any GPU across this range of resolutions, you really see the differences between one card and another.
This is why all decent GPU benchmarking sites present 1280x1024, 1680x1050 and 1920x1200, and even 2560x1600, to get a more complete comparison of cards.
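A quick sketch of the pixel counts involved makes the point concrete, since per-frame GPU load scales roughly with the number of pixels rendered:

```python
# Pixel counts for the resolutions discussed above, relative to the
# 1280x800 setting Apple used in its published tests.
resolutions = [(1280, 800), (1280, 1024), (1680, 1050), (1920, 1200), (2560, 1600)]
base = 1280 * 800

for w, h in resolutions:
    pixels = w * h
    print(f"{w}x{h}: {pixels:,} pixels ({pixels / base:.2f}x the test setting)")
```

The 20" iMac's native panel pushes about 1.72x as many pixels as Apple's 1280x800 test setting, the 24" panel 2.25x, and 2560x1600 fully 4x.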
Could it be that 1280x800 was where 9400M beat the 2400XT but the 2400XT beat the 9400M at 1680x1050 and above? That's why Apple only used 1280x800 benchmarks?
Finally, comparing an integrated GPU driving 3D graphics at 1680x1050 and above against a discrete card with dedicated VRAM, again I'm leaning towards the discrete card.
What somebody needs to do is take a game like C&C3 or Spore and run it on the old iMac's 2400XT and on the new iMac's 9400M.
Regardless, the previous entry-level 24" with the 2600 Pro (256MB) almost certainly whips the 24" 9400M... Notice there are no benchmarks or comparisons published by Apple for this particular matchup.
Different tests seem to provide different conclusions. If you look at Macworld's benchmarks for Quake 4 framerates, the new mini comes in about 30% faster than the previous iMac with the 2400XT video card (~39 vs ~30 fps). OTOH, if you look at rendering in Cinema 4D, the previous iMac is a little faster than the new mini (1:04 vs 0:54).
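Those Macworld figures can be turned into rough percentages; a back-of-the-envelope sketch, assuming the 1:04 Cinema 4D time belongs to the new mini and the 0:54 time to the previous iMac:

```python
# Quake 4 frame rates quoted above (higher is better).
mini_fps, imac_fps = 39, 30
print(f"Mini is {(mini_fps - imac_fps) / imac_fps:.0%} faster in Quake 4")  # 30%

# Cinema 4D render times in seconds (lower is better); the mapping of
# 1:04 to the mini and 0:54 to the iMac is an assumption.
mini_s, imac_s = 64, 54
print(f"Mini is {(mini_s - imac_s) / imac_s:.0%} slower in Cinema 4D")  # 19%
```

So the split really does go both ways: a ~30% win for the 9400M in one test and a ~19% loss in the other.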
The latest Mac mini resolution tested by Macworld was 1024x768, and only with Quake 4 and UT2004 (an old game), not other games.
I could be wrong about the 2400XT doing better at the native 20" and 24" resolutions... but like I said, Spore and C&C3 on the Mac, and Left 4 Dead, Crysis, and GRID under Boot Camp, tested at 1680x1050 and 1920x1200, would yield better information.
There is also the possibility that NVIDIA is better at OpenGL than ATI (could be FUD)... and since all games on OS X use OpenGL, maybe the 9400M does outpace the 2400XT in OS X games but not in Windows XP games. We need more info.
I have a question
I have a 20" iMac. Is it possible to desolder the BGA chip from a low-profile ATI Radeon HD 2400XT PCI-E 256MB (DMS-59) card and solder it into the iMac? Or are they two different chips/cards?