I'm not happy about this at all.
IF the 6800 Ultra supports full pro 3D drivers then surely Apple would mention that, and it would be marketed as a Quadro or 'pro' card.
It would be quite easy to re-badge it and activate it via OS X with a Quadro driver.
Apple can't be 2300 OpenGL points behind the equivalent PC card. Hell, Nvidia cards are better at OpenGL than ATI cards, aren't they? Don't Nvidia have ex-SGI staff pumping out their excellent GL drivers?
Surely Apple could draw on that to have excellent GL implementation..?
It's not the CPU. Dual 2.5s see off everything(!) in rendering bar ridiculously overclocked Xeons and quad Xeons...and the G5 is often giving away up to 1.4 gig(!) on each CPU.
It's not the physics test either. Here the dual 2.5 gig CPUs and 1.25 gig bus thrash the opposition AGAIN!
But how, HOW can the 'same' card score less than half as much?
1. Bad Apple GL drivers?
(When people have said Apple has solid, if not outstanding, GL support?)
2. Panther is slower than Windows XP at OpenGL.
(Do we seriously believe that, considering how snail-like Explorer is versus Safari?)
Software is WHAT Apple does. They are just about the best. Even IF they are slower, they are always stylish, and there's no way Apple runs at half the speed of the opposition in anything..? So I don't see the above, in theory, being the bugbear.
3. Cinebench. Beta. PC optimised.
(Okay. But can that explain a Mac graphics card being crippled to half the score of a PC card which is essentially the same hardware? Especially when the CPU and bandwidth do excellently in the other tests...UNLESS, UNLESS the GL acceleration is the last bit of the bench that is not optimised...can anybody get the email address of the Maxon/Cinebench test authors so I can email them about this? I feel so strongly about this. The issue of Mac graphics cards being slower than their PC counterparts has p*********** me off for years...)
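To put the gap in concrete terms: "2300 points behind" on a score where the PC card is more than twice as fast works out to the Mac being roughly half the speed, not "100% slower". A quick sketch of the arithmetic, using made-up scores (these are NOT actual published Cinebench results, just illustrative numbers):

```python
# Hypothetical Cinebench OpenGL scores -- illustrative only, not real results.
mac_score = 2000   # assumed Power Mac G5 / 6800 Ultra OpenGL score
pc_score = 4300    # assumed equivalent PC / 6800 Ultra OpenGL score

gap = pc_score - mac_score               # points behind
speedup = pc_score / mac_score           # how many times faster the PC is
slowdown_pct = 100 * gap / pc_score      # how much slower the Mac is, as a %

print(f"gap: {gap} points")
print(f"PC is {speedup:.2f}x faster")
print(f"Mac is {slowdown_pct:.0f}% slower")
```

With those assumed numbers the PC comes out 2.15x faster, i.e. the Mac is about 53% slower; the point is that a card can never be "over 100% slower", only some fraction of the other's speed.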
4. Games. It's been noted, consistently, that Mac graphics cards do not perform as well as their PC equivalents on 'hot' games.
(Well, for a long while, the 'PC ports' have been CPU bound on the Mac, with the lame G4 trying to power the ATI cards onward. BUT, SURELY with the G5 at 2.5 gig (easily the equivalent of a 3-3.2 gig Pentium 4!), and two of them, on a 1.25 gig bus, using an industry-standard AGP 8x pipe...SURELY this is now a NON-ISSUE..?!?!)
What is causing the problem?
IF it IS a software problem related to Apple providing a full GL implementation which slows Apple's GL down or makes it appear slower...then...erm...this IS a 3D test and Apple's GL should shine in this very bench....
...perhaps they should split the GL calls into a simple implementation for games and precise calls for pro 3D modelling apps.
However, the ATI X800 XT presumably has simple GL too? i.e. not full FireGL or Quadro drivers? So why, WHY does it do so well in a 3D GL stress test? Modern consumer cards DO have fuller GL implementations than in the past, when cards such as the Voodoo had 'mini GL' for games. But times have moved on since then. The gap has closed significantly. The consumer cards are now ridiculously powerful and have fuller GL drivers.
So much so that Nvidia once proudly posted benches on their site showing how one of their GeForce cards blew a Wildcat out of the water. That was a few years back, though.
Are there any benches on Quadro vs 6800 Ultra?
Mac vs PC on ATI 9800 XTs?
Accelerate your Mac? Do they have any dirt to dish on the Mac's dirty secret?
Nothing to do with fast writes not being supported on the Mac? I thought the announcement of the Nvidia GeForce 2 MX cleared the fast-writes hurdle way back...
Still scratching my head...I can't see what it is...
In games, Macs used to be CPU bound.
Are G5 rigs STILL CPU bound? Do G5s now achieve parity with the same card in games?
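There is a standard rule-of-thumb test for this: run the same timedemo at several resolutions. If the frame rate barely moves as the pixel count climbs, the CPU (or driver overhead) is the bottleneck; if it falls off sharply, the GPU is the limit. A rough sketch of that logic, with made-up fps figures (the numbers and the 10% tolerance are assumptions for illustration, not measured data):

```python
def bound_by(fps_by_resolution, tolerance=0.10):
    """Guess the bottleneck from {(width, height): fps} timedemo results."""
    # Order the resolutions by total pixel count, lowest first.
    res = sorted(fps_by_resolution, key=lambda wh: wh[0] * wh[1])
    low_fps = fps_by_resolution[res[0]]
    high_fps = fps_by_resolution[res[-1]]
    # If fps at the highest resolution is within `tolerance` of the
    # lowest-resolution fps, the GPU isn't the limiting factor.
    return "CPU" if (low_fps - high_fps) / low_fps <= tolerance else "GPU"

# Hypothetical G4 rig: fps nearly flat across resolutions -> CPU bound.
print(bound_by({(640, 480): 61.0, (1024, 768): 60.0, (1600, 1200): 58.5}))

# Hypothetical G5 rig: fps falls off with resolution -> GPU bound.
print(bound_by({(640, 480): 180.0, (1024, 768): 140.0, (1600, 1200): 90.0}))
```

If the G5 rigs now show the second pattern with the same card as a PC and still post lower fps, the finger points at the drivers rather than the CPU.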
If not, we know it's not just the Cinebench beta that is fishy.
And nobody said it was an Endian issue.
Where's the bottleneck?
Lemon Bon Bon