I love the feel of my cold aluminum power button on my Rev B PowerMac in the morning.
Originally posted by onlooker
Don't be hate'n.
Whoever said we have pro cards has been pulling your leg, and is probably joking. There are some important differences in screen rendering capabilities. If they are as fast as a Quadro, I would like to see a HUGE-poly scene comparison next to an Alienware with similar processor specs and a Quadro FX 4000, just moving a 7+ million poly dragon model in a mountain environment around the screen in Maya like it was a poly plane.
That actually would not contradict the argument that they're essentially the same card. That argument does acknowledge a difference in real use, but attributes it to two things:
1) A real, complete, precise OpenGL implementation as opposed to the speed-optimized "Quake subset" shipped with PC consumer cards;
2) A deal between the card vendors and the 3D software vendors where the 3D software only assumes Quadro/FireGL-level capability if the card identifies itself as a Quadro/FireGL. This is to discourage people from flashing high-end consumer cards. Unfortunately, the side effect is that even if the GeForces in PowerMacs are functionally equivalent to Quadros, and even if Apple's OpenGL implementation is functionally equivalent to a Quadro's, Maya and its ilk will assume that they're running on gelded hardware and disable a lot of their optimized code.
So, in other words, your test would show a huge difference in performance, but that doesn't change the argument. If you could get a version of your 3D app of choice that assumed it was running on pro hardware, and then logged a difference in performance, you might have a point. But the GPU vendors wouldn't want anyone getting ahold of that software, or running that test, for obvious reasons.
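To make point 2 concrete, here's a rough, hypothetical sketch (mine, not Alias's or anyone else's actual code, and the exact strings real apps match on may differ) of the kind of check a 3D app can make once it has a GL context. GLUT is only there so the snippet stands on its own:

```cpp
// Hypothetical sketch of an app gating its "pro card" code path on the
// driver-reported renderer string. GLUT is used only to get a current
// OpenGL context so the example is self-contained.
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
#include <cstdio>
#include <cstring>

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutCreateWindow("renderer probe");   // creates a GL context

    const char* renderer = (const char*)glGetString(GL_RENDERER);
    if (!renderer) return 1;

    // The card's marketing identity comes back in this string. A GeForce
    // that is functionally a Quadro still says "GeForce", so the
    // workstation path below never turns on.
    if (strstr(renderer, "Quadro") || strstr(renderer, "FireGL")) {
        printf("%s: enabling workstation draw paths\n", renderer);
    } else {
        printf("%s: treating as a consumer card\n", renderer);
    }
    return 0;
}
```

On OS X that builds with something like g++ probe.cpp -framework GLUT -framework OpenGL; on other platforms, link against the usual GLUT and GL libraries.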
[edit:] As to benchmarking:
The main thing you're paying for with the "pro" 3D cards is not speed. That hasn't been true for some years now. What you're paying for is completeness of implementation of OpenGL, and precision. It's quite possible for gamer cards to bench faster because they take shortcuts. With polygons flying by, nobody playing games really cares about isolated rendering errors. But if you're rendering a high-resolution 3D scene, you absolutely do care about precision. A preview of a scene that doesn't actually show you what the scene will look like after a final render isn't worth much, is it?
Also, as I mentioned above, gamer cards tend to support what John Carmack has derided as the "Quake subset" of OpenGL. The functions commonly used in games appear, and they're optimized for speed, but anything that isn't used isn't included. Since games have absolutely fixed requirements, this is OK (although Carmack chafes against it, because it restricts him to the set of functions that everyone else is using, rather than letting him use the entire OpenGL API). Since 3D apps are far more open-ended in what demands they will make of OpenGL, it's not acceptable there. And this is one major complaint with any benchmark that compares consumer cards (and drivers) to pro cards (and drivers): One of the two major reasons to go with a pro card is to get a driver that supports functions that consumer drivers don't support at all — and which, therefore, cannot be benchmarked. The subset of OpenGL that can be benchmarked will most likely be faster on the consumer cards, because their drivers are optimized for speed over precision. But, again, since you're buying a Quadro (and driver) for its rendering precision, this is a moot point. There are Camaros that go faster than some Ferraris, too.
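And here's roughly why the missing functions never even show up in a benchmark. This is only an illustrative probe, not how any particular benchmark actually works, and the two extension names are just examples I picked:

```cpp
// Illustrative only: how an app (or a benchmark) discovers whether a
// driver exposes a given OpenGL extension at all. If the consumer driver
// leaves it out, the code path that depends on it never runs -- and
// never gets timed.
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
#include <cstdio>
#include <cstring>

static int has_extension(const char* name) {
    const char* all = (const char*)glGetString(GL_EXTENSIONS);
    return all && strstr(all, name) != NULL;   // crude but typical check
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutCreateWindow("extension probe");       // current GL context

    const char* wanted[] = { "GL_ARB_occlusion_query",
                             "GL_NV_vertex_program2" };
    for (int i = 0; i < 2; ++i)
        printf("%-26s %s\n", wanted[i],
               has_extension(wanted[i]) ? "available" : "missing");
    return 0;
}
```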
*shrugs*
As I was saying. I still think a Pro 3D Card is needed regardless.
I had a dream last night that I woke up and this was available in PCIe on a Mac: PURE3D. Not that I'm interested in this card; it was just a nice change to see that someone thought us a worthy 3D environment.
Anyway, back to the next PowerMac, as was the title of the thread.
My list of Apple must-haves is as follows:
- PCIe x2, PCI-X x2 or 3
- 3+ GHz (IBM has to prove itself soon, and they owe Apple one big time)
- Pro 3D card availability
- And for once: Apple should announce same-day availability on all models
The 4th one would be really refreshing.
Originally posted by onlooker
As I was saying. I still think a Pro 3D Card is needed regardless.
It would be more disruptive (in a good way) for Apple to convince the software vendors to enable the "pro card" code on their platform unconditionally. But if you really want to spend $3K for the same card and an OpenGL driver that you won't use (because Apple ships OpenGL already) that's an acceptable second best solution.
Originally posted by onlooker
My list of Apple must-haves is as follows:
- PCIe x2, PCI-X x2 or 3
- 3+ GHz (IBM has to prove itself soon, and they owe Apple one big time)
- Pro 3D card availability
- And for once: Apple should announce same-day availability on all models
The 4th one would be really refreshing.
PCIe will come when there are enough cards that support it, which, according to all the hype, will happen Real Soon Now(TM). Right now it's only appearing in high-end cards, and then only as an option.
3GHz G5s will probably appear when IBM sorts out the issues that are making the 970fx burn out at high voltages (meaning 1.3 V and up) when temperatures exceed 100°C.
Pro 3D card availability is desirable; the removal of the entirely artificial barrier preventing 3D apps from running as well as they could on Apple's current hardware is more desirable.
Same-day availability of announced hardware? Now you're definitely smoking something. Sure would be nice, though...
Not that this is saying a lot... and kudos for Moto (they got no respect for this one). PowerBooks, iBooks, and eMacs were available right away (until eMac machines got rare, which wasn't Moto's fault). I think we all know who is to blame for the G5 shortages.
I hope with the next revision of the 970, or the 975, the process will yield many more CPUs. From the sounds of it, they had better figure something out. And why does Apple keep announcing new machines with G5 procs when they damn well know they can't supply that many? It has happened with every G5 computer so far... Rev A PM, Xserve, Rev B PM, and soon the iMac!
Originally posted by emig647
And why does Apple keep announcing new machines with G5 procs when they damn well know they can't supply that many? It has happened with every G5 computer so far... Rev A PM, Xserve, Rev B PM, and soon the iMac!
What else are they going to use? The G4 had no business in a PowerMac, and its presence in the Xserve confined that machine to specialized, embarrassingly-parallel compute-intensive tasks.
Meanwhile, all else being equal, yields improve over time.
This is actually one reason I'm pulling for Freescale: I would just love to see a two-supplier CPU market as lively as the two-supplier GPU market that Apple currently enjoys. I can't really remember when Apple had a choice of high-performance CPUs, but it would certainly be welcome...
Originally posted by Amorph
The G4 had no business in a PowerMac,
You feeling okay, Amorph? Not running a temperature or anything?
I am more inclined to side with your long-held belief that Apple should have two healthy CPU suppliers.
Hmmm. What's in a name. I hope Freescale can shake the shackles of neglect.
A dual-core G4 in a PowerBook would be most welcome over a 1.5 G4.
And would it run over a 1.6 G5 easily?
Time to bury the hatchet (I give you three guesses where...) and give Freescale a chance to prove themselves IF they can deliver.
Come on IBM, you owe us. 3 gig and over.
Antares...whither art thou?
Lemon Bon Bon
What I meant to say is: why does Apple keep announcing these machines too early, when they know they can't get enough supply in a timely fashion? Do what they did with the iBook, PB, and eMac and stock up on the chips before shipping. It would be the wise thing to do. It just hurts them more when they can't deliver.
Originally posted by emig647
Amorph,
What I meant to say is: why does Apple keep announcing these machines too early, when they know they can't get enough supply in a timely fashion? Do what they did with the iBook, PB, and eMac and stock up on the chips before shipping. It would be the wise thing to do. It just hurts them more when they can't deliver.
It hurts them more when they have nothing new to show and people decide to buy a Windows computer.
The usual buying cycle of computers is 2-4 years; that's something you have to deal with.
Originally posted by Lemon Bon Bon
...Time to bury the hatchet (I give you three guesses where...) and give Freescale a chance to prove themselves IF they can deliver...
I would rather see IBM and Freescale come back to the joint development table with the PowerPC, like they were back in the 603/604/G3 days. We had fast development without constrained supplies of chips. Of course, there was also more incentive for Moto and IBM to develop faster chips at a pace that kept up with Intel, since there were more (large) customers than just Apple. I think that two suppliers is the answer for Apple, but ideally IBM and Motorola would be making compatible chips with a similar feature set, so that Apple can adequately optimize the OS for the chips with minimal development time.
Originally posted by Amorph
It would be more disruptive (in a good way) for Apple to convince the software vendors to enable the "pro card" code on their platform unconditionally. But if you really want to spend $3K for the same card and an OpenGL driver that you won't use (because Apple ships OpenGL already) that's an acceptable second best solution.
Apple's OpenGL driver isn't necessarily the Quadro firmware either. Who is saying the Quadro firmware is in the 6800 anyway? Apple? I've never read that.
But back to what you were saying about 3D software vendors enabling the code: what are the chances of Apple getting NVIDIA and Alias to enable the Quadro drivers and firmware for the Mac 6800 Ultra DDL? I'm thinking it's slim.
Also, about the ATI X800 XT thing (in reference to UT2K4 performance with 9800 XTs): from what information I've gathered about ATI and OpenGL, most PC Maya users contend that ATI's OpenGL sucks (at least with Maya). The majority of Maya pros highly recommend the Quadro.
Last mention: the fastest Quadro isn't the 3400, it's the 4400. The 3400 is the PCIe version of the 3000, and it's still using the previous GPU. Pretty good, considering NVIDIA doubled performance with the new NV40 GPU in the 4000 and the 4400 (the PCIe version of the AGP 4000).
But according to the Tom's Hardware article I'm about to link to (not OpenGL-specific), the ATI pro cards are becoming contenders. The FireGL V7100 still won't come close to matching the 4000 or 4400 with the latest NV40 GPU. It's about 2x as fast as the 3400.
http://www20.graphics.tomshardware.c...816/index.html
I still think the Cinebench thing is odd at best.
Originally posted by Amorph
This is actually one reason I'm pulling for Freescale: I would just love to see a two-supplier CPU market as lively as the two-supplier GPU market that Apple currently enjoys.
Ditto, but I wonder how feasible this is when they're selling less than a million units each quarter with no real growth. It seems like it would require tremendous R&D for such a small market.
Originally posted by onlooker
Apple's OpenGL driver isn't necessarily the Quadro firmware either. Who is saying the Quadro firmware is in the 6800 anyway? Apple? I've never read that.
Um? Firmware and drivers are two completely different things, so I don't really understand this. Firmware is low-level programmable logic that runs on the card itself. The driver is OS-specific software that runs on the PC. Firmware would control whether the card is little- or big-endian (on NVIDIA parts at least), which parts of the GPU are installed or enabled (they're configurable for OEM needs), and what the name and part number of the card are. The driver is the interface between the card and the operating system.
On PCs, the Quadro driver consists of the same basic driver that ships with the GeForce, a correct and complete (within reason) implementation of OpenGL, and the guarantee that the card and driver have been tested with various expensive pieces of hardware and software, and have been found to be compatible with them.
On Macs, the GeForce driver consists of the basic PC driver, which has been ported to OS X by Apple engineers. OpenGL is built into the OS, so part of the port involves hooking the back of OS X's OpenGL engine to the driver itself. The only significant differences between this arrangement and a Quadro on the PC are that the PC driver is certified by NVIDIA to work with sundry pieces of nice kit, and that, if asked, the driver reports the Mac card as a "GeForce," not a "Quadro." The all-important OpenGL support is, if not identical, functionally equivalent.
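If you're curious what the driver actually reports, a quick sketch along these lines shows the identity strings applications see (again, GLUT just to get a context; what it prints will vary by card, driver, and OS, and I haven't run it on every combination):

```cpp
// Quick sketch: print the identity the driver reports to applications.
// Per the discussion above, a Power Mac GeForce should name itself a
// "GeForce" here, never a "Quadro" -- that's the point being made, not
// something I've verified on every card.
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
#include <cstdio>

static const char* str(GLenum what) {
    const char* v = (const char*)glGetString(what);
    return v ? v : "(unavailable)";
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutCreateWindow("identity probe");        // current GL context

    printf("GL_VENDOR   : %s\n", str(GL_VENDOR));
    printf("GL_RENDERER : %s\n", str(GL_RENDERER));
    printf("GL_VERSION  : %s\n", str(GL_VERSION));
    return 0;
}
```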
But back to what you were saying about 3D software vendors enabling the code: what are the chances of Apple getting NVIDIA and Alias to enable the Quadro drivers and firmware for the Mac 6800 Ultra DDL? I'm thinking it's slim.
Basically, what it comes down to is, how badly do the 3D app vendors want the Mac market, what are the odds of NVIDIA bringing the Quadro to the Mac market, and just how persuasive can Steve be? The middle question is particularly thorny, because NVIDIA (say) has to answer for themselves whether they want to give that all-important certification of compatibility to an implementation that they don't entirely control (because the OpenGL implementation is Apple's, and so is the port of the driver), or whether they want to take on the considerable work and expense involved in porting their driver and OpenGL implementation over themselves — and whether Apple would consider balkanization of Mac OpenGL support acceptable. Frankly, if I were in their shoes (and unless I'm missing something really big), I wouldn't bring the Quadro to the Mac either, at least not until demand reached enough of a pitch to make the considerable risk and investment worthwhile.
That leaves the 3D vendors. I think, given the reluctance of NVIDIA and ATI to commit their high-end solutions to the Mac market, and given a proven and significant revenue stream from Mac sales, that they would consider this course of action. They still risk pissing off the GPU vendors if the result is a significant chunk taken out of PC 3D workstation sales, but that adjustment just might have to be made. Also, 3D customers might have to swallow a lack of guaranteed compatibility (unless Apple or the 3D vendors are willing to take up the role of certifier) which some may be reluctant or unwilling to do.
Also, about the ATI X800 XT thing (in reference to UT2K4 performance with 9800 XTs): from what information I've gathered about ATI and OpenGL, most PC Maya users contend that ATI's OpenGL sucks (at least with Maya). The majority of Maya pros highly recommend the Quadro.
I'll ask someone else with more low-level familiarity with the GPUs to chime in here. I've heard that's actually a lack of hardware support in the GPU, not a driver problem. But ATI does their own Mac drivers, and their drivers for the Mac are as solid as their drivers for the PC are iffy, so I don't know how much any of this would carry over if I'm wrong and it is in fact a software problem.
If it's specifically a problem with ATI's OpenGL implementation, well, any software running on OS X will use Apple's OpenGL implementation, so that would become a non-issue on the Mac (unless Apple's implementation comes up short, in which case it would just be a pervasive problem instead of something specific to ATI).
Originally posted by Amorph
On PCs, the Quadro driver consists of the same basic driver that ships with the GeForce, a correct and complete (within reason) implementation of OpenGL, and the guarantee that the card and driver have been tested with various expensive pieces of hardware and software, and have been found to be compatible with them.
On Macs, the GeForce driver consists of the basic PC driver, which has been ported to OS X by Apple engineers. OpenGL is built into the OS, so part of the port involves hooking the back of OS X's OpenGL engine to the driver itself. The only significant differences between this arrangement and a Quadro on the PC are that the PC driver is certified by NVIDIA to work with sundry pieces of nice kit, and that, if asked, the driver reports the Mac card as a "GeForce," not a "Quadro." The all-important OpenGL support is, if not identical, functionally equivalent.
Where do you get this info anyway? You can PM me if you want.
2000(!) OpenGL points and over, behind an X800 XT card on the PC.
The Mac's best card is projected at 1,700 on a DUAL 2.5 gig?
WHAT THE HELL IS GOING ON!?!?!?
It MUST be a beta driver problem in Cinebench. Only bad software can cripple a machine like that.
The dual 2.5 does fine in render and physics.
WHY do Mac graphics cards fall over 100% behind their PC counterparts? THAT'S OUTRAGEOUS!!!
Scratches head. IF it was 10% I could grumble and take it.
Same card. Dual CPUs powering it. Apple can hardly be crap at writing GL drivers. They worked with NVIDIA on the 6800 Ultra!!! There's former SGI GL experience right there.
WHAT THE f**************** is going on!?
Lemon Bon Bon
Originally posted by Lemon Bon Bon
2000(!) OpenGL points and over, behind an X800 XT card on the PC.
The Mac's best card is projected at 1,700 on a DUAL 2.5 gig?
WHAT THE HELL IS GOING ON!?!?!?
They don't call it "benchmarketing" for nothing.
Originally posted by Amorph
They don't call it "benchmarketing" for nothing.
I can understand that, and why Lemon Insano is freakin', but wasn't Apple the first to use OGL outside of SGI? I think they should have some serious experience with OGL. We're also forgetting the guys from Raycer Graphics. Those guys were supposed to be the OGL devs from heaven. Didn't they design the Quartz engine?
I think that ATI thing is an ATI issue with Macs, but the Cinebench beta is an awful test.
Originally posted by onlooker
I can understand that, and why Lemon Insano is freakin', but wasn't Apple the first to use OGL outside of SGI? I think they should have some serious experience with OGL. We're also forgetting the guys from Raycer Graphics. Those guys were supposed to be the OGL devs from heaven. Didn't they design the Quartz engine?
I doubt they were the first after SGI, but they were certainly early adopters; they were the first PC vendor to ship OpenGL AFAIK.
However, as I mentioned above, a correct and complete OpenGL implementation that is designed for accuracy is going to lose speed benchmarks to a limited implementation optimized for speed at screen resolution, with the difficult routines left off. But if you're actually doing 3D work, you will prefer the former implementation no matter how much slower it is rendering the "Quake subset," because it is accurate and complete.
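As a toy, CPU-side illustration of that speed-versus-accuracy trade (nothing to do with what any GPU driver literally does, just the flavor of shortcut games take): the classic fast inverse square root trick is much cheaper than the exact computation, and the error it carries is invisible at game frame rates but is exactly the sort of thing you don't want compounding in a final render.

```cpp
// Toy example: a game-style fast approximation vs. the exact answer.
// The relative error (roughly 0.1-0.2% after one Newton step) doesn't
// matter at 60 fps, but it's the kind of imprecision a complete,
// correct implementation refuses to trade for speed.
#include <cstdio>
#include <cmath>
#include <cstring>

static float fast_rsqrt(float x) {
    float half = 0.5f * x;
    unsigned int i;
    memcpy(&i, &x, sizeof i);           // bit-level reinterpretation
    i = 0x5f3759df - (i >> 1);          // magic initial guess
    memcpy(&x, &i, sizeof x);
    return x * (1.5f - half * x * x);   // one Newton-Raphson refinement
}

int main() {
    const float samples[] = { 0.5f, 2.0f, 10.0f, 123456.0f };
    for (int i = 0; i < 4; ++i) {
        float s     = samples[i];
        float fast  = fast_rsqrt(s);
        float exact = 1.0f / sqrtf(s);
        printf("x=%10.1f  fast=%.6f  exact=%.6f  rel.err=%.4f%%\n",
               s, fast, exact, 100.0f * fabsf(fast - exact) / exact);
    }
    return 0;
}
```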
As for the sources of the information I provided, I can only say that it's all public, and it's been gleaned over a few years of reading information on the subject all over the place. The truth is out there.