[quote]<strong>And, if I've read things right, they will also not limit the types of calculations that can be done to a handful of options.
I've heard of capabilities like - I don't know the technical term, so I'll borrow one from the database world - stored procedures, where the software can actually cache a custom function in the GPU and call it? That alone would immensely improve the capabilities of the software. It seems to my one-semester-of-OpenGL-7-years-ago eyes that one of the major problems facing game programmers is how to disguise polygons. If the GPU makes it easier to disguise them, not as many are necessary, because the engine no longer needs to "throw polygons at the problem" of everything looking like a D&D die.
I'm speculating here, though.
</strong><hr></blockquote>
This is what vertex & pixel shaders are. The current hardware has a fairly limited (but still very powerful) set of capabilities in the shaders. They are basically small assembly functions which execute on the GPU. The vertex shaders are much more powerful than the pixel shaders currently. The next-generation hardware will take these things to the next level, and that is where we'll get close to Final Fantasy quality. Like I said, some things just aren't doable in realtime (yet), but it is getting pretty damn close to looking like film. Non-graphics people will have a harder time distinguishing between them (of course the film guys are going to push the envelope even farther, so realtime will always be behind).
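To make that concrete, here is a very rough sketch of what one of those "stored procedures" looks like through the ARB assembly interface. I'm assuming a driver that exposes GL_ARB_vertex_program, and the function name is mine; the little program just transforms each vertex and passes its colour straight through.
[code]
/* Minimal sketch of a GPU "stored procedure": an ARB-style vertex program
   cached on the card and run for every vertex. Assumes the
   GL_ARB_vertex_program entry points are available in the driver. */
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>
#include <string.h>

static const char *kVertexProgram =
    "!!ARBvp1.0\n"
    "# transform the incoming vertex by the modelview-projection matrix\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "DP4 result.position.x, mvp[0], vertex.position;\n"
    "DP4 result.position.y, mvp[1], vertex.position;\n"
    "DP4 result.position.z, mvp[2], vertex.position;\n"
    "DP4 result.position.w, mvp[3], vertex.position;\n"
    "MOV result.color, vertex.color;    # pass the vertex colour through\n"
    "END\n";

void InstallVertexProgram(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);                      /* ask the driver for a program id    */
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);   /* make it the current program        */
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB,        /* upload and "compile" the little    */
                       GL_PROGRAM_FORMAT_ASCII_ARB,  /* assembly function onto the GPU     */
                       (GLsizei)strlen(kVertexProgram), kVertexProgram);
    glEnable(GL_VERTEX_PROGRAM_ARB);                 /* every vertex now runs this program */
}
[/code]
That really is all a vertex shader is: a tiny function the driver caches on the card and runs for every vertex you send it.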
[quote]Originally posted by Amorph:
<strong>
I'm predicting a new OpenGL layer and QT6 in the next major release of OS X, regardless of what the MPEG-LA settles on for licensing terms, and regardless of whether they've settled. Apple can always withhold the streaming software if that's a sticking point, but I think they will need to release the rest.
Apple might call the OpenGL layer a "beta" if the ARB hasn't finished OpenGL 2.0 yet, just so that developers won't be surprised when they change something to conform to a change in the standard. But I think they'll push it out.</strong><hr></blockquote>
I'm doubtful that OGL2 will show up before 2003 in any form, but the ARB will hopefully approve a first version of the shader extensions in time for the 10.2 release. Then the GeForce3, GeForce4 and Radeon 8500 will be able to flex their real muscles on the Mac (whether Mac developers jump on the bandwagon is another story). I don't know anything about QT6, but I agree that Apple can't delay it much longer for the sake of MPEG-4.
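For what it's worth, here is the kind of runtime check a Mac app would probably do once those extensions ship. The function names are mine, and I'm using the GL_ARB_vertex_program / GL_ARB_fragment_program strings as placeholders for whatever names the ARB ends up approving.
[code]
/* Rough sketch: detect the ARB shader extensions at runtime so an app can
   fall back to the fixed-function path on older cards and drivers. */
#include <OpenGL/gl.h>
#include <string.h>
#include <stdio.h>

static int HasExtension(const char *name)
{
    /* Requires a current OpenGL context (AGL, NSOpenGL, GLUT, ...). */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;   /* crude substring check */
}

void ReportShaderSupport(void)
{
    printf("vertex programs:   %s\n", HasExtension("GL_ARB_vertex_program")   ? "yes" : "no");
    printf("fragment programs: %s\n", HasExtension("GL_ARB_fragment_program") ? "yes" : "no");
}
[/code]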
[quote]<strong>Nvidia already <a href="http://gamespot.com/gshw/stories/news/0,12836,2820060,00.html" target="_blank">demo'ed</a> this. Although it was at 2 FPS and probably not at the insane resolution they use to print to film.</strong><hr></blockquote>
What NVidia demoed was a cut-down version of the human model they used in the movie. That's really not something you use in games, since a game scene usually consists of ~5 characters plus backgrounds/terrain, interactive objects, buildings, etc.
I think in five years' time we'll see really cool stuff: hopefully antialiased, with depth-of-field and motion-blur rendering at that, and a 64-bit color space to go with it.
If I understand Programmer right and the shading languages are not yet implemented in the Mac's OpenGL, that's a pretty sad thing. No wonder the GF4 Ti isn't coming until July; by then it will probably be cheaper _and_ supported. I wonder when (and if) ATI's R300 will be out. I'm really looking forward to it.
[quote]<strong>GeForce 5 anyone?</strong><hr></blockquote>
Nvidia has already admitted they aren't going to call their next card GeForce. It will carry a new name to match the new technology.
[quote]While the Northbridge of this chipset is connected to the x86 architecture, directly accessing the CPU, the Southbridge could theoretically be used in any HyperTransport-compatible layout, including PowerMacs. A further interesting component from nVidia could be a HyperTransport-connected GPU, e.g. for all Mac layouts with soldered GPUs, like the notebooks and iMacs. <hr></blockquote>
That's a theory I've been throwing around for a couple of months now too. It could happen: Nvidia recently disclosed they are no longer going to attempt to obtain a P4 bus license for use with nForce, but they've applied it to PIII and AMD bus technologies so far.
[quote]There is no "HyperTransport connector" yet which could be used as a replacement for e.g. AGP or PCI, but rumours are that one is being worked on. So maybe we could eventually even see HyperTransport GPUs show up in the PowerMacs. <hr></blockquote>
HT is intended for chip-to-chip communication, not necessarily peripheral connections. AMD and the HyperTransport consortium don't intend to supplant or compete with PCI or AGP, but rather to use HT tunnels to connect the bridge chips that implement those technologies to the rest of the system.
I was there when they did that demo. It was nothing like the film; it's technically impossible for it to be.
It's not just about how detailed the model is, either. In fact, it looked like the models were pretty much the same ones as in the movie -- they looked really detailed to me -- but they were probably being subdivided less at render time.
What makes the difference is things like radiosity, reflections, refractions, and really, really complex shader algorithms that take insane amounts of time to render for a movie. For one thing, the characters looked plastic in the real-time GeForce-rendered version.
[quote]<strong>Nvidia has already admitted they aren't going to call their next card GeForce. It will carry a new name to match the new technology.</strong><hr></blockquote>
I realize that, but since we don't know what to call it, GF5 will do for now...
[quote]<strong>What makes the difference is things like radiosity, reflections, refractions, and really, really complex shader algorithms that take insane amounts of time to render for a movie. For one thing, the characters looked plastic in the real-time GeForce-rendered version.</strong><hr></blockquote>
That was on the current GeForce4, right? The next one will be a big leap forward, especially in what can be done per pixel compared to the fairly limited system the GeForce3/4 support.
[quote] I don't think this argument will last long, but the P10 directly targets competing GPUs by emphasizing that it is a ground-up, fully programmable architecture. This is a dig at the GeForce4 and Radeon 8500 that still have elements of past T&L engines incorporated into their designs. But, they also had to support those good ol' DX7 games, too. 3Dlabs hasn't really been troubled by DirectX support in recent memory, very much focusing on the high-end OpenGL (OGL) market.<hr></blockquote>
Maybe somebody who understands the topic area can explain what all of that really means (graphics aren't my thing).
Is it possible this is what nVidia has planned for the Mac? It would make sense to implement something like that now, with OpenGL 2.0 and a completely new system for the Mac just being introduced.
Why does everybody think that Quartz is some amazing technology that needs tons of hardware thrown at it? The truth of the matter is that if the designers of Quartz had planned for hardware acceleration from day 1 then current hardware would be more than fast enough. The stuff Quartz is doing pales in comparison to the amount of graphics most 3D games are doing. Instead the engineers who did Quartz built a software graphics engine, something which I don't think has had a place on new machines in the last 2-3 years... and it has been obvious that things were going this way for the last 5-6 years, at least. Now they have to figure out how to make this software engine into a hardware engine, which is generally harder and less efficient than building a hardware engine in the first place.
"Why does everybody think that Quartz is some amazing technology that needs tons of hardware thrown at it?"
...because it's slow..?
"The truth of the matter is that if the designers of Quartz had planned for hardware acceleration from day 1 then current hardware would be more than fast enough."
Hmmm. Who knows what the real reason is? Maybe the explanation is quite simple. They had a big job incorporating backwards compatibility and Unix under a decent interface. So? They get 'X' working. Then they speed it up...then they add bells and whistles and maybe h/w acceleration...
One can only guess at the other reasons why they didn't add h/w acceleration from day one. Speculate?
"The stuff Quartz is doing pales in comparison to the amount of graphics most 3D games are doing."
It's funny you should say that. I've often been quite perplexed by the level of sophistication in game interfaces and the dazzling graphics therein, and wondered what's keeping OS 'X', for example, from 'catching up'...
"Instead the engineers who did Quartz built a software graphics engine,"
...but I find it hard to believe they didn't have an idea of where they were going with it...
"something which I don't think has had a place on new machines in the last 2-3 years... and it has been obvious that things were going this way for the last 5-6 years, at least."
Well, current machines are doing spectacular game graphics... I'm curious as to why this level of functionality isn't possible in X. Maybe it's evolution and first things first?
You say computers haven't had the power in the last few years for a Quartz-type engine? Hmm. Well, operating systems have, visually, stayed pretty much the same since the Mac's 1984 interface. Since then, computing power has become relatively ridiculously powerful. What's keeping OS design from having visual fluff? As soon as Apple does something different from OS 9, the dinosaurs scream blue murder. Maybe OSs aren't where they should be because there isn't enough competition in the OS market (hello Microsoft...), while in the games market there IS a lot of competition... therefore much speedier advancement in interfaces.
"Now they have to figure out how to make this software engine into a hardware engine, which is generally harder and less efficient than building a hardware engine in the first place."
Yeah, I guess so. But who's to say with any authority that this is the case? Maybe the Raycer buyout was part of a plan to accelerate the OS and go somewhere new? Maybe the Raycer buyout has yet to see its realisation? Technical delays? Etc.? I dunno. But it's fun to speculate.
The romantic in me says Apple has got a plan, and Raycer chipsets/OpenGL OS acceleration/special functionality (whatever its form...), a high-end Nvidia/Apple collaboration on high-end graphics cards, and G5s are all a matter of time.
Who knows. Maybe 'Jaguar' will hint at things to come...
<strong>"Why does everybody think that Quartz is some amazing technology that needs tons of hardware thrown at it?"
...because it's slow..?
</strong><hr></blockquote>
But my point is that the technology doesn't have to be "amazing" -- it just needs the 3D hardware engine that is already in the graphics chips (Rage128 and beyond).
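A rough sketch of what I mean (WindowSurface, CompositeWindow and their fields are made up for illustration): keep each window's backing store in a texture and let the existing 3D engine composite it as an alpha-blended quad. Nothing here is beyond a Rage128-class part.
[code]
/* Sketch of 2D compositing on a 3D chip: the window's pixels live in a
   texture, and the "blit" is just a textured, alpha-blended quad.
   WindowSurface is a hypothetical structure, not a real API. */
#include <OpenGL/gl.h>

typedef struct {
    GLuint texture;          /* backing store uploaded as a texture  */
    float  x, y, w, h;       /* position and size on the desktop     */
} WindowSurface;

void CompositeWindow(const WindowSurface *win)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, win->texture);

    glEnable(GL_BLEND);                                /* per-pixel transparency for  */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* shadows, translucency, etc. */

    glBegin(GL_QUADS);                                 /* one quad = the whole window */
    glTexCoord2f(0, 0); glVertex2f(win->x,          win->y);
    glTexCoord2f(1, 0); glVertex2f(win->x + win->w, win->y);
    glTexCoord2f(1, 1); glVertex2f(win->x + win->w, win->y + win->h);
    glTexCoord2f(0, 1); glVertex2f(win->x,          win->y + win->h);
    glEnd();
}
[/code]
Window moves, translucency, and even something like the genie effect then become a matter of moving vertices instead of copying pixels around in main memory.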
<strong> [quote]
"The truth of the matter is that if the designers of Quartz had planned for hardware acceleration from day 1 then current hardware would be more than fast enough."
Hmmm. Who knows what the real reason is? Maybe the explanation is quite simple. They had a big job incorporating backwards compatibility and Unix under a decent interface. So? They get 'X' working. Then they speed it up...then they add bells and whistles and maybe h/w acceleration...
One can only guess at the other reasons why they didn't add h/w acceleration from day one. Speculate?
</strong><hr></blockquote>
Inertia -- Apple has always built software graphics engines and worried about hardware acceleration later. QuickDraw, QD GX, and now Quartz. They are getting better at the API level it seems, but somehow they just don't deliver at the implementation & driver levels.
<strong> [quote]
"The stuff Quartz is doing pales in comparison to the amount of graphics most 3D games are doing."
It's funny you should say that. I've often been quite perplexed by the level of sophistication in game interfaces and the dazzling graphics therein, and wondered what's keeping OS 'X', for example, from 'catching up'...
</strong><hr></blockquote>
Games are typically built from the ground up to leverage the hardware they expect to run on. It seems like the Quartz guys weren't even allowed to talk to the OpenGL or hardware people. They'd damn well better hurry up because GUI & 2D support is rumoured to be on the list of future items for DirectX, which means M$ will be able to do some very fancy stuff in their GUI.
<strong> [quote]
"Instead the engineers who did Quartz built a software graphics engine,"
...but I find it hard to believe they didn't have an idea of where they were going with it...
"something which I don't think has had a place on new machines in the last 2-3 years... and it has been obvious that things were going this way for the last 5-6 years, at least."
Well, current machines are doing spectacular game graphics... I'm curious as to why this level of functionality isn't possible in X. Maybe it's evolution and first things first?
</strong><hr></blockquote>
It probably just wasn't prioritized at a high level, so it got put off and put off and put off. Unfortunately it is something that can be done a lot better if it is made a priority very early in the design phase.
<strong> [quote]
You say computers haven't had the power in the last few years for a Quartz-type engine? Hmm. Well, operating systems have, visually, stayed pretty much the same since the Mac's 1984 interface. Since then, computing power has become relatively ridiculously powerful. What's keeping OS design from having visual fluff? As soon as Apple does something different from OS 9, the dinosaurs scream blue murder. Maybe OSs aren't where they should be because there isn't enough competition in the OS market (hello Microsoft...), while in the games market there IS a lot of competition... therefore much speedier advancement in interfaces.
</strong><hr></blockquote>
I meant for doing these kinds of things in hardware. The capability is relatively new. Using graphics hardware tends to break some assumptions which are built into software that was built with only software graphics in mind. Hardware is powerful, but usually you have to fit with its way of working rather than it fitting to yours. This is why you want to plan for it at an early stage. For software that has been around a long time, this is very forgivable because it wasn't even understood that such hardware would exist, much less how it would actually work. Over the last 5 years or so, however, the direction things are going has been pretty obvious to anybody actually paying attention to PC-level graphics.
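To make the broken assumption concrete, here is a hypothetical before/after (the buffer layout and function names are invented): a software engine assumes it can scribble on any pixel of its backing buffer whenever it likes, while a hardware path only hands the driver the dirty rectangle and lets the chip recomposite from VRAM.
[code]
/* Illustration only: direct pixel poking vs. handing dirty rectangles to the
   driver. Names and buffer layout are hypothetical; the GL path assumes
   headers with the packed-pixel formats (GL 1.2-era). */
#include <OpenGL/gl.h>

/* Software-only engine: the backing store is just memory you can poke. */
void SetPixelSoftware(unsigned int *backing, int rowPixels, int x, int y,
                      unsigned int argb)
{
    backing[y * rowPixels + x] = argb;   /* done -- the screen *is* this buffer */
}

/* Hardware path: you don't touch the on-card copy directly; you upload the
   changed rectangle and let the GPU recomposite the window from its texture. */
void UploadDirtyRect(GLuint texture, const unsigned int *backing, int rowPixels,
                     int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, rowPixels);        /* stride of the source  */
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,          /* only the dirty region */
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                    backing + y * rowPixels + x);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);                /* restore default state */
}
[/code]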
[quote]<strong>
"Now they have to figure out how to make this software engine into a hardware engine, which is generally harder and less efficient than building a hardware engine in the first place."
Yeah, I guess so. But who's to say with any authority that this is the case? Maybe the Raycer buyout was part of a plan to accelerate the OS and go somewhere new? Maybe the Raycer buyout has yet to see its realisation? Technical delays? Etc.? I dunno. But it's fun to speculate.
The romantic in me says Apple has got a plan, and Raycer chipsets/OpenGL OS acceleration/special functionality (whatever its form...), a high-end Nvidia/Apple collaboration on high-end graphics cards, and G5s are all a matter of time.</strong><hr></blockquote>
Apple shouldn't be trying to compete in terms of graphics chips -- they are going to lose if they do. They cannot compete with nVidia, ATI, 3DLabs, etc. Why should they? They don't compete in the CPU design business, so why should they try in the GPU business -- especially since modern GPUs are arguably more complex? If they only add acceleration for some new Raycer-designed hardware then all the existing machines don't benefit, even though they include a bunch of very powerful and under-utilized graphics chips. Apple is all about high margins on products built with well-integrated off-the-shelf parts -- designing a graphics chip would be a silly thing to do.
Mach runs the hardware and provides services to BSD, which has the software libraries built on it that provide the environment to build the applications that we use. By using two kernels you get hardware abstraction. It is just that one kernel (Mach) provides services to the other kernel (BSD). Synchronizing the two is the hard part (hence the funnel metaphor).
Maybe Apple has found a way to better implement graphics acceleration in Mach and to better synch that with BSD. Or maybe I have no clue.
Nvidia states that their next video card is going to be totally extensible and SIMD-based.
Doesn't this suddenly allow Apple to use any PPC chip sans AltiVec, since those functions can now be offloaded to the uber-SIMD Nvidia card? Obviously this would bring IBM back into focus... what about some consumer variant of the POWER4? What about AMD? What about anyone?
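Just to pin down what "SIMD" means here, this is the kind of 4-floats-at-a-time work AltiVec handles today, and what any uber-SIMD graphics card would have to absorb for that idea to fly. A sketch only, assuming GCC-style AltiVec support (-maltivec); the function name is mine.
[code]
/* Sketch of the 4-wide SIMD work AltiVec does today (and that the post
   speculates a future GPU could absorb): y[i] = a * x[i] + y[i].
   Simplified: n must be a multiple of 4 and the pointers 16-byte aligned. */
#include <altivec.h>

void saxpy_altivec(int n, float a, const float *x, float *y)
{
    int i;
    vector float va = (vector float){a, a, a, a};   /* broadcast the scalar      */
                                                    /* (older compilers use the  */
                                                    /*  parenthesized literal)   */
    for (i = 0; i < n; i += 4) {
        vector float vx = vec_ld(0, x + i);         /* load 4 floats             */
        vector float vy = vec_ld(0, y + i);
        vy = vec_madd(va, vx, vy);                  /* fused multiply-add        */
        vec_st(vy, 0, y + i);                       /* store 4 results           */
    }
}
[/code]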
When Steve Jobs first unveiled Quartz, he noted that it was based on technology developed at Pixar some 10 years before.
So, Programmer, to answer your objection: it was code written long before any hardware could reasonably be expected to accelerate any part of it. Furthermore, NeXTStep had chugged along nicely with a similar layer (Display PostScript) between the applications and the UI.
I don't think too much of the slowness of OS X is intrinsic to Quartz. I've seen threads on the mailing lists I subscribe to saying that there are real performance costs for not knowing how to use it efficiently, but that's a developer problem. Given that it performs very well for an all-software solution, if they do hardware accelerate it as thoroughly as this thread implies, it should become instantaneous.
OpenGL should get pretty fast, as well.
[quote]<strong>Nvidia has already admitted they aren't going to call their next card GeForce. It will carry a new name to match the new technology.</strong><hr></blockquote>
The next nVidia series will be called XeForce.
I wonder... is this what the close Nvidia and Apple rumours are pointing to?
Is Apple using the Raycer team with Nvidia's tech to push a new graphics system into imminent PowerMacs?
Will we have Quartz 3D aspects? 'X' demos that make the Genie 'scale' tool seem like child's play?
If a good G4 mobo revision and/or a G5 appears along with this sort of tech built into 'X', then...
...the future's looking bright green.
Lemon Bon Bon
[quote] Holy schnikes! Nvidia is going to make the new G5 processor! Dual 2 GHz Powermacs here we come!!!! <hr></blockquote>
Heh, the next PowerMac is really powered by a new Nvidia graphics chip.
[quote]<strong>The next nVidia series will be called XeForce.</strong><hr></blockquote>
How would you pronounce that? Zeeforce? X-E-Force? Xeh-force? hehe