GPU Vs. CPU....differences?
I self-admittedly know nothing about how GPUs and CPUs operate and the major differences between them. I was thinking hypothetically: if Apple decided to make their own graphics chips so they're not limited by ATI's and Nvidia's offerings, how would they go about doing that?
Could they use part of their current G5 chip to create a kickass GPU or would they need something completely different for that?
Just trying to get a better idea of how the technology works. Thanks.
Comments
There's not much reason for Apple to create its own GPU when ATi/nVidia are already doing it. It would cost a mountain of cash, confuse customers about how Apple's GPUs compare to the rest of the industry, and Apple wouldn't magically leap ahead of the competition in GPU performance.
Kinda like what happens with the PowerPC "consortium" (whatever is left of it): Apple co-develops the G5.
And that doesn't change even as GPUs become more general-purpose with the advent of shading languages, like the one in OpenGL 2.0, which run directly on the graphics card.
To prove the point, a group of Swiss programmers implemented a ray-tracing application directly on a high-end nVidia GPU using OpenGL 2.0, to see whether it would run faster than a ray tracer on the CPU.
After a lot of tweaking and sweating over the code, they realised in the end that running a ray tracer on the GPU is slower than on a comparable CPU. A GPU is just not built to do general-purpose computations. (Although it is a great achievement to manage ray tracing on a GPU directly.)
Having said that, it is nevertheless amazing that nVidia managed to compute graphics effects like global illumination and ambient occlusion at decent frame rates directly on a 6800 Ultra card (see their latest GPU Gems 2 book for more details).
Point is, anything graphics-related (vector and matrix calculations) will likely always be faster on a GPU, because they are built to do exactly those types of calculations very fast. But the moment you need other kinds of calculations, especially ones that need to store intermediate data for reprocessing, a CPU is the better option.
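To give a rough idea of the distinction, here's a minimal sketch in plain Python (my own illustration, no actual GPU involved): GPU-friendly work applies the same operation to every element independently, so all elements could in principle be computed in parallel, while work where each step depends on the previous result has to run sequentially, which is where a CPU shines.

```python
# Hypothetical illustration -- plain Python, no GPU code.

def scale_vector(v, s):
    # GPU-friendly pattern: each output element depends only on its
    # own input element, so all of them could be computed at once.
    return [x * s for x in v]

def running_total(v):
    # CPU-friendly (sequential) pattern: each step depends on the
    # previous result, so the iterations cannot run in parallel.
    total = 0
    sums = []
    for x in v:
        total += x          # depends on the previous iteration
        sums.append(total)
    return sums

print(scale_vector([1.0, 2.0, 3.0], 2.0))  # [2.0, 4.0, 6.0]
print(running_total([1, 2, 3]))            # [1, 3, 6]
```

The first function is the kind of per-element math a shader does for every pixel or vertex; the second is the kind of stateful, data-dependent computation that (at least on that generation of hardware) mapped poorly onto a GPU.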