GPU vs. CPU... differences?

in Future Apple Hardware edited January 2014
I'll admit I know nothing about how GPUs and CPUs operate or the major differences between them. I was thinking hypothetically: if Apple decided to make their own graphics chips so they're not limited by ATI and Nvidia's offerings, how would they go about doing that?

Could they use part of their current G5 chip to create a kickass GPU, or would they need something completely different for that?

Just trying to get a better idea of how the technology works. Thanks.


  • Reply 1 of 4
    stoostoo Posts: 1,490member
    Although modern GPUs are also somewhat general purpose, they're faster than normal CPUs for graphics operations, both because of the functional units they contain (more of them, and more graphics-specific) and because of their proximity to video RAM.

    There's not much reason for Apple to create a separate GPU when ATi/nVidia are already doing it. It would cost a mountain of cash, it would make it hard for customers to compare an Apple GPU's speed to the rest of the industry, and Apple wouldn't magically leap ahead of the competition in GPU performance.
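A rough way to picture stoostoo's "more functional units" point, sketched in plain Python. All the numbers here (1000 pixels, 16-wide units) are invented for illustration and don't model any real chip:

```python
# Toy sketch: why many simple parallel units beat one fast unit for graphics.
# All numbers are made up for illustration; no real hardware is modeled.

pixels = list(range(1000))          # a tiny "framebuffer" of pixel values

def brighten(p):
    return min(p + 50, 255)        # the same simple operation per pixel

# "CPU" style: one unit handles one pixel per step -> 1000 steps
cpu_steps = len(pixels)

# "GPU" style: 16 identical units each handle one pixel per step
SIMD_WIDTH = 16
gpu_steps = -(-len(pixels) // SIMD_WIDTH)   # ceiling division -> 63 steps

result = [brighten(p) for p in pixels]      # the output is identical either way
print(cpu_steps, gpu_steps)                 # prints: 1000 63
```

Because every pixel gets the same operation independently, the work splits perfectly across however many units the chip has, which is exactly what graphics workloads look like.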
  • Reply 2 of 4
    zozo Posts: 3,115member
    If anything, it would make more sense to co-develop (or sponsor) specific graphics features. For example, let's say Apple really wants to make MPEG-4 H.264 a priority (decoding, encoding, etc.); it would be wiser for them to organize a $100 million USD co-development agreement. This could take the form of money for the facilities/machines, for the R&D, etc.

    Kinda like what happens with the PowerPC "consortium" (whatever is left of it). Apple co-develops the G5.
  • Reply 3 of 4
    hobbithobbit Posts: 532member
    GPUs are so fast because they are highly tuned to do a (very) few things very well; that narrow focus allows a lot of optimization. For general-purpose work, however, their performance doesn't keep up with a CPU's.

    And that doesn't change even as GPUs become more general with the advent of shading languages, like the one in OpenGL 2.0, which run directly on the graphics card.

    To prove the point, a group of Swiss programmers implemented a ray-tracing application directly on a high-end nVidia GPU using OpenGL 2.0, to see whether it would run faster than a ray-tracer on the CPU.

    After a lot of tweaking and sweating over the code, they realised in the end that running a ray-tracer on the GPU is slower than on a comparable CPU. A GPU is just not built for general-purpose computation. (Although it is a great achievement to get ray-tracing running on a GPU at all.)

    Having said that, it is nevertheless amazing that nVidia managed to calculate graphics effects like global illumination and ambient occlusion at decent frame rates directly on a 6800 Ultra card (see their latest GPU Gems 2 book for more details).

    The point is, anything graphics-related (vector and matrix calculations) will likely always be faster on a GPU, as GPUs are built to do those calculations very fast. But the moment you need other kinds of calculation, especially ones that need to store data for reprocessing, a CPU will be the better option.
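To make that last point concrete, here is a minimal Python sketch of the workhorse operation GPUs are optimized for: applying the same 4x4 matrix to every vertex, with no data dependence between vertices. The matrix and vertices are made up for illustration; a real GPU would run thousands of these multiplies in parallel in hardware:

```python
# Toy sketch of the math GPUs are built for: every vertex gets the same
# 4x4 matrix-vector multiply, independently of every other vertex.
# Plain Python stand-in; real GPUs do this in dedicated parallel hardware.

def mat_vec(m, v):
    # m: 4x4 matrix as nested lists, v: 4-component vector
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# A translation matrix that moves points by (1, 2, 3)
translate = [
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
]

vertices = [[0, 0, 0, 1], [5, 5, 5, 1]]            # homogeneous coordinates
moved = [mat_vec(translate, v) for v in vertices]  # independent per vertex
print(moved)   # prints: [[1, 2, 3, 1], [6, 7, 8, 1]]
```

Contrast this with a ray-tracer, where each bounce depends on the result of the previous one and rays branch unpredictably; that data dependence and stored intermediate state is exactly what (as hobbithobbit says) pushes the work back onto the CPU.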
  • Reply 4 of 4
    Cool, thanks for the clarification guys. I figured I would ask here since I know most of you know your stuff.