GPUs to let PPC off the hook?
Recent comments from Dave Russel seem to indicate that Apple intends a lot more from QE than they're initially letting on. QE, from what we know, leverages the GPU's power to free up the CPU. Currently we know it will draw the desktop faster, but could it go much further?
With highly programmable GPUs on the way from nVidia, ATI and Matrox, could the CPU's role eventually be reduced to more of a high-speed delegator (rather than number cruncher)?
Bear in mind that I don't know how any of the electronics work, except that a GPU doesn't have the flexibility of a CPU. It does a limited number of things, but it does them VERY FAST. If the average GPU (in a few years) could do even a moderate number of well-chosen operations, then the modern CPU could find its workload seriously lightened.
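To make the "CPU as delegator" idea concrete, here's a toy sketch of the division of labour. Nothing in it is a real GPU API or anything Apple has described; all the names are made up. The point is only the shape of the arrangement: the GPU knows a few operations (and in real hardware would do them massively fast in parallel), and the CPU's job shrinks to routing work to it whenever one of those operations fits.

```python
# Hypothetical sketch: CPU as dispatcher, GPU as fast fixed-function worker.
# None of these names correspond to a real API.

GPU_OPS = {"scale", "offset"}  # the GPU's small, fast repertoire

def gpu_compute(op, data, arg):
    # Stand-in for the GPU path: a limited menu of operations, but
    # (in real hardware) each one massively parallel and very fast.
    if op == "scale":
        return [x * arg for x in data]
    if op == "offset":
        return [x + arg for x in data]
    raise ValueError(f"GPU has no {op!r} operation")

def cpu_compute(op, data, arg):
    # Flexible general-purpose fallback for everything the GPU can't do.
    if op == "power":
        return [x ** arg for x in data]
    raise ValueError(f"unknown op {op!r}")

def run(op, data, arg):
    # The CPU's role reduced to delegation: offload when possible.
    if op in GPU_OPS:
        return gpu_compute(op, data, arg)
    return cpu_compute(op, data, arg)

print(run("scale", [1, 2, 3], 4))   # takes the "GPU" path -> [4, 8, 12]
print(run("power", [1, 2, 3], 2))   # falls back to the "CPU" -> [1, 4, 9]
```

In this picture the system gets faster by widening the GPU's repertoire (`GPU_OPS`) and speeding its path to memory, not by making the dispatcher itself faster.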
Could it be the case that in the future it will be more important to have a fast GPU with a fast connection directly to RAM than to have the fastest CPU? Coupled, of course, to a driver-API-layer-whatever that facilitates this new GPU usage with the utmost efficiency? (like Quartz Extreme?)
If this becomes the case, maybe the pressure to have faster and faster CPUs might abate, and the real pressure for system designers will be to lay out the fastest memory access and instruction sets for the GPU.
Do you think this would ease the pressure for really fast/high-clock CPUs? Would CPUs have to do anything subtly or significantly different from what they do now to make this scenario likely?
Comments
Just thinking out loud...
I mean, not just PPC, but x86 and whatever replacements Intel and AMD have lined up: where would this leave them? If a 'GPU' becomes so powerful that it does the majority of the grunt work (not just for graphics but for other tasks as well), does the 'graphics card' become the key determiner of overall performance?
Take gaming. Currently a so-so fast CPU (Athlon 1.4) with a very fast graphics card (GF4 Ti4600) will run a lot of games faster and smoother than a very fast CPU (P4 2.533) with a so-so fast graphics card (Radeon 7500). But that's gaming, and for all their frame-painting goodness, that's essentially what most DX8-9 cards do: frame and paint scenes, fast! But if they could apply some of that speed to more useful endeavors... many more, actually, many even unrelated to graphics per se... well then, doesn't the GPU become the new heart of the system? And if it does, does that mean that in the future upgrading my GPU (in no short supply) could give me a significantly faster machine WITHOUT THE NEED TO BUY a new PowerMac from Apple or a new CPU/mobo from Intel/AMD? I'm not so sure any box or CPU maker would be too pleased with that situation.
I once read that AltiVec would be particularly adept in such an arrangement, as it could theoretically be used to translate large blocks of instructions/data into something a less sophisticated but more powerful GPU could crunch away on. I dunno how that would work or if that even makes any sense; I just remember reading it. Is it true on any level?
[ 06-17-2002: Message edited by: Matsu ]