GPUs to let PPC off the hook?

Posted in General Discussion, edited January 2014
Recent comments from Dave Russel seem to indicate that Apple intends a lot more from QE than they're initially letting on. QE, from what we know, leverages the GPU's power to free up the CPU. Currently we know it will draw the desktop faster, but could it go much further?



With highly programmable GPUs on the way from nVidia, ATI and Matrox, could the CPU's role eventually be reduced to more of a high-speed delegator (rather than number cruncher)?



Bear in mind that I don't know how any of the electronics work, except that a GPU doesn't have the flexibility of a CPU: it does a limited number of things, but it does them VERY FAST. If the average GPU (in a few years) could do even a moderate number of well-chosen operations, then the modern CPU could find its workload seriously lightened.



Could it be that in the future it will be more important to have a fast GPU with a fast connection directly to RAM than to have the fastest CPU? Coupled, of course, to a driver/API layer that facilitates this new GPU usage with the utmost efficiency (like Quartz Extreme)?
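Here is roughly what that split might look like from a programmer's point of view. To be clear, the gpu_* interface below is entirely hypothetical, a stand-in for whatever a Quartz-Extreme-like layer might someday expose; the stubs only simulate the handoff on the CPU so the sketch compiles and runs.

    /* Hypothetical C sketch: the CPU as a high-speed delegator.
       The gpu_* interface is invented for illustration only. */
    #include <stddef.h>
    #include <stdlib.h>
    #include <stdio.h>

    /* Opaque handle to a piece of work handed off to the 'GPU'. */
    typedef struct gpu_job { const float *in; size_t n; } gpu_job;

    static gpu_job *gpu_submit(const char *routine, const float *in, size_t n)
    {
        /* A real driver would queue 'routine' on the card here.  This stub
           just remembers the arguments so the example actually runs. */
        gpu_job *job = malloc(sizeof *job);
        job->in = in;
        job->n = n;
        (void)routine;
        return job;
    }

    static void gpu_wait(gpu_job *job, float *out)
    {
        /* Stand-in for "the GPU crunched it": double every sample. */
        for (size_t i = 0; i < job->n; i++)
            out[i] = job->in[i] * 2.0f;
        free(job);
    }

    /* The CPU as delegator: it picks a routine, points the 'GPU' at memory,
       and is free to do scheduling, I/O and UI work until the results land. */
    int main(void)
    {
        float in[4] = {1, 2, 3, 4}, out[4];
        gpu_job *job = gpu_submit("double_samples", in, 4);
        /* ... CPU-side housekeeping would happen here ... */
        gpu_wait(job, out);
        printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }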



If this becomes the case, the pressure to have faster and faster CPUs might abate, and the real pressure for system designers would be to lay out the fastest memory access and instruction sets for the GPU.



Do you think this would ease the pressure for really fast, high-clock CPUs? Would CPUs have to do anything subtly or significantly different from what they do now to make this scenario likely?

Comments

  • Reply 1 of 3
    noahj Posts: 4,503 member
    Wasn't this type of elegant design what made the Amiga so powerful for its time?
  • Reply 2 of 3
    buonrotto Posts: 6,368 member
    In general, I could see a trend away from a true "CPU," i.e., a central processor. I think QE is the first step in this new specialized use of chips, and the Book E stuff being discussed right now in Future Hardware is moving toward the same thing from the other direction. nVidia's Cg language is another step in this direction, since the programmer doesn't have to "hard code" for the chips but instead lets the APIs and the kernel sort all of that out. In other words, abstraction makes it more possible. I guess the tradeoff is a performance hit for the overhead, but the specific abilities of the chip/board design could offset this.



    Just thinking out loud...
  • Reply 3 of 3
    matsu Posts: 6,558 member
    How long before the GPU usurps the CPU's role not only in graphics but also for a large part of the mathematical heavy lifting? Say, instead of offloading functions to a render farm, the CPU offloads them to a super-fast 'graphics' card with one or more powerful, programmable GPUs that (unlike a cluster) can talk directly to system RAM at high speed?



    I mean, not just PPC, but x86 and whatever replacements Intel and AMD have lined up: where would this leave them? If a 'GPU' becomes so powerful that it does the majority of the grunt work (not just for graphics but for other tasks as well), does the 'graphics card' become the key determiner of overall performance?



    Take gaming. Currently a so-so CPU (Athlon 1.4) with a very fast graphics card (GF4 Ti4600) will run a lot of games faster and smoother than a very fast CPU (P4 2.533) with a so-so graphics card (Radeon 7500). But that's gaming, and for all their frame-painting goodness, that's essentially what most DX8/DX9 cards do: frame and paint scenes, fast! But if they could apply some of that speed to more useful endeavors, many of them unrelated to graphics per se (a toy sketch of this idea appears at the end of the thread), then doesn't the GPU become the new heart of the system? And if it does, does that mean that in the future upgrading my GPU (in no short supply) could give me a significantly faster machine WITHOUT THE NEED TO BUY a new PowerMac from Apple or a new CPU/mobo from Intel/AMD? I'm not sure any box or CPU maker would be too pleased with that situation.



    I once read that AltiVec would be particularly adept in such an arrangement, as it could theoretically be used to translate large blocks of instructions/data into something a less sophisticated but more powerful GPU could crunch away on (see the AltiVec sketch at the end of the thread). I don't know how that would work or if it even makes sense; I just remember reading it. Is it true on any level?



    [ 06-17-2002: Message edited by: Matsu ]
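To make matsu's "apply the frame-painting speed to other endeavors" idea concrete, here is a toy C/GLUT sketch of the kind of trick people are starting to play with on DX8-era cards: store two arrays as textures, let the card's blending hardware add them, and read the sums back. The OpenGL/GLUT calls are real, but the 8-bit precision and the slow readback over the bus are exactly why the thread's "programmable GPU with a fast path to system RAM" speculation matters; treat this as an illustration, not a practical technique.

    /* Element-wise add of two byte arrays, abused through the fixed-function
       pipeline of a 2002-era card: upload both as luminance textures, draw
       them over each other with additive blending, then read the sum back. */
    #include <GL/glut.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 256                        /* one 256-entry row of byte values */

    static unsigned char a[N], b[N], sum[N];

    static GLuint make_tex(const unsigned char *data)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, N, 1, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
        return tex;
    }

    static void draw_row(GLuint tex)     /* full-window quad sampling one texture */
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
    }

    static void display(void)
    {
        GLuint ta = make_tex(a), tb = make_tex(b);

        glEnable(GL_TEXTURE_2D);
        glClear(GL_COLOR_BUFFER_BIT);
        draw_row(ta);                    /* first pass writes a[] into the frame */
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);     /* second pass adds b[] on top          */
        draw_row(tb);
        glFlush();

        /* pull the per-pixel sums back off the card (red channel only) */
        glReadPixels(0, 0, N, 1, GL_RED, GL_UNSIGNED_BYTE, sum);
        printf("sum[10] = %d, cpu says %d\n", sum[10], a[10] + b[10]);
        exit(0);
    }

    int main(int argc, char **argv)
    {
        int i;
        for (i = 0; i < N; i++) { a[i] = i / 2; b[i] = i / 4; }  /* a+b < 256 */

        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutInitWindowSize(N, N);
        glutCreateWindow("gpu adder");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }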
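As for the AltiVec comment, the CPU-side half of that idea is at least concrete, since AltiVec already works on 128-bit blocks of data. Below is a minimal C sketch of that block-at-a-time style; the handoff to a GPU is deliberately left out, because no such interface exists.

    /* CPU-side "block at a time" processing with AltiVec: scale a large,
       16-byte-aligned float array four elements per iteration.
       Build with GCC on a G4/G5: gcc -maltivec -std=gnu99 */
    #include <altivec.h>

    void scale_blocks(float *data, float s, long n)   /* n: a multiple of 4 */
    {
        vector float vs   = (vector float){s, s, s, s};   /* GCC vector literals */
        vector float zero = (vector float){0.0f, 0.0f, 0.0f, 0.0f};
        long i;

        for (i = 0; i < n; i += 4) {
            vector float v = vec_ld(0, &data[i]);   /* load one 128-bit block */
            v = vec_madd(v, vs, zero);              /* v = v * s + 0          */
            vec_st(v, 0, &data[i]);                 /* write it back          */
        }
    }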