Nvidia begins work on first GPGPUs for Apple Macs


Comments

  • Reply 41 of 51
    Quote:
    Originally Posted by Amorph View Post


    When AltiVec came out, Apple didn't have the various Core libraries (including the abstracted SIMD library), which automatically distribute their load across available resources. If you use those you get GPGPU for free when your machine has it ... .



    Sure, but Amiga and Silicon Graphics have been here before. They are now both mile-markers on the highway of computer history. For all of the applications I can think of for which this is a boon, the Cell paradigm offers a more compelling future. After all, it is a media processor, too, but it has a degree of general-purpose facilities built into the silicon. If both existed, I would say the money for a GPGPU would be better spent on a processor upgrade, where the processor in question happens to have a handful of SPEs on it.
  • Reply 42 of 51
    programmer Posts: 3,458 member
    Quote:
    Originally Posted by MCPtz View Post


    I don't really think the Cell is the epitome of SIMD. It can be programmed as MIMD and in many other configurations; it's very flexible, which is good, but at the same time difficult to master because of its complexity. But of course it's still super fast, powerful, and very interesting.



    GPUs have much more in common with Cell than with the pure-SIMD style machine you describe. In practice the combination of MIMD and SIMD is needed to support flexible and varying workloads (e.g. running vertex & fragment shaders at the same time, and varying them across primitives). Few uses of hardware these days need all of the machine's computational horsepower continuously for the same task at the same time. Having instruction/register-level streaming SIMD within each core, combined with a multi-core arrangement, provides the advantages without quite so much of the brute single-mindedness that a pure-SIMD architecture brings with it.
  • Reply 43 of 51
    programmer Posts: 3,458 member
    Quote:
    Originally Posted by Splinemodel View Post


    Sure, but Amiga and Silicon Graphics have been here before. They are now both mile-markers on the highway of computer history. For all of the applications I can think of for which this is a boon, the Cell paradigm offers a more compelling future. After all, it is a media processor, too, but it has a degree of general-purpose facilities built into the silicon. If both existed, I would say the money for a GPGPU would be better spent on a processor upgrade, where the processor in question happens to have a handful of SPEs on it.



    Before long you won't be able to tell the difference between these strategies.
  • Reply 44 of 51
    Quote:
    Originally Posted by Programmer View Post


    Sure it does. Amdahl's Law is extremely important in this case. If you have a process where 50% of the time is in a data parallel problem, and the other 50% is in a serial portion of the problem, then by throwing the data parallel problem at a GPU that can do those calcs a billion times faster... your program will run (at most) twice as fast.



    All true, but you're missing the point.



    Nobody is saying that CUDA is going to make your entire system run faster. The point here is that computers (and Macs, especially) are increasingly being used for applications that can take great advantage of parallelism. Video processing is one big task that even home users are doing today (thanks to programs like iMovie and iLife). Gaming (thanks to the massive amounts of 3D rendering modern games perform) is another such task.



    "Special purpose" or not, when large percentages of your customer base is running software that can take advantage of this sort of parallelism, it makes sense to use hardware that can provide it.

    Quote:
    Originally Posted by Programmer View Post


    I don't know what you were coding, but modern SIMD is epitomized by the IBM Cell processor. A main processor plus up to 8 on-chip 3+ GHz dual-issue SIMD cores. 200+ GFLOPS performance, compared to quad-core Intel Core 2 chips in the ~50 GFLOPS ballpark... with much higher cost and power consumption. On scalar code the Intel CPUs win hands down. On SIMD code, the Cell crushes them.



    Which is why you would never want to run an office suite on a PS3, and why you wouldn't want to play a game like Assassin's Creed on a typical desktop computer.



    As always, you want the right tool for each task. Parallelism isn't right for many classes of apps, but it is absolutely right for others. And today, those "others" are quickly becoming mainstream computing activities.
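    As a minimal sketch of the kind of data-parallel work being discussed here (assuming CUDA, with made-up kernel and function names rather than code from any shipping app), a per-pixel adjustment on a video frame maps naturally onto the GPU:

    #include <cuda_runtime.h>

    // One GPU thread per pixel; every thread performs the same tiny operation.
    __global__ void scale_pixels(float *pixels, float gain, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            pixels[i] *= gain;
    }

    // Host side: copy the frame to the GPU, launch enough threads to cover it,
    // then copy the result back.
    void brighten_frame(float *host_pixels, int n, float gain)
    {
        float *dev_pixels;
        size_t bytes = n * sizeof(float);

        cudaMalloc((void **)&dev_pixels, bytes);
        cudaMemcpy(dev_pixels, host_pixels, bytes, cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        scale_pixels<<<blocks, threads>>>(dev_pixels, gain, n);

        cudaMemcpy(host_pixels, dev_pixels, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dev_pixels);
    }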
  • Reply 45 of 51
    programmer Posts: 3,458 member
    Quote:
    Originally Posted by shamino View Post


    All true, but you're missing the point.



    No, I get the point... I was just using an extreme example to illustrate more clearly. If I had instead said that 90% of the CPU time was on a data parallel task, then your infinitely fast data parallel processor would only make the application run 10x faster. Ensuring that your data parallel task is 90% of the workload is actually pretty challenging.



    And the Cell is perfectly capable of running an office suite. Its Power core is roughly equivalent to a G5 running at half the clock rate.



    The days of ever-escalating scalar CPU performance are history, so faster hardware is only going to make concurrent workloads run faster. The real challenge is for the software guys to make as much of their code concurrent as possible, and you will quickly find that getting rid of that last 10% of scalar code is really hard and may actually be impossible. That puts a fairly hard upper limit on performance.
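    To put rough numbers on that, here is a back-of-the-envelope sketch in plain C; the fractions are just the examples from this thread, not measurements:

    #include <stdio.h>

    /* Amdahl's Law: overall speedup when a fraction p of the work is
       parallelizable and that portion is sped up by a factor s. */
    static double amdahl(double p, double s)
    {
        return 1.0 / ((1.0 - p) + p / s);
    }

    int main(void)
    {
        /* 50% parallel, effectively infinitely fast GPU: caps at 2x. */
        printf("50%% parallel: %.1fx\n", amdahl(0.50, 1e12));

        /* 90% parallel, effectively infinitely fast GPU: caps at 10x. */
        printf("90%% parallel: %.1fx\n", amdahl(0.90, 1e12));
        return 0;
    }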
  • Reply 46 of 51
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by Programmer View Post


    Sure it does. Amdahl's Law is extremely important in this case. If you have a process where 50% of the time is in a data parallel problem, and the other 50% is in a serial portion of the problem, then by throwing the data parallel problem at a GPU that can do those calcs a billion times faster... your program will run (at most) twice as fast.



    In the practical sense, I agree with you. I'm not denying that. But, it wasn't exactly what Amdahl himself was talking about. That's all I'm saying.
  • Reply 47 of 51
    programmer Posts: 3,458 member
    Quote:
    Originally Posted by melgross View Post


    In the practical sense, I agree with you. I'm not denying that. But, it wasn't exactly what Amdahl himself was talking about. That's all I'm saying.



    http://en.wikipedia.org/wiki/Amdahl's_law



    Seems like precisely what he was talking about to me. Perhaps you are referring to this?



    http://hint.byu.edu/documentation/Gu...w/Amdahls.html



    Which has always been my argument in practice... if you can't improve the efficiency of your code on a given problem, make the problem larger in a way that scales better.
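    For anyone following along, that second link appears to be the scaled-speedup (Gustafson's Law) argument. A rough sketch in C, with the 10% serial fraction as an assumed example rather than a measurement:

    #include <stdio.h>

    /* Gustafson's Law: if the parallel portion of the job grows with the
       number of processors N, scaled speedup = N - (N - 1) * serial_fraction. */
    static double gustafson(int n, double serial_fraction)
    {
        return n - (n - 1) * serial_fraction;
    }

    int main(void)
    {
        /* 10% serial code, but a problem sized to keep 128 cores busy. */
        printf("N=128: %.1fx scaled speedup\n", gustafson(128, 0.10));
        return 0;
    }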
  • Reply 48 of 51
    Well.



    With octo-core CPU configs and CUDA GPUs?



    You've essentially got the best of both worlds?



    If there is a blurring or merging between the two then surely it can get even better?



    (Thinks of the old Amiga...or C64...)



    It's nice that Apple 'is' working on this stuff with Nvidia. There seems to be evidence that they need to work harder with Nv' on Mac drivers.



    Having seen games like Crysis and Conan Morg?



    I can say that it will be nice for Apple users to have a powerful GPU to handle them. I even look forward to a day when we can drop the GPU of our choice into a Mac Pro or mini tower. Still optimistic about the latter... it's needed in light of the gulf that now exists between the iMac and Mac Pro.



    Yes. It looks like the R700 or the G100 will be a nice hopping-on point for Mac Pro users. It's a shame we'll probably have to wait until June(?) if Nv' or ATI launch then... or longer (knowing Apple...) for said cards.



    Lemon Bon Bon.
  • Reply 49 of 51
    l255j Posts: 57 member
    If you put a GPGPU into a computer, what do you plug your two 30" displays into? Do you just add another GPU for that purpose?
  • Reply 50 of 51
    macronin Posts: 1,174 member
    Quote:
    Originally Posted by L255J View Post


    If you put a GPGPU into a computer, what do you plug your two 30" displays into? Do you just add another GPU for that purpose?



    Yes.



    The GPGPU is not for display output; it is for intense number crunching of data.
  • Reply 51 of 51
    Quote:
    Originally Posted by MacRonin View Post


    Yes.



    The GPGPU is not for display output; it is for intense number crunching of data.



    Yes, the Tesla card this article mentions has no display capabilities. Of course, for a casual user not looking for a $1500 dedicated computing card, you can simply use CUDA on any 8-series GPU (or later).



    CUDA-Supported GPUs
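    For the curious, a quick way to see whether a given machine has a CUDA-capable GPU is a device query against the standard CUDA runtime API; a minimal sketch:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);

        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* Compute capability 1.x covers the 8-series and later parts. */
            printf("Device %d: %s (compute capability %d.%d)\n",
                   i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }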