Nvidia begins work on first GPGPUs for Apple Macs

Posted:
in Future Apple Hardware edited January 2014
Graphics chipmaker Nvidia Corp. is in the early stages of developing its first Mac-bound GPGPUs, AppleInsider has learned.



Short for general-purpose computing on graphics processing units, GPGPUs are a new wave of graphics processors that can be instructed to perform computations previously reserved only for a system's primary CPU, allowing them to help speed up non-graphics-related applications.



The technology -- in Nvidia's case -- leverages a proprietary architecture called CUDA, which is short for Compute Unified Device Architecture. It's currently compatible with the company's new GeForce 8 Series of graphics cards, allowing developers to use the C programming language to write algorithms for execution on the GPU.
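

As a rough illustration -- not code from Nvidia or Apple, just a generic example -- a CUDA kernel is an ordinary-looking C function marked to run on the GPU, with one copy executed per data element:

__global__ void scale_and_add(int n, float a, const float *x, float *y)
{
    /* Each GPU thread handles a single array element. */
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}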



GPGPUs have proven most beneficial in applications requiring intense number crunching, examples of which include high-performance computer clusters, raytracing, scientific computing applications, database operations, cryptography, physics-based simulation engines, and video, audio and digital image processing.



It's likely that the first Mac-compatible GPGPUs would turn up as build-to-order options for Apple's Mac Pro workstations, due to their ability to aid digital video and audio professionals in sound-effects processing, video decoding and post-processing.



Precisely when those cards will crop up is unclear, though Nvidia, through its Santa Clara, Calif.-based offices, this week put out an urgent call for a full-time staffer to help design and implement kernel-level Mac OS X drivers for the cards.



Nvidia's $1,500 Tesla graphics and computing hybrid card, released in June, is the chipmaker's first chipset explicitly built for both graphics and high-intensity, general-purpose computing.







Programs based on the CUDA architecture can not only tap the card's 3D performance but also repurpose its shader processors for advanced math. That massively parallel design leads to tremendous gains in performance compared with regular CPUs, Nvidia claims.
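

Continuing the illustrative sketch above (the surrounding code is hypothetical; only the standard CUDA API calls are real), the host side of such a program copies data to the card, launches thousands of threads at once, and copies the result back:

/* Host-side sketch: error checking omitted for brevity. */
void scale_and_add_on_gpu(int n, float a, const float *x_host, float *y_host)
{
    float *x_dev, *y_dev;
    size_t bytes = n * sizeof(float);

    cudaMalloc((void **)&x_dev, bytes);                        /* allocate card memory */
    cudaMalloc((void **)&y_dev, bytes);
    cudaMemcpy(x_dev, x_host, bytes, cudaMemcpyHostToDevice);  /* copy inputs over */
    cudaMemcpy(y_dev, y_host, bytes, cudaMemcpyHostToDevice);

    int threads_per_block = 256;
    int blocks = (n + threads_per_block - 1) / threads_per_block;
    scale_and_add<<<blocks, threads_per_block>>>(n, a, x_dev, y_dev);

    cudaMemcpy(y_host, y_dev, bytes, cudaMemcpyDeviceToHost);  /* copy result back */
    cudaFree(x_dev);
    cudaFree(y_dev);
}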



In science applications, calculations have seen speed boosts ranging from 45 times to as much as 415 times when processing MRI scans for hospitals. Increases such as this can mean the difference between using a single system and a whole computer cluster to do the same work, the company says.

Comments

  • Reply 1 of 51
    >_> Posts: 336 member
    It's only a matter of time before apple slaps a processor in our keyboards and mice as well. :P



    - Xidius
  • Reply 2 of 51
    I'll take one as soon as they are ready
  • Reply 3 of 51
    Hmmm GPGPUs, catchy name! But seriously, sounds confusing!



    Will software need to be modified to leverage the power of these processors?



    I'm very confused at the role of the GPU/CPU, they seem to be crossing paths more and more! Someone, tell me, what does the GPU do, and what does the CPU do, and why are they better for their separate tasks.
  • Reply 4 of 51
    wirc Posts: 302 member
    As GUIs get more processor heavy, i.e. RI, and people realize that computers are useful for much more than gaming, I think these will be a great asset, especially for companies with software-hardware integration like Apple.
  • Reply 5 of 51
    Quote:
    Originally Posted by AppleInsider


    In science applications, calculations have seen speed boosts ranging from 45 times to as much as 415 times when processing MRI scans for hospitals. Increases such as this can mean the difference between using a single system and a whole computer cluster to do the same work, the company says.





    I think an 8 core Mac Pro IS a cluster.
  • Reply 6 of 51
    eAi Posts: 417 member
    Well, you think wrong then.



    "A computer cluster is a group of loosely coupled computers that work together closely so that in many respects they can be viewed as though they are a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks." From The Fount of all Knowledge.
  • Reply 7 of 51
    booga Posts: 1,082 member
    Quote:
    Originally Posted by thefunky_monkey


    Hmmm GPGPUs, catchy name! But seriously, sounds confusing!



    Will software need to be modified to leverage the power of these processors?



    I'm very confused at the role of the GPU/CPU, they seem to be crossing paths more and more! Someone, tell me, what does the GPU do, and what does the CPU do, and why are they better for their separate tasks.



    Yes, the computation pipeline would have to be largely rewritten. And even then, the GPGPU is very specific in the kinds of operations it can accelerate. If you can define your operation in terms of what discrete set of input pixels you'll need in order to generate each output pixel (blurring, sharpening, color tweaking, etc), it can accelerate phenomenally. If, however, your operation can only be expressed in terms of what output pixel will need to be modified for each input pixel (histograms, accumulators, searches, etc), it's no better than a regular CPU. To make matters a little worse, in order to switch between the two "modes", you may need to copy your entire image from VRAM to RAM and back to VRAM.



    The newer GPGPUs attempt to compensate for some of these issues, but the areas where this computation is applicable tend to be pretty specific.
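

    To make the distinction concrete, here is a rough, hypothetical sketch of a gather-style CUDA kernel (the names are invented for illustration): each output pixel reads a fixed 3x3 neighborhood of input pixels, so the work maps cleanly onto one GPU thread per output pixel.

    __global__ void box_blur_3x3(const float *in, float *out, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height)
            return;

        float sum = 0.0f;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int sx = min(max(x + dx, 0), width - 1);  /* clamp at the borders */
                int sy = min(max(y + dy, 0), height - 1);
                sum += in[sy * width + sx];
            }
        }
        out[y * width + x] = sum / 9.0f;  /* each thread writes exactly one output pixel */
    }

    A scatter-style operation like a histogram (hist[in[i]]++) has no such one-thread-per-output mapping, which is why it sees far less benefit, as described above.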
  • Reply 8 of 51
    booga Posts: 1,082 member
    A great way to prototype this stuff is to boot up Quartz Composer in the Leopard devtools and try some GLSL shaders. It abstracts away the CPU/GPU issues nicer than any other tool I've seen.
  • Reply 9 of 51
    Quote:
    Originally Posted by wirc


    As GUIs get more processor heavy, i.e. RI, and people realize that computers are useful for much more than gaming, I think these will be a great asset, especially for companies with software-hardware integration like Apple.



    OMG!! Computers are useful for more than just gaming ??? (KIDDING) This is a very promising development. When it migrates down to the point where us mere mortals can afford it, time will tell.... Sorry but I won't be spending $1,500 for a graphics card ANYTIME soon...
  • Reply 10 of 51
    Quote:
    Originally Posted by eAi


    Well, you think wrong then.



    "A computer cluster is a group of loosely coupled computers that work together closely so that in many respects they can be viewed as though they are a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks." From The Fount of all Knowledge.



    Did you miss the smiley?
  • Reply 11 of 51
    nagromme Posts: 2,834 member
    I like this idea better than PhysX hardware. Let PhysX run on GPGPU instead.
  • Reply 12 of 51
    cubert Posts: 728 member
    "calculations have seen speed boosts from a 45 times to as much as 415 times"



    Do they really mean 45-415%? Or, are we really talking about a 4500 to 41500% increase?



    If so, that is absolutely unbelievable!
  • Reply 13 of 51
    I'm really curious how these things might work for audio - audio companies have been making DSP cards for years, but they are usually extremely expensive and CPUs have become a better bang for the buck.



    Anyone know if any audio apps are taking advantage of things like this yet?
  • Reply 14 of 51
    melgross Posts: 33,510 member
    It's an interesting development, but it looks to be pretty expensive. I imagine that's the only reason why they'd do it for the few Macs that can use it.



    Since this would be useless without the OS recognizing it for what it is, I wonder if the reason they're doing it is that Apple is somehow involved. If Apple didn't support it, it wouldn't be useful.
  • Reply 15 of 51
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by minderbinder


    I'm really curious how these things might work for audio - audio companies have been making DSP cards for years, but they are usually extremely expensive and CPUs have become a better bang for the buck.



    Anyone know if any audio apps are taking advantage of things like this yet?



    The advantage to these chips is that they can be programmed to do whatever you need them to do. A DSP is just a number cruncher, as is a vector processor.
  • Reply 16 of 51
    shamino Posts: 527 member
    This is really good news. Apple has been offloading processing into GPUs for some time, but so far, I think it's only been for certain graphic-oriented frameworks (like CoreImage) and some special-purpose apps (like Motion).



    With this announcement, we may see more generalized frameworks (possibly integrated into Accelerate, CoreAudio, and other frameworks) to take advantage of the immense amount of GPU power that goes unused most of the time.

    Quote:
    Originally Posted by thefunky_monkey


    Will software need to be modified to leverage the power of these processors?



    Maybe yes, maybe no. If Apple updates OS frameworks (like CoreImage, CoreAudio, Accelerate, and other commonly-used components) to use these capabilities, lots of apps will automatically begin using them.



    Of course, you probably won't be able to take advantage of some features without using new APIs, which will obviously require updates to applications.

    Quote:
    Originally Posted by thefunky_monkey


    I'm very confused at the role of the GPU/CPU, they seem to be crossing paths more and more! Someone, tell me, what does the GPU do, and what does the CPU do, and why are they better for their separate tasks.



    The lines have been getting very blurry in recent years.



    A modern GPU is really a high-end math coprocessor. Things like matrix arithmetic across large data sets, Fourier transforms, and other features useful for image processing are built in. Many features are highly specialized, like various 3D texture-mapping abilities, anti-aliasing, compositing objects, etc.



    For quite a while now, it has been possible to upload code into the GPU for execution. Typically, this is to get better video performance, but it can be done for other purposes as well. The source and target memory of an operation doesn't have to be on-screen - it can be anywhere in the video card's memory.



    Of course, this is only useful if your application needs the kinds of operations the GPU can provide. Audio processing is one such operation. I can think of a few other possibilities.



    The big deal about NVidia's CUDA architecture (if I understand the article right) is that it will now be possible to use the GPU for a much wider range of processing tasks than just those that can benefit from image-processing algorithms.

    Quote:
    Originally Posted by minderbinder


    I'm really curious how these things might work for audio - audio companies have been making DSP cards for years, but they are usually extremely expensive and CPUs have become a better bang for the buck.



    Anyone know if any audio apps are taking advantage of things like this yet?



    It wouldn't surprise me if audio processing apps start taking advantage of this.



    I know Motion (part of the Final Cut Studio suite) requires a powerful GPU, because it offloads tons of rendering work into it. I think the other parts of FCS (including Soundtrack Pro) will use GPU power if it's available, but I'm not certain of that.
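

    As a purely hypothetical sketch of what that could look like for audio (the kernels and function below are invented for illustration, not from any shipping app), a CUDA program could keep a buffer of samples resident in the card's memory and chain several non-graphics kernels over it before copying anything back:

    __global__ void apply_gain(float *samples, int n, float gain)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            samples[i] *= gain;
    }

    __global__ void fade_out(float *samples, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            samples[i] *= (float)(n - i) / (float)n;  /* linear fade to silence */
    }

    void process_clip(float *samples_host, int n)
    {
        float *dev;
        size_t bytes = n * sizeof(float);
        int blocks = (n + 255) / 256;

        cudaMalloc((void **)&dev, bytes);
        cudaMemcpy(dev, samples_host, bytes, cudaMemcpyHostToDevice);

        apply_gain<<<blocks, 256>>>(dev, n, 0.5f);  /* both kernels work on data  */
        fade_out<<<blocks, 256>>>(dev, n);          /* that never leaves the card */

        cudaMemcpy(samples_host, dev, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dev);
    }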
  • Reply 17 of 51
    Sounds useful, but will Apple buy it?



    By the time it is available for the Mac, the Mac hardware and OSX should have completely transitioned to a dumbed down system for making photo albums of the kids' little league and sending prettified emails about how you just managed to make photo albums of the kids' little league.



    To extend its usefulness will be video iChat so you can hold up the photo album to show nana what you just did.



    To get Steve's attention though, Nvidia will have to work a lot harder to make this the world's thinnest graphics card.
  • Reply 18 of 51
    hmurchison Posts: 12,425 member
    GPU doing more processing? There goes my "Performance per Watt" metric.
  • Reply 19 of 51
    It will take a marketing miracle for this to be a success: it was hard enough getting compilers to support altivec, which is a hell of a lot more seamless than this proposal.



    Even so, I wholeheartedly support the effort. Superscalar CPUs are often inefficient for many kinds of modern computing. However, I see IBM's Cell as a much better paradigm than this is, and certainly one that's more likely to have an impact on the future of computing. Board-to-board data flow is supposed to be going away, not coming back.
  • Reply 20 of 51
    I mentioned this a few months ago with the release of the AMD/ATi equivalent and got a resounding yawn from all but a few.



    Now the AI speaks and you are in awe?



    These cards won't be released until UEFI is standard for nVidia and AMD/ATi.