Apple said to have its sights set on Nvidia's CUDA technology

Posted in General Discussion, edited January 2014
Apple may adopt a version of Nvidia's CUDA technology and roll it into its own set of developer tools, allowing programmers of everyday Mac apps to reap the parallel computational benefits of the chipmaker's graphics processors.



In an interview with CNet News.com's Tom Krazit earlier this week, Nvidia chief executive Jen-Hsun Huang dropped hints that Apple is extremely interested in the technology, so much so that it plans to distribute its own flavor to its developer base.



"Apple knows a lot about CUDA," Huang said, adding that the Mac maker's implementation "won't be called CUDA, but it will be called something else."



Essentially, CUDA, short for Compute Unified Device Architecture, is a proprietary set of application programming interfaces (APIs) that lets developers of various types of applications -- not just those specialized for graphics operations -- leverage the parallel processing capabilities of Nvidia's latest graphics chips, such as the GeForce 8600M found in the new MacBook Pros.
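
A rough sense of what that looks like in code: a developer marks one routine as GPU code, and the CUDA runtime fans it out across thousands of lightweight threads while the rest of the program stays ordinary C. The sketch below is a minimal illustration of that model, not code from Nvidia's or Apple's demos; the kernel name, scaling factor, and problem size are arbitrary.

```c
// Minimal CUDA C sketch: each GPU thread scales one element of a large array.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;    // one thread per element
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;                            // ~1 million floats
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; i++)
        host[i] = (float)i;

    float *dev;
    cudaMalloc((void **)&dev, bytes);                 // allocate GPU memory
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 0.5f, n);    // 4096 blocks of 256 threads
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

    printf("host[10] = %f\n", host[10]);              // expect 5.0
    cudaFree(dev);
    free(host);
    return 0;
}
```

Host-side calls such as cudaMalloc and cudaMemcpy move data over the PCI Express bus, which is why this sort of offload pays off most on workloads, like video transcoding, that do a lot of arithmetic per byte transferred.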



In the weeks leading up to the launch of those MacBook Pros, AppleInsider reported that Nvidia was preparing to deliver its first Mac-based graphics chips that support CUDA. Since then, the Santa Clara-based chip maker has gone on to say that programs developed for the GeForce 8 series will also work without modification on all future Nvidia video cards.



CNet, which actually visited Nvidia's headquarters for the interview with Huang, noted that engineers "demonstrated how a CUDA-enabled version of a program similar to QuickTime running on a desktop or laptop could dramatically speed up the process of transcoding a movie or television show into a format suitable for the iPhone."



While Huang declined to share any further details regarding Apple's intentions for his firm's technology, the speculation is that Mac developers could hear some specifics as early as Monday's opening keynote address at the company's annual developers conference in San Francisco.

Comments

  • Reply 1 of 21
    wheelhotwheelhot Posts: 465member
    Are current Penryn MacBook Pro owners eligible for this upgrade?
  • Reply 2 of 21
    Apple will need to put an Nvidia card in the mini or offer a mid-tower with an Nvidia card to get the most use out of this.
  • Reply 3 of 21
    SpamSandwichSpamSandwich Posts: 33,407member
    Quote:
    Originally Posted by AppleInsider View Post


    In an interview with CNet News.com's Tom Krazit earlier this week, Nvidia chief executive Jen-Hsun Huang dropped hints that Apple is extremely interested in the technology, so much so that it plans to distribute its own flavor to its developer base.



    Well, not now they won't. Steve will gut Huang like a fish.
  • Reply 4 of 21
    jeffdmjeffdm Posts: 12,951member
    Maybe they can use this to make the Pro Apps run better than they do with the old ATI X1900 XT card. It looks like 10.5.3 improved things a tiny bit, but the ATI card is still better for pro apps than the 8800.
  • Reply 5 of 21
    Is Apple also going to implement ATI's competing CTM programming support? I can't really see Apple providing acceleration features for only nVidia GPUs, since Apple seems to like to dual-source most components. ATI's CTM actually seems to have a larger installed base too, since it works with the previous-generation X1k architecture while nVidia's only starts with their newer 8xxx generation. That said, ATI's CTM supposedly provides low-level programming access, so you could theoretically implement nVidia's CUDA interface on top of it, which I guess makes CUDA the best way to go.



    Ironically, with the rumours of Carbon's deprecation and the future being Cocoa and Objective-C, CUDA seems to be based on standard C.
  • Reply 6 of 21
    ksecksec Posts: 1,569member
    This raises the question of how Intel will react, given the close relationship between Apple and Intel.

    Intel has Larrabee coming up in 2009, which will be x86 graphics with Advanced Vector Extensions.

    If Intel wants any sort of penetration in the graphics industry, Apple has got to be the first customer on its wish list.



    Or would Apple play it safe and have Nvidia as its graphics partner so it doesn't put all its eggs in one basket?



    And ATI has long been a partner with Apple; lots of its previous-generation products use ATI graphics. Given how Intel hates AMD more than anything else, would we still see an Apple implementation of CUDA working on ATI?
  • Reply 7 of 21
    In regards to Larrabee, seeing that it's x86-based, it probably wouldn't take that much effort to develop accelerated apps for it, which I think is the point. So I don't think it's an either-or proposition. The resources could be spent on CUDA integration since it won't take as much to bring Larrabee on stream. Why would Apple want to do both? Well, I'm pretty sure the current targets for Larrabee are IGPs and co-processors. Neither will affect the use of nVidia or ATI discrete GPUs, and in the case of co-processors, likely in the Mac Pro, Larrabee and CUDA could work in tandem. Eventually Larrabee will probably try to reach into higher-end discrete GPU markets, but that is still a ways off, and they would be fighting uphill into an entrenched market.
  • Reply 8 of 21
    jeffdmjeffdm Posts: 12,951member
    Quote:
    Originally Posted by ltcommander.data View Post


    In regards to Larrabee, seeing that it's x86-based, it probably wouldn't take that much effort to develop accelerated apps for it, which I think is the point. So I don't think it's an either-or proposition. The resources could be spent on CUDA integration since it won't take as much to bring Larrabee on stream. Why would Apple want to do both? Well, I'm pretty sure the current targets for Larrabee are IGPs and co-processors. Neither will affect the use of nVidia or ATI discrete GPUs, and in the case of co-processors, likely in the Mac Pro, Larrabee and CUDA could work in tandem. Eventually Larrabee will probably try to reach into higher-end discrete GPU markets, but that is still a ways off, and they would be fighting uphill into an entrenched market.



    Isn't the Intel graphics built into every Intel consumer chipset now? The benefit of including support for that would be that all the Apple computers would have it whether or not they have a separate graphics chip, except for the server and the workstation. And I suppose the server & workstation chipsets could include it too. Servers often don't need much better than framebuffer graphics anyway, and just standardizing on that for every chipset of any level would help Apple.
  • Reply 9 of 21
    ksecksec Posts: 1,569member
    Quote:
    Originally Posted by JeffDM View Post


    Isn't the Intel graphics built into every Intel consumer chipset now? The benefit of including support for that would be that all the Apple computers would have it whether or not they have a separate graphics chip, except for the server and the workstation. And I suppose the server & workstation chipsets could include it too. Servers often don't need much better than framebuffer graphics anyway, and just standardizing on that for every chipset of any level would help Apple.



    No, not all Intel chipsets have Intel integrated graphics in them. And their unified-shader implementation is rather poor, so unless some miracle happens I don't see much power we can harness from it.
  • Reply 10 of 21
    Quote:
    Originally Posted by JeffDM View Post


    Isn't the Intel graphics built into every Intel consumer chipset now? The benefit of including support for that would be that all the Apple computers would have it whether or not they have a separate graphics chip, except for the server and the workstation. And I suppose the server & workstation chipsets could include it too. Servers often don't need much better than framebuffer graphics anyway, and just standardizing on that for every chipset of any level would help Apple.



    Larrabee is not the same as Intel's current graphics architecture.



    My understanding of Larrabee is that it is to x86 CPUs what the SPEs in the Cell on the PS3 are to PowerPC. Basically, the SPE is kind of like a glorified AltiVec engine that specializes in vectorized floating-point computation. Larrabee will consist of many glorified SSE execution units that also specialize in vectorized floating-point computation. Larrabee is then useful as a co-processor to speed up multimedia processing, but will still need a general-purpose CPU for other tasks. Games are of course floating-point intensive, so the hope for Intel is that Larrabee can also be a basis for them to make more progress in the graphics card market now that AMD has acquired ATI, starting with incorporating Larrabee in IGPs and hopefully moving into discrete graphics cards.



    The benefit of Larrabee over current GPUs is that the architecture is x86, similar to current CPUs, so minimal programming is required to allow a Larrabee GPU to accelerate programs other than games. I believe Larrabee being x86 also allows a lot more programming flexibility for games, since I don't believe you would have to program a game through DirectX or OpenGL anymore. Games probably still would, since those are so entrenched and optimized, but more options are still nice.
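
    To make the "glorified SSE execution units" comparison concrete, here is a rough sketch of what vectorized floating point looks like on today's x86 using plain SSE intrinsics. It only illustrates the programming model; it is not Larrabee code, since Larrabee's actual vector instruction set hasn't been published.

    ```c
    // SSE sketch: one instruction operates on four packed floats at a time.
    #include <xmmintrin.h>   // SSE intrinsics

    // y[i] += a * x[i] for n floats, four at a time, with a scalar tail loop.
    void saxpy_sse(float *y, const float *x, float a, int n)
    {
        __m128 va = _mm_set1_ps(a);                      // broadcast the scalar a
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 vx = _mm_loadu_ps(x + i);             // load 4 floats from x
            __m128 vy = _mm_loadu_ps(y + i);             // load 4 floats from y
            vy = _mm_add_ps(vy, _mm_mul_ps(va, vx));     // vy += va * vx (multiply, then add)
            _mm_storeu_ps(y + i, vy);                    // store 4 results
        }
        for (; i < n; i++)                               // leftover elements
            y[i] += a * x[i];
    }
    ```

    The argument for Larrabee, as described above, is that this kind of x86 vector code, and the compilers and libraries that already generate it, carries over with much less rework than porting the same loop to a GPU shader model.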
  • Reply 11 of 21
    winterspanwinterspan Posts: 605member
    Quote:
    Originally Posted by ltcommander.data View Post


    The benefit of Larrabee over current GPUs is that the architecture is x86, similar to current CPUs, so minimal programming is required to allow a Larrabee GPU to accelerate programs other than games. I believe Larrabee being x86 also allows a lot more programming flexibility for games, since I don't believe you would have to program a game through DirectX or OpenGL anymore. Games probably still would, since those are so entrenched and optimized, but more options are still nice.



    Quote:
    Originally Posted by ltcommander.data View Post


    In regards to Larrabee, seeing that it's x86-based, it probably wouldn't take that much effort to develop accelerated apps for it, which I think is the point. So I don't think it's an either-or proposition. The resources could be spent on CUDA integration since it won't take as much to bring Larrabee on stream. Why would Apple want to do both? Well, I'm pretty sure the current targets for Larrabee are IGPs and co-processors. Neither will affect the use of nVidia or ATI discrete GPUs, and in the case of co-processors, likely in the Mac Pro, Larrabee and CUDA could work in tandem. Eventually Larrabee will probably try to reach into higher-end discrete GPU markets, but that is still a ways off, and they would be fighting uphill into an entrenched market.



    I don't have any experience with CUDA, although I understand the technology. I am wondering how much inherent benefit x86 would bring to accelerating apps on Larrabee. CUDA is basically a set of C language extensions, and either technology still requires a complete rewrite of the core algorithms in order to benefit from the massive parallel processing. Wouldn't you easily be able to link up regular x86 libraries to your CUDA program? What technical benefit would the x86 nature of Larrabee provide? I guess I need to understand more about compiling the same code on different architectures and the challenges involved...





    Quote:
    Originally Posted by JeffDM View Post


    Isn't the Intel graphics built into every Intel consumer chipset now? The benefit of including support for that would be that all the Apple computers would have it whether or not they have a separate graphics chip, except for the server and the workstation. And I suppose the server & workstation chipsets could include it too. Servers often don't need much better than framebuffer graphics anyway, and just standardizing on that for every chipset of any level would help Apple.



    Yes, many of Apple's products that don't come with discrete graphics cards have integrated graphics on the Intel motherboard; however, even the most modern generation is incredibly weak compared to anything else, even AMD's version.
  • Reply 12 of 21
    MarvinMarvin Posts: 15,322moderator
    Quote:
    Originally Posted by ltcommander.data View Post


    Ironically, with the rumours of Carbon's deprecation and the future being Cocoa and Objective-C, CUDA seems to be based on standard C.



    Apple aren't forcing people to use Objective-C. C code works without any modification. If you mix Obj-C and C++ you have to make some adjustments, but we're talking about low-level code, which requires no interface. Cocoa/Obj-C will be the standard for graphical interfaces, which is a big improvement, but the low-level code is unaffected.



    I always like to see Apple looking more at nVidia. They seem to have much better software for taking advantage of their hardware than ATI does. Building this into the system would be great. It's about time the GPU and CPU weren't treated as separate but were used together in as many tasks as possible.
  • Reply 13 of 21
    jayinsfjayinsf Posts: 6member
    Regarding the comments that question what Apple might do with regard to Nvidia CUDA vs. ATI CTM vs. Intel Larrabee, I would really look for Apple to come up with an API that masks all the differences, using LLVM to JIT/optimize for the available hardware in a manner similar to what they've already done with their graphics pipeline.
  • Reply 14 of 21
    jeffdmjeffdm Posts: 12,951member
    Quote:
    Originally Posted by JayInSF View Post


    Regarding the comments that question what Apple might do with regard to Nvidia CUDA vs. ATI CTM vs. Intel Larrabee, I would really look for Apple to come up with an API that masks all the differences, using LLVM to JIT/optimize for the available hardware in a manner similar to what they've already done with their graphics pipeline.



    The Accelerate framework does something like this: it's a computational abstraction where you don't need to do anything special to support a new architecture. Maybe it doesn't do everything each system supports, but it basically takes a lot of standard math operations and converts them to the appropriate code. It supports AltiVec and SSE; I imagine it could work for GPU math processing too.
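
    For anyone who hasn't used it, here is a tiny sketch of that idea using the vDSP part of Accelerate (the array values are just an example): the caller asks for a standard operation and the framework picks the best vector path available, so the same source covers AltiVec on PowerPC and SSE on Intel.

    ```c
    // Accelerate/vDSP sketch: element-wise add of two float arrays.
    // Build on a Mac with:  cc add.c -framework Accelerate
    #include <Accelerate/Accelerate.h>
    #include <stdio.h>

    int main(void)
    {
        float a[4] = {1, 2, 3, 4};
        float b[4] = {10, 20, 30, 40};
        float c[4];

        // c[i] = a[i] + b[i]; stride 1 on each array, 4 elements
        vDSP_vadd(a, 1, b, 1, c, 1, 4);

        printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);  // 11 22 33 44
        return 0;
    }
    ```

    Whether Apple would route calls like this to a GPU through something CUDA-like is exactly the open question in this thread.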
  • Reply 15 of 21
    Quote:
    Originally Posted by AppleInsider View Post


    engineers "demonstrated how a CUDA-enabled version of a program similar to QuickTime running on a desktop or laptop could dramatically speed up the processor of transcoding a movie or television show into a format suitable for the iPhone."



    sorry if this seems uninformed



    but will this slow down screen redraws during encoding or other heavy-lifting tasks? I thought GPUs were included to make sure the screen stays responsive even when the CPU is "flat out." If "both chips" are being tasked to encode something, won't screen responsiveness potentially suffer?



    (I'm sure codes and protocols are set up to avoid this.) It's gonna hammer battery life, though!
  • Reply 16 of 21
    wheelhotwheelhot Posts: 465member
    I prefer Nvidia to ATI; every time Apple releases a model with an ATI GPU, there are always problems.
  • Reply 17 of 21
    Quote:
    Originally Posted by JeffDM View Post


    Isn't the Intel graphics built into every Intel consumer chipset now? The benefit of including support for that would be that all the Apple computers would have it whether or not they have a separate graphics chip, except for the server and the workstation. And I suppose the server & workstation chipsets could include it too. Servers often don't need much better than framebuffer graphics anyway, and just standardizing on that for every chipset of any level would help Apple.



    Real servers have 16 to 32 MB of video RAM on a PCI-based video chip, not integrated graphics that eats up chipset I/O and system RAM.
  • Reply 18 of 21
    Quote:
    Originally Posted by Joe_the_dragon View Post


    Apple will need to put an Nvidia card in the mini or offer a mid-tower with an Nvidia card to get the most use out of this.



    or CUDA + PA Semi = custom chip
  • Reply 20 of 21
    wmfwmf Posts: 1,164member
    Quote:
    Originally Posted by JayInSF View Post


    Regarding the comments that question what Apple might do with regard to Nvidia CUDA vs. ATI CTM vs. Intel Larrabee, I would really look for Apple to come up with an API that masks all the differences, using LLVM to JIT/optimize for the available hardware in a manner similar to what they've already done with their graphics pipeline.



    Right on. LLVM can allow Core * to use whatever hardware is available: ATI, Nvidia, Intel, Larrabee...