How AMD and Nvidia lost the mobile GPU chip business to Apple -- with help from Samsung an...

Comments

  • Reply 41 of 65
    Why the hell doesn't Apple just buy Nvidia or AMD and get its desktops all the GPU power they need? I'm sure they could buy one of them.
  • Reply 42 of 65
    tundraboytundraboy Posts: 1,620member

    "Additionally, the best GPUs among Android products were often seeing the fewest sales, because licensees were achieving volume sales with cheap products. That forced Google and its partners to optimize for the lowest common denominator while higher quality options were failing to achieve enough sales to warrant further development."

     

    This is what a race to the bottom looks like from the engineering trenches.  Good luck attracting the smartest, most creative engineers if all you can offer them is work on the cheapest, least-common-denominator, lagging-edge products.

  • Reply 43 of 65
    What about TVs? Going to 4K, 5K and higher will also require a lot of processing power. Android is moving into the TV arena, so the hardware will probably have to follow. There is still no competition from Apple in TVs.

    While I agree that Apple has no offering in the TV arena -- I don't think they should ...

    However, that does not mean that Apple has no skin in the game for hi-resolution displays. I am reading/writing this on a loaded iMac 5K computer that has a base price similar to Dell's 5K display.

    Based on experience, I suspect this computer returns a profit margin consistent with similar Apple products.


  • Reply 44 of 65
    Mobile? You must mean embedded mobile. ImgTec isn't touching the traditional laptop/mobile space and never will.

    ImgTec, like ARM, has extreme power constraints and quite frankly has reached a tipping point.

    Without a move away from silicon to a more exotic material, the constrained surface area and size of SoCs will force these devices to go 3D/cubic, or to admit they have peaked in performance per watt.

    The same is going to happen in GPGPUs, from normal laptop/mobile up to desktop/workstation scale.

    7nm is the quantum tipping point. 10nm is the last shrink.

    The A9 is going to be 14nm. See where this is going? Unless the phone or iPad die doubles or triples in size, they won't be pumping out more performance per watt.

    If they do that, Apple will be consulting AMD, whose footprint they would be entering: the upcoming Carrizo APUs are already announced for June, and the new R9 300 series GPGPUs coming in March are a huge leap forward from HBM memory, in both performance and TDP.
  • Reply 45 of 65
    kibitzerkibitzer Posts: 1,113member

    DED's case studies outshine the work of every other writer specializing in this industry. I'm surprised he's stayed with AI this long and hasn't wound up teaching technical strategy at some top graduate business school. Enjoy him while you've got him.

  • Reply 46 of 65
    hypoluxa wrote: »
    Why the hell doesn't Apple just buy Nvidia or AMD and get its desktops all the GPU power they need? I'm sure they could buy one of them.

    Antitrust law bars this from happening.

    Apple could become a licensed partner. Nvidia is never going to happen: it's a CUDA-only shop. AMD is slowly happening, with extended partnerships.
  • Reply 47 of 65
    kibitzer wrote: »
    DED's case studies outshine the work of every other writer specializing in this industry. I'm surprised he's stayed with AI this long and hasn't wound up teaching technical strategy at some top graduate business school. Enjoy him while you've got him.

    You must not read much in the actual Semiconductor industry.
  • Reply 48 of 65
    foggyhillfoggyhill Posts: 4,767member
    Quote:

    Originally Posted by jm6032 View Post

     

    I totally disagree. Until a portable device has active cooling systems and power supplies capable of hundreds of watts (or however many are needed in the future), 20+ inch displays, keyboards and everything else a desktop uses now, the day of the desktop will not end.


     

    It ends when 99% don't need any of those things in their device to cover 100% of their needs. Of course, some people will need what you mentioned, but for others it will be some imaginary beast only seen in movies or on TV.

     

    The only thing that will slow down the disappearance from the mainstream is the need to drive 4K displays. That's not done easily in a small, low-power package for now. But very soon it will be.

  • Reply 49 of 65
    foggyhillfoggyhill Posts: 4,767member
    Quote:

    Originally Posted by mdriftmeyer View Post



    Mobile? You must mean embedded mobile. ImgTec isn't touching the traditional laptop/mobile space and never will.

    ImgTec, like ARM, has extreme power constraints and quite frankly has reached a tipping point.

    Without a move away from silicon to a more exotic material, the constrained surface area and size of SoCs will force these devices to go 3D/cubic, or to admit they have peaked in performance per watt.

    The same is going to happen in GPGPUs, from normal laptop/mobile up to desktop/workstation scale.

    7nm is the quantum tipping point. 10nm is the last shrink.

    The A9 is going to be 14nm. See where this is going? Unless the phone or iPad die doubles or triples in size, they won't be pumping out more performance per watt.

    If they do that, Apple will be consulting AMD, whose footprint they would be entering: the upcoming Carrizo APUs are already announced for June, and the new R9 300 series GPGPUs coming in March are a huge leap forward from HBM memory, in both performance and TDP.

     

    The power constraint comes in great part from what they're used in. They're clocked extremely low now. There's a lot more performance possible in the A chips than what we've seen here.

  • Reply 50 of 65
    kibitzerkibitzer Posts: 1,113member
    Quote:
    Originally Posted by mdriftmeyer View Post





    You must not read much in the actual Semiconductor industry.



    You're right. I don't. Sorry. Just a layman's opinion.

  • Reply 51 of 65
    Mobile? You must mean embedded mobile. ImgTec isn't touching the traditional laptop/mobile space and never will.

    ImgTec, like ARM, has extreme power constraints and quite frankly has reached a tipping point.

    Without a move away from silicon to a more exotic material, the constrained surface area and size of SoCs will force these devices to go 3D/cubic, or to admit they have peaked in performance per watt.

    The same is going to happen in GPGPUs, from normal laptop/mobile up to desktop/workstation scale.

    7nm is the quantum tipping point. 10nm is the last shrink.

    The A9 is going to be 14nm. See where this is going? Unless the phone or iPad die doubles or triples in size, they won't be pumping out more performance per watt.

    If they do that, Apple will be consulting AMD, whose footprint they would be entering: the upcoming Carrizo APUs are already announced for June, and the new R9 300 series GPGPUs coming in March are a huge leap forward from HBM memory, in both performance and TDP.

    What about SoI or SoS?

    A while back I did some surfing and there were companies doing research in this area ... as I recall, it included APUs and stacked DRAM chips.

    Basically, much smaller traces requiring much less power while providing much better heat dissipation.
  • Reply 52 of 65
    koopkoop Posts: 337member

    I was watching someone play Hearthstone on the Galaxy Tab S and it was dreadfully choppy. This is bizarre because Blizzard is a AAA developer with tremendous talent, and the Tab S is Samsung's flagship tablet. It even has a modern Snapdragon chip in it. /shrug

     

    I can't imagine what was going on there. It runs very smoothly on the iPad Air (1st gen). I do feel Samsung insists on overkill screen resolutions, which makes their GPU gap even worse. This is very apparent in the Wi-Fi-only version of the Tab S, which has horrible performance.

     

    --

     

    With that said, I don't know how Nvidia or ATI is addressing the market. It seems like they completely missed the boat. At the same time, I disagree that desktop gaming is dying; in fact, it's been going through a resurgence over the past decade or so. ATI rules the roost in consoles, and Nvidia is a fantastic GPU maker for PC. My GTX 970 has been able to make all my PC games look stunning at 60 FPS or more.

     

    As much as people dream of it, you'll never see parity between desktop-class graphics and mobile graphics. The laws of physics will always be on the side of the desktop GPU makers. They simply have more space and power to work with.

  • Reply 53 of 65
    hypoluxa wrote: »
    Why the hell doesn't Apple just buy Nvidia or AMD and get its desktops all the GPU power they need? I'm sure they could buy one of them.

    Because purchasing Nvidia or AMD would reduce Apple's net profits.

    There are two scenarios:

    First, Apple buys Nvidia or AMD and continues selling chips to other manufacturers. This is the most unlikely, because other manufacturers would benefit from Apple's research and development without the risks.

    Second, Apple buys Nvidia or AMD and stops selling chips to other manufacturers. This would protect Apple's research and development, but the gain in profit from designing its own chips would have to be greater than the profits lost from not selling to other manufacturers, and in the (very small) PC market that is quite unlikely. In mobile (with the Ax series) it was possible because of the huge sales numbers.

    And we will never see the convergence of mobile and desktop CPUs; the difference in power figures is too big (almost two orders of magnitude). Apple could design a special ARM-based SoC for desktop use, but it wouldn't be profitable at such low sales volumes.
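    The "almost two orders of magnitude" claim above checks out with rough numbers. The TDP figures below are illustrative assumptions for the sake of the arithmetic, not measured values for any specific chip:

```python
import math

mobile_soc_tdp_w = 5.0     # assumed power budget of a phone/tablet SoC
desktop_tdp_w = 300.0      # assumed desktop CPU plus discrete GPU budget

# log10 of the ratio gives the gap in orders of magnitude
gap_orders = math.log10(desktop_tdp_w / mobile_soc_tdp_w)
print(round(gap_orders, 2))  # ~1.78, i.e. close to two orders of magnitude
```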
  • Reply 54 of 65
    What about SoI or SoS?

    A while back I did some surfing and there were companies doing research in this area ... as I recall, it included APUs and stacked DRAM chips.

    Basically, much smaller traces requiring much less power while providing much better heat dissipation.

    AMD and Hynix co-developed HBM.
    http://electroiq.com/blog/2013/12/amd-and-hynix-announce-joint-development-of-hbm-memory-stacks/

    PDF:
    http://www.memcon.com/pdfs/proceedings2014/NET104.pdf
  • Reply 55 of 65
    foggyhill wrote: »
    The power constraint comes in great part from what they're used in. They're clocked extremely low now. There's a lot more performance possible in the A chips than what we've seen here.

    You're missing the point: the power curve keeps the frequency low to keep battery performance high. Battery performance nose-dives if the frequency rises toward the 2.0 GHz-and-above envelope.
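    This exchange can be quantified with a rough sketch. Dynamic CMOS power scales roughly as C·V²·f, and under DVFS the supply voltage has to rise with frequency, so power climbs much faster than the clock. The base clock, base voltage, and voltage slope below are illustrative assumptions, not real A-series figures:

```python
def relative_power(f_ghz: float, f_base: float = 1.4, v_base: float = 0.9,
                   v_slope: float = 0.35) -> float:
    """Dynamic power relative to the base clock, assuming P ~ V^2 * f
    and a linear voltage/frequency curve V = v_base + v_slope*(f - f_base)."""
    v = v_base + v_slope * (f_ghz - f_base)
    return (v * v * f_ghz) / (v_base * v_base * f_base)

# Pushing a ~1.4 GHz mobile part toward 2.0 GHz: under these assumptions,
# a ~1.4x clock bump costs more than 2x the power.
print(relative_power(2.0))
```

    Whatever the exact coefficients, the superlinear shape is why a phone SoC clocked low can hide a lot of headroom while still protecting the battery.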
  • Reply 56 of 65
    Another good read. Really enjoy these. Thank you.
  • Reply 57 of 65

    Good article! But I don't agree with this point:

    Quote:

    Nvidia can't be pleased to have its fastest tablet chip just "nearly as fast" as Apple's A8X (without considering Metal)


    Nvidia's Tegra K1 offers full OpenGL 4.5 support with proprietary extensions, so the Tegra K1 has more hardware and software features than PowerVR 6 (and probably PVR7), even counting the Metal API. Yes, Metal is a modern API, but full-profile OpenGL has more features, and with AZDO techniques you can achieve performance similar to Metal. The PVR6XT in the iPad Air 2 scores results equal to the Tegra K1 in the GFXBench T-Rex/Manhattan tests, but in more complex tests (for example, with tessellation) the Tegra K1 will be faster.

    And with Metal, developers can target only PowerVR6 devices on iOS, but with full-profile OpenGL 4.x support, developers can target Windows/OS X/Linux on AMD/Nvidia GPUs plus the Tegra K1/X1 on Android.

  • Reply 58 of 65
    jexusjexus Posts: 373member
    Quote:

    Originally Posted by hypoluxa View Post



    Why the hell doesn't Apple just buy Nividia or AMD and get its desktops all the GPU power they need? I'm sure they could with one of them.

    Do you really want Nvidia's CEO to have control of Apple that badly?

    That is literally the only condition Nvidia has ever specified for a buyout.

    If Jen-Hsun Huang doesn't lead the new combined company, then no deal.

  • Reply 59 of 65
    danielsw wrote: »
    I don't think so.

    Why would they set up a factory in Texas and dump all that R&D and design $$$ into the new machine and not invest more to take it further?

    The obvious missing element is an up-to-date Thunderbolt display. But also obvious is that the technology is changing rapidly. So they're most likely hedging their bets as well as spending more design $$$ to "get it right."

    Apple is doing great with existing markets, so it behooves them to take their time to get the Pro geared up to grab the various high-end markets once they get their GPU act together. Perhaps their mobile chip development will provide valuable knowledge to utilize with the Pro.

    Well, I would certainly like to believe that. I meant that perhaps the volume didn't warrant the investment. Then again, flagships are flagships.

    If Apple wants to own the high end of their markets then the Mac Pro is part of that strategy. Remember, Apple is not focused on "market share", they want to control the narrative.

    It's been a number of years since I owned Apple's top-of-the-line computers, but it was always amazing to experience just what the rarefied air of computing looked and smelled like. Today's introductory iMacs put to shame what the Mac IIx could do, and we expect them to do so. On one hand it's nice to put the power of an average iMac toward a job and see impressive results, but kids today expect that. They will see future things unimaginable to me, but they won't have the wonder I felt as personal computers came into being.
  • Reply 60 of 65
    abozhin wrote: »
    Good article! But I don't agree with this point:
    Nvidia can't be pleased to have its fastest tablet chip just "nearly as fast" as Apple's A8X (without considering Metal)
    Nvidia's Tegra K1 offers full OpenGL 4.5 support with proprietary extensions, so the Tegra K1 has more hardware and software features than PowerVR 6 (and probably PVR7), even counting the Metal API. Yes, Metal is a modern API, but full-profile OpenGL has more features, and with AZDO techniques you can achieve performance similar to Metal. The PVR6XT in the iPad Air 2 scores results equal to the Tegra K1 in the GFXBench T-Rex/Manhattan tests, but in more complex tests (for example, with tessellation) the Tegra K1 will be faster.
    And with Metal, developers can target only PowerVR6 devices on iOS, but with full-profile OpenGL 4.x support, developers can target Windows/OS X/Linux on AMD/Nvidia GPUs plus the Tegra K1/X1 on Android.

    The K1 is a powerful chip, and it will sell well, I'm sure. However, being powerful while drawing little power is also necessary in the mobile market. Apple could get more power out of their processors, but at the cost of zipping through a battery faster; so, in mobile, it's a balance of factors that wins the day.

    To balance those factors a designer needs total control over the chip design and the software design. I think Apple has that covered, probably better than any other.

    Welcome to the forum!