After eating AMD & Nvidia's mobile lunch, Apple Inc could next devour their desktop GPU business

Posted in macOS, edited February 2015
While rumors have long claimed that Apple plans to replace Intel's x86 chips in Macs with its own custom ARM Application Processors, Mac GPUs are among a series of potentially more valuable opportunities available to Apple's silicon design team, one that could conceivably replicate Apple's history of beating AMD and Nvidia in mobile graphics processors, and Intel in mobile CPUs.

AMD & Nvidia


Building upon How Intel lost the mobile chip business to Apple, a second segment examining how Apple could muscle into Qualcomm's Baseband Processor business, and a third segment How AMD and Nvidia lost the mobile GPU chip business to Apple, this article examines how AMD and Nvidia could face new competitive threats in the desktop GPU market.

Apple's aspirations for GPU (and GPGPU)

In parallel with the threat Apple poses to Qualcomm in Baseband Processors, there is a second prospect that appears to have far greater potential than ARM-based Macs: Apple could focus its efforts on standardizing its GPUs across iOS and Macs, as opposed to standardizing on ARM as its cross-platform CPU architecture.

Such a move could potentially eliminate AMD and Nvidia GPUs in Macs, replacing them with the same Imagination Technologies PowerVR GPU architecture Apple uses in its iOS devices.

Today's A8X mobile GPU (used in iPad Air 2) is certainly not in the same class as dedicated Mac GPUs from AMD and Nvidia, but that's also the case with Apple's existing A8X CPU and Intel's desktop x86 chips.

Scaling up GPU processing capability is, however, conceptually easier than scaling up CPU performance, because the simple, repetitive tasks a GPU handles are easier to parallelize and to coordinate across multiple cores than general purpose code targeting a CPU.

Apple has already completed a lot of the work to manage queues of tasks across multiple cores and multiple processors of different architectures with Grand Central Dispatch, a feature introduced for OS X Snow Leopard by Apple's former senior vice president of Software Engineering Bertrand Serlet in 2009, and subsequently included in iOS 4.
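As a rough illustration of that dispatch model (not Apple's internal scheduler, just the public GCD API), here is a minimal Swift sketch in which independent chunks of work are submitted as blocks to a concurrent queue and the system spreads them across whatever cores are available; the queue label and the per-tile work are hypothetical.

```swift
import Dispatch

// Hypothetical example: process eight independent "tiles" concurrently.
// GCD decides how to map the submitted blocks onto available CPU cores.
let queue = DispatchQueue(label: "com.example.tiles", attributes: .concurrent)
let group = DispatchGroup()

for tile in 0..<8 {
    queue.async(group: group) {
        // Stand-in for any parallelizable unit of work (rendering, transcoding, etc.)
        print("processed tile \(tile)")
    }
}

// Wait for every submitted block to finish before continuing.
group.wait()
print("all tiles complete")
```

The block-and-queue abstraction is the relevant point: work expressed this way can be routed to however many cores, or conceptually to heterogeneous processors, happen to be present.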

A transition to PowerVR GPUs would also be easier to roll out incrementally (starting with low end Macs, for example) than a replacement of Intel CPUs with ARM processors would be. That's because OpenGL enables existing graphics code to work across different GPUs, greatly simplifying any transition work for third party app developers; there's no need for a sophisticated Rosetta-style technology to manage the transition.

Metal to the Mac

By allowing developers to bypass the overhead imposed by OpenGL with its own Metal API, Apple can already coax console-class graphics from its 64-bit mobile chips. Last summer, a series of leading video game developers demonstrated their desktop 3D graphics engines running on Apple's A7, thanks to Metal.

Apple Metal API


Just six months earlier, Nvidia was getting attention for hosting demonstrations of those same desktop graphics engines running on its Tegra K1 prototype, which incorporates the company's Kepler GPU design adapted from its desktop products.

However, that chip was so big and inefficient that it could not ship inside a smartphone even by the end of the year (and Nvidia has no plans to ever address the phone market with it). In contrast, with Metal Apple was showing off similar performance running on its flagship phone from the previous year.

With Macs running a souped up edition of the GPU in today's A8X (or several of them in parallel), Apple could bring Metal games development to the Mac on the back of its own PowerVR GPU designs licensed from Imagination.

Cross platform pollination

Apple already uses Intel's integrated HD Graphics (branded as Iris and Iris Pro in higher end configurations, and built into Intel's processor package) in its entry level Macs, and currently offers discrete Nvidia GeForce, AMD Radeon, and AMD FirePro graphics across its Pro models.

By incorporating its own PowerVR GPU silicon, Apple could initially improve upon the graphics performance of entry level Macs by leveraging Metal. Additionally, because Metal also handles OpenCL-style GPGPU (general purpose computing on the GPU), the incorporation of an iOS-style architecture could also be used to accelerate everything from encryption to iTunes transcoding and Final Cut video effects without adding significant component cost.
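To make the GPGPU idea concrete, here is a minimal sketch, in Swift, of a Metal compute dispatch as the API exists on iOS today; the kernel (a trivial doubling of a float buffer), the buffer size and the threadgroup size are purely illustrative, not anything Apple has announced for the Mac.

```swift
import Metal

// Illustrative Metal Shading Language kernel: double every element of a buffer.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void double_values(device float *data [[buffer(0)]],
                          uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0;
}
"""

guard let device = MTLCreateSystemDefaultDevice(),
      let commandQueue = device.makeCommandQueue() else {
    fatalError("Metal is not available on this device")
}

// Compile the kernel at runtime and build a compute pipeline for it.
let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "double_values")!)

// A small shared buffer standing in for real workloads (transcoding, effects, etc.).
var values = [Float](repeating: 1.0, count: 1024)
let buffer = device.makeBuffer(bytes: &values,
                               length: values.count * MemoryLayout<Float>.size)!

let commandBuffer = commandQueue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
// 1,024 threads split into 16 threadgroups of 64.
encoder.dispatchThreadgroups(MTLSize(width: values.count / 64, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// Read the result back from the shared buffer.
let results = buffer.contents().bindMemory(to: Float.self, capacity: values.count)
print("first element after GPU pass:", results[0])  // 2.0
```

The point of the sketch is only the shape of the API: the same command-queue model serves both rendering and general purpose compute, which is why Apple could lean on it for the kinds of acceleration described above.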

Both iOS and OS X make heavy use of hardware accelerated graphics in their user interface to create smooth animations, translucency and other effects.

Standardizing on the same graphics hardware would enable new levels of cross pollination between the two platforms; certainly more so than simply blending their very different user interfaces to create a hybrid "iOSX," which has become popular to predict as an inevitability even though it has little benefit for users, developers or Apple itself, and despite the fact that Windows 8 is currently failing as a "one size fits all" product strategy for Microsoft.

The very idea of Apple replacing x86 CPUs in Macs with its own Ax ARM Application Processors would appear to necessitate a switch to the Imagination GPU architecture anyway, making an initial GPU transition a stepping stone toward any eventual replacement of Intel on Macs.

Not too easy but also impossible not to

Of course, some of the same reasons Apple might not want to leave Intel's x86 also apply to Nvidia and AMD GPUs: both companies have developed state of the art technology that would certainly not be trivial (or risk free) to duplicate.

Mac Pro CPU/GPUs


It is clear, however, that Apple is intently interested in GPUs, having developed a modern computing architecture for Macs that emphasizes the GPU as a critical computing engine, not just for video game graphics but system wide user interface acceleration and GPGPU.

Apple's new Mac Pro architecture (above) incorporates two GPUs and only a single Intel CPU on the three sides of its core heat sink, in clear recognition of the fact that GPU processing speed and sophistication are increasing at a faster pace than CPUs in general (and Intel's x86 in particular), and that multiple GPUs are easier to productively put to work and fully utilize compared to multiple CPUs.

The same phenomenon is visible when you zoom in on Apple's modern Ax package: its GPU cores are given more dedicated surface area than the ARM CPUs, even on an A8X equipped with 3 CPU cores, as seen in the image below by Chipworks.

A7 & A8 GPU vs CPU

Monopolies resistant to change are now fading

All the speculation about Apple moving from Intel's x86 to ARM also fails to consider the potential for more radical change: why not move virtually all heavy processing on the Mac to a series of powerful GPUs and use a relatively modest CPU as a basic task manager, rather than just replacing the CPU with another CPU architecture?

When the PC computing world was dominated by Intel and Microsoft, there was little potential for making any changes that might threaten the business models of either Wintel partner. PowerPC and AMD64 were opposed by Intel, while Unix and OpenGL were opposed by Microsoft. That left little room for experimenting with anything new in either hardware or software.

The two couldn't even rapidly innovate in partnership very well: Microsoft initiated its Windows NT project in the late 1980s for Intel's new i860 RISC processor, but was forced back to the x86 by Wintel market forces. Then a decade later Microsoft ported Windows to Intel's 64-bit VLIW Itanium IA-64 architecture, only to find the radical new chip flop after it failed to deliver upon its promised performance leap.

In the consumer market, Microsoft attempted to create a PDA business with Windows CE, and Intel invested efforts in maintaining DEC's StrongARM team to build ARM chips for such devices. The results were underwhelming (and the entire concept was largely appropriated from Newton and Palm, rather than being really new thinking for PCs).

Over the next 15 years of building the same formulaic x86 Windows boxes, Intel eventually grew frustrated with the failures of Windows even as Microsoft chafed at the mobile limitations of x86. Their solo efforts, including Microsoft's Windows RT using ARM chips and Intel's expensive experiments with Linux and Android, have been neither successful nor particularly original.

Thinking outside the PC

We are now at a frontier in computing that hasn't been this open to change and new ideas since IBM first crowned Intel and Microsoft coregents of its 1981 PC, a product that wedded Intel's worst chip with Microsoft's licensed version of better software to build a "personal computer" that IBM hoped would not actually have any material impact upon its own monopoly in business machines.

Apple's own meritocratic adoption of PowerPC was sidelined in 2005 by market realities, forcing it to adopt a Wintel-friendly architecture for its Macs (and for the new Apple TV in 2007). However, by 2010 Apple's volume sales of ARM-powered iPods and iPhones enabled it to introduce an entirely new architecture, based on the mobile ARM chips it had co-developed two decades earlier.

When it introduced the iPad in 2010, it wasn't just a thin new form factor for computing: it was a new non-Wintel architecture. While everyone else had already been making phones and PDAs using ARM chips, there hadn't been a successful mainstream ARM computer for nearly twenty years.

That same year, Apple also converted its set top box to the same ARM architecture to radically simplify Apple TV (below) and be able to offer it at a much lower price. The fact that the product already uses PowerVR graphics makes it a potential harbinger for what Apple could do in its Mac lineup, and what it will almost certainly do in other new product categories, starting with Apple Watch.

iOS Apple TV

Skating to where the GPU will be

Microsoft maintained dominant control over PC games, operating systems and silicon chip designs between 1995 and 2005 by promoting its own DirectX APIs for graphics rather than OpenGL. Apple fought hard to prop up support for OpenGL on Macs, but the success of iOS has delivered the most devastating blow to DirectX, similar to the impact MP3-playing iPods had on Microsoft's aspirations for pushing its proprietary Windows Media formats into every mobile chip.

By scaling up Imagination GPUs on the Mac, Apple could reduce its dependence on expensive chips from Nvidia and AMD and bolster support for Metal and GPU-centric computing, potentially scaling back its dependence on Intel CPUs at the same time by pushing more and more of the computing load to its own GPUs.

Apple owns a roughly 9 percent stake in Imagination and, as the IP firm's largest customer, accounts for about 33 percent of Imagination's revenues. That makes Apple strongly motivated to pursue an expansion of PowerVR graphics, rather than continuing to fund GPU development by AMD and Nvidia.

This all happened before: How AMD and Nvidia lost the mobile GPU chip business to Apple--with help from Samsung and Google

Comments

  • Reply 1 of 52
    ascii Posts: 5,936, member

    It's a clever middle ground you've thought of. A way for Apple to save on components with minimal disruption to users. The only thing that makes me doubt it is that I think Apple are radical enough to go the whole hog (ARM CPU too) and just expect users to adjust.

     

    Also, do Intel even sell CPUs without an onboard GPU? If not would Apple just let the iGPU sit dormant?

  • Reply 2 of 52

    Apple will not drop Intel because it would be a dumb business move at this point even if it could run Mac OS on ARM.

    Apple believes that desktop PC sales as we know them will decrease in the future as sales of iOS based devices surge.

    They are not worried about CPU or GPU for desktops because they will not sell so many of them.

    Moreover, Intel + AMD + Nvidia facilitates running Windows + Linux etc... in VMWare Fusion for example.

     

    Apple needs to become just as Swift in the enterprise as it is in the consumer space on its way to the trillion dollar valuation.

    Apple needs to own Enterprise Analytics + Cloud + Hardware + CPU + Software + DB + Services in addition to mobiles and wearables.

    If the SIRI + Watson + Enterprise Apps effort with IBM works out well, Apple should consider merging with Big Blue to get the aforementioned.

  • Reply 3 of 52
    Dear writer:
    You know the PowerVR chip is Intel intergrated gpu, right?
    The only thing Apple did on A8x is CPU, and a small group of specialized circuit.
    Otherwise, we won't know it's a member of PowerVR 6XT
  • Reply 4 of 52
    oomu Posts: 130, member
    it's all nice and full of dreams

    but what I still want is an innovative product, full of great ideas and power in a tiny convenient package, to do HEAVY tasks with CG calculations, image rendering, signal analysis and high resolution image editing

    to summarize: the best workstation with state of the art technologies at a still reasonable price.

    The Mac Pro is already a forward thinking machine, with its own challenges, but it's still intel/amd based, and I don't see how Apple could provide something more powerful than that combo.

    ARM/PowerVR chips will improve of course, but so will Intel and AMD/Nvidia desktop chips.

    Everything is "mobile" now, but we still need dedicated tools for creation.

    Maybe with attrition, Apple will force other companies like Nvidia to give up even the desktop, opening an opportunity to create a powerful new GPU

    but:

    - we don't have proof the architecture is scalable at a reasonable price
    - software does matter: OpenGL is not the end of everything: decades of x86 software, CUDA compatibility and so on. Even OpenCL is not that obvious; we need a full-featured OpenCL stack.

    of course, I'm all open to radical new computing and great new software, don't think I won't be thrilled to learn new BETTER tools, but you ask (or dream) of a challenge even Apple couldn't take on.
  • Reply 5 of 52

    --------------------------------------------------

    Sorry.  But there are too many histrionics in this series of articles.

     

    FIRST: Apple is NOT funding AMD or nVidia.  Apple buys GPUs from them just like Apple buys its CPU/GPU chips from Samsung. The primary difference with the chips it buys from Samsung and those from AMD or nVidia is that Apple designs the chips that it buys from Samsung. AMD and nVidia design their own chips.

     

    SECOND: Desktop chips - whether CPU or GPU - are hitting ceilings that limit their performance.  It is called the laws of physics.  Intel ran into that problem years ago when its CPUs hit a wall with their power requirements and heat output. In fact, the Mac Pro cannot take Intel's fastest chips because it doesn't have the cooling capacity to take the heat they generate.  Similar problems occur with the GPUs.  Today's top GPUs need TWO POWER SUPPLIES - one from the computer and another separate from the computer.   The top desktop computers draw huge amounts of power.  Think 500 to 1000 Watts.   Run that 24/7 and you get a huge power bill.  Should Apple do its own GPUs, it will run INTO THE SAME PROBLEM AND LIMITATIONS.  

     

    THIRD:  In the mobile arena, Apple has been improving performance by targeting the low-hanging fruit, the easiest problems to solve.  But when you look at the performance improvement curve of Apple's iPhone/iPad Ax chips, you see that with the A8, the curve is actually SLOWING DOWN.  And this is because Apple has run into the laws of physics again.  There is only so much you can do with limited battery power and limited cooling capacity on a mobile device.

    FOURTH:  Much of computing CANNOT be done in parallel. Word processing, spreadsheets, games, email, browsing, etc. are not parallel process tasks.  Even Photoshop is limited to how many parallel processes it can handle.  Apple has further been attempting to get users to use single tasks at a time in full-screen mode.  Even on CPUs, after 2 CPU Cores, more parallelism by adding more CPU cores actually limits the top speed that any core can accomplish by increasing the heat output of the chip.  This is why Intel has to slow down the clockspeed as more cores are added to chips. Thus, including further parallelism isn't going to make performance greater on any single task.  

     

    Should Apple want to tackle the desktop with its own custom GPUs, realize that they will always be playing catch up and will always be slower than those from AMD and nVidia.

     

    The only reason for doing so is to save money in manufacturing.  But that will have the side effect of lowering the quality of the User Experience.

     

    For example: just look at the new Apple Mac Mini.  It is now LIMITED to a 2-core CPU, rather than the 4-core of the previous model. It is SLOWER. But it is less expensive to make. The same limitations are found in the new Apple iMac with the 21-inch screen.

     

    It is a sad day to see Apple going backwards in the user experience and choosing cheap components over higher quality components.

    --------------------------------------------------

  • Reply 6 of 52
    richl Posts: 2,213, member

    Where's the value in Apple using its own GPUs on x86? As pointed out, the Intel CPUs already have built-in GPUs. It would only be the high-end machines that could benefit at all. My understanding is that the motherboards that Apple use are designed and manufactured by Intel. Would Apple create the GPUs and then ship them off to Intel to be integrated? It doesn't make economic sense in the same way as Apple's custom ARM SoC.

     

    Metal isn't really an advantage on x86 either. It's not unique or even that novel - it's the way the whole industry is going. Just look at AMD's Mantle. 

  • Reply 7 of 52
    jameskatt2 wrote: »
    Should Apple want to tackle the desktop with its own custom GPUs, realize that they will always be playing catch up and will always be slower than those from AMD and nVidia.
    The only reason for doing so is to save money in manufacturing.  But that will have the side effect of lowering the quality of the User Experience.

    The problem is that the numbers (and money) in the desktop GPU arena are much lower than in the mobile market, so even if it is technically successful, it will be far from delivering Apple's average profit percentage.
  • Reply 8 of 52
    The Intel/AMD/nVidia defense force appears to be out. You guys remind me of the folks who said the switch to x86 wouldn't work because of endian issues.


    This actually makes a lot of sense, and fits with the way things have been going. Apple's been taking OS X toward a heavily GPU driven model for years. Given that the A8X used what Anandtech called a GXA6850, which was two GX6450's that Apple had custom modified to work in tandem, I see no reason why they couldn't switch the GPU first. Especially if they just modify IMGTech's work for their own purposes.
  • Reply 9 of 52
    Quote:

    Originally Posted by happywaiman View Post



    Dear writer:

    You know the PowerVR chip is Intel intergrated gpu, right?

    The only thing Apple did on A8x is CPU, and a small group of specialized circuit.

    Otherwise, we won't know it's a member of PowerVR 6XT



    What? PowerVR is an entirely different technology than Intel Integrated Graphics.

     

    Dear commenter: Everything in your post is wrong.

  • Reply 10 of 52
    All of these pieces from AI are on the money. There is a reason that Apple has spent as much time laying the foundations (Metal, Grand Central) to make all these things work - the question is when, not if.

    It's not like Ax chips are ready to take on pro-level duties. It's the low end that's ripe for the picking.

    Intel, AMD, and especially nVidia all over-promise, under-deliver, and arrive late. Apple has had to delay products while all 3 get their stuff in order. ARM/PowerVR gives Apple CONTROL - over innovation, over timing, over form factors. Apple values this. At the low end, things like being able to virtualize Windows or compile for x86 simply don't matter. User experience matters more. And each Ax generation closes the gap to make this feasible.
  • Reply 11 of 52
    radarthekat Posts: 3,842, moderator
    "Also, do Intel even sell CPUs without an onboard GPU? If not would Apple just let the iGPU sit dormant?"

    I think the notion is that they could supplement the on-board GPU with additional Apple GPU cores in a separate chip, all accessed via OpenGL when desired, and with the supplemental GPU cores being directly accessed via Metal to augment the operating system's own needs in rendering the UI.
  • Reply 12 of 52
    MacPro Posts: 19,718, member
    ctmike78 wrote: »
    All of these pieces from AI are on the money. There is a reason that Apple has spent as much time laying the foundations (Metal, Grand Central) to make all these things work - the question is when, not if.

    It's not like Ax chips are ready to take on pro-level duties. It's the low end that's ripe for the picking.

    Intel, AMD, and especially nVidia all over-promise, under-deliver, and arrive late. Apple has had to delay products while all 3 get their stuff in order. ARM/PowerVR gives Apple CONTROL - over innovation, over timing, over form factors. Apple values this. At the low end, things like being able to virtualize Windows or compile for x86 simply don't matter. User experience matters more. And each Ax generation closes the gap to make this feasible.



    I totally agree with you, another great article from DED. I don't know this as a fact, but I strongly suspect the number one cause of MBP failures over the years has been failing third party GPUs.
  • Reply 13 of 52
    MacPro Posts: 19,718, member
    On a parallel issue ... do any of the tech experts here know if there are plans in the works to improve the nMP's use of its dual GPUs? Those applications from Apple that utilize them scream, but third party applications are not using the potential. I wonder if there is any progress along the lines of non-compliant applications being able to use the dual GPUs with the help of Core technology in the OS, with a 'Grand Dispatch for GPU' type method? I believe this is already the case in the Wintel world.
  • Reply 14 of 52
    I am a little confused ....

    Desktop GPUs require/exploit dedicated Video RAM to do their thing.

    For example, this from the iMac 5K configuration options:

    Graphics

    Your iMac with Retina 5K display comes standard with the AMD Radeon R9 M290X with 2GB of dedicated GDDR5 video memory for superior graphics performance.
    For even better graphics response, configure your iMac with the AMD Radeon R9 M295X with 4GB of dedicated GDDR5 video memory.

    Graphics

    • AMD Radeon R9 M290X 2GB GDDR5
    • AMD Radeon R9 M295X 4GB GDDR5 [Add $250.00]

    It cost $250 for 2GB -- for a part not made by Apple.


    Wouldn't any Apple Desktop GPU offering need to include expensive, 3rd-party VRAM to support its GPU chip?

    If so, then estimating cost advantages of multiple Ax chips costing ~$35 become much less meaningful -- and would likely require extra engineering to allow the Ax GPUs to share the VRAM.
  • Reply 15 of 52
    I am a little confused ....

    Desktop GPUs require/exploit dedicated Video RAM to do their thing.

    For example, this from the iMac 5K configuration options:

    Graphics

    Your iMac with Retina 5K display comes standard with the AMD Radeon R9 M290X with 2GB of dedicated GDDR5 video memory for superior graphics performance.
    For even better graphics response, configure your iMac with the AMD Radeon R9 M295X with 4GB of dedicated GDDR5 video memory.

    Graphics

    • AMD Radeon R9 M290X 2GB GDDR5
    • AMD Radeon R9 M295X 4GB GDDR5 [Add $250.00]

    It cost $250 for 2GB -- for a part not made by Apple.


    Wouldn't any Apple Desktop GPU offering need to include expensive, 3rd-party VRAM to support its GPU chip?

    If so, then estimating cost advantages of multiple Ax chips costing ~$35 become much less meaningful -- and would likely require extra engineering to allow the Ax GPUs to share the VRAM.
    It costs you $250 not Apple
  • Reply 16 of 52
    rob53 Posts: 3,241, member

    Single thread vs parallel processing aside, more of the Top 500 supercomputers are using GPUs to accelerate their systems. The current #2 supercomputer, Titan from Cray installed at Oak Ridge, uses NVIDIA K20X GPUs. These are expensive desktop and supercomputer devices that have a theoretical speed of almost 4 TFLOPS (single precision). If Apple can develop a desktop-grade GPU that's usable for desktop and professional applications, it would make sense because this is where the market for high performance computing (HPC) is going. I also found GPU-based server accelerators, even from AMD with their FirePro S9150, that run at this speed. Dropping in a bunch of slower speed GPUs along with Apple's programming processes could swing the GPU market. The current FirePro GPUs in the Mac Pro are single core (true?) while the Apple A8-series is already using quad core GPUs. The AMD and NVIDIA GPUs take a ton of power to run while the A8 GPUs are doing quite well on limited power. As Daniel states, strapping several together with more power to them could turn things around.

  • Reply 17 of 52
    ksec Posts: 1,569, member
    Quote:
    Originally Posted by happywaiman View Post



    Dear writer:

    You know the PowerVR chip is Intel intergrated gpu, right?

    The only thing Apple did on A8x is CPU, and a small group of specialized circuit.

    Otherwise, we won't know it's a member of PowerVR 6XT



    No, Intel hasn't used PowerVR IP in their iGPU for 3 generations now.

     

    As for the article: this isn't something new. As I have stated before, it is much more likely Apple makes its own GPU rather than switching OS X to ARM. The reason is simply that making a GPU with PowerVR IP is (relatively) easy, while coding its drivers is freaking hard work and takes a long time. Since Apple handles its own drivers, it may be in Apple's best interest to only code/optimize for one GPU.

     

    And I am sure Intel could make a custom x86 chip without an iGPU for Apple rather than lose Apple's Mac business. On Broadwell, ~50% of the die space belongs to the GPU on a 2 Core + GT3 die. I am sure Apple could get favorable pricing for the die size saving.

  • Reply 18 of 52
    I am a little confused ....

    Desktop GPUs require/exploit dedicated Video RAM to do their thing.

    For example, this from the iMac 5K configuration options:

    Graphics

    Your iMac with Retina 5K display comes standard with the AMD Radeon R9 M290X with 2GB of dedicated GDDR5 video memory for superior graphics performance.
    For even better graphics response, configure your iMac with the AMD Radeon R9 M295X with 4GB of dedicated GDDR5 video memory.

    Graphics

    • AMD Radeon R9 M290X 2GB GDDR5
    • AMD Radeon R9 M295X 4GB GDDR5 [Add $250.00]

    It cost $250 for 2GB -- for a part not made by Apple.


    Wouldn't any Apple Desktop GPU offering need to include expensive, 3rd-party VRAM to support its GPU chip?

    If so, then estimating cost advantages of multiple Ax chips costing ~$35 become much less meaningful -- and would likely require extra engineering to allow the Ax GPUs to share the VRAM.
    It costs you $250 not Apple

    Yeah, I realize that ... except it likely costs Apple $150-$200 -- not insignificant, nor Apple's normal margins.

    Point is, the cost is there, and it would also be there (if not more) in any Ax Desktop GPU offering.
  • Reply 19 of 52
    Quote:
    Originally Posted by jameskatt2 View Post

     

    --------------------------------------------------

    Sorry.  But there are too many histrionics in this series of articles.

     

    FIRST: Apple is NOT funding AMD or nVidia.  Apple buys GPUs from them just like Apple buys its CPU/GPU chips from Samsung. The primary difference with the chips it buys from Samsung and those from AMD or nVidia is that Apple designs the chips that it buys from Samsung. AMD and nVidia design their own chips.

     

    SECOND: Desktop chips - whether CPU or GPU - are hitting ceilings that limit their performance.  It is called the laws of physics.  Intel ran into that problem years ago when its CPUs hit a wall with their power requirements and heat output. In fact, the Mac Pro cannot take Intel's fastest chips because it doesn't have the cooling capacity to take the heat they generate.  Similar problems occur with the GPUs.  Today's top GPUs need TWO POWER SUPPLIES - one from the computer and another separate from the computer.   The top desktop computers draw huge amounts of power.  Think 500 to 1000 Watts.   Run that 24/7 and you get a huge power bill.  Should Apple do its own GPUs, it will run INTO THE SAME PROBLEM AND LIMITATIONS.  

     

    THIRD:  In the mobile arena, Apple has been improving performance by targeting the low-hanging fruit, the easiest problems to solve.  But when you look at the performance improvement curve of Apple's iPhone/iPad Ax chips, you see that with the A8, the curve is actually SLOWING DOWN.  And this is because Apple has run into the laws of physics again.  There is only so much you can do with limited battery power and limited cooling capacity on a mobile device.

    FOURTH:  Much of computing CANNOT be done in parallel. <snip>

     

    Should Apple want to tackle the desktop with its own custom GPUs, realize that they will always be playing catch up and will always be slower than those from AMD and nVidia.

     

    The only reason for doing so is to save money in manufacturing.  But that will have the side effect of lowering the quality of the User Experience.

     

    For example: just look at the new Apple Mac Mini.  It is now LIMITED to a 2-core CPU, rather than the 4-core of the previous model. It is SLOWER. But it is less expensive to make. The same limitations are found in the new Apple iMac with the 21-inch screen.

     

    It is a sad day to see Apple going backwards in the user experience and choosing cheap components over higher quality components.

    --------------------------------------------------


    I think you have judged this story from a bit of a Wintel perspective and missed some of the radical engineering that Apple's SoC team have been doing, notably in last year's A7 chip. Looking at your points:

     

    First - The article is correct, just a little brief. All manufacturers buying chips from AMD or nVidia (or Intel for that matter) are funding them. A chip supplier charges for their devices $(cost + profit + amortisation of R&D). That last item is the (very high) design cost spread across all the devices sold. The more devices sold, the more R&D available so the better the design for all of the customers. The author's case is that Apple buys so many devices that it is indirectly funding better devices for the other chip customers, being Apple's competitors.

    Second - Your argument is too influenced by the traditional Intel world. What ARM has done, and Apple improved upon, is use a new design paradigm to get much more compute performance from a given power consumption. Don't forget that Apple bought PA Semi that specialised in low power consumption CPUs and, lo, several years later, a radical new ARM core (Apple's A7 and now A8) with lots of compute that doesn't burn lots of power. The iPad Airs similarly demonstrate that Imagination Technologies/Apple are playing similar tricks with GPUs - there's not hundreds of Watts from an iPad battery nor arrays of fans inside the case yet look at the performance.

    Third - A8 is indeed an incremental step that does not deliver huge multipliers of compute performance over A7. However, A7 was a radical step and A8 being a consolidation may mean nothing more than the team are focused elsewhere at the moment. One year is not enough data to say they've hit the ceiling.

    Fourth - I think Apple would agree with you. That's why A7 has only two cores yet thrashes four and more core designs from others. The Apple Cyclone core is highly parallel within the core (committing up to six instructions in parallel), delivering the equivalent of multi-core performance to software that can use only one core.

     

    Apple isn't there yet but they are climbing a different and more efficient performance/power curve and you can get different results from a different approach.

     

    [I recommend the AnandTech article on A7 for a bit more perspective on Apple's achievement]

  • Reply 20 of 52
    Note to self: Never write headlines on an empty stomach.