Apple's Phil Schiller and Anand Shimpi tease details of A13 Bionic chip

Posted in General Discussion
The new 2019 iPhones use the latest of Apple's own custom-designed processors, and Apple executives have been talking about how their approach differs from rivals' -- and why it's paying off.

The new iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max are just the latest devices in which Apple controls everything from the hardware to the operating system. By making its own processors like the A13 Bionic, Apple has seen iPhone benchmarks best just about all comers, and its chips consistently outperform rivals even when their specifications appear to lag behind. Apple says this comes down to its approach to how a processor works, and why.

Wired spoke to both Phil Schiller, Apple's senior vice president of worldwide marketing, and Anand Shimpi, previously founder of Anandtech and now on Apple's Platform Architecture Team.

"We talk about performance a lot publicly," said Anand Shimpi. "But the reality is, we view it as performance per watt. We look at it as energy efficiency, and if you build an efficient design, you also happen to build a performance design."

"For applications that don't need the additional performance," he continued, "you can run at the performance of last year's and just do it at a much lower power."

While neither executive would disclose a great deal of specifics, both Shimpi and Schiller claimed that it was this careful application of performance that works for Apple.

Schiller, in particular, emphasized how the company intelligently utilizes the processor's performance to keep it working rather than ramping it up or switching it off.

"One of the biggest examples of the benefits of the performance increase this year is the text to speech," Schiller told Wired. "We've enhanced our iOS 13 text-to-speech capabilities such that there is much more natural language processing, and that's all done with machine learning and the neural engine."

Apple revealing some of the key design updates for the new A13 Bionic processor


"Machine learning is running during all of that, whether it's managing your battery life or optimizing performance," he continued. "There wasn't machine learning running ten years ago. Now, it's always running, doing stuff."

AppleInsider has previously reported on how the A13 Bionic is Apple's fastest-ever A-Series processor, and detailed how it was utilized to consume less power.

Comments

  • Reply 1 of 16
    mjtomlin Posts: 2,673, member
    There seems to be a misunderstanding making its way around the web...

    octa-core neural engine for machine intelligence functions that can run a trillion operations per second

    That's from the Wired article, and I've read it elsewhere. This is false. The Neural Engine in the A12 performs 5 trillion operations per second. The NE in the A13 has a 20% performance increase over that: 6 trillion.

    The "trillion operations per second" figure being regurgitated over and over is the performance of the CPU with its new ML accelerators.


    Update: The Wired article was updated...
    octa-core neural engine for machine intelligence functions that can run over five trillion operations per second


    edited September 2019
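The arithmetic in the comment above can be sketched quickly (the figures are the commenter's, not from an official Apple spec sheet):

```python
# Sketch of the Neural Engine figures cited in the comment above: the A12's
# NE is rated at 5 trillion ops/sec, and the A13's is described as ~20% faster.
a12_neural_engine_ops = 5e12   # ops/sec, A12 Bionic (per the comment)
a13_uplift = 0.20              # claimed generational improvement

a13_neural_engine_ops = a12_neural_engine_ops * (1 + a13_uplift)
print(f"A13 Neural Engine: {a13_neural_engine_ops / 1e12:.0f} trillion ops/sec")
# -> A13 Neural Engine: 6 trillion ops/sec
```

This matches the corrected Wired wording of "over five trillion operations per second," and shows why a bare "trillion operations per second" understated the NE by a factor of five or six.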
  • Reply 2 of 16
    neilm Posts: 985, member

    [...] Anand Shimpi, previously founder of Anandtech and now on Apple's Platform Architecture Team.

    So who founded AnandTech now?
  • Reply 3 of 16
    neilm said:

    [...] Anand Shimpi, previously founder of Anandtech and now on Apple's Platform Architecture Team.

    So who founded AnandTech now?
    Oh William  :s
  • Reply 4 of 16
    First I've heard of Anand in years! Used to love his in-depth reviews of iPhones.
  • Reply 5 of 16
    morky Posts: 200, member
    I miss Anand. There is nothing like his A-series articles since he went to Apple. 
  • Reply 6 of 16
    neilm said:

    [...] Anand Shimpi, previously founder of Anandtech and now on Apple's Platform Architecture Team.

    So who founded AnandTech now?

    Lol. Too good man.
  • Reply 7 of 16
    wizard69 Posts: 13,377, member
    I read the entire Wired article and gained nothing that wasn't summarized here. In fact, the graphic above tells us more than the whole article. The gains in the GPU are particularly interesting.

    So one has to wonder how much of this is due to TSMC's new process technology versus Apple's chip engineering. Obviously chip engineering is a good fraction; the GPU improved too much for its engineering not to be a part of it.

    The other rather big surprise is the lack of a significant beefing up of the Neural Engine. This makes me wonder if they will rely on the ML extensions in ARM from now on. Their focus on ML techniques had me expecting a doubling of performance in the Neural Engine.
  • Reply 8 of 16
    neilm said:

    [...] Anand Shimpi, previously founder of Anandtech and now on Apple's Platform Architecture Team.

    So who founded AnandTech now?
    I want to raise that point every time I'm told "That will be such-and-such at the window."

    "How much if I pay back here?"

    Have yet to actually pull that stunt. (Usually too busy pulling other stunts.)
  • Reply 9 of 16
    ForumPost said:
    neilm said:

    [...] Anand Shimpi, previously founder of Anandtech and now on Apple's Platform Architecture Team.

    So who founded AnandTech now?
    Oh William  :s
    Oh, Canada! :neutral: 
  • Reply 10 of 16
    wizard69 said:

    The other rather big surprise is the lack of a significant beefing up of the Neural Engine. This makes me wonder if they will rely on the ML extensions in ARM from now on. Their focus on ML techniques had me expecting a doubling of performance in the Neural Engine.
    I wonder if the algorithms that they have that use the Neural Engine are about as optimized for the hardware as they are going to get and they need different specialized hardware for future ML algorithms. Machine Learning is changing rapidly. I’ve read that not all hardware accelerators are useful for all different ML scenarios.

    https://rodneybrooks.com/a-better-lesson/

    See point 6.

    Computer architects are now trying to compensate for these problems by building special purpose chips for runtime use of trained networks. But they need to lock in the hardware to a particular network structure and capitalize on human analysis of what tricks can be played without changing the results of the computation, but with greatly reduced power budgets. This has two drawbacks. First it locks down hardware specific to particular solutions, so every time we have a new ML problem we will need to design new hardware. And second, it once again is simply shifting where human intelligence needs to be applied to make ML practical, not eliminating the need for humans to be involved in the design at all.

    edited September 2019
  • Reply 11 of 16
    jdb8167 said:

    I wonder if the algorithms that they have that use the Neural Engine are about as optimized for the hardware as they are going to get and they need different specialized hardware for future ML algorithms. Machine Learning is changing rapidly. I’ve read that not all hardware accelerators are useful for all different ML scenarios.


    That's exactly what it is. They mentioned it has a new ML controller that directs certain ML tasks/code to the most appropriate processor, be it the GPU, CPU, or NE.

    So there's no need to "beef" up the NE if some of its tasks are being pushed to other processors.
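A toy sketch of the dispatch idea described above. The engine names and heuristics here are purely illustrative -- Apple hasn't published how its ML controller actually decides:

```python
# Hypothetical illustration of an "ML controller" routing each task to the
# most appropriate engine. The rules below are invented for illustration.
def dispatch(task):
    """Pick an execution engine for an ML task (illustrative heuristic)."""
    if task["matrix_heavy"]:
        return "Neural Engine"   # dense tensor math fits the NE
    if task["parallel"]:
        return "GPU"             # wide data-parallel work fits the GPU
    return "CPU"                 # small or branchy workloads stay on the CPU

tasks = [
    {"name": "conv layer",  "matrix_heavy": True,  "parallel": True},
    {"name": "image blur",  "matrix_heavy": False, "parallel": True},
    {"name": "beam search", "matrix_heavy": False, "parallel": False},
]
for t in tasks:
    print(t["name"], "->", dispatch(t))
```

The point of such a scheme is exactly what the comment argues: if some NE-class work can be offloaded to the CPU's ML accelerators or the GPU, the NE itself doesn't need a large generational jump.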
  • Reply 12 of 16
    tmay Posts: 6,312, member
    jdb8167 said:
    wizard69 said:

    The other rather big surprise is the lack of a significant beefing up of the Neural Engine. This makes me wonder if they will rely on the ML extensions in ARM from now on. Their focus on ML techniques had me expecting a doubling of performance in the Neural Engine.
    I wonder if the algorithms that they have that use the Neural Engine are about as optimized for the hardware as they are going to get and they need different specialized hardware for future ML algorithms. Machine Learning is changing rapidly. I’ve read that not all hardware accelerators are useful for all different ML scenarios.

    https://rodneybrooks.com/a-better-lesson/

    See point 6.

    Computer architects are now trying to compensate for these problems by building special purpose chips for runtime use of trained networks. But they need to lock in the hardware to a particular network structure and capitalize on human analysis of what tricks can be played without changing the results of the computation, but with greatly reduced power budgets. This has two drawbacks. First it locks down hardware specific to particular solutions, so every time we have a new ML problem we will need to design new hardware. And second, it once again is simply shifting where human intelligence needs to be applied to make ML practical, not eliminating the need for humans to be involved in the design at all.

    Maybe FPGA is the next big thing to incorporate into an SOC.
  • Reply 13 of 16
    morky said:
    I miss Anand. There is nothing like his A-series articles since he went to Apple. 
    I disagree. Andrei F's articles at AnandTech now are extraordinary; there's nothing else out there even close, as far as I'm aware. His coverage of the A11 and A12 especially has been great.

    Contrast that with this Wired article, which was seriously terrible.
    wizard69 said:
    The other rather big surprise is the lack of a significant beefing up of the Neural Engine. This makes me wonder if they will rely on the ML extensions in ARM from now on. Their focus on ML techniques had me expecting a doubling of performance in the Neural Engine.
    I don't find this surprising at all. They are clearly optimizing for power consumption, and they are hitting very strict power targets. I suspect it's no coincidence that they gained 20% on power dissipation, so they added 20% more capacity.

    For me the biggest surprise is that while single-core CPU performance went up roughly 13-15%, multi-core performance went up maybe 19-20% (pending more benchmarks, especially Andrei's). There are two ways to explain this:
    1) The efficiency cores improved more than the performance cores, or
    2) Performance scaling is closer to linear than before, due to improvements in cache (coherency?), mesh/ring bus, power gating (not so likely), or...?

    #1 seems unlikely to explain most of this gain, as the improvements there would have to be really large to move the needle that much (though that's not impossible). #2 seems more likely. Any of those options would be good, and are particularly exciting as they all (except maybe the power gating explanation?) bode especially well for the future Apple laptop/desktop chip. In essence, this is suggesting that Apple will be able to deliver even better multicore results than was previously expected (based on the A12), and those expectations were already extremely high. They were high because it looks like Apple was able to achieve closer to linear scaling with their 8-core A12X than Intel was with, for example, the i9-9900. And now, scaling seems to have improved further with the A13 vs. the A12.

    It's not conclusive, but it's a really really good sign.

    [Edited for typo]
    edited September 2019
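The scaling argument above can be roughed out with the thread's own approximate, pre-benchmark numbers:

```python
# Sketch of the comment's reasoning: if multi-core gains outpace single-core
# gains, the extra uplift must come from better multi-core scaling (cache,
# interconnect, power gating, etc.). Midpoints of the quoted ranges are used.
single_core_gain = 0.14    # midpoint of the quoted 13-15% range
multi_core_gain = 0.195    # midpoint of the quoted 19-20% range

# Portion of the multi-core gain not explained by per-core speedup:
scaling_improvement = (1 + multi_core_gain) / (1 + single_core_gain) - 1
print(f"Implied scaling improvement: {scaling_improvement:.1%}")
# -> Implied scaling improvement: 4.8%
```

A roughly 5% improvement in how multi-core throughput scales beyond the per-core gain is what makes the commenter's "really good sign" reading plausible, though the real benchmarks would need to confirm it.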
  • Reply 14 of 16
    tmay said:
    jdb8167 said:
    wizard69 said:

    The other rather big surprise is the lack of a significant beefing up of the Neural Engine. This makes me wonder if they will rely on the ML extensions in ARM from now on. Their focus on ML techniques had me expecting a doubling of performance in the Neural Engine.
    I wonder if the algorithms that they have that use the Neural Engine are about as optimized for the hardware as they are going to get and they need different specialized hardware for future ML algorithms. Machine Learning is changing rapidly. I’ve read that not all hardware accelerators are useful for all different ML scenarios.

    https://rodneybrooks.com/a-better-lesson/

    See point 6.

    Computer architects are now trying to compensate for these problems by building special purpose chips for runtime use of trained networks. But they need to lock in the hardware to a particular network structure and capitalize on human analysis of what tricks can be played without changing the results of the computation, but with greatly reduced power budgets. This has two drawbacks. First it locks down hardware specific to particular solutions, so every time we have a new ML problem we will need to design new hardware. And second, it once again is simply shifting where human intelligence needs to be applied to make ML practical, not eliminating the need for humans to be involved in the design at all.

    Maybe FPGA is the next big thing to incorporate into an SOC.
    FPGA is always a promise, never a reality, like "the internet of things."
  • Reply 15 of 16
    AppleExposed Posts: 1,805, unconfirmed member
    I always shake my head when someone says "iPhones don't need extra power. We don't need an app to open .01 seconds faster."

    Back in the iPhone 6 days people claimed there was no need for extra power.
  • Reply 16 of 16
    I always shake my head when someone says "iPhones don't need extra power. We don't need an app to open .01 seconds faster."

    Back in the iPhone 6 days people claimed there was no need for extra power.
    Who said that?

    The problem is that performance isn't free. It has costs, the most obvious being power consumption. Balancing that is complex, and this year Apple decided to improve both performance and power utilization. This is a good move for everyone: everyone likes better battery life, and reducing power utilization means that sustained performance will improve even more (as it will take longer to throttle, and will throttle less).