First benchmark indicates A14 is major upgrade from A13


Comments

  • Reply 21 of 51
    This is more of a modest evolution than a ‘major increase’. Which is fine. 

    I’m more interested in GPU performance and GPU features (and how that compares to AMD/Nvidia), as well as improvements to the Neural Engine side of things, because these factors will be way more telling of Apple’s advancements and strategy. 
    And whether there will be a T3 chip following up the T2.

    Who cares whether the CPU is just increasing in speed and efficiency, if the most useful and apparent factors are GPU and ML? 
  • Reply 22 of 51
    hexclock Posts: 1,257 member

    sflocal said:
    The industry continues to evolve, and new fabrication methods are constantly in development, along with materials science. The industry will continue, and will evolve. It has to. Silicon still has more life in it, but it will be interesting to see what the next revolution will be for CPU design. Necessity is the mother of all inventions.

    Graphene is one tech that's in the works.
    https://www.extremetech.com/extreme/175727-ibm-builds-graphene-chip-thats-10000-times-faster-using-standard-cmos-processes
    And here is another graphene one aimed at power generation:

    https://phys.org/news/2020-10-physicists-circuit-limitless-power-graphene.html


  • Reply 23 of 51
    Beats Posts: 3,073 member
    M68000 said:
    I can’t dispute these claims. I’m no expert in CPU manufacturing. But I have to wonder... there has to be a limit, a wall that must be reached at some point with current chip fabrication/materials. Every year we keep hearing that in just a year’s time the new CPU is XX percent better than the one before it. The other thing is that the phones are so good right now, does it really matter in real-life daily use?

    My iPhone 7 with its A10 processor, four generations back, seems to be chugging along just fine. To be honest, I don't see what "faster" would get me.

    But then software, especially on the iPad, will keep getting more and more demanding.   So, when buying new, you might want to allow for some reserve horsepower for down the road.  It's a bit like 5G -- you're investing mostly in capacity for the future.

    Apps from 2022 will be too fast for your iPhone 7.
  • Reply 24 of 51

    My iPhone 7 with its 4 generations back A10 processor seems to be chugging along just fine.   To be honest, I don't see what "faster" would get me.

    But then software, especially on the iPad, will keep getting more and more demanding.   So, when buying new, you might want to allow for some reserve horsepower for down the road.  It's a bit like 5G -- you're investing mostly in capacity for the future.
    It really depends on what you use your device for. If the device does what you want, then great. However, apps on both the iPhone and iPad are getting more demanding. I didn’t use to need so much power. 

    I have a second-gen SE, and it runs Wizards Unite very nicely. When I put the iOS 14 beta on it, Wizards Unite would no longer run at all. I set up our spare iPhone 6S to run WU. The 6S ran it; however, it was vastly slower and the game crashed a lot. 

    Some spreadsheets can take a lot of processing power. I have a spreadsheet that processes COVID-19 data from USAFacts to give me charts. It takes a good 15 minutes to process the data on my SE, my 2014 Mac mini, or my 2014 MacBook Air. The spreadsheet is over 20 MB. 
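    For what it's worth, here is a minimal sketch (in Python with pandas) of the kind of per-day aggregation such a spreadsheet performs. The file name and the USAFacts column layout (county metadata columns followed by one cumulative-count column per date) are assumptions for illustration, not details from the thread.

    ```python
    # Hypothetical sketch: turn a USAFacts-style cumulative CSV into a
    # chart-ready daily series. Column names are assumptions.
    import pandas as pd

    df = pd.read_csv("covid_confirmed_usafacts.csv")  # hypothetical local copy

    # Assume the leading columns are county metadata and the remaining
    # columns are dates, each holding a cumulative case count.
    meta_cols = ["countyFIPS", "County Name", "State", "StateFIPS"]
    totals = df.drop(columns=meta_cols).sum(axis=0)  # national cumulative totals
    totals.index = pd.to_datetime(totals.index)

    new_cases = totals.diff().dropna()      # cumulative -> daily new cases
    smoothed = new_cases.rolling(7).mean()  # 7-day average for charting
    print(smoothed.tail())
    ```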
  • Reply 25 of 51
    mjtomlin Posts: 2,673 member
    This is more of a modest evolution than a ‘major increase’. Which is fine. 

    I’m more interested in GPU performance and GPU features (and how that compares to AMD/Nvidia), as well as improvements to the Neural Engine side of things, because these factors will be way more telling of Apple’s advancements and strategy. 
    And whether there will be a T3 chip following up the T2.

    Who cares whether the CPU is just increasing in speed and efficiency, if the most useful and apparent factors are GPU and ML? 

    It is actually a bigger evolution than most think. I read (somewhere?) that Apple didn't increase performance in the traditional sense, but rather efficiency, in that it can sustain peak performance for a much longer period now. I'm going to guess this is a design brought about by Apple moving their Macs to Apple Silicon, where sustained performance is necessary for the types of workloads on traditional computing platforms.

    We can assume that Apple will be using the same CPU cores in their Apple Silicon SoCs; the major differences in those SoCs will be timing, controllers, data throughput, and of course core count. We can also assume the least performant Mac SoC will be similar in performance to the A14X. Other Mac SoCs will scale up from there.

    If these benchmarks are true, then given past performance increases of the AxX variants over the Ax SoCs, the A14X should pass 7,000 in multi-core scores, which would rival the performance of a 16" MacBook Pro (with an 8-core i9). And as I mentioned, that would probably be the least performant Mac.
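    A rough sketch of that projection in Python. The 1.7x multi-core scaling factor is an assumption extrapolated from past AxX-over-Ax gains (e.g. the A12X over the A12), and the i9 MacBook Pro figure is approximate, so treat this as illustrative rather than a prediction.

    ```python
    # Back-of-the-envelope A14X projection from the leaked A14 score.
    a14_multi = 4198       # leaked Geekbench 5 multi-core score for the A14
    axx_scaling = 1.7      # assumed AxX-over-Ax ratio (more big cores, wider memory)

    a14x_projected = a14_multi * axx_scaling
    print(f"Projected A14X multi-core: {a14x_projected:.0f}")  # ~7137, past 7,000

    mbp16_i9_multi = 6900  # approximate GB5 multi-core for a 16" MBP 8-core i9
    print(a14x_projected > mbp16_i9_multi)  # True: would rival the 16" MacBook Pro
    ```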
  • Reply 26 of 51
    flydog Posts: 1,124 member
    M68000 said:
    I can’t dispute these claims. I’m no expert in CPU manufacturing. But I have to wonder... there has to be a limit, a wall that must be reached at some point with current chip fabrication/materials. Every year we keep hearing that in just a year’s time the new CPU is XX percent better than the one before it. The other thing is that the phones are so good right now, does it really matter in real-life daily use?
    Why "must" there be a limit?   Is mankind supposed to stop making things better because doesn't "really matter in real life daily use" ?   
  • Reply 27 of 51
    GeorgeBMac Posts: 11,421 member
    flydog said:
    M68000 said:
    I can’t dispute these claims. I’m no expert in CPU manufacturing. But I have to wonder... there has to be a limit, a wall that must be reached at some point with current chip fabrication/materials. Every year we keep hearing that in just a year’s time the new CPU is XX percent better than the one before it. The other thing is that the phones are so good right now, does it really matter in real-life daily use?
    Why "must" there be a limit?   Is mankind supposed to stop making things better because doesn't "really matter in real life daily use" ?   

    The limits keep changing.
    There have been three major factors in those limits: processing power, communications, and software.

    In the early days of computing we had one big computer sitting in the basement of each major business unit, and it took care of everything that unit needed.
    Then communications came along and enabled that one computer to take care of multiple business units.
    And then communications expanded again, and it needed to take care of not just business units but the individual staff people in those units.
    Then the personal computer came along, and the process of processing power and communications pushing each other along (along with software) started all over again.

    Now, because of communications advances (as well as storage), we are coming full circle, back to a central computer taking care of many users.

    Essentially, each component both enables and pushes the other components to advance.
  • Reply 28 of 51
    mcdave Posts: 1,927 member
    ppietra said:
    So: an 18% improvement in single-core and a 27% improvement in multi-core! Seems a bit better than people were led to believe.
    What isn’t mentioned is the 70% improvement in the Geekbench Metal compute score. That is way over what was expected.
    Yes, the CPU matches the i7 MBP 13.
    The A14X should match the i9 MBP 16 + 5300M graphics.  Fanless!
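    The percentages in this exchange roughly check out against the leaked scores. A quick sketch in Python; the A13 baselines (and the Metal figures) are commonly reported iPhone 11-class Geekbench 5 results, assumed here rather than taken from the article.

    ```python
    # Rough check of the cited ~18% / ~27% / ~70% improvements.
    a13 = {"single": 1330, "multi": 3300, "metal": 7300}   # assumed A13 baselines
    a14 = {"single": 1583, "multi": 4198, "metal": 12500}  # leaked/assumed A14 scores

    for test in a13:
        gain = (a14[test] / a13[test] - 1) * 100
        print(f"{test}: +{gain:.0f}%")
    # single: +19%, multi: +27%, metal: +71% -- close to the figures cited above
    ```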
  • Reply 29 of 51
    techconc Posts: 275 member
    mykem said:
    The single core alone is almost twice that of the Snapdragon 865 (1,583 vs 870).

    The multi-core is a lot closer, although you have to consider that the SD865 runs on 8 cores vs 6 on the Apple A14 (Apple has been using 6 cores on the regular A-series chip since the A10; the “X” version for the iPad Pro uses 8 cores). For the multi-core, the SD865’s GB5 score is around 3,280 vs 4,198 for the A14.
    Outpacing Android-based SoCs is nothing new for Apple. I'd be surprised if that weren't the case. What's interesting to me is that this is the first phone chip that has stronger single-core performance than anything from Intel. Looking at the Geekbench 5 processor benchmark chart, the top-rated chip for single-core performance is currently the Intel Core i7-1165G7, which scores 1474. For the first time, a phone CPU is more than 7% faster than the fastest Intel chip at single-core performance. We all knew this day would come, but it's still a remarkable milestone when it happens.
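    The arithmetic behind both comparisons, from the scores quoted in this reply (a quick sketch):

    ```python
    # Single-core ratios from the scores quoted above.
    a14 = 1583        # leaked A14 single-core
    sd865 = 870       # Snapdragon 865 single-core
    i7_1165g7 = 1474  # top Intel single-core entry on the Geekbench 5 chart

    print(a14 / sd865)                  # ~1.82: "almost twice" the SD865
    print((a14 / i7_1165g7 - 1) * 100)  # ~7.4%: "more than 7% faster" than Intel
    ```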
  • Reply 30 of 51
    techconc Posts: 275 member
    This is more of a modest evolution than a ‘major increase’. Which is fine. 

    I’m more interested in GPU performance and GPU features (and how that compares to AMD/Nvidia), as well as improvements to the Neural Engine side of things, because these factors will be way more telling of Apple’s advancements and strategy. 
    And whether there will be a T3 chip following up the T2.

    Who cares whether the CPU is just increasing in speed and efficiency, if the most useful and apparent factors are GPU and ML? 
    As someone else noted already, the GPU compute scores are up about 70% in the A14. The transistor count is up 39%, which is pretty massive, and it's clear this didn't go into the CPU so much. Apple's focus seems to be exactly as you mention... AI and machine learning. This doesn't show up on Geekbench 5, as they don't have benchmarks to capture it. There are industry benchmarks, such as AIMark, which do. To that end, Apple doubled the performance of their Neural Engine, and they now have second-generation matrix multiplication units used for ML. The point being, much of the focus of the A14 has been on improving AI/ML performance rather than the traditional CPU, where they already have a substantial lead.

    As for a T3 chip? No need. The T2 chip was effectively a way to help bring Intel chips up to speed with some of the things Apple needs to do (Secure Enclave, encryption, ISP, etc.). The T2 is effectively based on A10 technology and performance. Apple Silicon Macs won't need a Tx chip because that's already built into their SoC. I'm not expecting any significant new Intel-based product introductions in the future. I'm guessing the last iMac update was the last hurrah for Intel on the Mac.
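    The 39% figure above is easy to verify from Apple's published transistor counts (8.5 billion for the A13, 11.8 billion for the A14):

    ```python
    # Transistor-count growth, A13 -> A14, using Apple's published figures.
    a13_transistors = 8.5e9
    a14_transistors = 11.8e9
    print((a14_transistors / a13_transistors - 1) * 100)  # ~38.8%, about 39%
    ```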
  • Reply 31 of 51
    wizard69 Posts: 13,377 member
    People said the same thing 5 years ago: how much better do we really need? Yet it is likely yours is under that threshold, or perhaps about to reach it, as your computing expectations rise slowly but surely, year over year. There's still much headroom for performance and efficiency, as well as cost, which all balance better every year.
    Performance isn't anywhere near good enough for most users and it certainly isn't for Apple.
  • Reply 32 of 51
    wizard69 Posts: 13,377 member
    M68000 said:
    I can’t dispute these claims. I’m no expert in CPU manufacturing. But I have to wonder... there has to be a limit, a wall that must be reached at some point with current chip fabrication/materials. Every year we keep hearing that in just a year’s time the new CPU is XX percent better than the one before it. The other thing is that the phones are so good right now, does it really matter in real-life daily use?

    My iPhone 7 with its A10 processor, four generations back, seems to be chugging along just fine. To be honest, I don't see what "faster" would get me.

    But then software, especially on the iPad, will keep getting more and more demanding.   So, when buying new, you might want to allow for some reserve horsepower for down the road.  It's a bit like 5G -- you're investing mostly in capacity for the future.
    5G will never live up to user expectations. On the other hand, Apple needs faster hardware just to realize what they have planned for the various versions of iOS. As for users: everybody's needs vary, but most of us would prefer better performance. That includes all performance metrics.
  • Reply 33 of 51
    wizard69 Posts: 13,377 member
    techconc said:
    This is more of a modest evolution than a ‘major increase’. Which is fine. 

    I’m more interested in GPU performance and GPU features (and how that compares to AMD/Nvidia), as well as improvements to the Neural Engine side of things, because these factors will be way more telling of Apple’s advancements and strategy. 
    And whether there will be a T3 chip following up the T2.

    Who cares whether the CPU is just increasing in speed and efficiency, if the most useful and apparent factors are GPU and ML? 
    As someone else noted already, the GPU compute scores are up about 70% in the A14. The transistor count is up 39%, which is pretty massive, and it's clear this didn't go into the CPU so much. Apple's focus seems to be exactly as you mention... AI and machine learning. This doesn't show up on Geekbench 5, as they don't have benchmarks to capture it. There are industry benchmarks, such as AIMark, which do. To that end, Apple doubled the performance of their Neural Engine, and they now have second-generation matrix multiplication units used for ML. The point being, much of the focus of the A14 has been on improving AI/ML performance rather than the traditional CPU, where they already have a substantial lead.

    As for a T3 chip? No need. The T2 chip was effectively a way to help bring Intel chips up to speed with some of the things Apple needs to do (Secure Enclave, encryption, ISP, etc.). The T2 is effectively based on A10 technology and performance. Apple Silicon Macs won't need a Tx chip because that's already built into their SoC. I'm not expecting any significant new Intel-based product introductions in the future. I'm guessing the last iMac update was the last hurrah for Intel on the Mac.
    If people don't believe this, all they need to do is look at the photo of the A14, where a quarter of the chip is now devoted to the Neural Engine. I fully believe that we will see more and more AI-type technologies within iOS, and Siri will certainly be doing much more locally. Combine this with the massively improved GPUs and it is pretty obvious that these are massive compute chips. The problem is that existing methods of testing do not apply.
  • Reply 34 of 51
    GeorgeBMac Posts: 11,421 member
    wizard69 said:
    M68000 said:
    I can’t dispute these claims. I’m no expert in CPU manufacturing. But I have to wonder... there has to be a limit, a wall that must be reached at some point with current chip fabrication/materials. Every year we keep hearing that in just a year’s time the new CPU is XX percent better than the one before it. The other thing is that the phones are so good right now, does it really matter in real-life daily use?

    My iPhone 7 with its A10 processor, four generations back, seems to be chugging along just fine. To be honest, I don't see what "faster" would get me.

    But then software, especially on the iPad, will keep getting more and more demanding.   So, when buying new, you might want to allow for some reserve horsepower for down the road.  It's a bit like 5G -- you're investing mostly in capacity for the future.
    5G will never live up to user expectations. On the other hand, Apple needs faster hardware just to realize what they have planned for the various versions of iOS. As for users: everybody's needs vary, but most of us would prefer better performance. That includes all performance metrics.

    That likely depends on your expectations.
    If you expect every square inch of the U.S. to be covered with mm-wave technology, you'll likely be disappointed.
  • Reply 35 of 51
    melgross Posts: 33,510 member
    I remember, from a long time ago now, the CEO of a then-prominent computer company (I don’t remember the name right now) being asked, in an interview, why his company wasn’t going to 16 bits. His response was “How fast does a word processor have to go?” His company went out of business a year, or so, later.

    Every so often, someone makes a similar remark. One would think that by now the answer would be obvious. But it’s not, to some people. They just lack the imagination, and an understanding of where the computer and telecommunications business has gone over time.

    We will ALWAYS need more memory and more storage, even with the online storage Apple and other companies offer, which is helpful. We will need faster computers overall.

    1,000 years from now, when our personal assistant will be hovering right near us talking directly to our brain, she will be eagerly awaiting her next upgrade.
  • Reply 36 of 51
    hypoluxa Posts: 694 member
    M68000 said:
    I can’t dispute these claims.  I’m no expert in cpu manufacturing.  But have to wonder...  there has to be a limit, a wall that must be reached at some point with current chip fabrication\materials. Every year we keep hearing that in just a years time the new cpu is XX percentage better than the one before it. The other thing is the phones are so good right now,  does it really matter in real life daily use?

    Well, if graphics hardware manufacturers would stop increasing the pixel density of monitors, TVs, cameras, etc., we wouldn't need to keep increasing CPU speeds to crunch all those pixels/data made by them and graphics apps. lol. That's probably an oversimplification of it all, however...

  • Reply 37 of 51
    mcdave Posts: 1,927 member
    hexclock said:

    sflocal said:
    The industry continues to evolve, and new fabrication methods are constantly in development, along with materials science. The industry will continue, and will evolve. It has to. Silicon still has more life in it, but it will be interesting to see what the next revolution will be for CPU design. Necessity is the mother of all inventions.

    Graphene is one tech that's in the works.
    https://www.extremetech.com/extreme/175727-ibm-builds-graphene-chip-thats-10000-times-faster-using-standard-cmos-processes
    And here is another graphene one aimed at power generation:

    https://phys.org/news/2020-10-physicists-circuit-limitless-power-graphene.html

    Is graphene the new Apple Cider Vinegar?
  • Reply 38 of 51
    mcdave Posts: 1,927 member

    melgross said:

    1,000 years from now, when our personal assistant will be hovering right near us talking directly to our brain, she will be eagerly awaiting her next upgrade.
    Surely you mean 50 years from now, when our personal assistant is us.

    She’ll be looking for a more interesting peripheral.
  • Reply 39 of 51
    M68000 said:
    I can’t dispute these claims. I’m no expert in CPU manufacturing. But I have to wonder... there has to be a limit, a wall that must be reached at some point with current chip fabrication/materials. Every year we keep hearing that in just a year’s time the new CPU is XX percent better than the one before it. The other thing is that the phones are so good right now, does it really matter in real-life daily use?
    Apple's silicon team has been working on the A-series SoCs for a decade now, and has been refining and optimizing the silicon architecture year over year.

    Does it matter in daily use? Yes. We see it when we go into portrait mode and get a preview of the blur effect before we shoot. On Android I believe you only get the blur in post, and then it's only foreground sharp/background blurry as opposed to planes of blur depending on distance. We can edit video clips in the Camera Roll and the video is rerendered on the spot, just like a photo. We can shoot 4K video out of any of the cameras with high dynamic range.

    Apple's future Macs will be based on the Apple silicon SoC, which will probably feature a higher proportion of Firestorm high performance cores operating much more efficiently than current Intel cores.

    They apparently haven't run into Moore's wall yet, and next year's A15 is rumored to be produced on TSMC's new 3nm node.

    Apple's chip team is one of the best silicon teams in the world - irrespective of x86 or ARM.
  • Reply 40 of 51
    melgross Posts: 33,510 member
    mcdave said:

    melgross said:

    1,000 years from now, when our personal assistant will be hovering right near us talking directly to our brain, she will be eagerly awaiting her next upgrade.
    Surely you mean 50 years from now, when our personal assistant is us.

    She’ll be looking for a more interesting peripheral.
    We won’t see that kind of advance in 50 years. AI has turned out to be much more difficult than thought. Back in the early ’50s, it was believed by scientists working on it that human-like intelligence would be accomplished in a few years. Here we are, almost 70 years later, and we’re not much further along.

    Besides, hovering without noisy and fuel-inefficient power sources is something we may never get. Hence, 1,000 years.