Generation gaps: How much faster Apple Silicon gets with each release

Posted in Current Mac Hardware, edited November 6

Apple Silicon's speed has steadily improved since its debut in 2020. Here's how much faster Apple has made its chips in just four years.

M4 is Apple's latest chip family - Image credit: Apple



Chip generations tend to improve with age. As designs get better and production processes squeeze more transistors into a smaller space, the performance of chips gets better over time.

This is also true of Apple Silicon, Apple's self-designed chips used in its Mac lineup, as well as the iPad Pro and iPad Air. Replacing Intel's chips has repeatedly proven to be a great move for Apple, with the improvements impressing customers and drawing in more buyers.

Now, as the fourth generation of Apple Silicon ships in the form of the M4 series, we have three generational jumps to analyze. We can see more accurately how Apple's chip lineup has improved since the M1 first launched in November 2020.

Apple Silicon Chip comparisons



When performing this comparison, we are using the Geekbench results listings as our base. Using the latest results eliminates any version changes in the benchmark, so the results should all be on a fair footing.

There are some other issues to consider when using this approach, such as chips being offered with multiple core-count options. There are also differences between a MacBook Pro and a Mac Studio, for example, which can affect thermal management and therefore the results.

There are other factors too, including the production process providing improvements with die shrinks, as well as memory bandwidth upgrades.

To avoid splitting hairs, we are only using the top result for each chip in each category, giving it the highest potential score.

When it comes to the M4, Geekbench's Mac results list doesn't have any figures. However, you can search for the models of Mac in its database and fish out results.

For M4 models, we averaged the figures that seemed plausible, to weed out any false or wildly errant results.
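
For clarity, every comparison that follows boils down to a simple percentage change against the M1 baseline. Here's a minimal Swift sketch of that calculation, using hypothetical placeholder scores rather than the exact figures behind our charts:

```swift
import Foundation

// Hypothetical single-core scores for illustration only -- not the
// actual Geekbench figures used for the charts in this article.
let singleCore: [String: Double] = [
    "M1": 2_350,
    "M2": 2_650,
    "M3": 2_950,
    "M4": 3_880,
]

// Percentage improvement over a baseline: (new - old) / old * 100.
func percentGain(score: Double, over baseline: Double) -> Double {
    (score - baseline) / baseline * 100
}

let m1 = singleCore["M1"]!
for (chip, score) in singleCore.sorted(by: { $0.key < $1.key }) where chip != "M1" {
    print(chip + String(format: ": +%.0f%% vs M1", percentGain(score: score, over: m1)))
}
```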

Single-core changes in Apple Silicon



Of the three benchmark categories, single-core testing offers the least variance between models. This consistency is pretty much down to how Apple produces versions of its chips.

A chip maker like Intel could differentiate between chip variants in a generation by modifying multiple factors, including core clock speeds and the number of cores. Apple tends to keep the clock speeds pretty similar across a generation, but it still has the option to change the core counts.

Each Apple Silicon chip could have different core splits between performance cores and efficiency cores. But since we know the speed of each core type will be pretty comparable, there's not going to be much difference between an M1 and an M1 Max here.

Also, when performing a single-core test in Geekbench, the performance cores are the ones that tend to be used.

Geekbench single-core benchmarks



When compared by percentage change from the M1 version, we see pretty similar results for each of the base, Pro, and Max chips.

The M2 generation is between 11% and 16% better than the M1 for single-core results. The M3 is between 20% and 29% better, while the M4 is between 63% and 68% better.

For the two Ultra chips, the M2 Ultra is 16% better than the M1 Ultra in single-core testing.

What this comparison shows is that Apple's upgrades are quite consistent across a generation when it comes to single-core comparisons. It also demonstrates that there is a fairly sizable performance boost evident in each generation.

Multi-core changes in Apple Silicon



While single-core was fairly straightforward to understand, things get a bit tougher when it comes to the multi-core scores.

The problem here is the "multi" prefix, in that it means all of the cores on a chip are put under load. With more cores, a chip can score higher.

However, Apple hasn't been entirely consistent in the way it distributes its performance and efficiency cores.

On the lowest 8-core CPUs, it usually splits them evenly, with four of each. A more powerful Max chip, such as the 16-core M3 Max, has a small collection of four efficiency cores, then fills out the other 12 spots with performance cores.

The inconsistency comes in with the M3 Pro. A 12-core M3 Pro is configured with six efficiency cores and six performance cores.

This is odd, since the 8-core M3 and the 16-core M3 Max both have four efficiency cores, with the remainder being performance cores. The even split means the M3 Pro doesn't have as much CPU number-crunching performance in multi-core testing as it otherwise would, as the sketch below illustrates.
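
To see why the core split matters, consider a toy throughput model. The assumption that one performance core is worth roughly four efficiency cores is purely illustrative, not a measured Apple figure:

```swift
// A toy model of multi-core throughput, assuming a 4:1 weighting
// between performance and efficiency cores (an illustrative guess).
struct CoreSplit {
    let name: String
    let performance: Int
    let efficiency: Int

    // Relative multi-core throughput under the assumed P:E weighting.
    func relativeThroughput(pWeight: Double = 4.0) -> Double {
        Double(performance) * pWeight + Double(efficiency)
    }
}

let chips = [
    CoreSplit(name: "M3 (8-core)", performance: 4, efficiency: 4),
    CoreSplit(name: "M3 Pro (12-core)", performance: 6, efficiency: 6),
    CoreSplit(name: "M3 Max (16-core)", performance: 12, efficiency: 4),
]

for chip in chips {
    print("\(chip.name): relative throughput \(chip.relativeThroughput())")
}
// A hypothetical 8P+4E M3 Pro would score 36 on this scale versus 30
// for the shipping 6P+6E part -- the "missing" multi-core headroom.
```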

Geekbench multi-core benchmarks



Examining the figures, the addition of many more cores makes the percentage differences more pronounced. The M2 generation is generally 16% to 17% better than the M1 in multi-core testing.

For the M3 generation, the M3 is 39% better than the M1 and the M3 Max is 66% better, but the M3 Pro is only 24% better. Again, the M3 Pro is an anomaly for Apple Silicon.

M4 is, again, considerably better, reaching 70% better than M1 for the base level, 84% better for the Pros, and 111% better for the Max chips.

For the Ultra chips, the M2 Ultra is again 17% better than the M1 Ultra.

GPU changes in Apple Silicon



When it comes to graphical performance, we turn to the Metal test results in Geekbench. Much like multi-core CPU performance, GPU performance relies on clock speeds and core counts.

However, the core counts of GPUs can vary considerably between variants.

For example, the humble base M1 has at most an 8-core GPU, the M1 Pro has a 16-core GPU, the Max has a 32-core version, and the Ultra goes up to 64 cores.

Likewise, the M4 starts with 10 GPU cores, rising to at most 20 cores for the Pro, and a maximum 40-core GPU on the M4 Max.

Clock speeds and other graphical improvements can also impact results.
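
As a rough rule of thumb, Metal scores within a generation tend to track GPU core count, which is why a Max usually lands near double a Pro. Here's a back-of-envelope sketch of that idea, with an assumed, hypothetical per-core score; real results will drift with clock speeds and memory bandwidth:

```swift
// A back-of-envelope estimate: score scales linearly with GPU cores.
// The per-core contribution is an assumed, hypothetical constant.
let assumedPerCoreMetal = 3_600.0

func estimatedMetalScore(gpuCores: Int) -> Double {
    Double(gpuCores) * assumedPerCoreMetal
}

for (name, cores) in [("M4, 10-core GPU", 10), ("M4 Pro, 20-core GPU", 20), ("M4 Max, 40-core GPU", 40)] {
    print("\(name): ~\(Int(estimatedMetalScore(gpuCores: cores)))")
}
```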

Geekbench Metal benchmarks



Comparing the results against the M1 counterparts, the Geekbench figures say the M2 is 41% better than the M1, the M3 is 45% better, and the M4 is 75% more powerful than the original.

For Pro models, there's a 22% improvement from the M1 Pro to the M2 Pro, but the improvement seemingly dips to 17% for the M3 Pro over the M1 Pro. Oddly, the M4 Pro's GPU is only 3% better than the M1 Pro's, at least according to the results.

This seems to be an oddity in Geekbench's results listings, as the figures should be a lot higher. Since the listings are updated regularly, it's possible these figures could correct themselves within days.

At the Max end, things pretty much return to normal. The M2 Max's GPU is 26% better than the M1 Max, the M3 Max is 35% faster than the M1 Max, and the M4 Max is 69% better.

On the Ultra chips, the M2 Ultra's GPU is 38% better than the M1 Ultra's version.

Consistent improvements



Each time Apple introduces a new generation of Apple Silicon, it's promoted as the best version yet. Faster cores, more cores, and better graphics each year.

It's clear from the Geekbench listings that Apple is living up to that promise.

At least, that is, if you ignore the oddities of the M3 Pro's CPU core split and the Pro model GPU results. The former is explainable as Apple's own decision, while the latter is more likely a results-listing problem than an Apple issue.

What is certainly clear is that Apple is making a considerable improvement in each generation of its chips, regardless of the variant.

We can expect that, when the M5 eventually arrives, it will be about 20% better than the current M4 chips. That is, if you extrapolate from what each generation has delivered over the previous one in these figures.
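
As a sanity check on that 20% estimate, here's a quick sketch that averages the generational gains geometrically. The multipliers are rough mid-range single-core figures taken from the ranges above, so treat the output as ballpark only:

```swift
import Foundation

// Rough mid-range single-core multipliers over M1 from this article:
// M2 +13%, M3 +25%, M4 +65%. These are approximations, not chart data.
let vsM1: [Double] = [1.00, 1.13, 1.25, 1.65]  // M1, M2, M3, M4

// Geometric mean of the three generation-to-generation ratios.
let ratios = zip(vsM1.dropFirst(), vsM1).map { $0.0 / $0.1 }
let meanGain = pow(ratios.reduce(1, *), 1.0 / Double(ratios.count))

print(String(format: "Average gain per generation: %.0f%%", (meanGain - 1) * 100))
print(String(format: "Projected M5 vs M1: %.2fx", vsM1.last! * meanGain))
// Prints roughly 18% per generation, i.e. an M5 just under 2x the M1.
```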

Apple could go wild and offer something completely different in the next generation. More cores, different performance-efficiency core splits, and new GPU ideas can all make a difference to performance.

It could do that, or it could stick to its more gradual improvements. Either way, whatever comes next should be Apple's best yet. As usual.



Read on AppleInsider


Comments

  • Reply 1 of 38
    netrox Posts: 1,505 member
    Exactly why do we need to keep adding more CPU cores when most creative oriented applications would benefit from having more GPU cores? 
    edited November 6
  • Reply 2 of 38
    chasm Posts: 3,601 member
    netrox said:
    Exactly why do we need to keep adding more CPU cores when most creative oriented applications would benefit from having more GPU cores? 
    Not that I’m the last word on this topic, but to put this VERY simply CPUs do math and GPUs take that math and manipulate pixels. Graphics are created through math, so more CPUs enable GPUs to do their job better.

    More GPUs are needed when you have really really large screens/more screens. More CPUs are needed when you need more graphics.
  • Reply 3 of 38
    Xed Posts: 2,886 member
    netrox said:
    Exactly why do we need to keep adding more CPU cores when most creative oriented applications would benefit from having more GPU cores? 
    Power efficiency is one way that additional cores of varying types are used.
  • Reply 4 of 38
    emoeller
    Would be really interesting to know the degree of binning that Apple uses. Generally, the more complex the chip, the higher the failure rate, but that leaves a chip that could probably operate using fewer CPU/GPU parts, resulting in a lower-grade chip. Since the chips are tested and sealed with the chip name (M4, M4 Pro, M4 Max, M4 Ultra) on top, we can't really tell what the configuration is, other than that it will be the maximum advertised chipset. Hence a binned M4 Max could be sold/used as either an M4 or an M4 Pro.

    The costs to develop and produce the chips increase with more CPU/GPU components, BUT there are engineering costs at each level.

    What if Apple were to make only two chips instead of four, with binning, or maybe even simple substitution and crippling of higher-level chips - what would be the cost/benefit of doing that? I'm sure Apple/TSMC are doing these calculations on each new chip generation.
  • Reply 5 of 38
    aderutter
    If you look at Adobe apps for example, they benefit more from CPU as long as the GPU is at a certain level. Once a user has a mid-range GPU then they don't need GPU as much as RAM & CPU (and the real-life RAM requirements have decreased with Apple Silicon).

    Another example is ZBrush, which is purely CPU based. Even most of the time working in other 3D applications the CPU is more important, as people working in 3D spend more time not-rendering than rendering, and the machine can render while you put the kettle on.

    It's gamers and some 3D renderers that use more GPU - but CPU 3D rendering is more accurate, and so CPU rendering (obviously with farms) is the default in Hollywood, whilst us mortals have to just use what's available on our budgets - typically a desktop GPU rather than a cloud render. The usual options when thinking only of rendering for games or lower-end 3D rendering are GPU (cheap and fast on PC), or CPU (slower, more accurate, and slightly better on Mac generally).

    When/if Apple releases an M4 Ultra that is twice the GPU performance of the M4 Max, it should be equivalent to an Nvidia 4090 and set the cat amongst the pigeons. 2025 could be the start of Apple desktop disruption.

  • Reply 6 of 38
    MacPro Posts: 19,851 member
    My Studio M2 Ultra will do, for now; the M5 or M6 Ultra should be something to behold though.
  • Reply 7 of 38
    saarek Posts: 1,582 member
    Around 20% better per generation suits me just fine. It’s way better than what Intel was achieving each year!
  • Reply 8 of 38
    Thanks for these synopses, they help me understand the performance differences among these chips, and which tasks benefit from them.
  • Reply 9 of 38
    programmer Posts: 3,482 member
    chasm said:
    netrox said:
    Exactly why do we need to keep adding more CPU cores when most creative oriented applications would benefit from having more GPU cores? 
    Not that I’m the last word on this topic, but to put this VERY simply CPUs do math and GPUs take that math and manipulate pixels. Graphics are created through math, so more CPUs enable GPUs to do their job better.

    More GPUs are needed when you have really really large screens/more screens. More CPUs are needed when you need more graphics.
    Sorry, but that is wrong.  GPUs excel at doing math at high memory bandwidths... but they basically need to be able to do the math in parallel, and the application has to be written specifically to use the GPU.  CPUs are the default place for code to run, and are generally better at doing complex logic with lots of decisions, randomly chasing through memory for data, and doing less "orderly" computations.  To leverage multiple CPUs, the application has to be written to do that and it isn't the default.  Code typically starts its existence on a single CPU, then the programmer improves it to take advantage of multiple CPUs, then they might improve it further to either use the CPU SIMD or matrix hardware, or re-write critical pieces to run on the GPU.  These days it is also quite common for application programmers to use libraries (often Apple's) which do things like leverage multiple cores, SIMD, matrix hardware, and GPUs.  Creative oriented applications are often graphics or audio heavy, and those things can usually take advantage of all this potential hardware parallelism as long as they are optimized to do so (and the good ones are).

    The question of CPUs vs GPUs on the SoC is a complex one.  Many applications don't use the GPU at all, except for the UI (which hardly needs any GPU at all) but are optimized for multiple CPUs... adding more GPU for those applications gets you nothing.  Even GPU-heavy applications can also benefit from more CPUs, in some cases.  Ultimately though, the GPUs tend to be memory bandwidth limited, so scaling up the GPU beyond what the memory bandwidth can support gets us very little.
  • Reply 10 of 38
    programmer Posts: 3,482 member

    emoeller said:
    Would be really interesting to know the degree of binning that Apple uses. Generally, the more complex the chip, the higher the failure rate, but that leaves a chip that could probably operate using fewer CPU/GPU parts, resulting in a lower-grade chip. Since the chips are tested and sealed with the chip name (M4, M4 Pro, M4 Max, M4 Ultra) on top, we can't really tell what the configuration is, other than that it will be the maximum advertised chipset. Hence a binned M4 Max could be sold/used as either an M4 or an M4 Pro.
    I suspect the extent of the binning used is obvious from their available products for sale... i.e. you can get the cheaper version with fewer cores, or the more expensive one with "all" of them.  They are unlikely to use extremely low function Maxes to sell as the baseline or Pro.
  • Reply 11 of 38
    22july2013 Posts: 3,731 member
    Thanks for the charts. They make me feel that my M1 Mac is still worth using.
  • Reply 12 of 38
    jonyo Posts: 119 member
    I'm no expert on what relies on CPU vs GPU, but my creative work is all about (as far as computing power goes) simultaneous real-time audio DSP and mannnny simultaneous audio streams running at once (basically recording-studio-in-a-box type stuff) over Thunderbolt. As far as I can tell from comparing some audio-specific benchmarks, these don't really use the GPU at all, since the scores look exactly the same for the same number of CPU cores and varying numbers of GPU cores.

    I wanted to get a separate audio-only desktop computer, kept stripped down for just the audio work I currently do with my all-purpose M2 Max MBP (which works great). I was considering the new M4 Pro Mac mini, since it appears to benchmark higher than my M2 Max on non-graphics stuff and is way cheaper than the M2 Mac Studio in its various flavors. But I think I'm going to wait and see if they come out with an M4 Max (or Ultra) Mac Studio next year sometime, since I'm not in a hurry and want to mega-future-proof the purchase by going big. An M4 Max or M4 Ultra Mac Studio with 64 or 96 GB of RAM is just about where I want to be, I think.
  • Reply 13 of 38
    1der Posts: 1 member
    It seems Cook's law is then about 4 years. It's always fun to make lots of assumptions and project into the future. In doing so, I imagine what seemingly AI miracles could be accomplished in, say, 40 years with the machine in your hand being 1,000 times as powerful.
  • Reply 14 of 38
    There is an error in this article. It seems that for the M4 Pro GPU graph, the editors picked out the OpenCL test report from Geekbench instead of the Metal test.
    In Metal, the M4 Pro GPU scores around 113,865.
  • Reply 15 of 38
    aderutter said:
    If you look at Adobe apps for example, they benefit more from CPU as long as the GPU is at a certain level. Once a user has a mid-range GPU then they don’t need GPU as much as RAM & CPU (and the real-life RAM requirements have decreased with Apple Silicon). 

    Another example is ZBrush, which is purely CPU based. Even most of the time working in other 3D applications the CPU is more important, as people working in 3D spend more time not-rendering than rendering, and the machine can render while you put the kettle on.

    It's gamers and some 3D renderers that use more GPU - but CPU 3D rendering is more accurate, and so CPU rendering (obviously with farms) is the default in Hollywood, whilst us mortals have to just use what's available on our budgets - typically a desktop GPU rather than a cloud render. The usual options when thinking only of rendering for games or lower-end 3D rendering are GPU (cheap and fast on PC), or CPU (slower, more accurate, and slightly better on Mac generally).

    When/if Apple releases an M4 Ultra that is twice the GPU performance of the M4 Max, it should be equivalent to an Nvidia 4090 and set the cat amongst the pigeons. 2025 could be the start of Apple desktop disruption.

    Unless Apple stops overcharging for memory and storage, probably not. The current price for an "Apple M2 Ultra with 24‑core CPU, 76‑core GPU, 32‑core Neural Engine, 64GB unified memory, 2TB SSD storage" is $5,399.99, while a prebuilt PC with a 4090 is $3,999.99: https://www.bestbuy.com/site/corsair-vengeance-a7400-gaming-desktop-amd-ryzen-9-9900x-64gb-rgb-ddr5-memory-nvidia-geforce-rtx-4090-2tb-ssd-black/6604319.p?skuId=6604319
  • Reply 16 of 38
    MacPro Posts: 19,851 member
    aderutter said:
    If you look at Adobe apps for example, they benefit more from CPU as long as the GPU is at a certain level. Once a user has a mid-range GPU then they don’t need GPU as much as RAM & CPU (and the real-life RAM requirements have decreased with Apple Silicon). 

    Another example is ZBrush, which is purely CPU based. Even most of the time working in other 3D applications the CPU is more important, as people working in 3D spend more time not-rendering than rendering, and the machine can render while you put the kettle on.

    It's gamers and some 3D renderers that use more GPU - but CPU 3D rendering is more accurate, and so CPU rendering (obviously with farms) is the default in Hollywood, whilst us mortals have to just use what's available on our budgets - typically a desktop GPU rather than a cloud render. The usual options when thinking only of rendering for games or lower-end 3D rendering are GPU (cheap and fast on PC), or CPU (slower, more accurate, and slightly better on Mac generally).

    When/if Apple releases an M4 Ultra that is twice the GPU performance of the M4 Max, it should be equivalent to an Nvidia 4090 and set the cat amongst the pigeons. 2025 could be the start of Apple desktop disruption.

    Unless Apple stops overcharging for memory and storage, probably not. The current price for an "Apple M2 Ultra with 24‑core CPU, 76‑core GPU, 32‑core Neural Engine, 64GB unified memory, 2TB SSD storage" is $5,399.99, while a prebuilt PC with a 4090 is $3,999.99: https://www.bestbuy.com/site/corsair-vengeance-a7400-gaming-desktop-amd-ryzen-9-9900x-64gb-rgb-ddr5-memory-nvidia-geforce-rtx-4090-2tb-ssd-black/6604319.p?skuId=6604319
    How do you compare the price of RAM for a PC against RAM for an Apple SoC? There is no way DDR5 on a motherboard in a PC can transfer data as fast. Same with the built-in storage on a SoC. They are different in every way, so trying to compare prices is not possible. I have a very high-end Corsair Vengeance gaming PC and the M2 Ultra, so I can compare performance. The price was not far apart. I didn't have to spend days in the BIOS to get 64 GB of 7200 MT/s CL34 DDR5 working on my M2 Ultra either.
    edited November 7
  • Reply 17 of 38
    MacPro Posts: 19,851 member
    1der said:
    It seems Cook's law is then about 4 years. It's always fun to make lots of assumptions and project into the future. In doing so, I imagine what seemingly AI miracles could be accomplished in, say, 40 years with the machine in your hand being 1,000 times as powerful.
    Same here.  However, I bet your 1000-times increase is way short of the mark in terms of performance gain.
    edited November 7
  • Reply 18 of 38
    Xed Posts: 2,886 member
    aderutter said:
    If you look at Adobe apps for example, they benefit more from CPU as long as the GPU is at a certain level. Once a user has a mid-range GPU then they don’t need GPU as much as RAM & CPU (and the real-life RAM requirements have decreased with Apple Silicon). 

    Another example is ZBrush, which is purely CPU based. Even most of the time working in other 3D applications the CPU is more important, as people working in 3D spend more time not-rendering than rendering, and the machine can render while you put the kettle on.

    It's gamers and some 3D renderers that use more GPU - but CPU 3D rendering is more accurate, and so CPU rendering (obviously with farms) is the default in Hollywood, whilst us mortals have to just use what's available on our budgets - typically a desktop GPU rather than a cloud render. The usual options when thinking only of rendering for games or lower-end 3D rendering are GPU (cheap and fast on PC), or CPU (slower, more accurate, and slightly better on Mac generally).

    When/if Apple releases an M4 Ultra that is twice the GPU performance of the M4 Max, it should be equivalent to an Nvidia 4090 and set the cat amongst the pigeons. 2025 could be the start of Apple desktop disruption.

    Unless Apple stops overcharging for memory and storage, probably not. The current price for an "Apple M2 Ultra with 24‑core CPU, 76‑core GPU, 32‑core Neural Engine, 64GB unified memory, 2TB SSD storage" is $5,399.99, while a prebuilt PC with a 4090 is $3,999.99: https://www.bestbuy.com/site/corsair-vengeance-a7400-gaming-desktop-amd-ryzen-9-9900x-64gb-rgb-ddr5-memory-nvidia-geforce-rtx-4090-2tb-ssd-black/6604319.p?skuId=6604319
    That's not a valid argument for a claim of overcharging. You want to try again?
    edited November 7
  • Reply 19 of 38
    dutchlord
    Nobody cares! Even my M1 iMac is still fast enough. Apple's focus on speed has no customer value and is not a trigger to upgrade.
  • Reply 20 of 38
    Xed Posts: 2,886 member
    dutchlord said:
    Nobody cares! Even my M1 iMac is still fast enough. Apple's focus on speed has no customer value and is not a trigger to upgrade.
    I care. I am happy to have these charts to compare. If there were other upgrades, like Wi-Fi 7 and a tandem OLED display, I'd definitely be upgrading this year, with how much faster the M4 Max is over my M1 Max.