Early M2 Max benchmarks may have just leaked online


Comments

  • Reply 21 of 23
    programmer Posts: 3,458 member
    bulk001 said:
    So what’s it been? A year or two, and Apple is already basically at the same place Intel is: incremental updates spread out over a period of years, unable to deliver on a predictable timetable. Yes, there are some battery life advantages and the initial jump of the v1 chip, but it is not very promising moving forward if this is accurate! 
    ... while dealing with the massive disruption brought about by a global pandemic. I don't share your pessimism. Remember that Apple famously ships a great v1.0 and then iterates consistently; some people moan about the lack of regular "OMG" updates but over time the consistency of improvement leads to massive gains.

    Also, tenthousandthings pointed out that in 2014 (a mere 8 years ago!) TSMC was using a 20nm process node. I mentally used a swear word when I read that. Astonishing progress to be shipping at 5nm and imminently 3nm in that timeframe. Well done to everyone at TSMC, that is spectacular!
    A resetting of expectations is also required. The reality of semiconductor fabrication is that since running into the power wall back in the mid-00s, things haven't been scaling as smoothly as they had since humanity started building integrated circuits. The time between nodes has increased, the risk of going to new nodes has increased, and the cost of going to new nodes has dramatically increased. The free lunch ended almost two decades ago, and since then the semiconductor industry has been clawing out improvements with higher effort and lower rewards.

    A lot of the improvements that have been gained have come from doing things other than simply bumping CPU clock rates and cache sizes. GPU advancements were the first "post-CPU" wave, SoCs happened in the mobile space first and then moved into the laptop/desktop space, and more recently ML-related hardware has become common. Most of these things require software changes to make any use of at all, never mind actually optimizing for them.

    And that is why Apple needed to move to Apple Silicon -- not because they could build a better CPU than Intel, but because they need to build the chip Apple needs. There is far more in the Axx/Mx chips than just the CPUs and GPUs: how they are interconnected, how they share cache/memory resources, fixed-function hardware units, the mix of devices, how the various accelerators are tuned for the workloads running on Apple systems, etc. Expect to see variants tuned for specific systems, perhaps variants at the packaging level... the same cores re-packaged and used in different configurations, etc.
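    To make the "software changes" point concrete, here's a minimal sketch of the kind of opt-in an accelerator needs -- it assumes Core ML's MLModelConfiguration API, and the model name and file path are purely hypothetical:

    ```swift
    import CoreML

    // Sketch only: "StyleTransfer.mlmodelc" is a hypothetical compiled Core ML model.
    // The point is that the accelerator isn't used "for free" -- the app has to route
    // work through an API that knows the hardware exists.
    let config = MLModelConfiguration()
    config.computeUnits = .all      // allow CPU, GPU, and Neural Engine
    // config.computeUnits = .cpuOnly would keep the same code pinned to the CPU cores

    let url = URL(fileURLWithPath: "/path/to/StyleTransfer.mlmodelc")
    let model = try MLModel(contentsOf: url, configuration: config)
    // Predictions made through `model` may now be scheduled on the ANE --
    // something legacy CPU-only code paths could never benefit from.
    ```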

    Just the fact that TSMC has so many variations on the 5nm process node ought to be a clue about how hard getting to the next level has become.  Intel being stuck for a long time at 10nm was a foreshadowing of the future.  And at each stage, the designers are going to have to work harder and innovate more with each process advancement to wring as much value from it as possible... because the next one is going to be even more horrendously expensive and risky (and likely bring diminishing returns, plus "interesting" problems).

  • Reply 22 of 23
    tht Posts: 5,447 member
    tht said:
    The numbers don’t check out here on CPU.

    If they improved the single core, and then added 2 more CPU cores (high performance, I assume), the multi-core score should be a good bit higher than that.

    Somebody help me out with the math here. 
    It's 8+4 not 10+2. Just 8 performance cores. They added 2 more efficiency cores according to rumors.
    Aaaaaahhhhhhhhh, ok, that makes a lot more sense on the math.
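    For anyone else sanity-checking it, here's a rough back-of-the-envelope sketch -- the single-core score, E-core weight, and scaling factor are made-up placeholders, not the leaked numbers:

    ```swift
    // Invented numbers, just to show the shape of the estimate.
    let singleCore = 1950.0      // hypothetical M2-class P-core score
    let eCoreWeight = 0.3        // assume an E-core contributes ~30% of a P-core
    let mpEfficiency = 0.85      // multi-core rarely scales perfectly

    func estimate(pCores: Int, eCores: Int) -> Double {
        singleCore * (Double(pCores) + eCoreWeight * Double(eCores)) * mpEfficiency
    }

    print(estimate(pCores: 8, eCores: 4))   // 8P+4E:  ~15,250
    print(estimate(pCores: 10, eCores: 2))  // 10P+2E: ~17,570 -- noticeably higher
    ```

    With those invented weights, trading 2 efficiency cores for 2 more performance cores would move the estimate by roughly 15%, which is why the 8+4 reading fits the leak much better than 10+2.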

    I’m curious, are the additional high-efficiency cores there to help run as many tasks as possible on less power, to extend overall battery life? I can’t think of another reason to add additional efficiency cores. 
    Yeah, you would think so. The curious decision was why they dropped down to 2 efficiency cores for the M1 Pro and M1 Max chips when the M1 has 4 efficiency cores. These cores are very small. Were they that hurt for die size that they could not include 4 efficiency cores in the M1 Pro/Max chips? There are a lot of odd decisions in the M1 Pro and M1 Max designs. I don't know if it truly saved them time and money to do it the way they did, but that is the only reason I can think of.

    Marvin was right about the GPU performance in the M2 Pro/Max. If they can match the 30% to 40% performance uplift on GPU, it's a nice win. There's actually another 30% to 40% uplift they can get by having performance scale better with more GPU cores. If they can solve their GPU core scaling issues as well as improve perf per core, an M2 Max could see a 60% to 80% improvement on the GPU side. I'd hazard a guess that the GPU tile memory is too small to support a lot of GPU compute loads and it can't feed the cores. Whatever it is, they need to fix it.
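    As a quick sanity check on how those two uplifts stack (using the midpoints of the guessed ranges as placeholders, not real figures):

    ```swift
    // Placeholder percentages: midpoints of the 30-40% ranges guessed above.
    let perCoreUplift = 0.35      // better performance per GPU core
    let scalingUplift = 0.35      // better scaling across more GPU cores

    let additive = perCoreUplift + scalingUplift                    // 0.70 -- the rough "60% to 80%" eyeball
    let compounded = (1 + perCoreUplift) * (1 + scalingUplift) - 1  // ~0.82 if the gains fully multiply
    print(additive, compounded)
    ```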

    I still think they should have added CPU cores from the M1 Pro to M1 Max versions. Go from 8+2 in the Pro to 14+4 in the Max, and repeat with successive generations. They should do this, or have the GPUs be capable of doing more CPU ops, so CPU-type loads scale according to price. Still weird that the M1 Pro isn't in a Mac mini or iMac 24. There are too many "no" responses within their marketing group.
  • Reply 23 of 23
    tht said:
    Yeah, you would think so. The curious decision was why they dropped down to 2 efficiency cores for the M1 Pro and M1 Max chips when the M1 has 4 efficiency cores. These cores are very small. Were they that hurt for die size that they could not include 4 efficiency cores in the M1 Pro/Max chips? There are a lot of odd decisions in the M1 Pro and M1 Max designs. I don't know if it truly saved them time and money to do it the way they did, but that is the only reason I can think of.

    There are a lot of details in a processor (esp. SoC) which are not visible externally... not just their area and power consumption.  These factors can motivate decisions that seem odd to the external observer.  Perhaps the on-chip network has a limited number of "slots" in its fabric, or the memory subsystem only supports a limited number of clients?  Efficiency cores would consume such a resource just as performance cores would, so it might be necessary to remove efficiency cores in order to reallocate such a resource to the performance cores.  And increasing the amount of such a resource isn't necessarily just about time & money -- it could affect performance, power consumption, layout, etc.
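    Purely to illustrate the kind of invisible budget I mean -- the slot counts and client mix below are invented for the sake of the example, not anything known about Apple's interconnect:

    ```swift
    // Toy model of a fixed on-chip fabric with a limited number of client "slots".
    // All names and numbers are invented for illustration only.
    struct FabricBudget {
        let totalSlots: Int

        // Assume each P-core cluster, E-core cluster, GPU slice, and accelerator
        // occupies one slot on the fabric.
        func fits(pClusters: Int, eClusters: Int, gpuSlices: Int, accelerators: Int) -> Bool {
            pClusters + eClusters + gpuSlices + accelerators <= totalSlots
        }
    }

    let fabric = FabricBudget(totalSlots: 8)

    // Small-chip mix: one P-cluster, one E-cluster, plenty of headroom.
    print(fabric.fits(pClusters: 1, eClusters: 1, gpuSlices: 2, accelerators: 3))  // true

    // Pro/Max-style mix: more P-clusters and GPU slices crowd the fabric,
    // so shrinking the E-core complement frees a slot for something else.
    print(fabric.fits(pClusters: 2, eClusters: 1, gpuSlices: 4, accelerators: 2))  // false -- 9 > 8
    print(fabric.fits(pClusters: 2, eClusters: 0, gpuSlices: 4, accelerators: 2))  // true  -- 8 <= 8
    ```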