Apple 'M1X' chip specification prediction appears on benchmark site


Comments

  • Reply 41 of 47
    mcdave Posts: 1,927 member
    Let’s just hope the M1X/anything electronic is on a card for easy upgrade/hardware-as-a-service.
    Core counts and a generic CPU ISA aren't the story here. The 'X' SoCs should have dedicated AV silicon to match iMac/Mac Pro performance rather than solving every problem with high-wattage brute force. That said, if Apple doubles the GPU performance within UMA, Affinity products on the M1X will outperform a Ryzen Threadripper + Nvidia RTX 30xx at around 25% of the power draw: https://forum.affinity.serif.com/index.php?/topic/124022-benchmark-1900-results/page/8/
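    For what it's worth, that claim only needs simple arithmetic. A minimal sketch of the perf-per-watt comparison, with entirely made-up placeholder scores and wattages (nothing below is a measured result):

    ```swift
    // All numbers are hypothetical placeholders, not measurements; the
    // point is only the arithmetic behind "same performance at ~25% power".
    struct System {
        let name: String
        let score: Double   // benchmark result, arbitrary units
        let watts: Double   // sustained system power draw
        var perfPerWatt: Double { score / watts }
    }

    let m1x   = System(name: "M1X (hypothetical)",             score: 2_000, watts: 60)
    let trRtx = System(name: "Threadripper + RTX 30xx (hyp.)", score: 2_000, watts: 240)

    // Equal scores at 60 W vs 240 W means 4x the efficiency, i.e. the same
    // work at roughly 25% of the power draw.
    print(m1x.perfPerWatt / trRtx.perfPerWatt)   // 4.0
    ```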
  • Reply 42 of 47
    cloudguy said:
    With all due respect, why do you believe that this is even possible? As I have stated numerous times, the idea that ARM is inherently superior to x86 was wishful thinking. If it were true, ARM would have more than 3% of the server market.
    I've read a number of your posts and I'm trying to figure out whether you are simply trolling people in this forum or if you are actually that willfully obtuse.  I'd like to give you the benefit of the doubt, but I'm still not sure.

    Anyway, try reading an actual technical comparison; Anandtech has done a good job, for example. Apple's silicon is faster than Intel's on a per-core basis. This is proven not just with common benchmarks such as Geekbench but also with larger workloads that run for longer periods of time, such as SPEC. That said, let's be clear: ARM-based chips are inherently more efficient than x86-based chips. That was as true against Intel's most efficient low-power Atom chips as it is against their desktop chips today. Apple silicon specifically is BOTH more efficient AND more powerful than anything from Intel. To date, Intel does have larger chips with more cores, but that's simply a matter of Apple scaling their SoCs with more cores as they move up their product line.

    Finally, why would share of the server market prove which chip is inherently better? What do you know about cloud computing? Do you understand the model of developing on the desktop and then moving it to the cloud? The server market will follow the desktop market. And in terms of raw volume, there are FAR more ARM-based chips sold each year than Intel chips.
  • Reply 43 of 47
    techconc
    If the Mac Pro is the only model that supports third-party GPUs in any form, that's going to be a really tiny market for graphics card makers. Is it even worth supporting? That raises the question: could Apple simply scale up their GPU, Neural Engine, and other units to replace them? Granted, it probably wouldn't be on the CPU die but rather on a 5nm GPU co-processor tapped into the shared memory pool. Who says the unified memory concept cannot be extended to the very top of the line?

    What's clear at this point is that Apple is cutting Intel and AMD out of their MacBook and iMac development, so why not the Mac Pro? Why would AMD bother to develop for a single Mac?
    Apple certainly dropped some major hints at the last WWDC which seemed to at least imply that the Apple Silicon path includes their own custom GPUs and not third-party GPUs from AMD or Nvidia. They didn't outright say that, but their slides were structured in a way that certainly implied it. On top of that, there have been specific rumors of a discrete Apple GPU, codenamed "Lifuka".

    proline said:
    Don't be silly. There is clearly a need for three chips:

    M1 (MBA, small MBPs, iPads)
    M1X (16" MBP, iMac)
    X1 (iMac Pro, Mac Pro)

    What you are waiting for is the X1, not the M1X.
    I'm just speculating of course, but I don't think there will be just one pro configuration. Rumors suggest there will be 16- and 32-core CPU configurations. I would imagine there would be multiple GPU options as well. This is especially true if Apple creates a discrete GPU as rumored.

    As for the iMac, it is conceivable that there will be just one configuration, though I'd like to see more... again, at least with regard to GPU configurations. We'll see.


  • Reply 44 of 47
    wizard69 Posts: 13,377 member
    techconc said:
    If the Mac Pro is the only model that supports third-party GPUs in any form, that's going to be a really tiny market for graphics card makers. Is it even worth supporting? That raises the question: could Apple simply scale up their GPU, Neural Engine, and other units to replace them? Granted, it probably wouldn't be on the CPU die but rather on a 5nm GPU co-processor tapped into the shared memory pool. Who says the unified memory concept cannot be extended to the very top of the line?

    What's clear at this point is that Apple is cutting Intel and AMD out of their MacBook and iMac development, so why not the Mac Pro? Why would AMD bother to develop for a single Mac?
    Apple certainly dropped some major hints at the last WWDC which seemed to at least imply that the Apple Silicon path includes their own custom GPUs and not third-party GPUs from AMD or Nvidia. They didn't outright say that, but their slides were structured in a way that certainly implied it. On top of that, there have been specific rumors of a discrete Apple GPU, codenamed "Lifuka".

    proline said:
    Don't be silly. There is clearly a need for three chips:

    M1 (MBA, small MBPs, iPads)
    M1X (16" MBP, iMac)
    X1 (iMac Pro, Mac Pro)

    What you are waiting for is the X1, not the M1X.
    I'm just speculating of course, but I don't think there will be just one pro configuration. Rumors suggest there will be 16- and 32-core CPU configurations. I would imagine there would be multiple GPU options as well. This is especially true if Apple creates a discrete GPU as rumored.

    As for the iMac, it is conceivable that there will be just one configuration, though I'd like to see more... again, at least with regard to GPU configurations. We'll see.



    It is funny, but I've read those WWDC slides and watched the videos and haven't come to the same conclusion. There could be an Apple GPU, an AMD GPU, or a GPU developed in partnership with AMD for the performance GPU slot. I don't see a good reason for Apple to ignore the performance that can be had from AMD at the top end. Not to mention that pros may need an AMD solution due to huge lags in software development. The type of external GPU really comes down to Apple's willingness to embrace PCI Express on Apple Silicon. Interestingly, that leaves AMD still in the race due to their fabric interface, which could provide a better solution for an off-chip GPU. Not to mention the potential of a CDNA chip migrating out of AMD's supercomputing initiatives. I just don't see the possibility of an external GPU that is not an Apple chip as dead.

    As for CPU cores in a pro chip, they will have no choice but to offer a many-core machine simply to compete with Threadripper, wherever that line is by the time the Apple Silicon Mac Pro ships. The question is how many. Frankly, 32 would be a minimal count, and they would need to move to 64 cores fairly quickly. In a nutshell, they would be competing with Threadrippers built on Zen 4 cores with very wide DDR5 memory. Right now that means 128 threads, though I could see AMD going beyond that.

    I actually believe Apple will take a page from AMD's playbook and implement something similar to AMD's chiplets. The question then becomes how they split everything up. Should the GPU be on the CPU chiplet, its own chiplet, or the I/O chiplet? Or maybe the GPU stays with the performance cores on the CPU chiplets and the low-power cores move to an I/O chiplet. There are so many possibilities here, but I highly doubt we will see a single piece of silicon with everything on it. Then you have to ask where they start to use the chiplet design: on the desktops, the high-end laptops, or just the Mac Pro. I see "just" the Mac Pro as a non-starter. So I wouldn't be surprised to see the higher-end iMacs, MBPs, and other machines debut with a radically different processor solution than is seen with the M1 series. The M1 is essentially a chip migrated from the iPad/iPhone tradition; that isn't a long-term solution for performance machines. The current obsession with the M1X could be misguided when you consider what Apple needs to do for the pro machines.
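    Purely as an illustration of the partitioning question, here is a toy model of the two splits being floated; the blocks and groupings are guesses for illustration, not a confirmed Apple design:

    ```swift
    // Functional blocks that could land on one die or another.
    enum Block: Hashable {
        case performanceCores, efficiencyCores, gpu, neuralEngine, io, memoryController
    }
    typealias Chiplet = Set<Block>

    // Option A: GPU stays with the performance cores; efficiency cores and
    // everything else move to an I/O die.
    let optionA: [Chiplet] = [
        [.performanceCores, .gpu],
        [.efficiencyCores, .io, .memoryController, .neuralEngine],
    ]

    // Option B: GPU on its own die, closer to AMD's CPU-die/IO-die split.
    let optionB: [Chiplet] = [
        [.performanceCores, .efficiencyCores],
        [.gpu],
        [.io, .memoryController, .neuralEngine],
    ]
    ```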
  • Reply 45 of 47
    mazda 3s Posts: 1,613 member
    hattig said:
    xyzzy-xxx said:
    16 GB maximum memory for a 16" MBP ?
    This is why we know this 'leak' is not based on real information.

    The M1 had a trade-off to make, because of the integrated GPU being decent. GPUs need memory bandwidth. That meant the M1 had to use LPDDR4X memory, which is faster, but you cannot connect as much (and it's always soldered and non-expandable). You will see that Intel's Tiger Lake and AMD's Renoir (and Cezanne) use LPDDR4X on their better-performing systems too (their memory controllers also support DDR4), but likewise with limited capacity (some go up to 32GB, I believe, so there's room for Apple to improve).

    It's likely the M1X will have to support higher amounts of memory (and not just 32GB, but up to 128GB or more, because of the Pro users), so it will have to use DDR4 or DDR5 (hopefully the latter). This means that the bandwidth will be reduced, and the GPU will need to compromise: either have its own pool of memory (so much for 'unified memory', though an HBM stack could be provided for it) or be less efficient (or have a wider bus, but there are physical limits). There were rumours that the M1X would be paired with a larger discrete Apple GPU, so the integrated GPU could actually be smaller (or a more exotic configuration).

    Other rumours suggest 12 large cores and 4 small cores for the CPU side of things... so I think we have a lot to look forward to in the next couple of months, assuming these devices are being released soon.
    The source link says max 32GB of RAM, with a max of 16GB addressable by the GPU. 
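    The bandwidth trade-off hattig describes is easy to put numbers on. A quick sketch using the standard peak-bandwidth formula (transfer rate times bus width); the M1 figure matches Apple's published ~68 GB/s, and the DDR4 line is a typical dual-channel configuration for comparison:

    ```swift
    // Peak theoretical bandwidth: transfers/s x bus width in bytes.
    // LPDDR4X-4266 means 4,266 megatransfers per second.
    func peakBandwidthGBps(megatransfers: Double, busBits: Double) -> Double {
        megatransfers * (busBits / 8) / 1_000   // MB/s -> GB/s
    }

    // M1: LPDDR4X-4266 on a 128-bit bus -> ~68.3 GB/s (Apple's published figure).
    let m1Bandwidth   = peakBandwidthGBps(megatransfers: 4_266, busBits: 128)

    // Typical dual-channel DDR4-3200 (also 128 bits) -> ~51.2 GB/s: the
    // regression you'd eat by moving to plain socketed DDR4 for capacity.
    let ddr4Bandwidth = peakBandwidthGBps(megatransfers: 3_200, busBits: 128)

    print(m1Bandwidth, ddr4Bandwidth)   // 68.256 51.2
    ```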
  • Reply 46 of 47
    smalm Posts: 677 member
    If the prediction turns out to be accurate, it looks like the upgrade would be focused on graphics. The listing suggests that "M1X" could have a 16-core GPU [...]. It could feature 256 execution units, rather than the M1's 128
    Pure nonsense.
    The M1 GPU has 1,024 ALUs, 128 per GPU core.
    The M1X GPU may double that to 2,048 ALUs.
    And CPU Monkey doesn't appear to have much credibility among benchmarking sites.
    Nonsense like that about the GPU doesn't help...
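    For reference, Apple's published M1 GPU figures actually reconcile the two counts: 128 execution units and 1,024 ALUs describe the same hardware at different granularity. A quick worked example (the per-core EU and per-EU ALU ratios are Apple's stated M1 numbers):

    ```swift
    // Apple's published M1 GPU organization: 8 cores, 16 EUs per core,
    // 8 FP32 ALUs per EU.
    let gpuCores   = 8
    let eusPerCore = 16
    let alusPerEU  = 8

    let totalEUs    = gpuCores * eusPerCore   // 128 execution units (the article's figure)
    let totalALUs   = totalEUs * alusPerEU    // 1,024 ALUs (smalm's figure)
    let alusPerCore = totalALUs / gpuCores    // 128 ALUs per core

    // A 16-core GPU at the same ratios: 256 EUs and 2,048 ALUs, so the
    // "256 execution units" and "2,048 ALUs" doublings are the same claim.
    ```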
  • Reply 47 of 47
    hattig said:
     
    The M1 had a trade-off to make, because of the integrated GPU being decent. GPUs need memory bandwidth. That meant the M1 had to use LPDDR4X memory, which is faster, but you cannot connect as much (and it's always soldered and non-expandable). You will see that Intel's Tiger Lake and AMD's Renoir (and Cezanne) use LPDDR4X on their better-performing systems too (their memory controllers also support DDR4), but likewise with limited capacity (some go up to 32GB, I believe, so there's room for Apple to improve).

    It's likely the M1X will have to support higher amounts of memory (and not just 32GB, but up to 128GB or more, because of the Pro users), so it will have to use DDR4 or DDR5 (hopefully the latter). This means that the bandwidth will be reduced, and the GPU will need to compromise: either have its own pool of memory (so much for 'unified memory', though an HBM stack could be provided for it) or be less efficient (or have a wider bus, but there are physical limits). There were rumours that the M1X would be paired with a larger discrete Apple GPU, so the integrated GPU could actually be smaller (or a more exotic configuration).
    The only way I can see Apple producing different-level SKUs using unified memory and a common SoC architecture is to separate the SoC, the GPU cores (and possibly the neural engine cores) onto separate chiplets, each featuring a matrix of GPU (and possibly neural engine) cores. Due to the vagaries of statistical semiconductor flaws, you could bin SoCs by the number of flawed CPU cores, GPU chiplets by the number of flawed GPU cores, and neural engine chiplets by the number of flawed neural engine cores.

    M1 features a package-on-package design with SoC and memory on the package; there's really no reason why a new package-on-package design couldn't place the SoC, GPU matrix, neural engine matrix, memory, and I/O and interconnect controllers on the same package, interconnected by a high-speed fabric. You could package the more fortunate chips together to produce a higher-level SKU and the least fortunate chips together for the lowest-level SKU. This lets you bin the most critical processors and work around the statistical spread of good and flawed dice.
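    A toy sketch of the binning idea, with hypothetical die sizes and SKU floors (none of these numbers come from Apple), just to make the concept concrete:

    ```swift
    // A die comes off the wafer with some number of defective cores.
    struct Die {
        let totalCores: Int
        let defectiveCores: Int
        var goodCores: Int { totalCores - defectiveCores }
    }

    // Assign each die to the highest SKU core-count floor it clears;
    // dice below every floor are scrapped or held for salvage parts.
    func bin(_ dice: [Die], floors: [Int]) -> [Int: [Die]] {
        var bins: [Int: [Die]] = [:]
        for die in dice {
            if let sku = floors.sorted(by: >).first(where: { die.goodCores >= $0 }) {
                bins[sku, default: []].append(die)
            }
        }
        return bins
    }

    // e.g. hypothetical 40-core GPU chiplets binned into 38- and 32-core SKUs:
    let lot = [Die(totalCores: 40, defectiveCores: 1),   // 39 good -> 38-core SKU
               Die(totalCores: 40, defectiveCores: 6),   // 34 good -> 32-core SKU
               Die(totalCores: 40, defectiveCores: 12)]  // 28 good -> below every floor
    let skus = bin(lot, floors: [38, 32])
    ```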

    As for memory, you could have multi-tiered memory, some on the PoP and some off. Graphics and neural engine processing would take place in on-PoP memory; more mundane tasks would go through interconnect-attached memory. Not sure you'd want to put all memory on the PoP, since this would be a big ass package and cooling it might be problematic.

    Dunno ... we'll see what the Silicon Team comes up with in the not terribly distant future 😀.