Apple 'M1X' chip specification prediction appears on benchmark site


Comments

  • Reply 21 of 47
    robaba said:
    Limited to 16 gig memory?  Nah, that makes no sense.  As the computing power increases, Apple will need to bump the memory and bandwidth of the on-chip fabric in order to prevent bottlenecking.  I also expect Apple to have an on-package graphics chip for cost and production flexibility reasons.
    I think it means that the graphics are limited to 16GB of memory.
    watto_cobra
  • Reply 22 of 47
    robaba said:
    Limited to 16 gig memory?  Nah, that makes no sense.  As the computing power increases, Apple will need to bump the memory and bandwidth of the on-chip fabric in order to prevent bottlenecking.  I also expect Apple to have an on-package graphics chip for cost and production flexibility reasons.
    I think it means that the graphics are limited to 16GB of memory.
    That still makes no sense if the graphics are the same as the integrated graphics on the M1. The M1 uses a unified memory design where all of RAM is available to the GPU. It is one of the reasons that the M1 is so efficient. I seriously doubt that Apple is changing their unified memory design for this generation.
    watto_cobra
  • Reply 23 of 47
    jdb8167 said:
    These still won't be processors for the Mac Pro or iMac Pro. Those are probably coming next year.
    The Mac Pro will be using Intel chips for a few more years because Apple's M processor can't beat what is in the Mac Pro as far as memory, storage, features, and performance, especially the ability to run multiple VM environments.

    Second, the iMac Pro is dead.  Apple hasn't done anything with it for four years.  It will likely be discontinued when Apple releases an iMac with hopefully a much better processor than the low end M1 chip.
    Apple has repeatedly stated that the Apple Silicon transition will be over in two years. Since the latest start of the transition would be on the release of the M1 Macs, then logically Mac Pros are going to be announced by November 2022. I expect that we will see a Mac Pro replacement in early 2022 and it will beat the current Mac Pro in performance and match it for RAM, storage and other features. I also expect it will support discrete GPUs.
    If the Mac Pro is the only model that supports 3rd party GPUs in any form that's going to be a really tiny market for graphics card makers. Is it even worth supporting? So it begs the question, could Apple simply scale up their GPU, Neural Engine, etc, units to replace them? Granted it probably wouldn't be on the CPU die but rather on a 5nm GPU co-processor tapped into the shared memory pool. Who says the unified memory concept cannot be extended to the very top of the line?

    What's clear at this point is that Apple is cutting Intel and AMD out of their MacBook and iMac development, so why not the Mac Pro? Why would AMD bother to develop for a single Mac?
    watto_cobra
  • Reply 24 of 47
    jdb8167 said:
    These still won't be processors for the Mac Pro or iMac Pro. Those are probably coming next year.
    The Mac Pro will be using Intel chips for a few more years because Apple's M processor can't beat what is in the Mac Pro as far as memory, storage, features, and performance, especially the ability to run multiple VM environments.

    Second, the iMac Pro is dead.  Apple hasn't done anything with it for four years.  It will likely be discontinued when Apple releases an iMac with hopefully a much better processor than the low end M1 chip.
    Apple has repeatedly stated that the Apple Silicon transition will be over in two years. Since the latest start of the transition would be on the release of the M1 Macs, then logically Mac Pros are going to be announced by November 2022. I expect that we will see a Mac Pro replacement in early 2022 and it will beat the current Mac Pro in performance and match it for RAM, storage and other features. I also expect it will support discrete GPUs.
    If the Mac Pro is the only model that supports 3rd party GPUs in any form that's going to be a really tiny market for graphics card makers. Is it even worth supporting? So it begs the question, could Apple simply scale up their GPU, Neural Engine, etc, units to replace them? Granted it probably wouldn't be on the CPU die but rather on a 5nm GPU co-processor tapped into the shared memory pool. Who says the unified memory concept cannot be extended to the very top of the line?

    What's clear at this point is that Apple is cutting Intel and AMD out of their MacBook and iMac development, so why not the Mac Pro? Why would AMD bother to develop for a single Mac?
    There are rumors that Apple is developing a discrete GPU. Is that easier or harder than developing drivers for high end AMD GPUs? Seems like developing drivers in cooperation with AMD might be easier and it solves a big problem for the really high end pro systems.

    I'd imagine that Apple would have to negotiate with AMD to get them to support an Apple Silicon Mac Pro and/or iMac Pro. That might entail funding the driver development or some other incentive. Who knows? Again, there are rumors that Apple is going to create their own GPU but there is no way to know. One thing you can be pretty sure of though is that they developed the latest Mac Pro knowing that they were transitioning to Apple Silicon. So they have something in mind and I doubt it is to end of life the Mac Pro after a single hardware generation.
    edited February 2021 watto_cobra
  • Reply 25 of 47
    robaba Posts: 228 member
    I have no doubt the Pro will live on, but:

    Will the Pro shrink down into the newly developed mini-modular chassis, or stay in the current full-sized modular one?

    Will the Pro be a chiplet-style CPU/GPU or a huge SoP?

    How will it handle memory?

    Will the Mac Pro stand in for all back room use-cases or will they bring back hardware specialization in the form of Xserve, blades, or other large scale deployments?

    Will Apple integrate the functions of the Afterburner FPGA into the main logic or perhaps introduce other such specialized acceleration cards?

    Lots still to find out!
    watto_cobra
  • Reply 26 of 47
    saarek said:
    I hope this isn’t true.

    I’m hoping for a true performance king that humiliates AMD & Intel and sets the bar in terms of performance. Primarily GPU based upgrades over the M1 wouldn’t be the big step up in the true “Pro” Macs over the current M1 that I was hoping for.
    Don't be silly. There is clearly a need for three chips:

    M1 (MBA, small MBPs, iPads)
    M1X (16" MBP, iMac)
    X1 (iMac Pro, Mac Pro)

    What you are waiting for is the X1, not the M1X.
    watto_cobra
  • Reply 27 of 47


    jdb8167 said:
    These still won't be processors for the Mac Pro or iMac Pro. Those are probably coming next year.
    The Mac Pro will be using Intel chips for a few more years because Apple's M processor can't beat what is in the Mac Pro as far as memory, storage, features, and performance, especially the ability to run multiple VM environments.

    Second, the iMac Pro is dead.  Apple hasn't done anything with it for four years.  It will likely be discontinued when Apple releases an iMac with hopefully a much better processor than the low end M1 chip.
    Nonsense. The M1 beats the Mac Pro on single core performance by 50%. To beat the 28 core Mac Pro would require an 18 core version. That's doable. So is adding RAM. Yes, it may take until next year at the latest given the low priority, but not beyond that. 
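
    That core-count arithmetic can be sanity-checked with a quick sketch; it assumes the ~50% single-core advantage stated above and perfect multicore scaling, which real workloads won't achieve:

```python
# Rough check of the core-count claim, assuming the M1's ~50% per-core
# advantage and (unrealistically) perfect scaling across cores.
xeon_cores = 28                     # 28-core Xeon W in the 2019 Mac Pro
m1_single_core_advantage = 1.5      # M1 beats it by ~50% per core

# Performance cores needed to match the Xeon's aggregate throughput:
cores_needed = xeon_cores / m1_single_core_advantage
print(f"{cores_needed:.1f} cores")  # 18.7 cores -- close to the 18 claimed
```

    In practice scaling losses and the efficiency cores muddy the picture, but the order of magnitude holds.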

    As for the iMac Pro, it's definitely going to stick around. It has extremely high profit margins, or at least it will when they don't have to pay Intel anymore. And the chip from the Mac Pro will work in it just fine, perhaps with a few cores disabled.
    watto_cobra
  • Reply 28 of 47
    jdb8167 said:
    robaba said:
    Limited to 16 gig memory?  Nah, that makes no sense.  As the computing power increases, Apple will need to bump the memory and bandwidth of the on-chip fabric in order to prevent bottlenecking.  I also expect Apple to have an on-package graphics chip for cost and production flexibility reasons.
    I think it means that the graphics are limited to 16GB of memory.
    That still makes no sense if the graphics are the same as the integrated graphics on the M1. The M1 uses a unified memory design where all of RAM is available to the GPU. It is one of the reasons that the M1 is so efficient. I seriously doubt that Apple is changing their unified memory design for this generation.
    This was simply speculation. I know about everything you just said. Makes no sense to me either. I have an M1 with 16GB sitting right in front of me. I've hit a hard limit doing the normal work I used to do on a 13" 32GB Intel model, so I know 16GB is not enough. It's hard to believe they would essentially double the CPU/GPU processing power and leave it crippled at 16GB in comparison to a 32GB or 64GB 16" MacBook Pro. There is always the possibility that there will be tiers of memory as an interim workaround. Who knows?

    muthuk_vanalingam
  • Reply 29 of 47

    proline said:
    saarek said:
    I hope this isn’t true.

    I’m hoping for a true performance king that humiliates AMD & Intel and sets the bar in terms of performance. Primarily GPU based upgrades over the M1 wouldn’t be the big step up in the true “Pro” Macs over the current M1 that I was hoping for.
    Don't be silly. There is clearly a need for three chips:

    M1 (MBA, small MBPs, iPads)
    M1X (16" MBP, iMac)
    X1 (iMac Pro, Mac Pro)

    What you are waiting for is the X1, not the M1X.
    I disagree. I think there will only be two chips for the MacBook Pro and iMacs. The difference is that the M1/M1x chips will have a larger thermal envelope and run at a higher wattage. The iMac Pro will likely get the same chip as the Mac Pro, the M1xa.
  • Reply 30 of 47
    tmay Posts: 6,342 member
    jdb8167 said:
    These still won't be processors for the Mac Pro or iMac Pro. Those are probably coming next year.
    The Mac Pro will be using Intel chips for a few more years because Apple's M processor can't beat what is in the Mac Pro as far as memory, storage, features, and performance, especially the ability to run multiple VM environments.

    Second, the iMac Pro is dead.  Apple hasn't done anything with it for four years.  It will likely be discontinued when Apple releases an iMac with hopefully a much better processor than the low end M1 chip.
    Apple has repeatedly stated that the Apple Silicon transition will be over in two years. Since the latest start of the transition would be on the release of the M1 Macs, then logically Mac Pros are going to be announced by November 2022. I expect that we will see a Mac Pro replacement in early 2022 and it will beat the current Mac Pro in performance and match it for RAM, storage and other features. I also expect it will support discrete GPUs.
    If the Mac Pro is the only model that supports 3rd party GPUs in any form that's going to be a really tiny market for graphics card makers. Is it even worth supporting? So it begs the question, could Apple simply scale up their GPU, Neural Engine, etc, units to replace them? Granted it probably wouldn't be on the CPU die but rather on a 5nm GPU co-processor tapped into the shared memory pool. Who says the unified memory concept cannot be extended to the very top of the line?

    What's clear at this point is that Apple is cutting Intel and AMD out of their MacBook and iMac development, so why not the Mac Pro? Why would AMD bother to develop for a single Mac?
    There are rumors that Apple is developing a discrete GPU. Is that easier or harder than developing drivers for high end AMD GPUs? Seems like developing drivers in cooperation with AMD might be easier and it solves a big problem for the really high end pro systems.

    I'd imagine that Apple would have to negotiate with AMD to get them to support an Apple Silicon Mac Pro and/or iMac Pro. That might entail funding the driver development or some other incentive. Who knows? Again, there are rumors that Apple is going to create their own GPU but there is no way to know. One thing you can be pretty sure of though is that they developed the latest Mac Pro knowing that they were transitioning to Apple Silicon. So they have something in mind and I doubt it is to end of life the Mac Pro after a single hardware generation.
    I've never considered that Apple would have an issue with developing a discrete GPU. They've been doing GPUs for years now, and GPUs are known to be scalable. 

    What I suspect is that Apple's discrete GPU will not depend on PCIe at all, but rather a proprietary bus of Apple's own design that provides higher bandwidth and lower latency, and of course supports Metal. That would imply that there would in fact be no third-party GPUs at all.
    jdb8167 smalm watto_cobra
  • Reply 31 of 47
    16 GB maximum memory for a 16" MBP?
  • Reply 32 of 47
    xyzzy-xxx said:
    16 GB maximum memory for a 16" MBP?
    This is why we know this 'leak' is not based on real information.

    The M1 had a trade-off to make, because of the integrated GPU being decent. GPUs need memory bandwidth. That meant the M1 had to use LPDDR4X memory, which is faster, but you cannot connect as much (and it's always soldered and non-expandable). You will see that Intel's TigerLake and AMD's Renoir (and Cezanne) will use LPDDR4X as well on the better performing systems (their memory controllers support DDR4 as well), but with limited memory as well (some go up to 32GB I believe, so there's room for Apple to improve).

    It's likely the M1X will have to support higher amounts of memory (and not just 32GB, but up to 128GB or more, because of the Pro users), so it will have to use DDR4 or DDR5 (hopefully the latter). This means that the bandwidth will be reduced, and the GPU will need to compromise - either have its own pool of memory (so much for 'unified memory' but a HBM stack could be provided for it) or be less efficient (or have a wider bus, but there are physical limits). There were rumours that the M1X would be paired with a larger discrete Apple GPU, so the integrated GPU could actually be smaller (or a more exotic configuration).
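
    The bandwidth trade-off described above is simple arithmetic; a quick sketch, where the transfer rates and bus widths are illustrative figures rather than confirmed specs:

```python
# Back-of-the-envelope peak memory bandwidth: MT/s * bytes per transfer.
# The configurations below are illustrative, not confirmed Apple specs.
def peak_bandwidth_gbs(transfers_mt_s, bus_width_bits):
    """Peak bandwidth in GB/s from megatransfers/s and bus width in bits."""
    return transfers_mt_s * (bus_width_bits / 8) / 1000

# M1-style LPDDR4X-4266 on a 128-bit bus:
print(peak_bandwidth_gbs(4266, 128))   # 68.256 GB/s peak

# Conventional dual-channel DDR4-3200 (2 x 64-bit):
print(peak_bandwidth_gbs(3200, 128))   # 51.2 GB/s peak
```

    The gap is why a bigger GPU sharing a conventional DDR4 pool would either need a wider bus, its own HBM pool, or a bandwidth compromise, as the comment argues.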

    Other rumours suggest 12 large cores and 4 small cores for the CPU side of things... so I think we have a lot to look forward to finding out in the next couple of months, assuming these devices are being released soon.
    edited February 2021 watto_cobra
  • Reply 33 of 47
    Lol at the people using clocks and core counts as if they really mean anything when it comes to the chip.
    tmay watto_cobra
  • Reply 34 of 47
    cloudguy said:
    saarek said:
    I hope this isn’t true.

    I’m hoping for a true performance king that humiliates AMD & Intel and sets the bar in terms of performance. Primarily GPU based upgrades over the M1 wouldn’t be the big step up in the true “Pro” Macs over the current M1 that I was hoping for.
    With all due respect why do you believe that this is even possible? As I have stated numerous times, the idea that ARM is inherently superior to x86 was wishful thinking. If it were true, ARM would have more than 3% of the server market. As I have also stated, most of the benchmarking was skewed: it only compared the M1 to the Intel chips that it replaced in macOS devices. Those were mostly 2 and 4 core "mobile" chips. They were also outdated chips: 9th and 10th gen. There were already 11th gen Intel chips on the market when the M1 Macs were introduced.

    Yet all the "M1 crushes Intel, who is now doomed!" benchmarks ignored them, just as they ignored how comparing 2 and 4 core "mobile" chips to a chip with 4 performance + 4 efficiency cores never made any sense. They just took Apple's claim, unsubstantiated by any data, that the M1 was faster than "80% of Windows PCs" and ran with it. Also, in 4Q this year we are going to see 10nm big.LITTLE chips from Intel, followed by 5nm Zen 4 chips from AMD (Athlon architecture, not big.LITTLE). Intel hasn't started hyping their 12th gen chips yet, but AMD is claiming that their Zen 4 chips will have up to 40% performance gains over their current chips. 

    Thinking that Apple was going to dominate Intel and AMD in PCs the way they dominate Qualcomm and Samsung in mobile never made any sense. Especially if it was based on the superiority of ARM because Qualcomm and Samsung make ARM chips too. Apple versus Qualcomm was "ARM CPU with laptop performance versus ARM CPU with embedded appliance performance." Fine. Apple versus Intel and AMD: ARM CPU with laptop performance versus x86 CPUs that power 97% of the world's servers. A totally different ballgame.

    This has nothing to do with ISA or even big.LITTLE. One Apple Firestorm core matches or beats Ryzen 3 in SPECint 2006, an industry-standard benchmark. I'm not sure why you bring up big.LITTLE, considering it doesn't change the fact that Apple's core design is superior.

    Oh sure, AMD will have Zen 4 by the end of the year, but that's going to be just on paper. AMD still can't ship in any meaningful quantities, given their mobile chips barely compete in the high end of laptops. That's still dominated by Intel. Both fail at sustaining clocks, let alone keeping power draw at a reasonable level.

    You also seem to believe that x86 being dominant in servers somehow means they're still superior to anything ARM, which is quite absurd. Go look at some of Ampere's server chips.
    watto_cobra
  • Reply 35 of 47
    cloudguy said:
     
    With all due respect why do you believe that this is even possible? As I have stated numerous times, the idea that ARM is inherently superior to x86 was wishful thinking. If it were true, ARM would have more than 3% of the server market. As I have also stated, most of the benchmarking was skewed: it only compared the M1 to the Intel chips that it replaced in macOS devices. Those were mostly 2 and 4 core "mobile" chips. They were also outdated chips: 9th and 10th gen. There were already 11th gen Intel chips on the market when the M1 Macs were introduced.
    You can design any CPU with fixed-length instructions to be much wider than an x86 CPU due to lower decoder complexity.

    M1 has 4 high performance Firestorm and 4 high efficiency Icestorm cores - it was designed for the low-end MacBook Air (fanless) and 13" MacBook Pro models as part of their annual spec bump.

    Rumor has it the M1x slated for later this year will have 8-16 Firestorm cores (depending on binning) and will be targeted at machines like the 14" and 16" MacBook Pros and possibly the low end iMac (and maybe a high end Mac Mini).

    In 2008, Apple acquired PA Semi and worked with cash-strapped Intrinsity and Samsung to produce a FastCore Cortex-A8; the frenemies famously split, and Apple used their IP and Imagination's PowerVR to create the A4, while Samsung took their tech to produce the Exynos 3. Apple acquired Intrinsity and continued to hire engineering talent from IBM's Cell and XCPU design teams, and hired Johny Srouji from IBM, who worked on the POWER7 line, to direct the effort.

    This divergence from standard ARM designs continued as Apple nurtured and built their Silicon Design Team (capitalized out of respect) for a decade, ignoring stock ARM cores and building their own architecture, improving and optimizing it year by year.

    Whereas other ARM processor makers like Qualcomm and Samsung now pretty much use standard ARM-designed cores, Apple has their own designs and architecture and has greatly expanded their processor acumen to the point where the Firestorm cores in the A14 and M1 are the most sophisticated processors in the world: an eight-wide design with a 690-instruction execution queue, a massive reorder buffer, and the arithmetic units to back it up, which means its out-of-order execution unit can execute up to eight instructions simultaneously.

    x86 processor makers are hampered by the CISC design and a variable instruction length. This means that at most they can produce a three-wide general design, and even for that the decoder has to be fiendishly clever, as it has to guess where one instruction ends and the next begins.
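
    The decode-width point can be sketched in a toy model (purely illustrative Python, not a real ISA): with fixed-width instructions the next N instruction boundaries are known immediately, while with variable lengths each boundary depends on having decoded the previous instruction.

```python
# Toy model of the decode problem: finding instruction boundaries.

def fixed_width_boundaries(start, count, width=4):
    # Fixed width: all offsets are pure arithmetic, computable in parallel.
    return [start + i * width for i in range(count)]

def variable_width_boundaries(stream, start, count):
    # Variable width: serial dependency -- each instruction's length byte
    # must be read before the next instruction's start is known.
    offsets, pos = [], start
    for _ in range(count):
        offsets.append(pos)
        pos += stream[pos]  # first byte encodes this instruction's length
    return offsets

print(fixed_width_boundaries(0, 8))  # [0, 4, 8, 12, 16, 20, 24, 28]

stream = [2, 0, 5, 0, 0, 0, 0, 1, 3, 0, 0]
print(variable_width_boundaries(stream, 0, 4))  # [0, 2, 7, 8]
```

    Real x86 decoders work around this by speculatively decoding at many byte offsets at once, which costs power; a fixed-width ISA simply doesn't have the problem.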

    There's a problem shared with x86-64 processor makers and Windows - they never met an instruction or feature they didn't like. What happens then is you get a build-up of crud that no one uses, but it still consumes energy and engineering time to keep working.

    AMD can get better single-core speed by pushing up clocks (and dealing with the exponentially increased heat, though chiplets are probably much harder to cool), and Intel by reducing the number of cores (the top-end 10-core, 20-thread 10900K actually had to be shaved to achieve enough surface area to cool the chip, so at 14nm it had reached the limits of physics). Both run so hot they are soon in danger of running into Moore's Wall.

    Apple OTOH ruthlessly pares underused or unoptimizable features.

    When Apple determined that ARMv7 (32-bit ARM) was unoptimizable, they wrote it out of iOS and removed those logic blocks from their CPUs within two years, repurposing the silicon real estate for more productive things. Intel, AMD, and yes, even Qualcomm couldn't do that in a decade.

    Apple continues that with everything: not enough people using Force Touch? Deprecate it, remove it from the hardware, and replace it with Haptic Touch. Gone.

    Here's another secret of efficiency - make it a goal. Last year on the A13 Bionic used in the iPhone 11s, the Apple Silicon Team introduced hundreds of voltage domains so they could turn off parts of the chip not in use. Following their annual cadence, they increased the speed of the Lightning high performance and the Thunder high efficiency cores by 20% despite no change in the 7nm mask size. As an aside, they increased the speed of matrix multiplication and division by six times (used in machine learning).
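
    The voltage-domain trick comes down to the classic dynamic-power relationship P ≈ C·V²·f: gating an idle domain zeroes its switching activity, and dropping voltage saves power quadratically. A purely illustrative sketch (none of these numbers are actual A13 figures):

```python
# Classic dynamic switching power: P = C * V^2 * f.
# All values below are hypothetical, chosen only to show the scaling.
def dynamic_power(capacitance_nf, voltage, freq_ghz):
    """Dynamic power in watts (nF and GHz factors of 1e-9 and 1e9 cancel)."""
    return capacitance_nf * voltage ** 2 * freq_ghz

full        = dynamic_power(1.0, 1.0, 2.66)
gated       = dynamic_power(0.5, 1.0, 2.66)  # half the domains powered off
undervolted = dynamic_power(1.0, 0.9, 2.66)  # 10% lower voltage, same clock

print(round(gated / full, 2))        # 0.5  -- linear in active capacitance
print(round(undervolted / full, 2))  # 0.81 -- quadratic in voltage
```

    Fine-grained power domains let the chip exploit the linear term continuously, turning off whatever isn't in use at that instant.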

    This year they increased the speed of the Firestorm high performance and Icestorm high efficiency cores by another 20% while dropping the mask size from 7nm to 5nm. That's a hell of a compounding rate and explains how they got to where they are. Rumor has it they've bought all the 3nm capacity from TSMC for the A16 (and probably M2) next year.

    Wintel fans would deny the efficacy of the A series processors and say they were mobile chips, as if they used slower silicon with wheels on the bottom or more sluggish electrons.

    What they were were high-efficiency chips, passively cooled and living in a glass sandwich. Move them to an environment where they can breathe more easily, boost the clocks a tad, and they become a raging beast.

    People say that the other processor makers will catch up in a couple of years, but that's really tough to see. Apple Silicon is the culmination of a decade of intense processor design financed by a company with very deep pockets - who is fully cognizant of the competitive advantage Apple Silicon affords. Here's an article in Anandtech comparing the Firestorm cores to the competing ARM and x86 cores. It's very readable for an article of its ilk.

    Of course these are the Firestorm cores used in the A14, and are not as performant as the cores in the M1 due to the M1's higher 3.2 ghz clock speed.

    tmay techconc saarek hypoluxa watto_cobra Detnator
  • Reply 36 of 47
    wizard69 Posts: 13,377 member
    saarek said:
    cloudguy said:
    saarek said:
    I hope this isn’t true.

    I’m hoping for a true performance king that humiliates AMD & Intel and sets the bar in terms of performance. Primarily GPU based upgrades over the M1 wouldn’t be the big step up in the true “Pro” Macs over the current M1 that I was hoping for.
    With all due respect why do you believe that this is even possible? As I have stated numerous times, the idea that ARM is inherently superior to x86 was wishful thinking. If it were true, ARM would have more than 3% of the server market. As I have also stated, most of the benchmarking was skewed: it only compared the M1 to the Intel chips that it replaced in macOS devices. Those were mostly 2 and 4 core "mobile" chips. They were also outdated chips: 9th and 10th gen. There were already 11th gen Intel chips on the market when the M1 Macs were introduced. 

    So just because an M1 with 8 cores - granted, only 4 of them performance - crushed a 10th gen 2 core mobile CPU doesn't mean that an M1X is going to crush an 11th gen Intel Core i7 or Intel Core i9.

    Step away from the guys on the "mainstream" tech sites who write all their articles and blog posts on MacBooks and iPad Pros and update Twitter/Facebook/Instagram from their iPhones. Instead go to Anandtech and Ars Technica. Those guys love their MacBooks, iPads and iPhones too, but their articles are technical, not "I am going to cheerlead for this product because I use it." Or you can investigate Gordon Mah Ung, the main Windows guy for PC World. There you will see that the latest Intel chips rival the M1 in single core score, and that the Core i7 and Core i9 beat it handily on multicore score. You will also see that the latest desktop AMD chips beat the M1 in both single core score (slightly) and multicore score (handily).

    When I tried to point this out to the MacBook Air fans at some of the other tech sites - including one who literally told me that their tech site wasn't going to pay attention to Linux because "Linux is hard" !?!? - they would move the goalposts from "power" to "power per watt." One reviewer on that same site stated that the benchmarks Intel used to defend itself from reviewers like him were flawed because they didn't factor in the better user experience on the M1 Macs. And realize that this particular reviewer is the one assigned by the site to be their main Windows guy! (Meaning machines that he obviously doesn't have any use for outside of being stuck with dealing with them for his job.) But even the main Windows guy at this alleged tech site didn't point out that the M1 Macs were being compared to 9th gen and 10th gen Intel chips with only 2 and 4 cores.

    If you are disappointed now, wait until the 12th gen Intel chips come out in September. We have only heard details about 2 so far:

    10nm Intel process, which is roughly equivalent to an 8nm TSMC process, because just as TSMC's foundries achieve better transistor density than Samsung's, Intel's do over TSMC's
    14 core chip with 6 performance multithreaded cores and 8 efficiency single threaded cores in a big.LITTLE architecture (for the first time ever with x86, Intel or AMD)
    16 core chip with 8 performance multithreaded/8 efficiency single threaded in big.LITTLE

    And a couple months later, AMD's Zen 4 chips will come out. They won't have big.LITTLE architecture, but they will be on the same 5nm process as the M1 and M1X, from the same foundry. And AMD Zen X performance chips start at 8 performance cores - where Intel's start at 6 - and go up to 16. So yes, in a few months the whole "ARM is better than outdated x86" narrative, which was based on terrible comparisons that never would have been made had they been skewed unfavorably towards Apple, will be forgotten and quietly discarded. And we will be back to how Macs have better design, UX/UI, user experience, longevity, support, etc. than Wintel. Just like before.
    Well, for a start the M1 matches or exceeds the 10th Gen Intel i7’s in most areas, as you noted. It’s also Apple’s entry level offering and one would assume that Apple would not put their best chips in their cheapest machine.

    But, & this is a big but, if the next computers from Apple are simply slightly faster with better graphics that’d be a HUGE let down.

    Apple will be well aware of Intel’s roadmap, I doubt they’d have been so shortsighted as to release a set of chips that won’t match or exceed what Intel offers.

    We will have to see, but just over a year ago people were saying that Apple would never move the Mac to Apple Silicon because it’d never match Intel X86 & they’ve had a rude awakening.
    I'm not sure what all the hand-wringing is about here. Apple's chips do demolish anything Intel has to offer; all one has to do is look at the thermals to see that.

    So why is a laptop chip with better graphics and more cores a letdown? We are also assuming more ports and other features will come with the SoC. Honestly, what were you expecting?

    Yes, Apple is very aware of Intel's roadmap, but that has little to do with what I would expect the M1X to be. That is a chip designed for laptops. It looks like Apple is only going to put it in low-end iMacs, so there is a desktop version coming.

    As for the people saying never, I never understood that either. The fact is Apple has done wonders for the laptop market with the M1 chip and is fielding the best fanless laptop that exists at the moment. Even so, I'm not convinced that Apple will drop AMD for GPUs just yet. They are gone on low-end equipment, but I can see future iMacs and Mac Pros with AMD chips. Well, possibly; Apple did hire a lot of AMD GPU engineers, so who knows.

    In any event, if true, this is a 35-watt processor; that is huge. It would be able to power laptops and effectively compete with Intel laptops running at 45 to 65 watts. Even better would be the possibility of a turbo mode or even a clock-rate increase. People need to remember that early engineering samples often run at lower clocks than the shipping processors.
    techconc watto_cobra
  • Reply 37 of 47
    wizard69 Posts: 13,377 member
    The more I read of this thread, the more I think people are getting excited over nothing. For one, even if the cores run at the same clock rate, that does not mean they can't be faster. If Apple can increase the IPC, we get an advantage. Further, if Apple goes to DDR5 or another RAM technology, we could see even better performance.
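
    The IPC point can be made concrete with the usual throughput identity, performance ≈ cores × IPC × clock. All numbers below are hypothetical, purely to show the relationship:

```python
# Throughput in billions of instructions/sec, assuming ideal scaling.
# The core count, IPC, and clock values are hypothetical examples.
def throughput(cores, ipc, clock_ghz):
    return cores * ipc * clock_ghz

base     = throughput(cores=8, ipc=6.0, clock_ghz=3.2)
ipc_bump = throughput(cores=8, ipc=7.2, clock_ghz=3.2)  # +20% IPC, same clock

print(round(ipc_bump / base, 2))  # 1.2 -- a 20% gain with no clock increase
```

    This is why "same clock rate" in a leak says little by itself: any of the three factors moves the product.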

    A better RAM subsystem is almost a certainty, because at some point adding more cores to the GPU will be hampered by slow memory. One just needs to look at the effort AMD and Nvidia put into their memory subsystems to understand the importance of bandwidth. So I can't see Apple adding all of these cores without addressing memory performance in some manner. DDR5, HBM, and other technologies could all be part of the new SoC. You might say HBM is expensive, and yeah, it isn't cheap, but on the other hand Apple would be putting these chips into high-end machines that are seeing a cost savings from dumping Intel.

    People forget that there is much more in Apple Silicon, all of which might see improvements. The video CODEC blocks, for example, could benefit. The Neural Engine and other AI-related features certainly need improvement. Then there is the issue of I/O, which of course needs improvement.

    So all this crying about the new machines not being enough seems to me to be a bit silly, as this is only a rumor. Combine that with a bunch of technology about to hit the market and you will have a hard time guessing what Apple will actually ship. One of those technologies is DDR5, which is pretty much ready to go, just waiting on processors to implement the interface.

    My only concern is the lack of a lot of CPU cores. There are a lot of workloads that really do benefit from all the cores you can throw at the job. In a MacBook Pro I'd rather see a processor with 16 high-performance cores. This is mainly due to the great success AMD is seeing with its high-core-count multithreaded chips. Sometimes performance is the only thing that matters.
    watto_cobra
  • Reply 38 of 47
    saarek said:
    I hope this isn’t true.

    I’m hoping for a true performance king that humiliates AMD & Intel and sets the bar in terms of performance. Primarily GPU based upgrades over the M1 wouldn’t be the big step up in the true “Pro” Macs over the current M1 that I was hoping for.
    Well, there's a limit on what you can put into a 14-inch chassis without overheating, throttling, increased fan noise, or all of the above. It's just physics. Everybody is losing their minds about a super incredible M1X, but while the Apple Silicon architecture is extremely efficient, it can't do magic. Better, more efficient cores will come with the M2, M3, etc., but as of right now, the M1 performance cores are what we have.

    I think we are going to see three SoCs this year:
     - M1:           4E + 4P cores + 7/8 GPU cores -> 15 watts.
     - M1T or S: 4E + 6/8P cores + 10/12 GPU cores -> 25/30 watts -> 14-inch MacBook Pro, some iMacs, Mac mini.
     - M1X:         4E + 12P cores + 16 GPU cores -> 45/50 watts -> 16-inch MacBook Pro, iMacs.

    I'm waiting for the rumoured 14-inch MacBook Pro to replace my current 13-inch and simply speaking I'd be more than happy with an M1Whatever with 6 performance cores. It'd theoretically have more CPU power than every Intel iMac except for the 18-core iMac Pro, it'd be extremely light and have great battery life. What's not to love?

    If you want to have more power, well you'll need to start compromising in regards of size and weight of the laptop, for those scenarios we have the 16-inch MacBook Pro.
    watto_cobra
  • Reply 39 of 47
    Intel’s days are numbered. Also companies like Dell, which I give 3 years before declaring bankruptcy (personal opinion here). Maybe Microsoft, with its combined software and hardware, but they’d have to scrap the crap they have and start writing decent software, which is a stretch, and then try to make leaps-and-bounds progress to catch up to Apple hardware- and software-wise. 
    watto_cobra
  • Reply 40 of 47
    cloudguy said:
    saarek said:
    I hope this isn’t true.

    I’m hoping for a true performance king that humiliates AMD & Intel and sets the bar in terms of performance. Primarily GPU based upgrades over the M1 wouldn’t be the big step up in the true “Pro” Macs over the current M1 that I was hoping for.
    With all due respect why do you believe that this is even possible? As I have stated numerous times, the idea that ARM is inherently superior to x86 was wishful thinking. 
    The Firestorm core is demonstrably superior to any Intel core, by far. It is absolutely possible for Apple to make a CPU with more than 4 of them.

    And yes, ARM is inherently superior. The fixed-length instructions make it inherently more efficient to do out-of-order execution. Intel literally has to resort to decoding each instruction at multiple offsets at once in case it happens to be a long instruction, which means that most of the power spent doing that is wasted. So compared to Apple Silicon, Intel can't look as many instructions ahead (slower) and also wastes power (hotter). That's a terrible inherent flaw in x86. 
    techconc watto_cobra tmay