Why the Mac's migration to Apple Silicon is bigger than ARM

Comments

  • Reply 81 of 123
    It's a different world from 2006, when it was most important to run Windows natively because there weren't many Mac apps. Now there will be more useful Mac apps than there have ever been, thanks to iOS (and iPadOS). And the largest user base ever, thanks to iPhone and iPad owners. So if your software is Windows-only now, the question to ask is, "do they have iOS developers making iOS or iPadOS versions now?" Because if they do, you will have a native Mac version.

    After 15 years with Intel, every company that was going to make an x86 Mac version already does, and the vast majority of these will port their stuff. Many other companies are still Windows-only, but there are companies with Windows and iOS (but not x86 Mac) products: they already have iOS developers. So Apple's business decision is a bet that these "Windows and iOS" companies will assign their existing teams to expand their iOS products into the full Mac feature set. For example, Autodesk Revit doesn't run on a Mac, so engineering and architecture firms have to use Windows, but there is an iOS (iPadOS) version for field use. (Revit is the big-boy-pants evolution of AutoCAD, btw.)

    ...and don't forget that Microsoft isn't oblivious to reality anymore, and is seeing these same Intel performance issues. The ARM version of Windows sucks right now, but they have 2-5 years to improve it so that it runs great natively or virtualized on a Mac. Microsoft is more than happy to sell you a Windows license for your Apple silicon; they don't care about Intel, they just care about selling their own stuff. I'd also expect ARM Windows to imitate Rosetta, so that ARM Windows can run x86 Windows apps. There's a performance hit, but even non-Apple ARM is looking pretty good vs. Intel nowadays, and there is no indication that Intel has a new architectural path forward, so Microsoft will need to start thinking about this same switch (they just can't do it as nimbly as Apple, due to their existing code and customer base).
  • Reply 82 of 123
    melgross Posts: 33,299 member
    lmasanti said:
    How would they be named?
    The DTK uses an A12z chip because it is the most powerful chip they had shown us.

    But for portables and desktops, they could develop different combinations and numbers of cores… and they would be called 'A's.
    Maybe 'B' for Mac(B)ooks, or 'D' for (D)esktop?
    They stated during the presentation that there would be a special line of chips for the Mac. What they will be called is another question.
  • Reply 83 of 123
    melgross Posts: 33,299 member

    rain22 said:
    rain22 said:
    “ but it suggests that new Apple Silicon Macs will not be struggling to keep up with the graphics on Intel Macs.”

    That would be nice - but seems extremely dependent on programs being optimized. The anemic library of titles will probably shrink even further - at least until there is market saturation. 

    Mac users will be stuck using dumbed down iOS software for a long time I feel. 
    After all - This is the motivation isn’t it? Eventually have just 1 OS that can be modded to facilitate the device. 
    iOS and macOS share the same core and were designed to be processor independent. This is why existing apps can be modified and recompiled for ARM in days. I suspect the performance will be far better than you think. That Tomb Raider demo spoke volumes. A brand new 2020 13" 10th-gen quad-core i7 can't even do that with native code.

    I hope you are right. We got burned on a bunch of G5s during the last switch. Rosetta didn't work half the time, there were crashes, software wasn't supported, and Apple dropped support for its own suite almost overnight. Peripherals became junk as no drivers were updated, and Apple put the whole onus on developers and manufacturers. We ended up tossing the G5s at a big loss and getting the new Mac Pros. At least then we could upgrade our own video cards.
    A lot of difference between then and now. Apple has been running iOS (macOS) on ARM for a decade now, and the operating system has been designed to create a layer of abstraction from the hardware. This is why many apps will only need to be recompiled, or can run in translation. That Tomb Raider demo was an x86 Mac app running without modification in translation via Rosetta 2. The old Rosetta couldn't do anything like that, since apps were much closer to the hardware in those days and Metal didn't exist yet.

    As for Intel updates, I assume any app written natively for ARM can be recompiled for Intel, which means developers will easily be able to support both platforms. That's the beauty of the abstraction layer. The only question then becomes how long Apple will support new versions of macOS on Intel, since that occurs at the hardware level. I suspect it depends on the install base and their traditional obsolete/vintage status for hardware: 5 years of full support and a minimum of 2 additional years of security updates.

    5-7 years is just about what I expect to get out of a device. Come this fall, my 7+ year old 2013 15" MBP will no longer be supported by the current operating system, which means it's down to security-update status. I got my money's worth, and the resale value of these machines is likely to remain high since Boot Camp is gone forever.
    I doubt they've been running it for a decade. The early chips were nowhere near powerful enough for OS X. I'm betting they began to run it seriously with the A7, the dual-core 64-bit SoC that Schiller called "desktop class." It's possible they played around with translating instructions and code before then, but that's likely when they really believed it could be done.
  • Reply 84 of 123
    melgross Posts: 33,299 member
    mattinoz said:
    mjtomlin said:
    crowley said:
    I wonder if the higher end Macs will have A chips with integrated GPUs, or if there will be non-GPU variants where Apple have a dedicated separate GPU, presumably from AMD.
    I'm guessing we may have seen the last of AMD/Nvidia in any future Mac. The ARM GPU cores are no less impressive or scalable than the CPU cores. That's another significant cost savings for Apple. It also ensures that the GPU is optimized for Metal. This was the point of Metal from the beginning, and why OpenGL was sent to the dustbin. Apple has been planning this move for almost a decade, and I think it runs much deeper and faster than most people suspect: no Intel or AMD CPUs, no AMD/Nvidia GPUs, and soon no Qualcomm.

    The end of dependence on competitors or a single source for parts. Complete design and manufacturing freedom.

    I suspect we'll also see the price drop as times goes on since they will be able to maintain margins and expand their base at the same time; maybe -$100 on the low end and $100s of dollars up the chain to the higher spec'd hardware.

    These new Macs will still support discrete GPUs. There’s no reason to think they won’t.

    Would not be at all surprised if the developer kits have a discrete GPU in them, given they were supposedly driving the XDR display.

    I would be surprised if they do. This could be one reason Apple had that extra graphics core that no one knew about when the chip first came out as the A12X the year before. It could be running at a slightly higher clock, though. I remember that in the iPad presentation Apple said the SoC was re-engineered for higher efficiency. That also seems to have gotten past most people.

    It seems to me that the A12z was specifically designed for Apple's Mac development efforts and to put in these developer machines; as a byproduct, they put it into the latest iPad Pro, without unlocking a number of features designed for the Mac.

    I now understand why we got this A12 based chip this year for our iPads rather than an A13 based chip. It’s because it took Apple two years to come up with this, so it’s a two year old design. It now makes a lot of sense.
    edited June 2020
  • Reply 85 of 123
    melgross Posts: 33,299 member

    crowley said:
    I wonder if the higher end Macs will have A chips with integrated GPUs, or if there will be non-GPU variants where Apple have a dedicated separate GPU, presumably from AMD.
    I'm guessing we may have seen the last of AMD/Nvidia in any future Mac. The ARM GPU cores are no less impressive or scalable than the CPU cores. That's another significant cost savings for Apple. It also ensures that the GPU is optimized for Metal. This was the point of Metal from the beginning, and why OpenGL was sent to the dustbin. Apple has been planning this move for almost a decade, and I think it runs much deeper and faster than most people suspect: no Intel or AMD CPUs, no AMD/Nvidia GPUs, and soon no Qualcomm.
    I'm wondering if any of the companies working on 3D apps/renderers currently being ported to Metal/AMD knew that they were actually working towards Metal/Apple GPU. 
    Well, Nvidia had pretty strong reasons for refusing to support Apple Metal, as we wrote about:

    It's all business. Nvidia came up with a proprietary system that earned them a lot of money, and they don't want to dilute that income stream by supporting alternate technologies. They didn't support Apple's OpenCL either, which became a standard. AMD did, and that's the main reason Apple moved to AMD. They were more amenable to Apple's needs.
  • Reply 86 of 123
    Nice to see a modicum of intelligence in this comment section. I know things have shifted fundamentally on the Mac when the biggest argument against it comes from naysayers clinging to the last vestige of hope that developers won't develop Mac games. Although I and many other pro users couldn't care less about games, the argument that AAA games will not flourish on the Mac is based on some seriously flawed logic.

    Apple has provided so many outstanding developer tools, and emulation even on the lowly A12z is far beyond what many of us imagined. Emulation on Apple's upcoming Z chips may soon outclass Intel's chips running native apps. On top of that, AAA game developers will always target the upper limits of a platform, and they will do the same here by writing games specifically designed to take advantage of the extra power and functionality provided by insanely fast Apple Z chips. Build it and the gamers will come, just for the bragging rights alone.

    Sorry, naysayers: the writing is on the wall.
    edited June 2020
  • Reply 87 of 123
    fastasleep Posts: 6,146 member
    mcdave said:
    mcdave said:
    rain22 said:
    Mac users will be stuck using dumbed down iOS software for a long time I feel. 
    After all - This is the motivation isn’t it? Eventually have just 1 OS that can be modded to facilitate the device. 
    I certainly did not get that impression from what we saw. Not sure why you did. 
    It was the spacing of the toolbar icons, very touchable.
    And what at all does that have to do with being "stuck using dumbed down iOS software"?
    I was going for the second part of the OP's comment, which seems to be happening. The UI convergence between iPadOS/macOS is pretty clear, with sidebars/toolbars added to the former and spacing/mouseover focus added to the latter.
    I don’t think it’s dumbed down iOS apps we’ll see on the Mac but rich iPadOS/macOS apps feeding each other - absolutely.
    Well, I can agree with that. rain22's comment was just based on FUD, as there's no indication anything is being dumbed down on either platform.
  • Reply 88 of 123
    fastasleep Posts: 6,146 member


    crowley said:
    I wonder if the higher end Macs will have A chips with integrated GPUs, or if there will be non-GPU variants where Apple have a dedicated separate GPU, presumably from AMD.
    I'm guessing we may have seen the last of AMD/Nvidia in any future Mac. The ARM GPU cores are no less impressive or scalable than the CPU cores. That's another significant cost savings for Apple. It also ensures that the GPU is optimized for Metal. This was the point of Metal from the beginning, and why OpenGL was sent to the dustbin. Apple has been planning this move for almost a decade, and I think it runs much deeper and faster than most people suspect: no Intel or AMD CPUs, no AMD/Nvidia GPUs, and soon no Qualcomm.
    I'm wondering if any of the companies working on 3D apps/renderers currently being ported to Metal/AMD knew that they were actually working towards Metal/Apple GPU. 
    Well, Nvidia had pretty strong reasons for refusing to support Apple Metal, as we wrote about:

    I didn't even mention Nvidia. I'm talking specifically about renderers that have been CUDA-only and are being ported to AMD/Metal, like Redshift and Octane, which were announced last year alongside the Mac Pro, and whether they knew about Apple's GPU plans at that point or not.
  • Reply 89 of 123
    mcdave Posts: 1,919 member
    crowley said:
    crowley said:
    ErlendurK said:
    Apple doesn't use ARM cores.  They have a license for the instruction set that they use to develop their own cores, so technically they aren't ARM cores, but Apple cores that use the ARM instruction set.
    That's not a useful distinction; ARM aren't a manufacturer of any processors, so any processors that use the ARM instruction set are going to get called ARM cores for convenience.  Insisting that they be referred to as "Apple cores that are using ARM instruction set" is needlessly long winded pedantry.

    Say what?

    I'm surprised the article referred to Apple processors as using "ARM cores". That's what I expect to see on Android sites where they try to make it appear Apple processors are nothing special and are just ARM cores with a "few tweaks". They are far from it.

    While it's true ARM doesn't make processors, they sure as hell design processor cores. They even give them names (A55, A76 or similar). These cores are what Qualcomm, Huawei, Samsung and others use when they build processors, which saves them a massive amount of work that would be required to design their own custom cores (micro-architecture).

    @ErlendurK is correct. Apple has a license to use the ARM ISA (instruction set architecture) and then builds 100% custom designed cores that run that instruction set. It's the same as AMD building processors that are also 100% custom, but run the x86 ISA. I never hear anyone claim that AMD processors are using "Intel cores" so why should we do it with Apple/ARM?

    It's not pedantic to tell the truth.
    Intel is not an instruction set, x86 and x64 are.  AMD make x64 cores.  ARM is an instruction set, therefore the Apple Silicon can be referred to as ARM cores.

    You know exactly what is being referred to, so this is needless pedantry.
    ARM is not an ISA; ARM is a company, and AArch64 is the ISA. Apple's silicon extends way beyond the CPU. The OP is correct: you wouldn't refer to an AMD APU as "Intel."
  • Reply 90 of 123
    Riker Posts: 7 member
    rob53 said:
    ...but nothing about which port of Debian was used.

    The Platform State of the Union mentioned the VM was the ARM distribution of Debian.
  • Reply 91 of 123
    Riker Posts: 7 member
    crowley said:
    I wonder if the higher end Macs will have A chips with integrated GPUs, or if there will be non-GPU variants where Apple have a dedicated separate GPU, presumably from AMD.

    You'll see what we have now.  Models which have only the on-CPU GPU and those which have that AND a separate, higher performance dedicated GPU.
  • Reply 92 of 123
    Riker Posts: 7 member
    Rayz2016 said:
    crowley said:
    ErlendurK said:
    Apple doesn't use ARM cores.  They have a license for the instruction set that they use to develop their own cores, so technically they aren't ARM cores, but Apple cores that use the ARM instruction set.
    That's not a useful distinction; ARM aren't a manufacturer of any processors, so any processors that use the ARM instruction set are going to get called ARM cores for convenience.  Insisting that they be referred to as "Apple cores that are using ARM instruction set" is needlessly long winded pedantry.
    Nope. 

    ARM doesn't manufacture processors; it licenses reference designs that manufacturers build upon. So that's the basic design, including the instruction set. Apple basically designs its own chips and uses the instruction set, which is why no one else can match them.

    Article makes a good point though: the processor is just a small part of what’s happening here.  Folk seem to be worried that Apple can’t match Intel chips, but they’re not relying on just a basic ARM chip to do the work. 

    So, you don't use an ARM compiler to create binaries?  You don't pick armv7, etc. as the target in Xcode?
  • Reply 93 of 123
    joeljrichards Posts: 23 unconfirmed, member
    darthw said:
    Will it be possible, eventually, for Apple to make faster SoCs than the fastest, most powerful Intel Xeon chips?
    Sure. Fujitsu and Ampere do it with custom ARM cores. The reference ARM N1 core looks like a contender. The (current) fastest computer on the planet runs on Fujitsu's ARM-based chips.

    My understanding is that Apple has the best single-threaded performance of ANY implementation of the ARMv8 architecture, and until this week it was a mobile-only processor! Even if they only clock it moderately higher, the tech to scale it up to 40+ cores is already accessible to ARM licensees. Imagine 60-80 Lightning cores running on a single SoC. How many Xeon chips would be competitive with that?

    I hear a lot of moaning about "Apple can't compete in the high end" if they transition. There's just no evidence for that. I'm less sure how they'll do it (chiplets, licensed HPC IP, etc.), but they certainly CAN.
  • Reply 94 of 123
    joeljrichards Posts: 23 unconfirmed, member
    sflocal said:
    narwhal said:
    rain22 said:
    Mac users will be stuck using dumbed down iOS software for a long time I feel.  
    Here's news for you, Rain. Very few new apps are written for macOS (or Windows). Apps today are developed for the web, iOS and Android.

    @narwhal: It may look that way, but a lot more programs are x86/x64. The rest of the world decided to settle on x86; see the console gaming industry. Most LOB (line-of-business) software is x86. Real gaming on the Mac is dead. Steam will not run on new ARM Macs, so all those games that people bought on Steam will be worthless to those who own these new machines. Also, when it comes to programs, there is Photoshop for iPad, which is what your ARM Mac will run, and then there is REAL Photoshop with all of the x86/x64 plugins that people have made. I wonder if they will port their plugins to the new ARM Photoshop. Will ARM Photoshop even run plugins? The same is true for Office: will you get iPad Office on ARM, or the REAL Office that you get now? And this is before Apple throws you out in the cold by stripping Rosetta 2 away, as they did with the original Rosetta in Mac OS X 10.7, I believe. Let's also not forget that wonderful smooth transition to 64-bit-only apps that nearly killed Steam on the Mac. And how nicely will Apple play with developers and users once they have hegemony? Apple silicon is the ultimate lock-in. I can see Apple using Gatekeeper to make their platforms only use the Mac App Store. That means your new $899-$10,000 Mac is a glorified locked-in iPad with a keyboard and mouse. At least with Intel Macs you can run CrossOver and Wine to get older 32-bit versions of Windows apps. Will you get that on your ARM Mac? Let's not forget this is the same company that will not let you change your default web browser or maps app in iOS because they want total control of the user experience. I bet this will be coming to a Mac near you in macOS 11.3 or whatever they are going to call it. This is a sad day for the computing industry indeed, if the market falls for Apple and their lock-in scam.


    Funny... I heard similar things when Apple went off the PowerPC architecture to Intel.  But hey... "Apple is doomed" right?

    I'm a heavy photoshop/lightroom user and while I don't use the iPad version, many friends that do have said great things about it.  Many now primarily use the iPad Pro for their photoshop work. 

    That says something.

    As I understand it, Adobe completely rewrote Photoshop/Lightroom for the iPad, and it is a huge improvement in performance compared to macOS.  My primary reason for replacing my 2015 iMac is to upgrade to a new/faster machine in order to use Lightroom, which runs like crap on my machine even though it's a quad-core i7 with 64GB of RAM.  It's crazy fast for everything else except this.  I hate that reason, as there are no real alternatives (for me) among other photo-editing platforms.

    If performance is as good on macOS (ARM) and not just marketing speak, I'll upgrade to it, as will many others.

    Personally, I wish Apple had never discontinued Aperture.  They have FCP, which is for many a market standard for video; they could have done/kept the same thing for us photography users, as Photos is nowhere near what Aperture could do.


    Sigh. I also miss Aperture. It had features that still aren't as well implemented in modern DAM systems. I just don't miss the RAW-photo render engine. But, yes, I agree with everything you're saying.

    This transition will be hard for some users, no doubt, but I think it will actually go much more smoothly than either 68k-PPC or PPC-Intel. For the first time, Apple has the resources to truly maximize their hardware-software synthesis. Companies like SGI and Sun did some amazing things in the '90s but couldn't compete once Intel, with their massive resources, caught up in terms of raw power. Well, the Apple of today has that old-school approach and resources that dwarf even Intel.

    A lot of people don't like the iOS/iPhone experience, and that's OK, but I think it's an accepted fact that iPhones lead in performance. We will finally get that level of optimization on the Mac. I think this is a huge boost to a stagnating industry. In another 15-20 years I could see this creating some unexpected problems, but who can see that far ahead? 15-20 years ago, would anyone have believed you if you said Apple would have a higher market cap than MS? That TSMC would be out-fabbing Intel?

  • Reply 95 of 123
    joeljrichards Posts: 23 unconfirmed, member

    darthw said:
    Will it be possible, eventually, for Apple to make faster SoCs than the fastest, most powerful Intel Xeon chips?
    Yes. I just read that the new Japanese supercomputer, which is the fastest in the world, is built using ARM chips.
    You can put enough of any CPU cores on a chip and have something with more computing power than a Xeon, but there are details to work out there.  You can also wire up enough chips to be more powerful overall.

    Your answer didn't answer anything in enough detail to be more than an apples-to-orangutan comparison, and doesn't answer the original question.

    Can Apple eventually make their own SoCs to beat Intel Xeons? There are reasons it could go either way:

    That the ARM ISA is easier to decode is in its favor.
    Intel's x86-64 ISA is more compact, due to variable-length instructions that reduce the memory bandwidth required for a given number of instructions that achieve the same thing.

    We shall see, but at the same process node, it could go either way.
    ...and if you keep moving the goalposts you can win almost any argument. What is your criterion for "beating" Xeons? Performance per watt? Yes, you could make a case they do that now. A particular benchmark? Workflow?

    The tired RISC vs. CISC debate: both ARM and x86-64 are essentially RISC-like by the time instructions run through a cycle; Intel and AMD translate them into micro-ops. Yes, ARM cores tend to have more cache, Apple's especially, but so what? It isn't like Xeons don't use a lot of cache too. And if it works, it works. I think ARM is misunderstood by a lot of people who see it only as an embedded/mobile ISA. Read up on ARM SIMD. For example: https://blog.cloudflare.com/neon-is-the-new-black/ https://www.linleygroup.com/mpr/article.php?id=11753

    Same node: well, that's up to Intel. I'm not going to wait for them to catch up to make comparisons. I care about what happens in the real world. If that's not "fair" to Intel, the joke's on them.
  • Reply 96 of 123
    gremlin Posts: 61 member
    The new market-friendly name for Apple silicon has to be AppleCore 🤔
  • Reply 97 of 123
    melgross Posts: 33,299 member
    darthw said:
    Will it be possible, eventually, for Apple to make faster SoCs than the fastest, most powerful Intel Xeon chips?
    Sure. Fujitsu and Ampere do it with custom ARM cores. The reference ARM N1 core looks like a contender. The (current) fastest computer on the planet runs on Fujitsu's ARM-based chips.

    My understanding is that Apple has the best single-threaded performance of ANY implementation of the ARMv8 architecture, and until this week it was a mobile-only processor! Even if they only clock it moderately higher, the tech to scale it up to 40+ cores is already accessible to ARM licensees. Imagine 60-80 Lightning cores running on a single SoC. How many Xeon chips would be competitive with that?

    I hear a lot of moaning about "Apple can't compete in the high end" if they transition. There's just no evidence for that. I'm less sure how they'll do it (chiplets, licensed HPC IP, etc.), but they certainly CAN.
    Of course there’s no evidence. I don’t know what people are thinking. Either they didn’t watch the keynote, or just don’t listen carefully. Apple clearly said, as I keep telling people, that they created a chip line SPECIFICALLY for the Mac. That should tell them something. And Craig said recently that the A12z will never be in a shipping product (other than the developer machines). And it’s two years old.
    edited June 2020
  • Reply 98 of 123
    gremlin said:
    The new market-friendly name for Apple silicon has to  be AppleCore 🤔
    I haven’t had time to watch the keynote, so maybe I missed something, but I hope they go with “M” series for the macOS 11+ Apple silicon!
  • Reply 99 of 123

    darthw said:
    Will it be possible, eventually, for Apple to make faster SoCs than the fastest, most powerful Intel Xeon chips?
    Yes. I just read that the new Japanese supercomputer, which is the fastest in the world, is built using ARM chips.
    You can put enough of any CPU cores on a chip and have something with more computing power than a Xeon, but there are details to work out there.  You can also wire up enough chips to be more powerful overall.

    Your answer didn't answer anything in enough detail to be more than an apples-to-orangutan comparison, and doesn't answer the original question.

    Can Apple eventually make their own SoCs to beat Intel Xeons? There are reasons it could go either way:

    That the ARM ISA is easier to decode is in its favor.
    Intel's x86-64 ISA is more compact, due to variable-length instructions that reduce the memory bandwidth required for a given number of instructions that achieve the same thing.

    We shall see, but at the same process node, it could go either way.
    ...and if you keep moving the goalposts you can win almost any argument. What is your criterion for "beating" Xeons? Performance per watt? Yes, you could make a case they do that now. A particular benchmark? Workflow?

    The tired RISC vs. CISC debate: both ARM and x86-64 are essentially RISC-like by the time instructions run through a cycle; Intel and AMD translate them into micro-ops. Yes, ARM cores tend to have more cache, Apple's especially, but so what? It isn't like Xeons don't use a lot of cache too. And if it works, it works. I think ARM is misunderstood by a lot of people who see it only as an embedded/mobile ISA. Read up on ARM SIMD. For example: https://blog.cloudflare.com/neon-is-the-new-black/ https://www.linleygroup.com/mpr/article.php?id=11753

    Same node: well, that's up to Intel. I'm not going to wait for them to catch up to make comparisons. I care about what happens in the real world. If that's not "fair" to Intel, the joke's on them.
    I didn't move any goalposts; I explained the factors in sufficient detail for those who can reason about reality to understand.

    When it comes to speed, size DOES matter, and it also matters for manufacturing costs as well.  I don't care about CISC/RISC comparisons once they're running inside the CPU core as micro-ops in their intermediate states: that's something that's translated at the decode stage, and every CPU can optimize that in an opaque manner that changes from one CPU version to the next as applicable.  In the end, how much you can cache and decode in a given amount of logic absolutely determines how large the circuitry is, which determines CPU size, which determines defect rate, and top speed within the CPU.  External to the CPU the size of instructions has a very meaningful impact as to how much main memory overhead is involved, and that's even more expensive to overcome: for a given process node, the more compact instruction encoding has an advantage.  Eventually Intel will catch up to others for their process nodes, assuming they don't run out of money to do so first: if one manufacturer can do it, eventually others will be able to do it.  At some point, we will reach the point where we can't economically keep reducing the size of the process node, and then that's where all these factors matter the most, because then every design has the same constraints: size and cost for a given amount of circuitry.  There is an upper limit as to how much cache of any type you can add before the cache speed is lower and the latency is higher, resulting in slower overall speeds, even assuming 100% perfect chip yield at a given process node.

    Your response didn't explain to any real degree how the goalposts were moved, or anything else of value.  I'll go out on a limb and suggest that neither the Intel ISA nor the ARM ISA will be the best long-term speed/size tradeoff for a given process node: there are other ones being worked on that may or may not pan out, but history shows no ISA has stayed on top forever.  I'd suggest you research the Mill architecture: it's a fascinating combination of choices that doesn't look much like the x86 or ARM ISAs.  I'm hopeful they complete it and get it tested before running out of resources.
  • Reply 100 of 123
    Detnator Posts: 287 member
    Beats said:
    rain22 said:
    “ but it suggests that new Apple Silicon Macs will not be struggling to keep up with the graphics on Intel Macs.”

    That would be nice - but seems extremely dependent on programs being optimized. The anemic library of titles will probably shrink even further - at least until there is market saturation. 

    Mac users will be stuck using dumbed down iOS software for a long time I feel. 
    After all - This is the motivation isn’t it? Eventually have just 1 OS that can be modded to facilitate the device. 

    The A-series is closer than you think. We will see the A14 this year, which will surpass PS4 Pro graphics. Eventually iPad games and Mac games will be next-gen quality. We could potentially see the iPad/Mac pass PS5 quality during the PS5's lifetime.

    Yes, I know nerdcore gamers will compare this to $5,000 rigs just to sh** on Apple, but the reality is that 99% of the population doesn't give a damn at this point. This isn't 1993, when 8-bit to 16-bit was a massive leap, and at 7" (iPhone) to 24" (iMac) screens there will be no need for 8K or something ridiculous.

    I think a smart move would be for Apple to encourage developers to support A14 games. We need a handful of titles that run better than that Tomb Raider demo even at the cost of leaving iPhone 6s users behind.

    I don't really know the first thing about serious PC gaming, other than that (a) through Apple's history so far, it's about as far from Apple's target market as there is, and (b) as you say, most people don't care...  However, thinking about how MS more or less created Halo to popularize the Xbox (if I recall correctly - someone feel free to tell me if I've got that wrong)...I just wonder...

    Let's say Apple could get even just one (though 2-3 would be even better) serious game developer (or if they could do it themselves) to come out with a really really good AAA game (whatever that really means, but I just mean anything that typical "real" PC gamers would take seriously), built (or re-built) specifically for Apple's tech (Apple Silicon [AS], Metal, etc), without Apple having to build a specific gaming Mac...

    And let's say, on something like a $3000, AS, 6K 32" iMac (so not a MBA, but it doesn't have to be a MP either), such a game(s) screamed, performance-wise by every metric.  Let's say it craps all over any PC that said gamers might work so hard to custom build for even vaguely similar dollars (simply because AS has just nailed it)...

    Then maybe other developers might come on board.  And then I just wonder what that would do to the PC gaming market.  I mean, sure, Apple's got a thriving gaming market now, but it's a significantly different market to the traditional gaming PC world.  And as I understand it, Apple's never put serious gaming GPUs or really cooperated with the gaming-specific technologies (no desire to).

    But if these new Macs scream with Apple's CPUs, GPUs, Metal, etc., without Apple having to do anything significantly different specifically for AAA gamers, and if someone (Apple or someone else) takes it seriously and builds for it anything the PC gaming market would take seriously, then maybe "serious" PC gaming is going in a new direction in the future, leaving MS and Intel behind there also.

    I just wonder.