Why Apple's Macs can now ditch Intel x86 and shift to ARM


Comments

  • Reply 61 of 75
    BigDann Posts: 66 member
    As much as you beat the drum of Macs going ARM from Intel, it is not going to happen!

    The truth is that more powerful iPads will erode the bottom-tier Macs, which is why the MacBook was killed: to make room for a new clamshell iPad! We've already seen it with the add-on keyboard cases used on iPads. Apple will create a hard-case version soon.

    As much as I like my iPad, it is still very limited due to the ARM CPU architecture. I don't see Apple ditching CISC in the current systems. I do see a possibility of Apple embracing AMD if Intel doesn't straighten out its ship very soon.

    But there's another problem here, and that gets into the system design and the software that runs on it.

    There is a limit on how far you can shrink the logic. Every process-node change (10 > 7 > 4 nm ...) becomes more costly, not less. AMD's chiplet design is spot-on as a way around some of this! Then the next issue is how many chiplets you can fit while still leveraging all of the cores without becoming bogged down. As we've seen, the more cores you add, the more the clock needs to drop. Just as with node shrinks, there is less of a return as you scale up the number of cores within a single-logic-board system.

    This is where parallel architectures such as the Massively Parallel Processor Array (MPPA) become the direction we need to embrace going forward. But we still need a use for such a powerful design. There is just nothing in the consumer space that screams for such a system yet. There is a need in business and engineering.

    Could ARM work here? Sure! But the Apple A-series chips are a very different beast from a pure ARM CPU that could be used in an MPPA. And just like the limits we face with CISC, RISC can't overcome the same architectural limits either.
    edited July 2019
  • Reply 62 of 75
    Gbizzlemcgrizzle Posts: 1 unconfirmed, member
    Apple won't switch to ARM processors unless, possibly, Intel starts making them. Also keep in mind that what is seen as a slowdown of Intel chips is done at the direction of Intel. They've already said Moore's Law is ending, because it will soon be impossible to shrink transistors to fit more in. Obviously a new technology will have to replace silicon transistors, but not because Apple will be able to do a better job than Intel.
    Also keep in mind that while most consumers no longer use Boot Camp or Windows emulation, a good number of businesses still do and would end up switching back to PCs. And the only reason a lot of companies switched to Apple in the first place was that Windows, around the time Apple switched to x86, was so unstable and so prone to viruses and spyware that the money saved on IT work, plus the ability to keep using those Windows apps, outweighed the expense of an overpriced Apple computer (also, Apple has always been good at making deals with big businesses and schools). Windows isn't so bad anymore, people aren't reformatting their hard drives every two or three months, and PCs are much less expensive. And the need to have an IT guy in every office, as in 2006, has changed: most of the workforce knows how to get the printer working or restart the computer on their own without putting in a support ticket.
    Anyway, this will not happen unless by some twist of fate Intel goes bankrupt, or is overtaken by another microprocessor company with breakthrough technology before Intel can buy them.
    edited July 2019
  • Reply 63 of 75
    stevenoz Posts: 314 member
    As a corporate Mac user, I depend on being able to boot Windows 10 when I need it. And I need Windows to work fully and fast.

    Apple, please don't make me schlep two laptops!! 

    I have a bad feeling about ARM and Windows now... and going forward.

    Next Apple purchase: Intel??? Why not? Make all of us Apple users!

    edited July 2019
  • Reply 64 of 75
    mjtomlin Posts: 2,673 member
    elijahg said:
    mjtomlin said:
    elijahg said:
    A significant factor in the PPC > x86 switch was Rosetta. It is much easier to emulate RISC PPC, with its relatively small instruction set, than CISC x86, and now x64. PPC apps running in Rosetta weren't much slower than the native ones, but that was also partly because the slowdown was offset by the Intel CPUs being much, much faster than the PPC ones. The A-series CPUs are quick, and in a less power- and thermally-constrained environment no doubt even quicker - but CISC emulation on RISC architectures is excruciatingly slow, no matter how fast the native CPU. Remember Connectix's Virtual PC? That emulated an x86 machine on PPC. Installing Win98 took 3 or 4 hours even on a G5. Of course API-level emulation à la Rosetta has less overhead, but it's still slow.

    Hmm... “RISC” doesn’t mean a smaller instruction set; it means the instructions are less complex and optimized to perform a more limited task. It basically means each instruction takes fewer cycles than a CISC instruction would require, since a CISC instruction might be much more “complex” in its execution.

    It would be much easier for a RISC ISA to emulate a CISC ISA, because CISC instructions can easily be broken down into smaller instructions. In fact, Intel’s CPUs have RISC cores with a CISC front end to maintain compatibility. So even Intel takes its complex instructions and reduces them to a series of simpler instructions.
    It is both. https://en.wikipedia.org/wiki/Reduced_instruction_set_computer

    But no, it is not easier for a RISC ISA to emulate a CISC ISA. There are fewer instructions in RISC architectures than in CISC architectures, which, as I said before, is why Rosetta wasn't that slow. CISC CPUs also have deep pipelining, making emulation even more complex. Good luck with "easily broken down" CISC instructions. They're complex, and instructions interact with each other, which is exactly why they can't be easily broken down.

    Yes, Intel CPUs are RISC at the core with a CISC interpreter now, because RISC CPUs are much easier to design and optimise, being much simpler than CISC CPUs. By your own admission, CISC is complex and RISC is simpler. Ergo, RISC is easier to emulate.

    Rosetta worked at a relatively decent speed because it did not have the overhead of translating ALL the code. The APIs on both PPC and Intel were the same, so a lot of time was saved by hooking into and running native APIs while it was translating the PPC code. This had nothing to do with CPU architecture (RISC or CISC).

    However, translation is all about mapping... You map one instruction to a series of instructions, or you map a series of instructions to one instruction. The latter requires pattern matching, which is much more intensive than the former. It is much more efficient to map a single CISC instruction onto multiple RISC instructions than it would be to search for a series of RISC instructions to map onto a single CISC instruction. Again, this is why Intel CPUs can have RISC cores while running CISC instructions and be as fast as they are.
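
    To make that asymmetry concrete, here is a toy sketch (the mnemonics and the expansion table are invented, nothing like real x86 or ARM encodings): going CISC-to-RISC is a table lookup that expands one instruction into several, while going the other way means scanning for multi-instruction patterns.

    ```python
    # Toy sketch of the mapping asymmetry (invented mnemonics, not a real ISA).
    # CISC -> RISC: one complex instruction expands into a fixed sequence of
    # simple ones. RISC -> CISC: we must pattern-match instruction windows.

    CISC_TO_RISC = {
        "PUSH r1":       ["SUB sp, sp, 8", "STR r1, [sp]"],
        "ADDM r1, [r2]": ["LDR t0, [r2]", "ADD r1, r1, t0"],
    }

    def cisc_to_risc(program):
        """Expand each complex instruction into its simpler equivalents."""
        out = []
        for insn in program:
            out.extend(CISC_TO_RISC[insn])
        return out

    def risc_to_cisc(program):
        """Reverse direction: scan for known multi-instruction patterns."""
        patterns = {tuple(seq): cisc for cisc, seq in CISC_TO_RISC.items()}
        out, i = [], 0
        while i < len(program):
            window = tuple(program[i:i + 2])  # all our toy patterns are 2 long
            if window in patterns:
                out.append(patterns[window])
                i += 2
            else:
                out.append(program[i])  # nothing matched; pass through
                i += 1
        return out

    print(cisc_to_risc(["PUSH r1", "ADDM r1, [r2]"]))
    print(risc_to_cisc(["SUB sp, sp, 8", "STR r1, [sp]"]))
    ```

    Even in this toy, the reverse direction has to test every window of instructions against every known pattern; that search is the extra cost described above.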

    Also, you're using translation to demonstrate why one direction is better, and emulation to demonstrate why the other is worse.

    Emulate and translate are not the same thing. Rosetta is not an emulator; it's a translator. It does not emulate hardware or software, it simply translates code. And when translating ANYTHING, it is much easier to break the complex down into the simple, because every complex task is just a series of simpler tasks.

    You previously mentioned the old PPC Virtual PC... That was an emulator. It had to emulate ALL the hardware in software, then run code on that emulated hardware... That's why it was so slow. Again, that has nothing to do with CPU architecture.


    edited July 2019
  • Reply 65 of 75
    knowitall Posts: 1,648 member
    mjtomlin said:
    You previously mentioned the old PPC Virtual PC... That was an emulator. It had to emulate ALL the hardware in software, then run code on that emulated hardware... That's why it was so slow. Again, that has nothing to do with CPU architecture.
    Amen
  • Reply 66 of 75
    knowitall Posts: 1,648 member
    elijahg said:
    Yes, Intel CPUs are RISC at the core with a CISC interpreter now, because RISC CPUs are much easier to design and optimise, being much simpler than CISC CPUs. By your own admission, CISC is complex and RISC is simpler. Ergo, RISC is easier to emulate.
    As I mentioned before, RISC and CISC are outdated; nowadays 64-bit architectures borrow a lot from both.
    The main point, I think, is to be able to effectively translate 64-bit instruction sets. And 64-bit architectures look a lot more alike than 32-bit x86 and ARM do, and are a lot easier to translate.
    It's true that the internal state of the processor (and the multiple instructions that have an effect on it) is somewhat difficult to handle while translating. But using a translation scheme for this, and sometimes 'state-clearing instructions' (to bring the internal state to a known default state), helps to handle it effectively.
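
    One standard trick in binary translators for exactly that state problem is lazy flag evaluation: instead of computing the guest's status flags after every flag-setting instruction, record the operands and compute a flag only when a later instruction actually reads it. A minimal sketch, with invented names:

    ```python
    # Minimal sketch of lazy flag evaluation (invented names, no real ISA
    # semantics). Translated code records the last flag-setting operation;
    # flags are only materialised when something actually reads them.

    class LazyFlags:
        def __init__(self):
            self._last_op = None  # (kind, operand_a, operand_b, result)

        def record(self, kind, a, b, result):
            """Called for every translated flag-setting instruction."""
            self._last_op = (kind, a, b, result)

        def zero(self):
            """Compute the Z flag on demand, e.g. for a conditional branch."""
            if self._last_op is None:
                return False
            _, _, _, result = self._last_op
            return result == 0

    flags = LazyFlags()
    flags.record("sub", 5, 5, 0)  # translated SUB: record, don't compute flags
    print(flags.zero())           # a later branch finally asks for Z -> True
    ```
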
    As for which instruction set translates more easily into the other, no definite answer can be given until a very detailed analysis is made of the specific instruction sets involved. The number and regularity of instructions is one parameter; internal state and instruction dependency is another.
    Anyway, it doesn't really matter which translation is easiest, because it's clear that 64-bit instruction sets can reasonably easily be translated into one another.
    It's also clever not to look at the current status quo, but at what Apple could do to make translation a lot easier (and faster). For example, Apple could add instructions and state to the A15 (etc.) SoC to handle the translation better, or even run the A-series processor in a translation mode to be very, very fast at converting x64 instructions, comparable to the processors that had bytecode-acceleration instructions to handle (for example) Java code much faster.
  • Reply 67 of 75
    knowitall Posts: 1,648 member
    elijahg said:

    The point is that it is an instruction translation, and when done in hardware you have little flexibility and little room for logic, so the translation scheme is simple, which means it is easy to implement in software with little overhead.

    When Apple's libraries are translated in advance, code will run at native speed within the library (this can be a significant part of the app's runtime) (think of Rosetta).
    It's even possible to translate the app binary in advance, which will improve performance even further.
    Of course, if the source code is available, a recompile will make the app run natively.
    Another way to run Windows apps on a Mac is to use Wine; it's impressive to see what works nowadays.
    I expect Wine to be runnable on ARM (because Linux runs on ARM), so you can try this.
    It's also possible to ask the app developer to produce an ARM version of his software; why wouldn't he be willing to do that, especially now that Windows 10 runs on ARM?
    The inflexibility of hardware logic blocks has absolutely nothing to do with the complexity of a translation scheme, or with how difficult it is to implement in software. Hardware FPUs, for example, are complex but fast, and software emulation of an FPU is excruciatingly slow. Hardware inflexibility does not mean hardware has to be simple. Logic blocks can be as complex as required and will always be faster than software emulation.

    Sure, they can be complex, but not at the CPU's frequency, because extra logic means longer electron transit times and more silicon real estate, which means more heat and more time for the logic to 'settle' (you would like a definitive state from a circuit, not a random generator). At 0.3 ns, even electrons have very little space to move.
    "When Apples libraries are translated in advance, code will run at native speed when within the library (this can be an significant part of the apps runtime) (think of Rosetta)." It doesn't work like that. You can't link to libraries across different architectures.

    You don't understand; nobody is linking. Rosetta works by translating the libraries in advance and, while running, translating on the fly the instructions that are not within a library, jumping to (or calling) the pre-translated library code when appropriate.
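
    A minimal sketch of the dispatch scheme described here (all names invented; an illustration of the idea, not Rosetta's actual internals): jump targets that fall inside an ahead-of-time-translated library run natively, while application code is translated on first use and cached.

    ```python
    # Minimal sketch: library code is assumed to be translated ahead of time,
    # application code is translated on the fly and cached. All names are
    # invented for illustration.

    PRETRANSLATED_LIBS = {}   # guest address -> native code, built in advance
    TRANSLATION_CACHE = {}    # guest address -> native code, built at runtime

    def translate_block(guest_addr):
        """Stand-in for the actual binary translator."""
        return f"<native code for block at {guest_addr:#x}>"

    def resolve(guest_addr):
        # Fast path: the target was translated in advance (library code).
        if guest_addr in PRETRANSLATED_LIBS:
            return PRETRANSLATED_LIBS[guest_addr]
        # Slow path: translate application code once, then reuse the result.
        if guest_addr not in TRANSLATION_CACHE:
            TRANSLATION_CACHE[guest_addr] = translate_block(guest_addr)
        return TRANSLATION_CACHE[guest_addr]

    PRETRANSLATED_LIBS[0x1000] = "<native library entry>"
    print(resolve(0x1000))  # jumps straight to the pre-translated library
    print(resolve(0x2000))  # translated on first use
    print(resolve(0x2000))  # served from the cache thereafter
    ```
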
    If you think WINE could run on an ARM Mac, that proves your limited understanding of the subject. WINE = Wine Is Not an Emulator. There is no emulation whatsoever. Despite your self-proclaimed "Knowitall"... you don't.
    Ah, yes, I have deep knowledge of the subjects at hand (not a quick Wikipedia look-up); no need to get upset by "just a name".
    I just mentioned alternatives for running Windows apps. The point is that I presupposed the Windows app to be ARM (Windows runs on ARM, remember); this is a way to run apps without Windows (because Wine is a re-implementation of the Windows APIs).

  • Reply 68 of 75
    HwGeek Posts: 15 member
    What's stopping Apple from asking AMD for a custom APU/CPU design, like the ones Sony and MS had developed for their consoles?
    It would be cheap, and with the current chiplet approach they could build a multi-chip design with CPU chiplets and maybe even a custom ASIC like the one they will use in the new Mac Pro.
    Also, maybe AMD could let them use one of Apple's custom chips as another chiplet?


    edited July 2019
  • Reply 69 of 75
    netrox Posts: 1,422 member
    quakerotis said:
    Nonsense and typical columnist speculation. There is no evidence that any of this is true or that it would make any sense to do.
    And now every prediction has proved accurate. What do you have to say?
  • Reply 70 of 75
    quakerotis said:
    Nonsense and typical columnist speculation. There is no evidence that any of this is true or that it would make any sense to do.
    Well, quakerotis... that aged well. The evidence was in the article, and the M1 is M1LES ahead before anyone has even used it.

    DED really is one of the best tech writers in the world, powered by an obviously impressive brain that researches the evidence and can see the wood for the trees.

    Your comment proved to be absolute nonsense and typical anti-Apple vom. Well done!
  • Reply 71 of 75
    neilm said:
    I can only imagine that the author is being paid by the word.
    Judging by the brevity of your post, it's evident that you're not. DED presumably gets paid for ideas, thinking and work; you should try that sometime, neilm.
  • Reply 72 of 75
    BigDann said:
    As much as you beat the drum of Macs going ARM from Intel, it is not going to happen!
    Where’s your November 2020 edit?
  • Reply 73 of 75
    karmadave said:
    Every few months one of these types of editorials appears on AI. I call the same BS every time. Apple doesn't need to use ARM in the Mac. It's not gonna shift their market share in any direction. Instead they are focused on making the iPad more ‘Mac-like’. Witness the upcoming version of iPadOS.

    All this makes for interesting hypotheticals, but it’s a non-starter IMHO...
    Maybe your guesstimating is a non-starter? Dan seems a lot smarter than you give him credit for, eh?
  • Reply 74 of 75
    quakerotis said:
    Nonsense and typical columnist speculation. There is no evidence that any of this is true or that it would make any sense to do.
    So, about this comment...
  • Reply 75 of 75
    Kind of fascinating to read all those old posts. Nearly all of them were negative, or claimed that Apple would never do the Arm transition because of one mistaken belief or another.

    This was a very good way to get some perspective on the Arm transition and to see how Apple resolved the issues raised. If the posters had just said that something was an issue that needed to be solved, instead of insisting that no resolution was possible, they wouldn't look so dumb now.