You have a different understanding of what unified means in the context of a codebase, we get it. Since you seem to agree on the actual practical implications, can you just agree to disagree about the precise meaning of the word "unified" and get over it?
So tedious.
It's not a different definition, it's an erroneous premise. One poster is even trying to move the discussion from the OS to high-level apps, when they should be moving to the components that make up the OS, like frameworks and the kernel, when referring to unification across all OSes, as that has been happening for over a decade. That's a relevant discussion, not Photoshop.
In all the Mac CPU shifts, IMO, what made the Intel shift the most significant is that it provided a gateway for switchers - a crutch for people to use Windows while slowly, over months, learning the Mac. While Macs have long had some sort of Windows emulation (remember those 486 cards?), this was the first time you could truly get native Windows speeds. It may have also been a marketing win, since people could understand how powerful the CPU was.
However, there are mitigating factors now:
-> There are a LOT more Mac users - it's much more common to have a Mac, whether it's at home, school or business. There's a lot more help out there.
-> There are a lot fewer people trying to run the latest version of Windows, and older versions of Windows run fast enough for menial tasks.
-> AFAIK, ARMv8.2 onward contains virtualization instructions baked into the mandatory ISA. And Microsoft is giving ARM their full support. So ultimately, future Windows might run faster on ARM than on Intel. Especially on laptops, where power = performance.
Then, all that will be left is the old marketing problem - once again, people will say "How does a Threadripper laptop compare to an Apple A12x laptop?" But now that we're at the end of Moore's Law, do people really care so much about how fast their CPU runs? And might Microsoft start to make the SAME shift away from x86?
Oh give it a rest. Same old Soli, ever the bore. No one cares.
I think a lot of people are assuming things in a negative way that just won't be true. This is all a rumor in the first place, not a fact. Maybe Apple is working on ARM Macs. Then again, Apple works on all kinds of things doing R&D, and many things never see the light of day for one reason or another. Some of it gets leaked out but never happens.
Maybe Apple is working on it, but they're not just going to move to it without a solid plan for everything. I think people are worrying a little too much.
I think it is more a case of people being ignorant. First off, as you said, nothing is shipping yet. So we have zero idea as to performance or capability.
When you consider some of the comments made here, it really makes you wonder if the posters are the professionals they claim to be. Claiming your code base relies upon decades-old code does not inspire confidence. Claiming you won't buy the machines due to a lack of Linux running in a VM is also asinine, considering we don't even know if the new hardware contains virtualization support, plus Linux has been on ARM for years now.
Frankly, this so-called panic is some of the most immature talk I've heard in years. We have people swearing to business decisions based on nothing but rumor. We literally know nothing about these processors, so why the panic?
What people should be asking is why Apple might do this. I honestly don't believe that it has anything to do with the CPU, as the CPU isn't even a consideration these days when it comes to platform success. Rather, Apple's departure from Intel is more about emerging technologies and the ability to support those technologies. The first thing that comes to mind is hardware to support AI-like technologies. We can't dismiss other tech like support for 3D touchless interfacing, camera processing and anything else Apple has up their sleeves. In a nutshell, you can panic all you want, but if people follow through and leave the platform they will likely be dropping off the technology bandwagon. Effectively left behind.
As someone who does cross-architecture development: it's sometimes fine, but often a pain in the ass. I have a Jetson sitting on the bench as I type.
The toolchains target x86 because that's the dominant platform for everything that isn't a phone or tablet. That's not opinion, that's just the way it is.
As a note, while Docker can run on a Pi (or a Jetson), pretty much none of the apps packaged for x86 will work. That seriously breaks the Docker workflow and deployment, and nobody will likely fix it for ARM-based Macs because the primary deployment platforms are all x86. Intel may have failed to get any mobile traction, but it did manage to defend its server market. Cloud management software has been built with x86 as the primary architecture. Using Docker multi-arch support is just asking for your private parts to be dragged over razor blades for little good reason when the easier path is to bid macOS a reluctant goodbye and just get an x86 laptop for Linux.
That means that so long as Intel owns servers, no ARM-based Macs will be very useful for dev shops that use Docker as a core part of their devops. And before Mike jumps in with "that's not an important demographic," I recall reading here on AI that the largest pool of pros using the Mac are devs and not content creators.
You’re wrong. Apps are the fundamental reason why we have an OS. Whether and how those apps function on a particular OS, and how that OS is served to our devices, are entwined. If you don’t get that, then you don’t understand anything about this.
But what if Apple enabled those ARM chips to directly run x86 software?
It’s certainly possible, but how often has Apple done half measures when they can both simplify and speed up a transition when it comes to pulling off the proverbial Band-Aid?
Rosetta comes to mind, but since we’re talking about fat binaries and dual architectures, whereas that was a very fast changeover from the already slow PPC to the much faster Intel chips, it’s a different scenario. It’s also an easier transition in many ways due to all the advancements they’ve made since the last transition. Given iOS support by IBM, Adobe, MS and many other major firms that are notoriously slow to adapt and that currently have a heavy presence on the App Store, there’s a case to be made that Apple won’t need to support AArch64 and x86_64 on the same chip -or- a Rosetta 2.0 if and when this comes to pass.
I keep wondering if it would make sense for a Mac (and macOS) to support configurations with:
an ARM CPU and an Intel CPU
multiple ARM CPUs
multiple ARM CPUs and an Intel CPU
Maybe it need not be all or nothing?
I think it does. Including an Intel CPU defeats the purpose (getting rid of Intel).
Why assume that Apple would be getting rid of Intel because they wanted to use an ARM-based Mac for, say, a new MacBook Air that was basically the 12" MacBook but running an Apple-designed chip? Do you really think there's an ARM equivalent that will work for the Mac Pro? I don't see the Pro line being affected by this until such time as most people are instead bitching that Apple isn't moving fast enough to switch their high-end machines to ARM.
Prescient!
Sure. If Apple can get the ARM chips to run x86 software natively, as I’m proposing, there is no way they could consider it for a high end machine. While some say that Apple could force its users and developers down that road again, I’m not so sure.
While users don’t care what’s in the machine, as long as it works, developers do. There are all too many ignorant people out there who believe the solution is to “just have it go through a recompile!” Sure, if you have a flashlight app, that will work. But no decently complex software will ever work properly, if at all, with “just” a recompile. It’s months of major work, at least, and mammoth amounts of money. Developers have to be taken off other projects, etc.
Having said that, desktop chips don’t have the power constraints mobile chips do. Even the MacBook Pro uses chips with up to a 35 watt power draw. Compare that to the 6 watt draw of the A series for the iPad, or the M series of Intel chips for the MacBook. There’s plenty Apple could do just by going to 12 watts. But at some point there’s a limit. As you go up the power scale, you actually have fewer options, because when you hit these power levels, you find that you’re competing with really high-end chips. Right now, the A series can compete with the M series easily, and some other ultra-low-power Intel chips for mobile. But desktop chips are different. Apple may still have an advantage, but by how much?
It will be a recompile for pretty much everything that uses Objective-C, Swift, even C or C++ code that already compiles in Xcode. What compiles for ARM or x86 now for iOS can work for the Mac in future.
It might, it might not, depending on how Apple handles it. Some code may not be doing things correctly, such as using pointers and memory locations they know Apple has told them not to. All that will change.
Sorry, that’s technically illiterate. Yes, during the Carbon transition - which was the move of the OS 9 API to a reduced API set that would compile for OS X - developers had to do some work, and some developers were using memory incorrectly. That’s because the old OS 9 API allowed access to the memory behind pointers, and Carbon replaced that with references which were opaque. Also, developers back then were hackers.
That’s not the case now. There’s no way of writing Swift, Objective-C, C or C++ in Xcode in such a way that it compiles incorrectly on ARM rather than x86. High-level code is abstracted away from such considerations. You can cause memory issues, i.e. overflows on C arrays, but they would happen on both processors.
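For what it's worth, the portable way to write the kind of cross-thread code that historically tripped people up is the same on both architectures; only code that silently leaned on x86's stronger memory ordering behaves differently at runtime. A minimal C sketch (illustrative producer/consumer names, assuming C11 atomics and pthreads, not anyone's actual codebase) of publishing data from one thread to another so it works identically on x86_64 and arm64:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Illustrative only. Written with plain ints instead of atomics, this pattern
 * still compiles cleanly for both x86_64 and arm64, but only x86's total
 * store order makes the naive version appear to work; arm64 may reorder the
 * stores or the loads. The release/acquire pair below is correct on both. */
static int payload;
static atomic_int ready;

static void *producer(void *arg) {
    (void)arg;
    payload = 42;
    /* Release: everything written before this store is visible to any thread
     * that performs an acquire load and observes ready == 1. */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                                   /* spin until published */
    printf("payload = %d\n", payload);      /* always prints 42     */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

Drop the release/acquire ordering and the same source still compiles cleanly for both targets; it just happens to keep working on x86's total store order while arm64 is free to reorder the accesses.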
You assume that emulating the x86 environment is just a matter of adding a handful of instructions. I don’t believe that to be so simple from either a licensing or a technical perspective.
Given that folks need this to work well for professional use, this is like claiming you don’t need Windows because of Wine. Yeah, some things work quite well, but a lot of stuff doesn’t. Emulating x86 by adding a few instructions might work okay for some things, but for professional use it’s got to run well enough not to cause unnecessary disruption.
What is technically illiterate is claiming anything non-trivial is a simple recompile... better today than before with ARM now little-endian... but shit, look at the lack of aTV ports of iOS apps that could reuse 80-90% of the UI. Going from iOS to macOS is also not that common.
Going from native x86 macOS to ARM macOS will mostly compile clean on the first go, and maybe pass your unit tests, but a lot of stuff out there isn’t native. Java, Python, MATLAB, Mono, games, etc. that use third-party toolchains will require the toolmakers to port, and those toolmakers often need to code closer to the OS and lower levels than application code.
These apps have a bigger footprint than you would assume in the scientific and commercial worlds.
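To make the toolchain point concrete, here is the sort of thing that doesn't "just recompile": vector intrinsics tied to one ISA. A hedged C sketch (the add_arrays helper is hypothetical, not from any shipping product) showing how such code typically needs an explicit per-architecture branch before it will even build for an ARM Mac:

```c
#include <stddef.h>

#if defined(__x86_64__) || defined(__i386__)
#include <immintrin.h>   /* SSE intrinsics exist only on x86 targets      */
#elif defined(__aarch64__)
#include <arm_neon.h>    /* NEON intrinsics are the arm64 counterpart     */
#endif

/* Hypothetical helper: add two float arrays of length n (n assumed to be a
 * multiple of 4 here just to keep the sketch short). The x86 branch will not
 * even compile for an ARM Mac, so a port needs the NEON or scalar branch. */
void add_arrays(float *dst, const float *a, const float *b, size_t n) {
#if defined(__x86_64__) || defined(__i386__)
    for (size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
#elif defined(__aarch64__)
    for (size_t i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(dst + i, vaddq_f32(va, vb));
    }
#else
    for (size_t i = 0; i < n; i++)          /* portable fallback            */
        dst[i] = a[i] + b[i];
#endif
}
```

The x86 branch refuses to compile for arm64, which is the point: the source has to be touched, ported and re-validated, exactly the kind of work third-party toolchain vendors would have to take on.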
The first concern is the ability to run virtual Linux and Windows OSes under the new hardware. This is key to having only a souped-up MacBook Pro to support multiple platforms and clients.
Why is that a concern? If you're doing that now why can't you continue doing that in the future? A low-end Mac running ARM will not make your Intel Mac stop working.
It's a concern because virtualization software as implemented today relies on the Intel CPU's virtualization instructions to make virtualized code run at near-native speed. So if you, say, use a Mac to run a virtual machine for Linux, the Intel CPU makes it possible to run Linux in the virtual environment nearly as fast as if you had installed actual Linux on your Mac's hard drive.
A switch to ARM would change this, since obviously ARM doesn't have Intel's CPU virtualization instructions, and for licensing reasons might never have them.
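For the curious, whether the host CPU exposes hardware virtualization is something you can already probe from user space on macOS. A small C sketch, assuming the kern.hv_support sysctl that macOS uses to report Hypervisor.framework availability (an ARM Mac would presumably surface its own virtualization extensions through a similar capability check):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

/* Ask the kernel whether hardware-assisted virtualization is available to
 * Hypervisor.framework. On Intel Macs this reflects VT-x; an ARM Mac would
 * have to report its own virtualization extensions through some analogous
 * flag. */
int main(void) {
    int hv_support = 0;
    size_t len = sizeof(hv_support);

    if (sysctlbyname("kern.hv_support", &hv_support, &len, NULL, 0) != 0) {
        perror("sysctlbyname");     /* sysctl not present on this system */
        return 1;
    }
    printf("hardware virtualization %s\n",
           hv_support ? "available" : "not available");
    return 0;
}
```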
Plenty of CPUs prior to X86 were virtualizable. Indeed, nearly all other server-class CPUs ever made have been quite virtualizable. The reason Intel needed special instructions was because it wasn't properly designed to be virtualizable and prior to Intel adding those instructions, VMware and other virtualization engines had to go through really annoying hoops to make that stuff work, particularly with Windows which also wasn't designed to be virtualized well at all.
There are no patenting issues with virtualizing ARM, since there is no reason whatsoever for ARM to virtualize using Intel's instructions or following Intel's memory and page table and exception models.
The problem is not going to be whether ARM-based virtual machines or ARM-based Docker containers can or cannot run on ARM-based Macs (I would be surprised if they couldn't). The problem is simply that emulating a modern 64-bit Intel chip with all of its screwy extended instructions is going to be ridiculously slow unless instructions are added to ARM to somehow make that faster. The old Rosetta software mentioned elsewhere was built for a long-dead version of the Intel chip that was quite substantially less advanced than the chips Intel makes now. Remember, that emulated only 32-bit instructions, didn't have to support virtualization, and probably didn't even have to support many MMX or other extended Intel instructions.
I would guess that Apple can indeed make a chip that will blow away Intel for Apple's use. They can better integrate GPU-based acceleration. They can integrate better memory handling. The register set for ARM chips is larger. ARM chips are simpler so you can incorporate many more of them on the same die. If Apple wanted to (and they probably don't) they could make a truly sweet server chip that would end Intel's dominance once and for all. Fortunately for Intel, Apple is unlikely to be interested in doing so, since they like the markets they have and don't want to go after markets where their only advantage is a better CPU architecture.
I'm certainly no Intel chip/ISA expert... but I have some honest questions.
What is X86, X64, X86-64, AMD64 and Intel64? What's the difference between them?
They are all names for the Instruction Set Architecture (ISA) of a processor.
x86: This is the original 32-bit Intel x86 instruction set that has come to dominate the world.
x86-64, X64: These are the generic names for the 64-bit extension to x86 that is fully backwards compatible with x86. These are specified by Intel but based on AMD's design.
AMD64: When 64-bit processors were first coming to market, AMD devised the 64-bit extension to the x86 instruction set which maintained backwards compatibility with all the existing 32-bit programs. This was AMD64, it's more or less the same as the current x86-64 specification.
Intel64: This term is ambiguous but could refer to the x86 64-bit extension or it could refer to Intel Itanium (IA-64). IA-64 was Intel's original 64-bit offering to compete with AMD64. It was a complete overhaul of the instruction set that ruined backwards compatibility with all applications for x86. It was a complete failure for Intel and they ended up releasing new processors based on AMD64 which became today's x86 64-bit extension.
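If it helps to map those names onto something concrete, compilers expose them as predefined macros. A small C sketch (clang/gcc macro spellings, which is an assumption about the toolchain) that reports which of the ISAs above a binary was actually built for:

```c
#include <stdio.h>

/* Each name discussed above maps to a predefined compiler macro on clang/gcc,
 * so the same source reports a different ISA depending on the build target. */
int main(void) {
#if defined(__x86_64__)
    puts("x86-64 / x64 / AMD64 / Intel 64: the 64-bit x86 extension");
#elif defined(__i386__)
    puts("x86: the original 32-bit Intel instruction set");
#elif defined(__ia64__)
    puts("IA-64: Itanium, unrelated to x86-64");
#elif defined(__aarch64__)
    puts("AArch64 / arm64: 64-bit ARM, unrelated to AMD64");
#elif defined(__arm__)
    puts("32-bit ARM");
#else
    puts("some other ISA");
#endif
    return 0;
}
```

Building the same file once for x86_64 and once for arm64 yields two different answers from identical source; a fat binary simply packages both results together.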
AMD64 is backwards compatible with Intel X86 and prior Intel ISAs going all the way back to 8-bit, 16-bit, 32-bit
Intel's 64-bit implementation uses the AMD64 specs
AMD64 runs on ARM
If these are true, wouldn't your following assertions, largely, be moot?
nht said: Going from native x86 macOS to arm macOS will mostly compile clean the first go, and maybe pass your unit tests but a lot of stuff out there isn’t native. Java, python, matlab, mono, games, etc that uses third party tool chains will require the toolmakers to port and they often need to code closer to the OS and lower levels than application code.
These apps have a bigger footprint that you would assume in the scientific and commercial worlds.
Hmm...
I can visualize several reasons that Apple might want to make a truly sweet server chip:
Apple's own internal server farms
Apple's iCloud offerings to customers -- ApplePay, iTunes, Music, Content Streaming, etc.
Caching and decrypting Apple data currently stored (encrypted) on AWS and Google servers
Apple offering additional and more efficient Cloud Services for AI, AR, ML, IOT, etc.
Apple collaborating with IBM Cloud offerings
Possibly, the above are some of the reasons that Apple acquired FoundationDB in 2015.
FoundationDB Cluster does 14M Random Writes Per Second
FoundationDB announced version 3.0 of the FoundationDB Key-Value Store transactional NoSQL database. The new version adds monitoring and improved performance to the company’s highly-scalable, ACID compliant database system.
The combination of high scalability, transactional integrity, and high performance have helped FoundationDB Key-Value Store become one of the fastest growing databases in the past 12 months, according to 451 Research.
"When we started FoundationDB, many experts thought it was impossible to build a distributed database with ACID transactions," said Dave Rosenthal, CEO of FoundationDB. "But after years of work, we proved that it could be done."
"Then, they said that it would never scale," he added. "Today, version 3.0 has eliminated single-machine bottlenecks and delivers a scalable, transactional database at industry-leading performance levels that the competition can't even achieve without transactional guarantees"
Key-value stores are among the simplest NoSQL database architectures. Data is stored in key-value pairs and retrieval is done via a known key. Key-value stores can store high volumes of structured, semi-structured, and unstructured data with dynamic rather than pre-determined schemas. FoundationDB is unique in that it adds ACID compliant transactional features normally associated with traditional RDBMS platforms to its underlying key-value database. FoundationDB's "layered" architecture allows applications to interface with data through a SQL layer, a graph layer, or directly write to the key-value store.
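For readers unfamiliar with the model being described, the core key-value contract is tiny; everything FoundationDB adds (ordered keys, ACID transactions, distribution, layers) sits on top of it. A toy C sketch of that contract only - purely illustrative, and in no way FoundationDB's actual API:

```c
#include <stdio.h>
#include <string.h>

/* A toy in-memory key-value store: the whole contract is "write a value under
 * a key, read it back later via that known key". Real systems such as
 * FoundationDB add ordered keys, ACID transactions and distribution on top. */
#define MAX_ENTRIES 64

struct entry { char key[32]; char value[128]; int used; };
static struct entry table[MAX_ENTRIES];

static int kv_put(const char *key, const char *value) {
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (!table[i].used || strcmp(table[i].key, key) == 0) {
            snprintf(table[i].key, sizeof table[i].key, "%s", key);
            snprintf(table[i].value, sizeof table[i].value, "%s", value);
            table[i].used = 1;
            return 0;
        }
    }
    return -1;                                  /* table full       */
}

static const char *kv_get(const char *key) {
    for (int i = 0; i < MAX_ENTRIES; i++)
        if (table[i].used && strcmp(table[i].key, key) == 0)
            return table[i].value;
    return NULL;                                /* unknown key      */
}

int main(void) {
    kv_put("user:42:name", "Ada");
    printf("%s\n", kv_get("user:42:name"));     /* prints "Ada"     */
    return 0;
}
```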
* You are right that Intel on the Mac helped switchers go to the Mac side.
- It is a major reason why Mac OS worldwide market share was able to go above 5% (from a bottom of about 2%).
- Using Windows on an Intel Mac became easy to do with Boot Camp or emulation, and importantly, performance was fast enough to even use Windows games on a Mac.
* Where I think you are wrong is that you assume Windows on a Mac is only useful for switching. Instead, using Windows on a Mac can be ongoing.
- Examples: let's say a person wants to play Fallout 4 on a Mac? Easy. Use Boot Camp and install Windows.
- And what if a job requires complex formatted documents and the equivalent Mac software doesn't get the formatting right? Run Boot Camp or an emulator on the Mac to use Windows business software.
* I remember those 486 cards in the 90s. They made Macs much more expensive. Windows emulation on the PowerPC CPU was slow. In the early 2000s at my house I had to have both a Windows PC and Macs to meet my family's computing needs.
Not going with Intel on the Mac led to Apple's decline in the 90s.
- Now with the success of the iPhone, the situation today is different. But isolating Macs again (for the average user) by using a different CPU is risky.
Wouldn’t Apple start this with the MacBook as a test bed/proof of concept? I can’t imagine the iMac Pro or MacPro or even the higher end MacBook Pro dumping Intel anytime soon.
FTA: "The shift won't be immediate, and will likely start on Apple's low-end, like the MacBook and possibly a Mac mini migration."
An ARM mini would be my Mac OS X exit. Period. That's going to make me shift even more towards MS. I don't have an Apple laptop anymore. Corporate machines are MS, and Surface looks like a more viable option. What Apple needs to understand is that the iPad is also starting to hang loose for some people. What's the point of OS X if you can't use the full value out of it (the pure differentiation)? Features, e.g., moving files over WiFi, moving content to be used and consumed via OS features on your other (Apple) devices. Seamless integration.
It would be except for #4 isn't true. arm64 is ARM.
... why would you not be able to use those features on an ARM Mac?
I keep wondering if it would make sense for a Mac (and macOS) to support configurations with:
an ARM CPU and an Intel CPU
multiple ARM CPUs
multiple ARM CPUs and an Intel CPU
Maybe it need not be all or nothing?
I think it does. Including an Intel CPU defeats the purpose (getting rid of Intel).
Why assume that Apple would be getting rid of Intel because they wanted to use an ARM-based Mac for, say, a new MacBook Air that was basically the 12" MacBook but running an Apple-designed chip? Do you really think there's an ARM-equivlenet that will work for the Mac Pro? I don't see the Pro-line being affected by this until such time as most people are instead bitching that Apple isn't moving fast enough to switch their high-end machines to to ARM.
Prescient!
Sure. If Apple can get the ARM chips to run x86 software natively, as I’m proposing, there is no way they could consider it for a high end machine. While some say that Apple could force its users and developers down that road again, I’m not so sure.
whike users don’t care what in the machine, as long as it works, developers do. There are all too many ignorant people out there who believe the solution is to “just have it go through a recompile!” Sure, if you have a flashlight app, that will work. But no decently complex software will ever work properly, if at all, with “just” a recompile. It’s months of major work, at least, and mammoth amounts of money. Developers have to be taken off other projects, etc.
having said that, desktop chips don’t have the power constraints mobile chips do. Even the Macbook Pro uses chips up to a 35 watt power draw. Compare that to the 6 watt draw of the A series for the iPad, or the M series of Intel chips for the Macbook. There’s plenty Apple could do just by going to 12 Watts. But at some point there’s a limit. As you go up the power scale, you actually have less options, because when you hit these power levels, you find that you’re competing with really high end chips. Right now, the A series can compete with the M series easily, and some other ultralow power Intel chips for mobile. But Desktop chips are different. Apple may still have an advantage, but by how much?
It’s will be a recompile for pretty much everything that uses objective c, swift, even c or c++ code that compiles already in Xcode. What compiles for ARM or x86 now for iOS can work for the Mac in future.
It might, it might not, depending on how Apple handles it. Some code may not be doing things correctly, such as using pointers and memory locations they know Apple has told them not to. All that will change.
Sorry that’s technically illiterate. Yes during the carbon transformation - which was the move of the OS 9 to a reduced api set that would compile for OS X developers had to do some work, and some developers were using memory incorrectly. That’s because the old OS 9 api allowed access to the memory behind pointers, and carbon replaced that with references which were opaque. Also developers back then were hackers.
Thats not the case now. There’s no way of writing swift, objective C, c or C++ in Xcode in such a way that it compiles incorrectly in ARM rather than x86. High level code is abstracted away from such considerations. You can cause memory issues ie overflows on c arrays but it would happen on both processors.
What is technically illiterate is claiming anything non-trivial is a simple recompile...better today than before with arm now little endian...but shit, look at the lack of aTV ports of iOS apps that could reuse 80-90% of the UI. Going from iOS to macOS is also not that common.
I'm certainly no Intel chip/ISA expert... but, have some honest questions.
What is X86, X64, X86-64, AMD64 and Intel64? What's the difference between them?
They are all names for the Instruction Set Architecture (ISA) of a processor. x86: This is the original 32-bit Intel x86 instruction set that has come to dominate the world.
x86-64, X64: These are the generic names for the 64-bit extension to x86 that is fully backwards compatible with x86. These are specified by Intel but based on AMD's design.
AMD64: When 64-bit processors were first coming to market, AMD devised the 64-bit extension to the x86 instruction set which maintained backwards compatibility with all the existing 32-bit programs. This was AMD64, it's more or less the same as the current x86-64 specification.
Intel64: This term is ambiguous but could refer to the x86 64-bit extension or it could refer to Intel Itanium (IA-64). IA-64 was Intel's original 64-bit offering to compete with AMD64. It was a complete overhaul of the instruction set that ruined backwards compatibility with all applications for x86. It was a complete failure for Intel and they ended up releasing new processors based on AMD64 which became today's x86 64-bit extension.
As I understand the above:
- AMD64 is backwards compatible with Intel x86 and prior Intel ISAs going all the way back to 8-bit, 16-bit and 32-bit
- Intel's 64-bit implementation uses the AMD64 specs
- AMD64 runs on ARM
If these are true, wouldn't your following assertions, largely, be moot?
nht said: Going from native x86 macOS to ARM macOS will mostly compile clean on the first go, and maybe pass your unit tests, but a lot of stuff out there isn't native. Java, Python, MATLAB, Mono, games, etc. that use third-party toolchains will require the toolmakers to port them, and those toolmakers often need to code closer to the OS and lower levels than application code does.
These apps have a bigger footprint than you would assume in the scientific and commercial worlds.
It would be, except #4 isn't true: arm64 is ARM, not AMD64.
Ahh... My bad! I was surfing all over the place and conflated some things:
- Intel chips convert CISC instructions to RISC-like micro-ops for execution
- AMD developed the AMD64 ISA, which is a superset of the Intel x86 ISA
- AMD was/is developing X-Gene 64-bit ARM servers
- Intel threatened to sue to prevent Microsoft, Qualcomm (or others) from appropriating or emulating its ISA IP
What I don't understand is how AMD can use the Intel ISA when others cannot.
Edit: Apparently, AMD has a non-transferable license to use the Intel x86 ISA.
The toolchains target x86 because that's the dominant platform for everything that isn't a phone or tablet. That's not opinion, that's just the way it is.
As a note, while Docker can run on a Pi (or a Jetson), pretty much none of the apps packaged for x86 will work. That seriously breaks the Docker workflow and deployment, and nobody is likely to fix it for ARM-based Macs because the primary deployment platforms are all x86. Intel may have failed to get any mobile traction, but it did manage to defend its server market. Cloud management software has been built with x86 as the primary architecture. Using Docker multi-arch support is just asking for your private parts to be dragged over razor blades for little good reason, when the easier path is to bid macOS a reluctant goodbye and just get an x86 laptop for Linux.
That means that as long as Intel owns servers, ARM-based Macs won't be very useful for dev shops that use Docker as a core part of their devops. And before Mike jumps in with "that's not an important demographic," I recall reading here on AI that the largest pool of pros using the Mac are devs, not content creators.
But what if Apple enabled those ARM chips to directly run x86 software?
Rosetta comes to mind, but that was a very fast changeover from the already slow PPC to the much faster Intel chips, whereas here we're talking about fat binaries and dual architectures, so it's a different scenario. It's also an easier transition in many ways thanks to all the advancements made since the last one, including iOS support from IBM, Adobe, MS and many other major firms that are notoriously slow to adapt yet currently have a heavy presence on the App Store. There's a case to be made that Apple won't need to support AArch64 and x86_64 on the same chip - or a Rosetta 2.0 - if and when this comes to pass.
Given that folks need this to work well for professional use, this is like claiming you don't need Windows because of Wine. Yeah, some things work quite well, but a lot of stuff doesn't. Emulating x86 by adding a few instructions might work okay for some things, but for professional use it's got to run well enough not to cause unnecessary disruption.
There are no patent issues with virtualizing ARM, since there is no reason whatsoever for ARM to virtualize using Intel's instructions or to follow Intel's memory, page table and exception models.
The problem is not going to be whether ARM-based virtual machines or ARM-based Docker containers can run on ARM-based Macs (I would be surprised if they couldn't). The problem is simply that emulating a modern 64-bit Intel chip, with all of its screwy extended instructions, is going to be ridiculously slow unless instructions are added to ARM to somehow make that faster. The old Rosetta mentioned elsewhere had a far easier job: it emulated only 32-bit code for a long-dead architecture on Intel chips substantially less advanced than the chips Intel makes now, didn't have to support virtualization, and didn't have to cope with the many MMX-style and other extended instructions a modern x86 emulator would.
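To see where that cost comes from, here is a deliberately toy decode-and-dispatch loop in C. It has nothing to do with how Rosetta actually worked, and the three-instruction guest "ISA" is made up, but it shows the structural overhead of naive emulation: every guest instruction pays for a fetch, a decode, a dispatch branch and flag bookkeeping on the host before any useful work happens, which is why real translators lean on caching, ahead-of-time binary translation and hardware assists.

/* toy_emulator.c - illustrative only; not how Rosetta or any real emulator works */
#include <stdio.h>
#include <stdint.h>

enum { OP_ADD, OP_SUB, OP_HALT };            /* a made-up 3-instruction guest ISA */

typedef struct { uint8_t op, dst, src; } insn;

typedef struct {
    int64_t reg[4];
    int     zero_flag;                       /* emulated condition flag */
} cpu;

static void run(cpu *c, const insn *code) {
    for (size_t pc = 0; ; pc++) {            /* fetch */
        insn i = code[pc];
        switch (i.op) {                      /* decode + dispatch */
        case OP_ADD: c->reg[i.dst] += c->reg[i.src]; break;
        case OP_SUB: c->reg[i.dst] -= c->reg[i.src]; break;
        case OP_HALT: return;
        }
        c->zero_flag = (c->reg[i.dst] == 0); /* flag bookkeeping on every step */
    }
}

int main(void) {
    cpu c = { .reg = {0, 5, 3, 0} };
    insn prog[] = {
        { OP_ADD, 0, 1 },                    /* r0 += r1  -> 5 */
        { OP_ADD, 0, 2 },                    /* r0 += r2  -> 8 */
        { OP_SUB, 0, 1 },                    /* r0 -= r1  -> 3 */
        { OP_HALT, 0, 0 },
    };
    run(&c, prog);
    printf("r0 = %lld\n", (long long)c.reg[0]);  /* prints r0 = 3 */
    return 0;
}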
I would guess that Apple can indeed make a chip that will blow away Intel for Apple's own use. They can better integrate GPU-based acceleration. They can integrate better memory handling. The register set of ARM chips is larger. ARM cores are simpler, so you can fit many more of them on the same die. If Apple wanted to (and they probably don't), they could make a truly sweet server chip that would end Intel's dominance once and for all. Fortunately for Intel, Apple is unlikely to be interested in doing so, since they like the markets they have and don't want to go after markets where their only advantage is a better CPU architecture.
Hmm...
I can visualize several reasons that Apple might want to make a truly sweet server chip:
Possibly, the above are some of the reasons that Apple acquired FoundationDB in 2015.
- The Intel switch is a major reason why Mac OS worldwide market share was able to go above 5% (from a bottom of about 2%).
- Using Windows on an Intel Mac became easy to do with Boot Camp or emulation, and importantly, performance was fast enough to even play Windows games on a Mac.
* Where I think you are wrong is that you assume that Windows on a Mac is only useful for switching.
Instead, using Windows on a Mac can be an ongoing arrangement.
- Example: let's say a person wants to play Fallout 4 on a Mac.
Easy. Use Boot Camp and install Windows.
- And what if a job requires complex formatted documents and the equivalent Mac software doesn't get the formatting right?
Run Boot Camp or an emulator on the Mac to use Windows business software.
* I remember those 486 cards in the 90s. They made Macs much more expensive, and Windows emulation on the PowerPC CPU was slow.
In early 2000s at my house I had to have both a Windows PC and Macs to meet my family's computing needs.
In the late 80s Apple had the opportunity to switch to Intel. It didn't, and John Sculley admits that this was his biggest mistake at Apple.
https://www.networkworld.com/article/2337951/wireless/former-apple-ceo-reveals-his-biggest-mistake.html
Not going with Intel on the Mac led to Apple's decline in the 90s.
- Now with the success of the iPhone, the situation today is different. But isolating Macs again (for the average user) by using a different CPU is risky.