Hopefully a day comes when Apple's SoC CPUs are faster than the most advanced Intel CPUs and Apple's SoC GPUs are better than the latest Radeon/Nvidia GPUs.
Just look at what's happened to Android with Samsung and Qualcomm at the wheel: the $400 iPhone SE beats a $1300 Android smartphone. The same will likely happen to Intel and AMD/Nvidia, who lack the vertical advantage of custom hardware and software integration.
Mac users will be stuck using dumbed down iOS software for a long time I feel.
After all, this is the motivation, isn't it? To eventually have just one OS that can be adapted to fit the device.
I certainly did not get that impression from what we saw. Not sure why you did.
It was the spacing of the toolbar icons, very touchable.
And what at all does that have to do with being "stuck using dumbed down iOS software"?
I was going for the second part of the OP's comment, which seems to be happening. The UI convergence between iPadOS and macOS is pretty clear, with sidebars/toolbars added to the former and spacing/mouse-over focus added to the latter. I don't think it's dumbed-down iOS apps we'll see on the Mac but rich iPadOS/macOS apps feeding each other - absolutely.
Arm designs cores that are being used in processors. Those are therefore Arm based processors. Apple is one of a very few companies that have a license to develop their own cores using the Arm instruction set, i.e., their cores are Apple cores and therefore their processors Apple processors.
There’s a difference between architecture and micro-architecture. Apple uses the Arm architecture, the instruction set. However, it designs its own micro-architecture; it doesn’t just reuse a reference design. And it was a founder of ARM along with Acorn and helped design the original Arm architecture.
Apple doesn't use ARM cores. They have a license to the instruction set that they use to develop their own cores, so technically they aren't ARM cores, but Apple cores that use the ARM instruction set.
That's not a useful distinction; ARM aren't a manufacturer of any processors, so any processors that use the ARM instruction set are going to get called ARM cores for convenience. Insisting that they be referred to as "Apple cores that use the ARM instruction set" is needlessly long-winded pedantry.
Say what?
I'm surprised the article referred to Apple processors as using "ARM cores". That's what I expect to see on Android sites where they try to make it appear Apple processors are nothing special and are just ARM cores with a "few tweaks". They are far from it.
While it's true ARM doesn't make processors, they sure as hell design processor cores. They even give them names (A55, A76 or similar). These cores are what Qualcomm, Huawei, Samsung and others use when they build processors, which saves them a massive amount of work that would be required to design their own custom cores (micro-architecture).
@ErlendurK is correct. Apple has a license to use the ARM ISA (instruction set architecture) and then builds 100% custom designed cores that run that instruction set. It's the same as AMD building processors that are also 100% custom, but run the x86 ISA. I never hear anyone claim that AMD processors are using "Intel cores" so why should we do it with Apple/ARM?
It's not pedantic to tell the truth.
Intel is not an instruction set, x86 and x64 are. AMD make x64 cores. ARM is an instruction set, therefore the Apple Silicon can be referred to as ARM cores.
You know exactly what is being referred to, so this is needless pedantry.
What I wonder is how Apple will manage feature parity. Apple announced that Intel Macs are still in the pipeline; I imagine Mac Pros and some high-end MacBook Pro configurations for the time being.
But the question is how Apple will introduce the benefits of Apple Silicon on some models and leave the Intel ones out. Imagine a MacBook Pro with Apple Silicon, Face ID, a Bionic processor, a W1 processor and the rest, and the "higher performance model" with an Intel CPU and none of those features. If Apple adds them to Intel-based Macs, it will be doing a ton of work for just 24 months (the timeframe it gave for the transition to be complete).
The same goes for "Universal 2" software. Some tasks can only be performed on Apple Silicon machines, forcing developers to think about workarounds which might or might not be possible. Since Apple Silicon, unlike the switch from PowerPC to Intel, is not only about the CPU, some software features might be tied to Apple Silicon, leaving the nominally more powerful machines out in the cold.
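In practice, developers usually handle this kind of split with plain runtime capability detection: probe what the host offers and fall back when a feature is missing. A minimal Python sketch of the idea, where the feature names and the feature table are invented purely for illustration (they are not real macOS APIs):

```python
import platform

# Hypothetical capability table, keyed by CPU architecture.
# The feature names here are made up for illustration.
FEATURES_BY_ARCH = {
    "arm64": {"neural_engine", "secure_enclave", "unified_memory"},
    "x86_64": {"secure_enclave"},  # e.g. provided via a T2-style coprocessor
}

def has_feature(name, arch=None):
    """Return True if this (hypothetical) feature exists on the given arch."""
    arch = arch or platform.machine()
    return name in FEATURES_BY_ARCH.get(arch, set())

def run_ml_task(data, arch=None):
    """Use the fast path when available, else a slower software fallback."""
    if has_feature("neural_engine", arch):
        return ("neural_engine", data)
    return ("software_fallback", data)
```

On an Intel machine this takes the fallback branch; on an arm64 machine it takes the fast path. The workaround problem the comment raises is exactly the fallback branch: it has to exist, and it may be much slower or simply impossible for some features.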
Would it be possible (with development) to plug your phone into your Mac to double the processor power similar to how external GPUs work today?
Not really, if for no other reason than USB 2.0 would be a really slow backplane (!)
Apple could, conceivably, offer Macs with multiple SoCs, or offer new specialized coprocessor units similar to the Mac Pro Afterburner, an expansion card (on the very fast PCIe bus) that can rapidly accelerate tasks such as video rendering.
That said, the Holy Grail of "upgradable CPU engines" hasn't often delivered on expectations outside of the basic PC box, because modern MacBooks and mobile devices are deeply integrated internally and their parts wear out or grow obsolete basically in tandem.
A big gaming PC might easily benefit from a faster CPU or a graphics card update, but trying to add that sort of modularity to a light, thin device makes a lot less sense and creates new drawbacks. By the time your CPU is feeling slow, the memory and storage on your notebook are probably also due for an upgrade; the screen and keyboard and wireless and lots of other features also likely need to be replaced. So it's generally more useful to hand your old Mac to a user with more basic needs and buy a new one.
One exception to that might be the new Mac Pro, which is designed for upgradability. Apple could even sell upgrade logic boards for it, along with faster CPUs and GPU cards. These might even be more cost-effective than selling buyers new cases and shipping around those big heavy aluminum boxes. This would certainly be useful in places where people are installing rack-mounted units.
These new Macs will still support discrete GPUs. There’s no reason to think they won’t.
I would not be at all surprised if the developer kits have a GPU in them, given they were supposedly driving the XDR display.
The new iPad Pro already supports 4K resolution output over USB-C (DisplayPort). It doesn't even need Thunderbolt.
The existing developer transition "Apple Silicon Mac" has USB-C (no thunderbolt), and is likely a modified iPad Pro with more RAM, rather than a Mac mini adapted to use Apple Silicon.
The standard Mac mini has Thunderbolt 3 supporting 4 x 4K displays, and also supports TB3 RAID & eGPUs. TB3 is basically PCIe over a cable, but TB3 has so far been tied to Intel processors, and iOS devices don't support it.
However, the upcoming "USB 4" is supposed to roll in the features of TB3, meaning that Apple Silicon Macs (and iPad Pros) could support USB 4 in the future without needing an Intel CPU.
I wonder if the higher end Macs will have A chips with integrated GPUs, or if there will be non-GPU variants where Apple have a dedicated separate GPU, presumably from AMD.
I'm guessing we may have seen the last of AMD/Nvidia in any future Mac. Apple's GPU cores are no less impressive or scalable than its CPU cores. That's another significant cost saving for Apple. It also ensures that the GPU is optimized for Metal; this was the point of Metal from the beginning, and why OpenGL was sent to the dustbin. Apple has been planning this move for almost a decade, and I think it runs much deeper and faster than most people suspect: no Intel, no AMD/Nvidia, and soon no Qualcomm.
I'm wondering if any of the companies working on 3D apps/renderers currently being ported to Metal/AMD knew that they were actually working towards Metal/Apple GPU.
Well, Nvidia had pretty strong reasons for refusing to support Apple Metal, as we wrote about:
How can you expect anyone to take you seriously when you don’t have the slightest clue what you’re talking about?
Apple designs their own cores. All they license from ARM is the instruction set.
You know nothing, Apple Insider!
Yes we've written extensively about this. Referring to the custom arm64 CPU cores in Apple's SoCs as "ARM cores" does not mean that they are based on ARM Ltd's reference designs. It means they use the arm64 ISA.
The fact that ARM has named its licensing company, its reference designs and its chip architecture ISA all the same thing is perhaps a point of confusion, but it appears you are not actually confused but rather just screaming to hear noise.
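The distinction this thread keeps circling — an ISA is a contract about *what* instructions do, while a microarchitecture is *how* a particular core executes them — is easy to show with a toy. Below, two deliberately different "cores" for the same invented three-instruction ISA produce identical results; everything here is made up for illustration:

```python
# A toy ISA: (op, operand) pairs acting on a single accumulator.
PROGRAM = [("LOAD", 5), ("ADD", 3), ("MUL", 4)]

def core_simple(program):
    """'Reference design': execute one instruction at a time."""
    acc = 0
    for op, n in program:
        if op == "LOAD":
            acc = n
        elif op == "ADD":
            acc += n
        elif op == "MUL":
            acc *= n
    return acc

def core_fused(program):
    """'Custom core': same ISA, different implementation.
    It fuses an ADD immediately followed by a MUL into one step,
    the way real cores fuse common instruction sequences."""
    acc = 0
    i = 0
    while i < len(program):
        op, n = program[i]
        if op == "ADD" and i + 1 < len(program) and program[i + 1][0] == "MUL":
            acc = (acc + n) * program[i + 1][1]  # one fused step
            i += 2
            continue
        if op == "LOAD":
            acc = n
        elif op == "ADD":
            acc += n
        elif op == "MUL":
            acc *= n
        i += 1
    return acc
```

Both cores return 32 for `PROGRAM`. That is the sense in which Apple's cores are "ARM cores": they honor the same instruction-set contract, while the machinery underneath is Apple's own.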
What I wonder is how Apple will manage feature parity. Apple announced that Intel Macs are still in the pipeline; I imagine Mac Pros and some high-end MacBook Pro configurations for the time being.
But the question is how Apple will introduce the benefits of Apple Silicon on some models and leave the Intel ones out. Imagine a MacBook Pro with Apple Silicon, Face ID, a Bionic processor, a W1 processor and the rest, and the "higher performance model" with an Intel CPU and none of those features. If Apple adds them to Intel-based Macs, it will be doing a ton of work for just 24 months (the timeframe it gave for the transition to be complete).
The same goes for "Universal 2" software. Some tasks can only be performed on Apple Silicon machines, forcing developers to think about workarounds which might or might not be possible. Since Apple Silicon, unlike the switch from PowerPC to Intel, is not only about the CPU, some software features might be tied to Apple Silicon, leaving the nominally more powerful machines out in the cold.
Yeah, it will be interesting to see how that works out.
Apple is already putting some of its SoC elements into the T2 chips on modern Macs to provide support for features coming from iOS. It's likely that future Intel Macs will get an expanding set of these features in a "T3" etc. But with all the dev tools in place and Universal binaries, it might even be cost-effective for Apple to replace the T2 with an actual A13 alongside the Core i5, providing a full set of Apple Silicon features.
The cost of silicon is mostly related to the design and initial production of each chip; cranking out volume copies makes each copy that much cheaper.
So it could end up being cheaper to mass-produce a few million extra copies of a full "A14 Bionic" SoC and use them in new Intel Macs vs. developing a custom T3 and T4 chip that would only end up being used in a few million Intel Macs over the next couple years.
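The economics described above are just fixed-cost amortization: design cost (NRE) dominates, so per-unit cost falls with volume, and a chip whose NRE is already paid for by another product line carries almost none of it. A rough sketch with invented figures, purely to show the shape of the tradeoff:

```python
def per_unit_cost(nre_dollars, unit_cost_dollars, volume):
    """Amortize a one-time design cost (NRE) over production volume,
    then add the marginal cost of manufacturing each copy."""
    return nre_dollars / volume + unit_cost_dollars

# Invented numbers, not real Apple costs.
# A custom "T3" chip: its own NRE, spread over only a few million Intel Macs.
t3_cost = per_unit_cost(nre_dollars=300e6, unit_cost_dollars=10, volume=5e6)

# Extra copies of an A-series SoC whose NRE is already covered by
# iPhone/iPad volume: marginal NRE for the Mac run is roughly zero,
# leaving mostly the (higher) per-unit manufacturing cost.
a_series_cost = per_unit_cost(nre_dollars=0, unit_cost_dollars=40, volume=5e6)
```

With these made-up figures the custom chip comes to $70 per unit against $40 for the reused SoC, even though the SoC is the bigger, more expensive die to manufacture — which is the comment's point about reusing a full "A14 Bionic" instead of designing a T3/T4.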
Apple pays a lot of money to use Intel and Qualcomm chips, but those chips are mostly pure profit for the vendor. Same as software.
Mac users will be stuck using dumbed down iOS software for a long time I feel.
After all, this is the motivation, isn't it? To eventually have just one OS that can be adapted to fit the device.
I certainly did not get that impression from what we saw. Not sure why you did.
It was the spacing of the toolbar icons, very touchable.
Take a look at the WWDC video presentation "Adopt the new look of macOS" to understand why Apple is changing the toolbar layouts.
This is all part of an attempt to simplify, standardize, and add consistency to the look and feel of common system GUI controls and layouts across all Apple apps on all platforms. Apple also provides underlying APIs and resources in Xcode to allow third-party apps to use the same layouts and behaviors that Apple uses, as well as ways for third-party apps to personalize some aspects of their look & feel for branding purposes.
There's nothing in there that I would interpret as a move to explicitly promote touch-based interactions where they are not currently used. All of the new toolbar buttons react to pointer hovering, which is something touch-only interfaces don't support. All of this is about providing much better consistency for macOS users, both at the operating system level and across all apps, whether developed by Apple or third-party ISVs. Since many macOS users also use iOS and iPadOS apps, those platforms are equally well served by following the same models, which deliver user interface consistency and familiarity across the entire user experience in Apple's larger ecosystem.
Holy crap. I just realized Apple could now trivially implement booting macOS from an iPhone. You could just dock your phone in a laptop enclosure, or use a desktop setup. Also, with VDI, corporations issuing iPhones would not need to buy PCs or terminals to run Windows.
Mac users will be stuck using dumbed down iOS software for a long time I feel.
Here's news for you, Rain. Very few new apps are written for macOS (or Windows). Apps today are developed for the web, iOS and Android.
I don't believe this notion that the Mac will be dumbed down. A common platform will make it easier for developers to build applications across Apple's platforms, and that ease will attract developers. There is nothing about Apple's chips that would force applications to be dumbed down. MS is porting the full Office suite to Apple's platform.
OK, so again, mostly accurate. Apple recently signed a major deal with Imagination, which both companies have been touting, so the idea of independence from others, particularly those controlled even partly by Chinese interests, isn't true. Apple doesn't use ARM reference designs at all, and hasn't for years. It does, with its architectural license, use the instruction set.
The reason Microsoft hasn't been successful, so far at least, with Windows on ARM is severalfold. The major reasons are the lack of an ecosystem after their failed phone initiative (a long and interesting story in itself), and the fact that they've made just about every bad choice there was to make.
First, there has been no good SoC for Microsoft to use. The Qualcomm SoC they do use has had some slight modification at Microsoft's request, but nothing major; it's still the relatively slow kind of chip Qualcomm is known for. Second, with most software on their own platform being 64-bit, Microsoft chose to support 32-bit mode only. Maybe that will change sometime, but for now it results in a much smaller library of software, which in many cases is older and less feature-rich. And none of that runs directly anyway, only under very slow emulation, and not all 32-bit apps run under that emulation.
Now, remember that Microsoft started out as a software tools company, so they should know this, but apparently they have forgotten. This is something Apple knows very well.
You don't come out with what is effectively a new OS (which Windows on ARM realistically is, since it can't run existing Windows software out of the box) without giving developers a way to build software that runs natively on it. Which is exactly what Microsoft did. Visual Studio wasn't available until months after the OS and the first devices shipped, so there was no new software. There was no effective way for developers to port their x86 software without .NET, and that won't be out until the end of this year, assuming no further delays.
Microsoft itself still hasn't written, or rewritten, most of its own software to run on it, which isn't a shining example for third-party developers.
So: no software from phones or tablets, and almost nothing else other than Office and a few other apps. The hardware has gotten bad reviews because of poor performance. What is the reason for buying into this?
Will it be possible, eventually, for Apple to make SoCs faster than the fastest, most powerful Intel Xeon chips?
Yes. There is nothing preventing that. The newest supercomputer, now rated the world's fastest, runs on a vast number of 48-core ARM chips. There are ARM server chips out from several companies.
Ten years ago, people were saying it would be impossible for ARM, which revolved around power efficiency rather than performance, to challenge x86 desktop chips. Obviously, that was wrong. The road from where ARM was then to where Apple has taken it now was far longer than the road from here to Xeon performance levels.
Good article, but you left out how virtualization works. After a quick search of Apple's new Developer app, nothing showed up, and nothing for Rosetta either. That doesn't surprise me, because it's probably a very proprietary set of software and hardware, which is fine with me.
I saw that Parallels published a blog post mentioning that a prototype version of Parallels was used for the demo, but nothing about which port of Debian was used; their blog keeps saying to check back later. Has anyone been able to ask any of the macOS Big Sur Apple people about this during a WWDC conference call? If Parallels only had to create an Apple Silicon version of Parallels (OK, I'll stop saying ARM, because it's obviously a lot more than just ARM that developers will have to contend with) while macOS Big Sur's Rosetta did the "conversion", then that is also a huge step by Apple.
It's not virtualization so much as emulation. Basically, Rosetta looks at an x86 app as it's installed and translates the x86 instructions to ARM instructions. That's the most basic level. But a bunch of x86 instructions don't translate directly to ARM, because there's no direct equivalent; those must be emulated in software, which is what slows emulation down so much. Those instructions can run from 5 to 100 times slower. Apple has stated that the Neural Engine is involved here, so presumably some of that software burden might be moving through that very fast hardware; I imagine Apple is using the machine learning hardware and software for that as well.
It's also possible, because the emulation I saw was so smooth, that Apple is using, as I've suggested, some native x86 instructions in hardware. That would speed things up. We'll have to see once word gets out. It's possible that the reason we got the A12Z this year in our iPads is that Apple knew it would be using a "special" chip in its development units, with extra hardware that previous A-series chips don't have, and Apple didn't want to make a very small run of chips just for internal development and developer machines. Some of that hardware could be unlocked only under macOS and the development tools. That's just my assumption.
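The translate-what-you-can, emulate-the-rest flow described above can be sketched in miniature. Here a toy ahead-of-time "translator" maps source ops with a direct target equivalent one-for-one and wraps the rest in a slow software routine. All opcode names and the cycle costs are invented for illustration; this is the general shape of binary translation, not Rosetta's actual implementation:

```python
# Toy source ("x86-like") ops that have a direct target ("ARM-like") twin.
DIRECT = {"ADD": "add", "SUB": "sub", "MOV": "mov"}

def translate(program):
    """Ahead-of-time pass: emit native ops where possible, otherwise an
    'emulate' thunk that will cost far more at run time."""
    out = []
    for op in program:
        if op in DIRECT:
            out.append(("native", DIRECT[op]))
        else:
            # e.g. a complex SIMD op with no one-to-one equivalent
            out.append(("emulate", op))
    return out

def estimated_cycles(translated, emulation_penalty=20):
    """Native ops cost 1 cycle here; emulated ones cost the penalty.
    The 5x-100x slowdowns mentioned above come from this asymmetry."""
    return sum(1 if kind == "native" else emulation_penalty
               for kind, _ in translated)
```

For a program like `["MOV", "ADD", "PSHUFB"]`, the first two ops become native instructions while the third falls back to emulation, and the cycle estimate shows why a few untranslatable instructions can dominate total run time.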
It's a different world from 2006, when it was most important to run Windows natively because there weren't many Mac apps. Now there will be more useful "Mac" apps than there ever have been, thanks to iOS (and iPadOS), and the largest user base ever, thanks to iPhone and iPad owners. So if your software is Windows-only now, the question to ask is: "do they have iOS developers making iOS or iPadOS versions now?" Because if they do, you will have a native Mac version.
After 15 years with Intel, every company that was going to make an x86 Mac version already does, and the vast majority of those will port their stuff. Many other companies are still Windows-only, but there are companies with Windows and iOS (but not x86 Mac) products, and they already have iOS developers. So Apple's business bet is that these "Windows and iOS" companies will assign their existing teams to expand their iOS products into the full Mac feature set. For example, Autodesk Revit doesn't run on a Mac, so engineering and architecture firms have to use Windows, but there is an iOS (iPadOS) version for field use. (Revit is the big-boy-pants evolution of AutoCAD, btw.)
...and don't forget that Microsoft isn't oblivious to reality anymore and is seeing these same Intel performance issues. The ARM version of Windows is poor right now, but they have 2-5 years to improve it so that it runs great natively or virtualized on a Mac. Microsoft is more than happy to sell you a Windows license for your Apple Silicon machine; they don't care about Intel, they just care about selling their own stuff. I'd also expect ARM Windows to imitate Rosetta, so that ARM Windows can run x86 Windows software. There's a performance hit, but even non-Apple ARM is looking pretty good versus Intel nowadays, and there is no indication that Intel has a new architectural path forward, so Microsoft will need to start thinking about this same switch (they just can't do it as nimbly as Apple, given their existing code and customer base).