Intel did not so much lose out to the A9 chip, since A9 chips are only used in Apple products. Instead, Intel - and AMD - lost out to manufacturers of ARM chips such as Samsung, Nvidia, Qualcomm and especially MediaTek.
Good post. Close, but no cigar. Intel did lose out to the chip manufacturers, not just Apple. But the David who has really slain Goliath Intel is a little Cambridge company.
Apple likes to call it 'their' A9 processor - and kudos to them for what they've done in 'designing' the A series chips with all the advantages of optimisation for iOS devices that this has yielded. But the underlying technology for the processor was, and continues to be, developed by and licensed from ARM Holdings.
Apple makes very little profit on its processors, so this article is extremely misleading. The profit generated from the iPhone/iPad etc. is a combination of small gains on the many parts AAPL buys from desperate, mostly Chinese, producers.
And? That's precisely the point: even if Intel did an Ax clone, they would still fail.
Network security "appliances" and things like that should just move to ARM-based processors - a lot of network "appliances" run on a Linux base anyway.
Lots of them already have. Incidentally, Linux and ARM have nothing intrinsically to do with each other. It is merely that UNIX - and Linux - workstations, mainframes and servers used to run on high-performance RISC CPUs, because Intel and AMD did not make high-performance CPUs, preferring instead to focus on the more lucrative general-purpose CPU market (desktops, laptops and later smaller servers for Windows Server and SQL Server). ARM just happens to be one of many RISC-type CPUs, along with MIPS (used in most second-gen gaming consoles as well as graphics workstations), Sun SPARC and IBM Power (workstations). ARM was used for mobile primarily because it used less power - and is generally cheaper - than other architectures, but the tradeoff is that it was much less powerful.

Which is an interesting contradiction: Linux and UNIX ran on RISC-based servers in the 80s, 90s and 00s because RISC was more powerful than x86, and now UNIX- and Linux-derived mobile/IoT operating systems like iOS, Android and Chrome OS (and Tizen, Ubuntu Mobile, Firefox OS and LG's webOS) run on RISC-based devices precisely because they are less power-hungry - and less powerful - than x86.
Bottom line: Linux works - and works well - on practically any architecture, including even 30-year-old - and possibly older - mainframes that have been repurposed. So there is no reason to favor ARM or disfavor x86 on any hardware that will run Linux; the decision has to be made on things like cost, scalability, power, etc. If your network security appliance runs a lightweight application and only needs adequate speed for a small network, then ARM is fine. But if the security application is heavy-duty, high-speed, high-bandwidth or serving a large network, then you are going to need a server-class x86 or Oracle/IBM RISC CPU.
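To illustrate the portability point with a trivial Python sketch (the exact string it prints depends on the kernel), the same code can simply ask what machine it landed on and otherwise not care:

import platform

# Prints the machine type the kernel reports, e.g. 'x86_64' on a typical
# server, 'aarch64' or 'armv7l' on an ARM appliance - the rest of the
# program runs unchanged either way.
print(platform.machine())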
x86 is never a good choice; it's a waste of energy and money, and that isn't acceptable anymore. Current ARM cores scale from server clusters to high-end number crunchers to low-power, high-performance iPhone processors. There simply is no need for another architecture.
Intel did not so much lose out to the A9 chip, since A9 chips are only used in Apple products. Instead, Intel - and AMD - lost out to manufacturers of ARM chips such as Samsung, Nvidia, Qualcomm and especially MediaTek.
Good post. Close, but no cigar. Intel did lose out to the chip manufacturers, not just Apple. But the David who has really slain Goliath Intel is a little Cambridge company.
Apple likes to call it 'their' A9 processor - and kudos to them for what they've done in 'designing' the A series chips with all the advantages of optimisation for iOS devices that this has yielded. But the underlying technology for the processor was, and continues to be, developed by and licensed from ARM Holdings.
I agree for the most part, except for the fact that Intel was an ARM licensee and sold off XScale in favor of an x86 low-power roadmap. That was the decision point that led to Intel exiting the mobile market a decade later.
Apple's Ax not only crushed Atom, but also brought the ARM architecture into play against Core M; that battle will be fought on low-power laptops, tablets and hybrids.
The secret is simple: design the CPU and have the guys next door in your office make the OS and APIs built for it.
Exactly. It's not like it's some kind of deep, dark secret Apple keeps buried under 1 Infinite Loop (possibly already moved to the new campus). It's called direct cooperation. Apple's hardware and software integration is only going to further distance them from their competitors in the coming years, especially in light of their new chip fab. Now, the details of what's happening there - that is a secret.
So true. This is Apple's core competence, and it has a bright future.
The Surface 3 is the only Surface tablet to use an Atom chip, and it was fanless.
The Surface Pro 4 is fanless for the Core m3 version. Only the Core i5 and Core i7 models have a discrete fan, but it only turns on when the device is at high load.
I work with several people who use Surfaces. More often than not, you can hear the fans going.
Wouldn't surprise me - my rMBP's fan rarely ever spins up. My work-issued Dell laptop (also with an SSD) is constantly whirring, non-stop. I don't think other companies are as good at keeping things cool and efficient as Apple is.
Plain wrong, eh. Well, instead of just making that statement, why don't you actually say something? Give some good, cogent reasons why. And I'm not talking about ARM SoCs getting better because Apple is working on them. That's not a real reason.
Why, maybe dictate it for me; you're already doing that.
Again, you seem to have nothing to say. Your response is what we get when someone has no answer. Just supply a real answer instead.
Forget them. We've heard so many things in the last several years about how AMD's new chips are going to make the difference that I've lost count. The only thing keeping AMD going now is the chips they're selling to the console game manufacturers, and those are anything but leading-edge chips. AMD has failed in every other category, and I don't see that changing.
What we're seeing is that the newer category of 2-in-1s is selling. Windows tablets overall are not selling well, even though Surface Pro sales are increasing, but those 2-in-1s are seeing sales increases.
Just like anything else in this business, it's complex, and anyone who thinks they have a simple answer is wrong.
While everything is "complex", the fact that Intel's profits are going to go away (no matter how it happens) is not really in dispute. The replacement is no replacement for the lost profits, even if they got the same revenue out of it (which is doubtful).
AMD will hurt Intel in the mid-to-low range, and changing tastes will hurt Intel at the high end. AMD doesn't need to eat all of Intel's sales to hurt them; gaining even 30% of Intel's mid-range would hurt too.
It's basically a tag team of many factors that's the cause of their problems.
The low end will be occupied by ARM based tablets and things like Chromebooks.
There is not enough of a market in high-end laptops from vendors other than Apple to sustain Intel's profits, and Apple will move at least part of its product line away from them (the low-end laptops, the high-end tablets).
Apple's huge volume drives process upgrades at TSMC and Samsung; Intel right now has to keep doing all of that for its own chips. If sales falter at the high end, the process advantage that has already been slipping might disappear pretty fast.
As I said, Intel in 5 years will likely be a shadow of itself.
Fighting for margins is a constant battle, and Intel has, almost miraculously, been able to do it for 45 years with a product that's very hard to really differentiate (they have been masters of marketing as much as of tech).
Apple has been in the same battle for profits for nearly as long; it is helped by being able to differentiate its products a bit more and by being a bit more diversified. People bitch about Apple's pricing, but keeping prices up, and keeping the perceived value up, is very hard; any step towards commodity is what leads a Fortune 500 company to eventual oblivion.
We've been hearing this about AMD for years now. Even when Intel was lagging - when NetBurst was struggling and the Athlon was king - AMD couldn't break Intel's hold. Since then, it's been nothing but downhill for AMD. There is no way they will ever get 30% of Intel's mid-range share.
I agree with you about the low end ARM chip advantage in tablets. I've always said that.
We just have to hope that this terrible quarter for Apple hardware sales doesn't define their future.
Re: "Microsoft's UWP hasn't exactly seen enthusiastic adoption either, with legacy software still bound to mouse driven desktop PCs running x86, and mobile devices running Windows 10 so commercially irrelevant that developers have little reason to bother targeting them. "
I would not count out UWP at this point. Microsoft is betting the farm on UWP and on cross-platform support for all of their frameworks and applications that aren't running in a datacenter.
Good. Now Intel can focus on Core M, i5 and i7. I don't think a Core M will ever be viable in a smartphone - leave that segment to ARM - but it could be successful in tablets, ultra-thin laptops, etc.
Why, maybe dictate it for me; you're already doing that.
Again, you seem to have nothing to say. Your response is what we get when someone has no answer. Just supply a real answer instead.
I don't like your insinuations; keep them to yourself (and who is 'we'?). It's beyond doubt that Intel moved heaven and earth to get its processors into the iPad - read the SJ biography.
Well, Apple is moving to Intel modems. The new phones should have at least 40% of units using Intel modems. It doesn't matter that Intel bought Infineon; that's long in the past, and Intel has been making major improvements to the line. If Apple thinks they're good enough, then they likely are, or Apple wouldn't move to them.
Maybe Apple is investigating making its own modem chips, but that will take years, and the first ones out won't be great.
Hmm, I don't think so. Apple has had lots of problems because of modem quality issues, and that's not new at all; I'd expect them to have an alternative already. I don't expect Infineon's code to have changed a lot - code bases like that don't move fast - and it will take Intel years to fix what's wrong; I don't expect them to be ready. Apple does know what it's doing, of course, but judging a code base like that is very difficult. We will see how happy Apple is with Intel (née Infineon) modems in its products.
You might not think so, but a number of publications have already reported this as happening, and Qualcomm has stated that a major customer, assumed to be Apple, has cut orders by almost 50%. You need to stop talking about something that isn't really true. I don't know where you get the idea that Intel's modems are the same as the old Infineon ones.
Perhaps writers should stop making Apple the cause of everything that happens. Even before the slump Apple had this last quarter, iPhones made up just 16.3% of worldwide smartphone sales. Intel didn't look at that and decide to discontinue its products in the mobile area. They looked at Android device sales and the lack of movement there in the direction of x86 SoCs.
They also looked at the continuing drop in sales of Windows Phone, an area in which it looked as though x86 had a chance. And as the very first post here said, the M-series chips are what Intel is pushing for tablets, such as the lower-end Surface models, and lightweight notebooks. No doubt, if there is interest, we'll see an SoC with an M-series chip.
It most certainly is an Apple story!
Apple was successful, derived most of the profits in mobile, and with those created an SoC line around ARM that still defines the high end of mobile. I don't know what factors Intel looked at, but I'd guess they looked at their pathetic penetration in smartphones and tablets, and the fact that they couldn't compete.
Intel made a string of bad decisions beginning a decade ago, and did not or could not acknowledge that there was a mobile market for anything other than x86 processors. Apple's iPhone was the prototypical new device, followed by the iPad, that did not require x86. And since Intel had sold XScale to Marvell, Intel was committed to building a competitive low-power processor family, Atom, that ultimately could not compete on price, power efficiency or performance with the pace of ARM's advance - the choice of almost 100% of the mobile market, including ubiquitous Android devices.
Apple's innovation drove the "ARM" race well past what Atom was able to deliver, so yeah, this is correctly an Apple story. Frankly, Intel's vaunted technical abilities haven't been a factor in that; they picked the wrong horse, same as MS.
Sucks to be Intel, but they have never been successful with any architecture other than x86 so why would we even expect them to be competitive in mobile?
As for Core M, it's just going to drive x86 hybrids, and how's that working out?
Core M is working out pretty well. It's one reason why Intel is abandoning some of its Atom lines, though not all. Core M is a 5-watt part, on average, and it's being used where Intel intended the high-end Atom parts to be used. But Atom was never really that competitive in smartphones. Hybrid sales are the bright spot in Windows computer sales; it's the only area that's up.
But, as with most everything else, it's not that simple. Yes, Apple has driven the performance increase in ARM SoCs, but eventually that would have happened anyway; it would just have taken more time. Development would have slowed all around - that is, smartphone sales would have been much lower. And while ARM development would have been slower in a much smaller smartphone market, so would Intel's own efforts in this area. Most things would have ended up in the same relative positions. It's easy to forget this.
Growth in one area spurs growth in another. The growth of higher-performance ARM spurred the development and growth of Atom, but Intel was too late to the game here. At the time they sold off their own ARM chip division, phone sales were much slower, and few were making money on ARM chips. It seemed like a good idea at the time.
I don't know about special pipeline scheduling or special pipelines compared to other ARM processors, but Apple's Ax chips are very efficient (like other ARM cores) because they are designed from the ground up to be so. The step to 64-bit is the most recent and did change a lot - more registers, etc. - and my guess is that pipelining behavior changed as well...
Intel chips have one huge disadvantage compared to ARM (and other suitable processors): they use a RISC-like core (like ARM) plus a hardware translation layer (unlike ARM) that translates x86 code into the internal RISC equivalent, which makes predicting pipeline behavior from x86 code a nightmare. ARM has no such problem, and a good compiler can lay out the code so that pipelining goes smoothly, which is impossible (or at least much more difficult) for x86 code.
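To make that concrete, here is a rough toy sketch in Python (the instruction names and micro-op counts are made up; this is nothing like a real decoder) of why splitting complex instructions into a variable number of internal micro-ops makes it hard to schedule the real pipeline from the original code:

# Toy model: a CISC-style instruction expands into a variable number of
# internal micro-ops, so a compiler looking only at the original instruction
# stream cannot know how the real pipeline will be filled.
MICRO_OP_TABLE = {
    "add reg, reg": ["alu_add"],                    # simple ALU op: 1 micro-op
    "add reg, mem": ["load", "alu_add"],            # load-op: 2 micro-ops
    "add mem, reg": ["load", "alu_add", "store"],   # read-modify-write: 3
    "push reg":     ["alu_sub_sp", "store"],        # implicit stack update: 2
}

def decode(instructions):
    """Expand each 'x86-like' instruction into its internal micro-ops."""
    micro_ops = []
    for inst in instructions:
        micro_ops.extend(MICRO_OP_TABLE.get(inst, ["unknown"]))
    return micro_ops

program = ["add reg, mem", "push reg", "add reg, reg"]
print(decode(program))   # 3 visible instructions become 5 micro-ops
# The mix depends on operand types, which is why scheduling from x86 code is
# so hard, while an ARM compiler schedules the instructions the core runs.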
Maybe Intel doesn't know what's going on because compiler technology isn't their strong point? (Yes, I know Intel officially has the most capable compiler; it could very well be that they do know the problem but cannot fix it, for the reason I gave.)
Edit: Intel tried much earlier to get rid of its ancient x86 architecture by defining a 64-bit architecture (IA-64/Itanium) without backward compatibility, but that failed miserably and they had to follow AMD into 64-bit land - which included 32-bit x86 compatibility... (rats)
So why stick to the x86 instruction set? Couldn't they work with an open-source compiler like LLVM to bypass x86 code and compile straight to their internal RISC instruction set? Get control over the pipeline that way.
I know I'm oversimplifying here, but there really are space limits. I don't want a post as long as an article.
An instruction set is very close to what we call a chip "architecture". You can design the hardware on the chip differently, but for software compatibility it must conform to the instruction set. This is why it's so difficult to emulate across different chip architectures. To a large extent, the instruction set determines the way the hardware works. So all x86 chips work the same way, but are organized differently.
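A crude way to see why cross-architecture emulation is so expensive (a purely illustrative Python sketch of a made-up three-instruction machine, nothing to do with any real ISA): every 'foreign' instruction has to be fetched, decoded and executed in software instead of running natively.

# Toy interpreter for a made-up 3-instruction machine: each guest instruction
# costs many host operations instead of one native instruction.
def emulate(program):
    regs = {"r0": 0, "r1": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]      # fetch + decode, done in software
        if op == "load_imm":         # load_imm reg, value
            regs[args[0]] = args[1]
        elif op == "add":            # add dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "halt":
            break
        pc += 1                      # software-managed program counter
    return regs

print(emulate([("load_imm", "r0", 2),
               ("load_imm", "r1", 40),
               ("add", "r0", "r1"),
               ("halt",)]))          # {'r0': 42, 'r1': 40}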
When Apple came out with the A7, they were using the ARM instruction set and architecture that the company, ARM, provided. Apple was first because they decided they needed 64 bits when others hadn't decided that yet.
A problem that some here aren't aware of, or simply don't want to acknowledge, is that unlike ARM, x86 needs backwards compatibility with custom software that companies and governments have written, going back decades. While Intel eliminated 8-bit code some years ago, their chips still need to run 16-bit code well. When ARM went to 64 bits, they dropped everything below 32 bits, which allowed them to redo the architecture so that 32-bit and 64-bit code run concurrently, and equally well.
This is a big problem for Intel, and it's a problem they can't simply shrug off. ARM chip manufacturers, device makers and OS developers don't have these problems; no one has cared if apps written five or more years ago are obsoleted. Now that situation is beginning to change, but those apps aren't nearly as large and complex as the x86 apps that often run an entire company. Maybe someday.
Another thing that hinders x86 is that it's a very well-developed technology; all the low-hanging fruit has been picked. ARM is still in the middle of its development, so increases in performance are easier to come by. That will change. As all companies bump into the laws of physics in a few years, there will be few advantages for anyone. Intel still has the best process technology; whether they will be able to say that in five or more years is hard to say. I believe that as chip process technology stalls at about the 7nm node, eventually everyone will bump into each other. That's Intel's real worry.
Network security "appliances" and things like that should just move to ARM-based processors - a lot of network "appliances" run on a Linux base anyway.
Lots of them already have. Incidentally, Linux and ARM have nothing intrinsically to do with each other. It is merely that UNIX - and Linux - workstations, mainframes and servers used to run on high-performance RISC CPUs, because Intel and AMD did not make high-performance CPUs, preferring instead to focus on the more lucrative general-purpose CPU market (desktops, laptops and later smaller servers for Windows Server and SQL Server). ARM just happens to be one of many RISC-type CPUs, along with MIPS (used in most second-gen gaming consoles as well as graphics workstations), Sun SPARC and IBM Power (workstations). ARM was used for mobile primarily because it used less power - and is generally cheaper - than other architectures, but the tradeoff is that it was much less powerful.

Which is an interesting contradiction: Linux and UNIX ran on RISC-based servers in the 80s, 90s and 00s because RISC was more powerful than x86, and now UNIX- and Linux-derived mobile/IoT operating systems like iOS, Android and Chrome OS (and Tizen, Ubuntu Mobile, Firefox OS and LG's webOS) run on RISC-based devices precisely because they are less power-hungry - and less powerful - than x86.
Bottom line: Linux works - and works well - on practically any architecture, including even 30-year-old - and possibly older - mainframes that have been repurposed. So there is no reason to favor ARM or disfavor x86 on any hardware that will run Linux; the decision has to be made on things like cost, scalability, power, etc. If your network security appliance runs a lightweight application and only needs adequate speed for a small network, then ARM is fine. But if the security application is heavy-duty, high-speed, high-bandwidth or serving a large network, then you are going to need a server-class x86 or Oracle/IBM RISC CPU.
There are no true RISC or CISC chips around anymore, and there haven't been for a long time. Every major processor family today is a combination of the two.
Again, you seem to have nothing to say. Your response is what we get when someone has no answer. Just supply a real answer instead.
I don't like your insinuations; keep them to yourself (and who is 'we'?). It's beyond doubt that Intel moved heaven and earth to get its processors into the iPad - read the SJ biography.
Oh please, stop making things up. When you present some real information to me here, in your responses, I'll happily read it.
Hmm, I don't think so. Apple has had lots of problems because of modem quality issues, and that's not new at all; I'd expect them to have an alternative already. I don't expect Infineon's code to have changed a lot - code bases like that don't move fast - and it will take Intel years to fix what's wrong; I don't expect them to be ready. Apple does know what it's doing, of course, but judging a code base like that is very difficult. We will see how happy Apple is with Intel (née Infineon) modems in its products.
You might not think so, but a number of publications have already reported this as happening, and Qualcomm has stated that a major customer, assumed to be Apple, has cut orders by almost 50%. You need to stop talking about something that isn't really true. I don't know where you get the idea that Intel's modems are the same as the old Infineon ones.
You don't know that either, so stop talking about it yourself. But my reasoning is sound: code bases like that have a long shelf life, and if Intel could make the modems themselves, buying Infineon's modem division would make no sense. I'm talking from experience and I know a thing or two about this, but I never stated it as fact (otherwise I would have said so). It's my opinion and I'll voice it when I want.
I don't like your insinuations; keep them to yourself (and who is 'we'?). It's beyond doubt that Intel moved heaven and earth to get its processors into the iPad - read the SJ biography.
Oh please, stop making things up. When you present some real information to me here, in your responses, I'll happily read it.
I don't. You seem to have problems with reality; I cannot help you with that.
I don't know about special pipeline scheduling or special pipelines compared to other ARM processors, but Apple's Ax chips are very efficient (like other ARM cores) because they are designed from the ground up to be so. The step to 64-bit is the most recent and did change a lot - more registers, etc. - and my guess is that pipelining behavior changed as well...
Intel chips have one huge disadvantage compared to ARM (and other suitable processors): they use a RISC-like core (like ARM) plus a hardware translation layer (unlike ARM) that translates x86 code into the internal RISC equivalent, which makes predicting pipeline behavior from x86 code a nightmare. ARM has no such problem, and a good compiler can lay out the code so that pipelining goes smoothly, which is impossible (or at least much more difficult) for x86 code.
Maybe Intel doesn't know what's going on because compiler technology isn't their strong point? (Yes, I know Intel officially has the most capable compiler; it could very well be that they do know the problem but cannot fix it, for the reason I gave.)
Edit: Intel tried much earlier to get rid of its ancient x86 architecture by defining a 64-bit architecture (IA-64/Itanium) without backward compatibility, but that failed miserably and they had to follow AMD into 64-bit land - which included 32-bit x86 compatibility... (rats)
So why stick to the x86 instruction set? Couldn't they work with an open-source compiler like LLVM to bypass x86 code and compile straight to their internal RISC instruction set? Get control over the pipeline that way.
When you have the source code of an application you can compile directly to RISC code, but the problem is that most sources are not open and you have to wait for the company that owns them to recompile; it could also very well be that the company no longer exists.
Without the source, you can translate the binaries of an existing program ahead of time (instead of on-chip, on the fly, as it's done now) and maybe improve on the pipeline prediction (and the emulation), because it is possible to spend a lot of time on the translation; the system can cache the result somewhere on the file system (as a shadow program) so it can be executed directly the next time.
But the resulting RISC code is still not as optimal as it could be, because the translation is based only on low-level code; high-level code information is also needed to get the best out of pipelining.
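Roughly what that cached ahead-of-time translation could look like, as a bare-bones Python sketch (the translate step is just a stub and the cache location is a made-up placeholder):

import hashlib, os

CACHE_DIR = os.path.expanduser("~/.aot_cache")    # placeholder location

def translate(x86_binary: bytes) -> bytes:
    """Stub for the expensive offline x86 -> native-RISC translation step."""
    return b"TRANSLATED:" + x86_binary            # real work would go here

def load_translated(path: str) -> bytes:
    """Return the translated 'shadow program', reusing the cache if present."""
    with open(path, "rb") as f:
        binary = f.read()
    key = hashlib.sha256(binary).hexdigest()      # keyed on the binary contents
    os.makedirs(CACHE_DIR, exist_ok=True)
    cached = os.path.join(CACHE_DIR, key)
    if os.path.exists(cached):                    # later runs: skip translation
        with open(cached, "rb") as f:
            return f.read()
    native = translate(binary)                    # first run: pay the cost once
    with open(cached, "wb") as f:
        f.write(native)
    return native

# load_translated("/path/to/old_program") would translate on the first call
# and reuse the cached shadow program on every call after that.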