CPU architectures have always been one of my favorite tech topics, by the way. I graduated university on reconfigurable VLIW processor architectures, after that I worked in the EDA industry (software for IC design and production), and now I work for the world's largest supplier of optical lithography gear, so I think I can safely say I've always been pretty close to the fire ;-)
There really is only one way a 64-bit CPU would significantly affect performance (measurable or perceived), and that is when the combined memory requirements of all the tasks running on the system exceed 4 GB and there is no way to stuff more RAM into the machine. I think you can safely say this only becomes an issue if you are a power user, but it's an issue anyway.
That said, there are ways to address more than 4 GB of RAM on 32-bit CPUs, such as Physical Address Extension (PAE) on x86, which widens the physical address space while each individual process still lives within its own 4 GB virtual address space. I'm pretty sure something similar could be included in future 32-bit ARM designs; in fact, ARM's announced Cortex-A15 includes a comparable 40-bit Large Physical Address Extension.
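To see what that per-process ceiling means for a single program, here's a minimal sketch (my own illustration, assuming a toolchain that can target both widths, e.g. cc -m32 vs cc -m64):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void) {
    unsigned long long want = 5ULL << 30;  /* 5 GB */
    printf("pointers here are %zu bits wide\n", sizeof(void *) * 8);
    if (want > SIZE_MAX) {
        /* On a 32-bit build SIZE_MAX is about 4 GB, so a single 5 GB
           allocation can't even be expressed, let alone satisfied.
           PAE doesn't change this: it gives the OS more physical RAM
           to hand out, but each process keeps its 32-bit view. */
        printf("5 GB does not fit in this address space\n");
    } else {
        void *p = malloc((size_t)want);  /* plausible on a 64-bit build */
        printf("malloc(5 GB) %s\n", p ? "succeeded" : "failed");
        free(p);
    }
    return 0;
}
```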
Some very specific tasks will benefit from 64-bit registers, by the way, but I honestly don't think the lack of 64-bit ARM designs is holding back the platform. Sure enough, when ARM goes 64-bit it will make another big step up in performance, but it won't be because of the 64-bitness. Just progress as usual.
This is definitely true. However, for now, Apple's OpenCL and GCD efforts are not really paying off yet. There aren't a whole lot of applications using OpenCL, and the ones that do are relatively specific (video encoding, some forms of scientific computing, etc.). GCD is mainly a tool to make it easier for developers to write code that benefits from multi-core/multi-processor setups, but it doesn't enable anything that wasn't already possible otherwise. Intel, for example, has its own Threading Building Blocks library, which is meant to solve more or less the same problem.
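To make that concrete, here's a minimal sketch (my own toy example, not Apple's code) of what GCD buys you: dispatch_apply fans a loop out across all cores in a few lines of plain C, much like TBB's parallel_for does on the other side of the fence.

```c
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t n = 8;
    double *results = malloc(n * sizeof *results);

    /* dispatch_apply spreads the iterations across all available cores
       and returns only when the last block has finished -- no manual
       thread creation or joining required. */
    dispatch_apply(n, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t i) {
        results[i] = (double)i * i;  /* stand-in for real per-chunk work */
    });

    for (size_t i = 0; i < n; i++)
        printf("chunk %zu -> %.0f\n", i, results[i]);
    free(results);
    return 0;
}
```

Convenient, certainly, but nothing you couldn't have written yourself with pthreads, which is the point: GCD lowers the barrier rather than raising the ceiling.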
Very interesting thought. I was about to write how I personally don't share the same vision, but now that I come to think of it, what you describe would definitely have some interesting applications, and we may actually see something similar in the future.
I don't think it would be something people would have at home, though; it makes much more sense in the context of the distributed computing that big-iron compute clusters are used for now. The idea of clustering computers and adding or removing nodes to scale total computational capacity is not new, of course, but with a ridiculously fast port like optical Thunderbolt (100 Gbit/s) you could imagine 'miniature' compute clusters that don't need racks of big computers and expensive network infrastructure, yet largely eliminate the difficulties compute clusters have. In a traditional cluster the nodes don't share memory, so every job has to be chopped up and distributed, and the results have to be assembled afterwards. The network infrastructure between the nodes therefore becomes a huge complication and bottleneck, to the extent that it rules out many interesting use cases where the communication overhead far outweighs the computational gains. Thunderbolt could greatly expand the set of viable distributed computing use cases, because it reduces that communication overhead by a very large factor.
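A quick back-of-envelope model (my own toy numbers, not a benchmark) shows why the link speed matters so much:

```c
#include <stdio.h>

/* Toy model: a job worth `work_secs` of single-core compute is split
   evenly across `nodes`, but each node must first receive `bytes` of
   input over a link of `link_gbps` gigabits per second. */
static double speedup(double work_secs, int nodes, double bytes, double link_gbps) {
    double transfer_secs = (bytes * 8.0) / (link_gbps * 1e9);
    return work_secs / (work_secs / nodes + transfer_secs);
}

int main(void) {
    /* 60 s of compute, 4 nodes, 1 GB of input per node: */
    printf("GigE (1 Gb/s):       %.2fx\n", speedup(60, 4, 1e9, 1.0));   /* ~2.6x */
    printf("Copper TB (10 Gb/s): %.2fx\n", speedup(60, 4, 1e9, 10.0));  /* ~3.8x */
    printf("Optical (100 Gb/s):  %.2fx\n", speedup(60, 4, 1e9, 100.0)); /* ~4.0x */
    return 0;
}
```

The faster the link, the closer you get to the ideal 4x, and the smaller the jobs for which distributing the work is still worth the trouble.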
That said, stuff like this only makes sense for computing tasks that take a long time and can easily be chopped up into smaller pieces, or if you have many tasks running concurrently. By 'running concurrently' I don't mean just having 20 applications sitting on your desktop, because usually only one or two will actually be using the CPU, with the rest sitting idle most of the time. I can't really imagine a whole lot of garden-variety use cases that require massive parallel CPU power, except if you like doing video editing at home and such.
For speeding up single-, dual-, quad- or even 8-threaded applications, a CPU with multiple cores/multiple hardware threads will always be faster than a cluster of compute boxes, no matter how fast the interconnect is. Nothing beats on-die scheduling and execution of tasks, with a direct data path to shared system memory. Even when I try my best to bring my quad-core i7 iMac to its knees, doing large multi-process Xcode compiles with a Linux VM running in the background while playing a video, the CPU is hardly the bottleneck. And next year Ivy Bridge will provide a comparable performance level at a TDP of around 40 watts, even in the cheaper entry-level parts.
Of course nobody knows what kind of applications we might see in the future that would map favorably onto a compute cluster, but my impression is that we already eliminated the need for faster CPUs for home/office use a few years ago. This is exactly why Intel should be worried about losing the 'low-end' and mobile side of computing: soon any ARM CPU will reach that baseline level of performance, and Intel will have a really hard time convincing people they need faster CPUs.
Personally I think tablets and possibly Chromebook-like devices (the 'low-end') will replace laptops for many tasks, with 'real' laptops/desktops marginalized for all the other, 'serious' computer tasks (the 'mid-range/high-end'). This will almost inevitably mean ARM for 'low-end' computing, x86 for everything else. So I'm not sure if the lack of backwards compatibility with x86 desktop Windows is really going to be a big deal in the future.
If I were in charge at Microsoft, by the way, I would restrict Windows 8 on ARM to Metro apps, and force all Metro apps to be universal binaries (i.e. ARM + x86 compatible). I don't see why anyone would want to run x86 desktop apps on mobile hardware.
Yes, there's no denying that. Apple has proven they know how to switch architectures and are not afraid to actually do it. I would not be surprised if at some point they released a MacBook with an ARM CPU in it: not one replacing the x86 CPU, but one added alongside it, letting you boot, or even switch on the fly, between ARM and x86 OS X depending on your computing needs. Imagine a MacBook Air that does 12 to 20 hours in 'ARM mode', but still has the performance of a fast laptop in 'x86 mode', using universal binaries and/or ARM emulation for interoperability.
I'm not sure we'll see any big breakthroughs in computer architectures anytime soon though. You'd expect at least some hints pointing in this direction in the form of research papers and such, but apart from quantum computers, I'm not aware of any such thing. But who knows what could happen...
Thanks for the considered answers... it really helps. That's one of the reasons I frequent sites like AI.
Sorry I didn't get back earlier to engage in a dialog and pick your brain a bit more...
But, I've been all over the FCPX upgrade today...
One thing I would have offered as a challenge is, to paraphrase an old N'awlins saying:
"Too much [compute power] is never enough".
You mentioned that the compute power isn't needed unless you are doing something like video editing.
Exactly!
The social consumer will take video with his iPhone or iPad, edit it, then publish it who knows where. Right now they are using iMovie. A year from now they will be using Final Cut Pro X on the iPad. The pieces are already in place to do that on the iPad -- only the UI of FCPX needs to be reworked for touch. My 16-year-old granddaughter can do more, faster, with FCPX than she ever could with iMovie.
That does not mean FCPX need be relegated to the prosumer. Today it is possible to capture live video on the iPad from a very expensive camera -- say, of a soccer goal or a touchdown pass. Then, literally within seconds, you can turn out a highlight video of the "event". You've seen Madden telestrate a few static screenshots of a play a few minutes after it happens. A Mac or the next iPad can do that faster and better, within seconds, and you will be able to highlight the video and run continuous or start-stop action anywhere during the play.
All it takes is a little [more] compute power on the Mac or iPad.
Some other pithy paraphrases:
C. Northcote Parkinson: "Work expands to fill the compute power available for its completion".
And one of my favorites from my days at IBM: "There's no substitute for cubic inches".
The quick and dirty short-term solution: make Apple pay more for their Intel chips.
The long-term solution: none.
Why is there no long-term solution? Because sooner or later (probably sooner) Apple will transition their MacBook Air line to ARM-based CPUs.
I highly doubt this; x86 is far too important right now for anything called a Mac. Rather, I see Apple coming out with a completely different set of machines to exploit ARM and an iOS variant.
Quote:
No, that won't hurt Intel much. Apple is just one of many Intel customers. But establishing a trend toward ARM among mobile device vendors will hurt Intel greatly. Even Microsoft is threatening to port Windows 8 Tablet (and Office) to ARM.
Beautiful to see Intel under pressure, isn't it?
Quote:
Apple designs their own ARM-based chips, which allows them to avoid the off-the-shelf pricing for Intel chips, and they're going to leverage that advantage. ARM-based chips give longer battery life and they cost Apple less. Those are huge advantages.
That is all true, but ARM-based chips also come with significant disadvantages.
Quote:
Transitioning the MacBook Air to ARM will give the consumer a better experience (longer battery life, cooler operation) and allow Apple to maintain their profit margins for another decade. It's a no brainer. It *will* happen.
Not with Mac OS X they won't. OS X is now a high-performance operating system that people regularly use to exploit the full potential of current Intel hardware. ARM would be a very noticeable step backwards in performance.
Quote:
Let's pre-empt a few arguments against an ARM-based MacBook Air:
1. "ARM-based chips don't have enough power"
The A6 (or A7) will have quad cores, which will provide enough CPU power to run OS X just as well as Intel's power-hungry, hot-running CPUs.
Even with quad cores you would be lucky to get a quarter of the performance of today's Intel chips.
Quote:
And sharing the processing workload with the GPU through OpenCL will help immensely, as it does now in OS X. There's no hurry. Apple can pick their spot. They can choose the most convenient time to transition to ARM-based chips.
I'm a big fan of OpenCL, but I'm also disappointed when people look at it as a replacement for conventional cores running everyday apps. OpenCL is only a benefit if the task handed to the GPU is a good fit for the GPU's structure.
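To illustrate what 'a good fit for the GPU's structure' means, here's a hypothetical kernel of my own (OpenCL's kernel language is a C dialect): thousands of work-items all running the same branch-free arithmetic on independent data, which is exactly what a GPU is built for.

```c
/* Hypothetical example: uniform brightness adjustment over a frame.
   One work-item per pixel, identical straight-line math, no branches,
   no pointer chasing -- an ideal fit for the GPU's wide SIMD lanes. */
__kernel void brighten(__global const float *in,
                       __global float *out,
                       const float gain)
{
    size_t i = get_global_id(0);
    out[i] = in[i] * gain;
}
```

Hand the GPU something branchy and pointer-chasing instead (walking a B-tree, say) and most of its ALUs sit idle while a conventional core runs rings around it.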
Quote:
2. "It would be too hard for Apple to port OS X from CISC to RISC"
Apple has already ported OS X from RISC (PowerPC) to CISC (Intel x86/x64). I wouldn't be surprised if Apple has maintained a RISC version of OS X since the Intel transition. And Apple has proven again and again that they can manage CPU and OS transitions brilliantly. Been there, done that. Oh, and if 3rd party developers (lookin' right atcha, Adobe) drag their feet, then Apple can and will develop their own apps to replace theirs.
Actually, we are in agreement here. iOS is basically Mac OS with a new GUI layer and some changes to process handling. However, the ease of getting Mac OS to run on ARM has nothing to do with what a bad idea it is. Basically you are moving back to 68040 or early PowerPC-class performance. If that!
Now, I'd be the first to agree that the extra cores, extra execution units, and even the GPU will help some. In fact, I can see very usable machines. But these will not be machines that can address the current Intel users already looking to make an upgrade. If you, or a prospective purchaser, aren't happy with current laptop performance, you won't be happy with ARM-based machines.
Quote:
3. "ARM would mean no Windows on MacBook Air"
Probably not, unless users are willing to put up with a horrendously slow hardware architecture emulator. Microsoft has done that in the past with Virtual PC, which ran Windows on PowerPC machines. But really, the vast majority of MacBook Air customers are consumers, and they don't care about running Windows native or emulated. The original MacBook Air fell into the "frequent flier and executive status symbol" category. Not the new one. The 11.6" MBA is Apple's entry-level laptop now.
This is a huge factor: without x86 support, Apple's laptop sales would crash. This is one of the reasons I suspect that the coming ARM machines will represent a new family of devices at Apple. The market simply isn't ready for non-x86 Macs.
Quote:
4. "But what about Office and all those other Windows apps?"
How many of you consumers out there are actually running Office on your MacBook Air? Hold up your hands!
I think you miss one huge factor here: it is the pro user that drives MacBook sales these days. VM technology is a wonderful thing.
Quote:
Yeah, didn't think so. Again, if Apple sees that users are waving flaming pitchforks and screaming for a particular 3rd party app that only runs on x86 (highly unlikely), then Apple could do their own version. For ARM. Or they could wait for a less-entrenched developer to do it for them and fill in the vacuum. Whatever.
You live in dreamland. Apple isn't going to write any of the specialized apps that are Windows-only.
Quote:
It's not "if" but "when." Apple will ship ARM-based MacBook Airs. It's just a matter of fitting it into their schedule.
I'd be extremely surprised if they called them MacBooks. As for fitting them into a schedule, they need better hardware than they have now. The A5 won't do the trick, and I'm not convinced the A6 will either.
Quote:
Oh, and there's one more thing. Let's say that the $300 million Intel lavished on their Ultrabook "initiative" actually works. Say it creates a huge market for Ultrabooks. Let's play along for a second and assume that a new "race to the bottom" will start at the high end of the Wintel laptop market. (And yes, I am the new King of Sweden!) Well, there will be the inevitable price wars between the Ultrabook makers. They'll cut corners everywhere, they'll try to nickel-and-dime each other off the low-margin cliff.
Let's imagine that in 2 years, all those Ultrabook competitors manage to bring the price down from $1000 to $800. They will have finally managed to undercut the MacBook Air price by 2013. Unfortunately for them, by that time Apple will have released an A7-based MacBook Air that allows them to maintain a 30% margin while selling the 11.6" model for $800 too. It'll be next-gen thin, next-gen light, next-gen cool, and the Wintel crowd will never be able to catch up. Not while they use Intel chips.
You are overly optimistic about ARM performance, and you understate the market's desire for Intel. I expect x86-based MacBooks to be around for a very long time. Still, I expect an ARM play from Apple; I just don't expect a straight-up move into MacBook territory.