I want to discuss your answers further -- you appear to have hardware expertise that I lack. I don't want to challenge your statements; rather, I want to understand how they affect the computing world as I see it. My questions are interspersed.
Originally Posted by d-range
I would say yes: a midrange to high-end Intel chip could be considered worth 4-8 times compared to a midrange to high-end ARM chip, and no: a 4-ARM motherboard (8 cores) would not compare favourably even to a quad-core i7 setup. 64-bit is completely irrelevant to performance, but ARM will have to go there anyway to be able to support systems with 4GB+ of memory.
I realize that 64-bit has little or no effect on hardware performance. But it can have a significant effect on OS and app performance (paging memory, video rendering, parallel operation, etc.). Based on the work being done by the user, 64-bit and additional RAM can affect the perceived power and speed of the "hardware".
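The 4GB figure in the quote above follows directly from pointer width; a quick back-of-the-envelope check (purely illustrative):

```python
# A 32-bit pointer can address at most 2**32 distinct bytes,
# which is why 32-bit ARM tops out at 4 GiB of directly
# addressable memory and 64-bit becomes necessary beyond that.
max_bytes_32bit = 2**32
print(max_bytes_32bit)                      # 4294967296
print(max_bytes_32bit // (1024**3), "GiB")  # 4 GiB
```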
As much as I like ARM chips and the clean architectural design they are built on, they are simply no match for your typical desktop x86 chip. Not even nearly. Trying to make up for this by putting multiple ARM CPUs on the same motherboard does not help much, because many tasks don't scale very well under parallel execution. It would also kind of defeat the purpose of using ARM in the first place, as 4 dual-core ARM CPUs in a single system would use more power than an Intel CPU anyway.
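The scaling limit mentioned in the quote is usually quantified by Amdahl's law: if only part of a task parallelizes, the serial remainder caps the speedup no matter how many cores you add. A minimal sketch (the 75% parallel fraction here is an arbitrary illustrative assumption, not a measured figure):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Theoretical speedup when only part of a task parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# With 75% of the work parallelizable, going from 1 core to 8
# yields less than 3x: the serial 25% dominates.
for cores in (1, 2, 4, 8):
    print(cores, round(amdahl_speedup(0.75, cores), 2))
```

This is why a board full of slow ARM cores can still lose to a couple of fast x86 cores on everyday workloads.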
I think I knew the answer to that, before I asked it.
However, Apple has done a lot of work to make their OS(es) and apps (including apps by 3rd-party developers):
-- largely independent/abstracted from the underlying hardware
-- to use OpenCL and GCD wherever possible
-- to exploit parallelism using any available CPU and GPU cores
-- for lack of a better phrase, "distributed processing"
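GCD itself is Apple's C/Objective-C API, but the pattern it encourages -- submit independent work items to a queue and let the runtime spread them over whatever cores exist -- is language-neutral. A rough analogy using Python's standard library, purely to illustrate the idea (the `render_tile` task is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile_id):
    """Stand-in for one independent chunk of work (hypothetical task)."""
    return sum(i * i for i in range(10_000)) + tile_id

# Like dispatching blocks to a concurrent GCD queue: the executor
# schedules the submitted tasks across a pool of workers, and code
# written this way needs no changes when more cores appear.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_tile, range(8)))

print(results)
```

The point for the argument above: apps written against such an abstraction don't care whether the cores underneath are x86 or ARM, or even in the same box.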
If Apple has done its job well, I believe that a power-computing solution, for the near future, would be a series of "compute boxes" daisy-chained on a Thunderbolt cable along with RAID storage, peripheral docks, wireless stations, and displays.
These "compute boxes" would consist of:
-- enough SSD to run a minimal OS
-- CPUs and GPUs
-- an internal power supply
-- a fan if needed
-- small packaging like the Mac Mini or AppleTV 2
The theory is that as your compute needs grow, you just add another "compute box" to the daisy chain.
These "compute boxes" could contain whatever CPU and GPU architecture that provided the required price/performance.
Anyway, the question you should ask yourself is not whether ARM chips can rival Intel chips in performance, but whether they even have to in order to be successful: we reached a level of computational capability sufficient for nearly everything the typical user needs long ago. For many use cases, a fast dual- or quad-core ARM CPU would be more than fast enough.
Yes, I believe that the iPad supports that conclusion.
Take the Intel Atom for example: compared to current dual-core Cortex-A9 ARM chips, it offers about the same level of performance, at somewhat higher power consumption. Many netbooks and low-end PCs have been sold running Atoms, and everyone has accepted them as 'fast enough', mainly because of the brilliant marketing Intel pulled off, selling years-old, repackaged technology as if it were something new, slapping a fancy, modern-sounding name onto it. Sure, you are not going to do any gaming on an Atom, or transcode videos, but for basic home/office use, browsing the web, that kind of stuff, it's more than sufficient. That said, compared to a Sandy Bridge i3/i5/i7, or even to a 5-year-old Core 2 Duo, the performance you get from an Atom is abysmal, laughable even.
Personally I don't think ARM will ever rival Intel/AMD for serious computing, but I can see ARM chips ending up in low-end/cheap computers and laptops in the future. If Intel is worried about anything, it is not worried about losing their lead in high-end or even mid-range, but about losing the low-end, and not succeeding in gaining a share of the mobile market.
The only problem I have with your last paragraph is the premise that ARM chips will end up in low-end, cheap computers and laptops.
As I understand Windows 8, in order to run legacy x86 apps the device will require an x86 CPU. This would appear to eliminate the use of ARM in low-end, cheap computers and laptops.
Further, developers might be discouraged from rewriting their x86 apps for Metro/ARM because of the disincentive of paying MS 30% for the privilege.
I have no problem with the curated Metro store or the 30%...
But I think it is a chicken/egg thing -- without a lot of Metro apps there won't be any Metro tablets (or low-end, cheap computers and laptops), and without the Metro tablets et al., there won't be any incentive to port x86 apps to Metro/ARM.
Finally, I don't know this, but based on past performance, I suspect it is true:
Say there is a breakthrough and a new computer architecture suddenly arrives on the scene. Apple is in a good position to migrate their OSes and apps, natively, to exploit that new platform. And through something like Rosetta, existing iOS, OS X, and Windows apps could run at normal speed in emulation. Third-party iOS and OS X apps could run natively with a simple recompile.
Apple has bet the farm (and won) on this kind of revolutionary migration -- no other OS or hardware manufacturer has.