ARM seen challenging Intel's notebook chip dominance by 2013 - Page 2

post #41 of 89
deleted
post #42 of 89
Intel has on its side the fact that Windows proper has been ported to lots of CPUs, including Itanium, PowerPC and the old DEC Alphas. And with Windows 8, ARM as well. But Windows doesn't gain any traction on alternate CPUs, and Microsoft quietly discontinues these versions to focus on x86. The result is that Intel wins by keeping x86 competitive. As long as they can keep some kind of x86 competitive with ARM on power/performance, it'll keep ARM from gaining momentum.

"Apple should pull the plug on the iPhone."

John C. Dvorak, 2007
post #43 of 89
Quote:
Originally Posted by mytdave View Post

Performance wise, sure, I imagine they'll be a strong competitor to Intel soon. The problem is that pesky x86 instruction set. Apple pulled a magic rabbit out of the proverbial hat with their transition to x86 by way of Rosetta. They could do it again should they choose to move to ARM for MacOSX. I don't think they will... [...]

Apple may not need a Rosetta-like emulation utility to get Windows 8 to run on ARM-based Macs.

If Windows-8-on-ARM takes off (and don't anybody hold your breath here) then legacy app developers will port their apps to the ARM architecture. And if Microsoft really wants that porting to happen in earnest, they'll update their IDE to make the transition as easy as Apple made it with Xcode. Again, don't hold your breath waiting for this.

But with or without Windows 8 emulation and/or native ARM-based Windows 8 apps on OS X, Apple absolutely must ship a MacBook Air with an ARM-based processor. There is no way Apple won't. Using ARM-based chips of their own design would extend Apple's component cost advantage, since they will no longer be buying Intel chips at off-the-shelf boutique prices. It will help Apple maintain their margins as hardware selling prices inevitably decline over the years.

Sure, the prosumers and pros who need to run Office and/or Adobe software suites would need to stick with Intel-based MacBook Pros. But Apple is aiming at the vast majority who don't run high-end productivity suites on their Macs. The MacBook Air has replaced the old plastic MacBook at the low end of Apple's laptop line. And the low end, the high-volume end of the spectrum, is where ARM chips will first appear in Macs.

In the fullness of time, if the Microsofts and Adobes of the world fail to migrate to ARM, there is nothing stopping Apple from building out iWork to compete more directly against Office. And Apple could snap up Pixelmator at any time and enhance it to compete against Photoshop. All available instantly through the Mac App Store at reasonable prices.

Sent from my iPhone Simulator

post #44 of 89
Quote:
Originally Posted by Snowdog65 View Post

Before anyone trots out Apple's transitions, remember that those were complete transitions of the product line. That is a very clear strategy - not simply having mixed CPU architecture releases in the market confusing everything - and the transitions included CPU emulation software, which ARM will be incapable of running adequately.

What a stupid distinction. There is absolutely no reason why Apple couldn't leverage their *existing* universal binary technology to support more than one architecture as a long-term strategy. With the App Store, ensuring the right binaries are always available, as well as providing a mechanism to "fix" missing binaries, is trivial. Heck, they could make it another condition of the App Store.
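
(To make the "right binaries" point concrete: a universal binary is just a Mach-O "fat" file whose header lists one slice per CPU architecture, which is all a store or installer needs to read in order to hand out or repair the correct code. Here's a rough, illustrative Python sketch of reading that header - the file path in the comment is hypothetical.)

import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic number of a Mach-O "fat" (universal) binary
CPU_NAMES = {7: "i386", 0x01000007: "x86_64", 12: "arm", 18: "ppc"}  # common cputype values

def fat_architectures(path):
    # Return the CPU architectures packed into a universal (fat) Mach-O file.
    with open(path, "rb") as f:
        magic, nfat_arch = struct.unpack(">II", f.read(8))
        if magic != FAT_MAGIC:
            return []  # thin binary: a single architecture, no fat header
        archs = []
        for _ in range(nfat_arch):
            # each fat_arch entry: cputype, cpusubtype, file offset, size, alignment
            cputype, _, _, _, _ = struct.unpack(">iiIII", f.read(20))
            archs.append(CPU_NAMES.get(cputype, hex(cputype)))
        return archs

# e.g. fat_architectures("/Applications/SomeApp.app/Contents/MacOS/SomeApp")
# could return ['x86_64', 'i386'] today - and could just as easily include 'arm'.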

Anyone banking on Apple staying "Intel only" purely because of momentum is being pretty myopic and bound to be surprised.

If Intel doesn't produce, I would be more surprised if Apple didn't go dual architecture. Don't think so? Just ask IBM...
post #45 of 89
Quote:
Originally Posted by Snowdog65 View Post

"Win8 compatible" needs to be in quotes.

Because being able to run an OS with essentially no actual compatible software isn't exactly going to leverage the installed base.

The same applies to Apple if they were to release an ARM laptop. What exactly would you run on it for software?

Neither company will be transitioning to ARM, so how would it really make sense to have a predominately x86 ecosystem with a few portables running ARM, with ostensibly a "compatible" OS that actually is NOT compatible with the software base.

That is just a mess of confusion. I can see Microsoft doing it, but not Apple.

Before anyone trots out Apple's transitions, remember that those were complete transitions of the product line. That is a very clear strategy - not simply having mixed CPU architecture releases in the market confusing everything - and the transitions included CPU emulation software, which ARM will be incapable of running adequately.

Apple does exactly this with Xcode, and there's no doubting MS can do (and is doing) the same thing with regard to ARM/x86 for Windows 8. It's proven that a universal development environment can be provided, and there's not that much that is significantly different between the OS X and iOS libraries. Most of the differences at the application layer relate to UI elements anyway. I don't see any reason to believe Apple could not deliver a functioning version of OS X running on ARM along with an updated Xcode for developers to add ARM support. It's not a click wizard, but there's no real technology obstacle.

In fact, based on its own history, Apple seems more likely to leave older apps behind if it wants a dual-target model. Microsoft has historically been less willing to give up backward compatibility, although ARM-based Windows 8 machines almost surely will not offer x86 compatibility due to performance.
post #46 of 89
I cannot help but think "déjà vu"!

Back in 1985, where I worked, PCs were viewed more as an addendum to our work environment, because everyone knew back then that if you needed to get "real" work done, you would use the mainframe. The single mainframe performed the "bread and butter" work (MRP, finance, etc.) required for our company with 3 plants and over 1000 employees.

Today, those ARM iPads/iPhones/etc. are cute, but everyone knows that if you need "real" work done, you use an x86 machine (OS X/Linux/Windows).
In my opinion, like the mainframe, x86 machines will continue to be present in our world for purposes they serve well, but for the everyday user, an ARM machine may very well be all that they need. The days of understanding directory trees, hard drive volumes, etc. may become, in a relatively short time, irrelevant knowledge. (Do you care about directory structures on your iPad?) Computing may change a lot in the next couple of years for the average person.

Why not an ARM-based MacBook Air, running a version of iOS modified to use a keyboard and trackpad? The iPad is showing, or at least pointing to, the fact that most people do not need to be running an OS like OS X/Linux/Windows to get their work done. What they need are well-made, simple and cheaply priced applications. The OS is no longer the system, the applications are!
post #47 of 89
Quote:
Originally Posted by mytdave View Post

I would presume they'd tweak Rosetta to run in conjunction with LLVM and Clang.

Why bother? Native code is what gets you the performance - you might as well run Flash.

Just make ARM Macs Mac App Store only and let the App Store manage the binaries. Problem solved. And more elegantly than MS - not that it's hard to be.
post #48 of 89
Quote:
Originally Posted by C_For_Short View Post

Why not an ARM-based MacBook Air, running a version of iOS modified to use a keyboard and trackpad? The iPad is showing, or at least pointing to, the fact that most people do not need to be running an OS like OS X/Linux/Windows to get their work done. What they need are well-made, simple and cheaply priced applications. The OS is no longer the system, the applications are!

Well, you could try a typical task like selling your used gear on a selling site, for instance. It is ridiculously difficult to do this on an iOS device. You might need to upload your pictures/video to a site and search for and copy links into the browser. It is WAY too complicated with an iOS device. An iOS device is good for a lot of things, but it is NOT good for any real WORK.

It's a sidekick to have around the computer. This will be the case as long as the multitasking stays at its current level. I really hope that it gets better, e.g. so that you could upload pics/video and edit a page as you search for information. And where do you put the files you need to attach that aren't available on the net? iCloud isn't really there yet. I have used the iPad for about a year when I commute, but sometimes it just makes my heart bleed when I think how easy it would be to do something on my MacBook that isn't with me. If I know that I'm going to need the computer I'll take it instead, but there are always those things you can't predict needing it for.
post #49 of 89
Quote:
Originally Posted by Tallest Skil View Post

iOS doesn't work in a laptop setting. OS X doesn't work in a tablet setting.

You make the assumption that iOS is a static operating system. It isn't, and it could easily be extended for a laptop, just as it was extended from the iPhone into a tablet OS.

Now is that good or bad? I'd say it depends upon your needs. In a nutshell, you add a keyboard to the iPad. The clamshell is one way to do that, but don't forget about the early Radio Shack computers. You shouldn't think about Apple doing a direct attack with just another laptop.
post #50 of 89
Quote:
Originally Posted by Tallest Skil View Post

iOS doesn't work in a laptop setting. OS X doesn't work in a tablet setting.

Why couldn't iOS work in a laptop setting?

Picture an iPad Pro which has iOS and a touch screen and a keyboard. Just why wouldn't that work?

It might be very convenient for email and some other applications.
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
post #51 of 89
Quote:
Originally Posted by DocNo42 View Post

What a stupid distinction. There is absolutely no reason why Apple couldn't leverage their *existing* universal binary technology to support more than one architecture as a long-term strategy. With the App Store, ensuring the right binaries are always available, as well as providing a mechanism to "fix" missing binaries, is trivial. Heck, they could make it another condition of the App Store.

It is a very solid distinction. Apple actually had a transition strategy to move all future machines to an architecture that was offering equivalent or greater performance.

Distinction 1: It was actually a clear strategy. Apple won't be selling OS X Intel and OS X ARM computers at the same time. That is a confused muddle. Apple is all about streamlined product lines.

Distinction 2: Intel was at least equal in performance. ARM is behind by a factor of ten, so it is completely unsuitable to actually replace Intel.

ARM doesn't provide the power to replace x86, and it will be a confused mess coexisting.
post #52 of 89
Quote:
Originally Posted by jragosta View Post

However, software availability is overrated. If Microsoft were to port Windows Office to ARM, that (plus IE and a few other lesser apps) would be all that many low end laptop users need. I don't see them running Adobe Creative Suite on this type of machine.


I would say the opposite. It is underrated.

If software availability didn't matter, Linux (being both capable and FREE) would be a dominant OS.

Linux is mired at about 1% because everything you want to run is on Windows.

I would never buy a windows box that didn't run all my old software.
post #53 of 89
Quote:
Originally Posted by Snowdog65 View Post

I would say the opposite. It is underrated.

If software availability didn't matter, Linux (being both capable and FREE) would be a dominant OS.

Linux is mired at about 1% because everything you want to run is on Windows.

Not at all. It's mired at 1% because it's a POS from a UI perspective and offers no significant benefits for 99.9% of users.

Furthermore, Linux is a bad example. What I said is that once you have the basics, then additional software is irrelevant. For most users, that's MS Office, a web browser, an email client, and a media player. Linux doesn't meet even that basic definition.

Quote:
Originally Posted by Snowdog65 View Post

I would never buy a windows box that didn't run all my old software.

As soon as you become the spokesperson for the rest of the planet, I might care. Besides, I don't believe you. Are you really expecting me to believe that you've NEVER had to give up a piece of software when you upgraded to a new system? Never? BS.
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
post #54 of 89
Quote:
Originally Posted by jragosta View Post


As soon as you become the spokesperson for the rest of the planet, I might care. Besides, I don't believe you. Are you really expecting me to believe that you've NEVER had to give up a piece of software when you upgraded to a new system? Never? BS.

You are certainly no better spokesperson.

I don't remember any piece of software I was actually running ever being killed in an upgrade. Naturally there is probably software I was using on Win3.1 in 1993 (when I got my first Windows Box) that wouldn't run today, but I stopped using it before it was killed in an upgrade.

But two of my favorite games are quite old, and are currently on my Win7 system, and still working fine. They date from 1997 (Total Annihilation) and 2003 (NWN).

I would not upgrade to a new "Windows compatible" computer where none of my games worked - in fact, where none of my software worked.

Perhaps requiring "all" is strong; naturally, if an upgrade killed some trivial program I didn't care about, it wouldn't matter. But your position that all you need is MS Office is ridiculous.

Anyone who has used a computer for some time will likely have developed a library of applications that they are used to, unless we are talking about granny that just uses a web browser.

Why would anyone downgrade to the same OS but on an architecture that prevented the library of software they are used to, and already paid for, from running? That would be silly.

Note what a big deal Boot Camp/Parallels/VMware is for the Mac. Why? Because software availability matters. I mean, why are these things needed if all you need is MS Office?

Or why did the 68K-PPC and PPC-x86 transitions contain extensive CPU emulation capability? Could it be because software availability matters?

IIRC there are over a million applications for Windows. Seriously, all you need is Office? I wonder how all those other companies stayed in business selling software nobody needs.
post #55 of 89
Quote:
Originally Posted by jragosta View Post

Why couldn't iOS work in a laptop setting?

Multitouch vs. keyboard/mouse?

Quote:
Picture an iPad Pro which has iOS and a touch screen and a keyboard. Just why wouldn't that work?

Comments Jobs made (correctly) about fatigue from touching upright (vertical) screens for long periods of time?

Quote:
It might be very convenient for email and some other applications.

Add a keyboard to an iPad now?

I once thought an iOS based laptop made sense, but listening to Jobs and thinking about it more, nope - ain't gonna happen. Different paradigms for different form factors. Keyboard/mice just don't make sense with iOS.

Unless you are pitching Apple to release their version of the Kindle DX?!?
post #56 of 89
Quote:
Originally Posted by Snowdog65 View Post

It is a very solid distinction. Apple actually had a transition strategy to move all future machines to an architecture that was offering equivalent or greater performance.

Distinction 1: It was actually a clear strategy. Apple won't be selling OS X Intel and OS X ARM computers at the same time. That is a confused muddle. Apple is all about streamlined product lines.

Confusing in what way? With the Mac App Store - which didn't exist in the past two architecture transitions (don't forget 68K to PPC) - confusion over architectures could easily be negated and handled transparently.

Quote:
Distinction 2: Intel was at least equal in performance. ARM is behind by a factor of ten, so it is completely unsuitable to actually replace Intel.

Based on what criteria? For everyday tasks, an ARM laptop could easily run the vast majority of the applications I use today - all but things like Final Cut, Photoshop and Aperture. And heck, even Aperture would probably be decent since it's highly dependent on GPU and disk access.

Quote:
ARM doesn't provide the power to replace x86, and it will be a confused mess coexisting.

I don't understand your clinging to the concept that ARM has to replace x86 for this to potentially happen. I don't think anyone (at least anyone sane) is arguing that, nor do I think that has to be true. But if Intel doesn't deliver comparable performance per watt as ARM continues to improve, I can easily see Apple not only transitioning their lower-end laptops to ARM, but moving to a split architecture in a way that makes sense and is transparent to end users. And I think the Mac App Store is the vehicle to do so.

I really regret getting the i7 in my 13" MacBook Air. It wasn't really necessary for what I run on it, and the heat it pumps out when the CPU utilization goes up is crazy. Wish I would have read some more reviews before just doing the geek thing of "maxing it out"...

The ball is definitely in Intel's court - and it's obvious Intel has got that message loud and clear. I'd say they have Ivy Bridge and maybe one more generation beyond - after that, if Intel hasn't upped their game I'd say they are in real jeopardy of ceding even more of the critical mobile market to ARM.

I'll reiterate - assuming Apple won't consider a split architecture for Mac OS X based on your assumptions is short-sighted. Who knows, perhaps Intel will pull another one out. I didn't think they were going to recover from the Pentium 4 (NetBurst) fiasco, and the Israeli design team delivered Core and saved the day when it looked like AMD was going to sail past them. Intel has managed to keep the teetering x86 architecture wheezing along - can they keep it up? Time will tell. But don't think for a minute that Apple won't move if they think it's in their best interest to do so.
post #57 of 89
Quote:
Originally Posted by SockRolid View Post

Sure, the prosumers and pros who need to run Office and/or Adobe software suites would need to stick with Intel-based MacBook Pros. But Apple is aiming at the vast majority who don't run high-end productivity suites on their Macs. The MacBook Air has replaced the old plastic MacBook at the low end of Apple's laptop line. And the low end, the high-volume end of the spectrum, is where ARM chips will first appear in Macs.

In the fullness of time, if the Microsofts and Adobes of the world fail to migrate to ARM, there is nothing stopping Apple from building out iWork to compete more directly against Office. And Apple could snap up Pixelmator at any time and enhance it to compete against Photoshop. All available instantly through the Mac App Store at reasonable prices.

Right now you have basically that in the iPad. If your software needs are so light that you don't even need something like Office, the restrictions of the iPad could become much less of an issue in future revisions. Segmenting their laptop line seems weird. I'd imagine if they did go this route it would involve a complete transition as opposed to an iOS laptop. Anyway, at some point we'll probably see some reasonable alternatives to Office make it to iOS. Adobe has worked on projects for the iPad as well.

Pixelmator is cool, but I think they'd have a better chance of killing photoshop if developed by a different company. A lot of Apple's design time seems to go into simplicity rather than maximum control, and I hate what they've done with a lot of their software. I don't think it would really thrive as a side project for Apple.


Quote:
Originally Posted by Snowdog65 View Post

It is a very solid distinction. Apple actually had a transition strategy to move all future machines to an architecture that was offering equivalent or greater performance.

Distinction 1: It was actually a clear strategy. Apple won't be selling OS X Intel and OS X ARM computers at the same time. That is a confused muddle. Apple is all about streamlined product lines.

Distinction 2: Intel was at least equal in performance. ARM is behind by a factor of ten, so it is completely unsuitable to actually replace Intel.

ARM doesn't provide the power to replace x86, and it will be a confused mess coexisting.

Yeah, much of the hardware was fast enough that it could absorb some of the performance penalty of Rosetta. In the case of ARM, I would think Apple would wait for some of the heavier software to make its way onto iOS rather than wanting to run something comparable to Rosetta over it.

Quote:
Originally Posted by DocNo42 View Post


If Intel doesn't produce, I would be more surprised if Apple didn't go dual architecture. Don't think so? Just ask IBM...

You just sound really angry here. IBM and Intel aren't really that similar. With IBM it led to a terribly segmented line and hot, unreliable machines. There were a lot of design flaws on the Apple end too with the failure in cooling system design (recall all the photos with leaked coolant?), but overall it just didn't make sense over simple generic parts that provided fewer design restrictions. In the case of ARM it's still slower. I'm not sure why Apple would want to segment their laptop line especially if Intel comes through on their promises of cutting power consumption through 2013.
post #58 of 89
A lot of people say that the Ultrabooks are an attack on Apple and the MacBook Air, but I disagree. First, remember that Intel is inside the MacBook Air. Apple stands to benefit if Intel makes an even better microprocessor that was designed specifically for that type of product like the MBA. That's what Intel's Ultrabook strategy could do for Apple.

Just throwing a new idea into the mix, what if Intel's Ultrabook strategy is good for Apple?

Right now the MacBook Air is in a category all its own. It promises a lot of the features that people like in tablets (longer battery life, faster startup and connection, super portability) in a conventional design with a built-in keyboard and larger screen. If Intel focuses a lot of its energy on creating chips for this Ultrabook class of products, Apple is sure to benefit from using the same chips while differentiating its products in other areas. Apple doesn't compete on HW specs. It competes on design, software, and user experience. Of course the HW matters in delivering that experience, but that's not where Apple competes.

To switch from Intel, Apple would need to have a huge motivation, such as not being able to deliver the experience they want with the MBA. Apple is saying the MBA is the future of the notebook. If that's the case, then it sounds like Intel is helping by putting resources behind the Ultrabook class (of which the MBA is certainly one - even if it isn't branded as one). If Intel makes the Ultrabook class work (they said at their developer forum in Sept. that in the next year or so, power consumption on Intel's chips will be down by a factor of 10), Apple stands to benefit from using those chips in its MBA designs.

It wouldn't make sense to put a lot of resources into switching from Intel in the MBA - what do they stand to benefit? Apple isn't going to compete on the chip performance. They will just use the best on the market and make people want to buy their products for other reasons (design, software, experience). That's what they've done in the past, and it works really well.

Apple creates their own chips for iPhone/iPad because there isn't another company right now that could deliver the power and performance they are targeting in those products. The same motivation isn't there for the Mac line. Intel is a huge company (it's hard to design chips for general-purpose computing), and Apple can't compete in laptop chip design. It wouldn't align with their interests to put a huge amount of money and effort into making a product that at best wouldn't have the performance that Intel can deliver. They are better off putting the time and money into making great design, software, and user experience - to make the best Ultrabook, just as they now have the best notebooks and desktops.
post #59 of 89
Quote:
Originally Posted by al_bundy View Post

The cheapest laptops can be bought for $299. What's the point of buying ARM vs. an older Intel CPU? Once you add in all the other parts that go into a laptop, the savings vanish.

A less power-hungry CPU means that some of the other components can be scaled back or omitted. Fans, cooling profiles and battery size are just three examples.

The main reason for Intel's greater speed and higher power consumption is the larger L2 and L3 caches.
ARM designs, as they are now, are targeted at cost and size first and foremost, because the main use is in products where those needs are most important.
Multicore ARM CPUs with large eDRAM scratchpads would be able to reach just as great speeds and still hold their power advantage and thermal profile.
The ARM core is simply a leaner and thereby better design.
Performance per watt is not just something that concerns treehuggers and mobile users. It's pertinent for everybody who wants more computing power for their money.

Of course there are much better ways to do CPUs than either ARM's or Intel's, but those aren't industry standards, and sadly that's what counts above all else for the decision makers.
post #60 of 89
Quote:
Originally Posted by Smalltalk-80 View Post

A less power-hungry CPU means that some of the other components can be scaled back or omitted. Fans, cooling profiles and battery size are just three examples.

The main reason for Intel's greater speed and higher power consumption is the larger L2 and L3 caches.
ARM designs, as they are now, are targeted at cost and size first and foremost, because the main use is in products where those needs are most important.
Multicore ARM CPUs with large eDRAM scratchpads would be able to reach just as great speeds and still hold their power advantage and thermal profile.
The ARM core is simply a leaner and thereby better design.
Performance per watt is not just something that concerns treehuggers and mobile users. It's pertinent for everybody who wants more computing power for their money.

Of course there are much better ways to do CPUs than either ARM's or Intel's, but those aren't industry standards, and sadly that's what counts above all else for the decision makers.

Your post was interesting. Anyway, reduced power consumption is felt most at the extreme ends. In data centers it can cut a huge amount of electrical cost and help with implementing high-density solutions, which seem to be popular given the rise in popularity of virtualized hardware solutions. For mobile users it means better battery life and the ability to push MacBook Air-like devices as the future of laptops (I'm speculating on the fact that this is Apple here). I don't actually agree with anyone who's claimed Intel is competing with Apple by trying to push the Ultrabooks. They simply want to push development on low-voltage CPUs, and in the end that will help them hold onto Apple with improved technology (I agree with junctionscu on this).

On the workstation end, if other components follow the trend, it might eventually influence the form factors of these machines. Right now it seems like the most power-hungry parts going forward will be higher-end GPUs, and yeah, they do help for some things beyond gaming.
post #61 of 89
Quote:
Originally Posted by hmm View Post

Your post was interesting. Anyway, reduced power consumption is felt most at the extreme ends. In data centers it can cut a huge amount of electrical cost and help with implementing high-density solutions, which seem to be popular given the rise in popularity of virtualized hardware solutions. For mobile users it means better battery life and the ability to push MacBook Air-like devices as the future of laptops (I'm speculating on the fact that this is Apple here). I don't actually agree with anyone who's claimed Intel is competing with Apple by trying to push the Ultrabooks. They simply want to push development on low-voltage CPUs, and in the end that will help them hold onto Apple with improved technology (I agree with junctionscu on this).

On the workstation end, if other components follow the trend, it might eventually influence the form factors of these machines. Right now it seems like the most power-hungry parts going forward will be higher-end GPUs, and yeah, they do help for some things beyond gaming.

I think it is pretty obvious to most people that the GPU as a separate unit will disappear. The advantages, such as bandwidth, latency, cost, coherency in architecture, etc., are numerous. I.e., Sutherland's rule of the hardware wheel of reincarnation proves true once again.

Remember the old bipolar tech used in bit-slice and ECL circuits back in the 70s and early 80s. That tech killed CMOS in speed, as long as CMOS was treated as a cost-saving measure and not as a new approach that allowed you to cram a hell of a lot more components onto a die of similar size.
This is quite analogous to the discrepancy between Intel and ARM today.
Intel has a huge, kludgy, bolted-together legacy tech that is only held up by Moore's law and the cache set-up allowed by the former.
ARM, on the other hand, has tech with quite different roots, which has been forced to stay lean and clean.
I'm far from saying that ARM's approach is the be-all end-all; it's not. But it IS better than Intel's in the most important respects.
post #62 of 89
Quote:
Originally Posted by Snowdog65 View Post

It is a very solid distinction. Apple actually had a transition strategy to move all future machines to an architecture that was offering equivalent or greater performance.

Distinction 1: It was actually a clear strategy. Apple won't be selling OS X Intel and OS X ARM computers at the same time. That is a confused muddle. Apple is all about streamlined product lines.

Distinction 2: Intel was at least equal in performance. ARM is behind by a factor of ten, so it is completely unsuitable to actually replace Intel.

ARM doesn't provide the power to replace x86, and it will be a confused mess coexisting.

You're wrong. Apple already sells iOS and Mac OS X at the same time, and no one confuses the products. As long as Apple makes a clear visual distinction in its products, no one will confuse them. ARM, as I wrote before, will be (much) better than the current MacBook Air performance-wise because of the blazing GPU that iOS offloads most of its tasks to. (Read my previous post.)

J.
post #63 of 89
Quote:
Originally Posted by Smalltalk-80 View Post

I think it is pretty obvious to most people that the GPU as a separate unit will disappear. The advantages, such as bandwidth, latency, cost, coherency in architecture, etc., are numerous. ...

GPUs differ quite a bit from CPUs due to the specific use of the cores.
Merging the different design goals into one unified core is very difficult to do and involves all kinds of trade-offs.
I think the current (ARM) setup of both types of cores on one chip is probably the best way to go. This already almost completely addresses the problems you mention. Maybe a different layout of the cores could make a difference, but it seems to me that grouping similar cores together makes a lot of sense.
The 'future' of processors will be multiple specialized and non-specialized cores on one chip.

J.
post #64 of 89
Quote:
Originally Posted by Smalltalk-80 View Post

This is quite analogous to the discrepancy between Intel and ARM today.
Intel has a huge, kludgy, bolted-together legacy tech that is only held up by Moore's law and the cache set-up allowed by the former.
ARM, on the other hand, has tech with quite different roots, which has been forced to stay lean and clean.
I'm far from saying that ARM's approach is the be-all end-all; it's not. But it IS better than Intel's in the most important respects.

This is an ignorant but often-repeated opinion, dating from the 1980s RISC/CISC wars, held by people who understood little.

If you actually research a bit of what has been said by actual CPU architects like Mitch Alsup and Andy Glew, the x86 overhead vs. ARM is about 10-15% for simple in-order ARM chips like the A-8, but once you get to more complex chips like the A-15 that drops to only about a 5% penalty for x86 overhead.

There are three factors at work here.

1) Overhead penalty from x86 complexity: only about 5% overhead going forward (as per above).

2) Scaling. Even Intel's own low-power/low-performance CPUs have several times the performance/watt of their more powerful CPUs. It is simply much easier to get good performance/watt on low-performing CPUs. An ARM CPU scaled up would also lose performance/watt many times over.

3) Execution. ARM has been concentrating on handhelds for decades (since the Newton) while Intel concentrated on the desktop performance crown. ARM's current designs are refined SoCs aimed perfectly at handhelds; Intel is only just getting to one-chip SoCs this year.

But those are actually in reverse order of current importance. The current ARM advantage comes primarily from 3) Intel's weak execution from ignoring this segment, secondarily from 2) the scaling mismatch, and only a minor, and shrinking, benefit comes from 1) the so-called x86 overhead penalty.

Where things are now in mobile is kind of like where Intel was when they realized that NetBurst was a mistake and they needed to commit to a new direction.

In a couple more cycles, scaling will be matched and execution from Intel will be much better, such that Intel's mobile CPUs will be extremely competitive on perf/watt.

The x86 overhead penalty is really a red herring compared to the real factors involved.
post #65 of 89
Quote:
Originally Posted by Snowdog65 View Post

This is an ignorant but often-repeated opinion, dating from the 1980s RISC/CISC wars, held by people who understood little.

If you actually research a bit of what has been said by actual CPU architects like Mitch Alsup and Andy Glew, the x86 overhead vs. ARM is about 10-15% for simple in-order ARM chips like the A-8, but once you get to more complex chips like the A-15 that drops to only about a 5% penalty for x86 overhead.

There are three factors at work here.

1) Overhead penalty from x86 complexity: only about 5% overhead going forward (as per above).

2) Scaling. Even Intel's own low-power/low-performance CPUs have several times the performance/watt of their more powerful CPUs. It is simply much easier to get good performance/watt on low-performing CPUs. An ARM CPU scaled up would also lose performance/watt many times over.

3) Execution. ARM has been concentrating on handhelds for decades (since the Newton) while Intel concentrated on the desktop performance crown. ARM's current designs are refined SoCs aimed perfectly at handhelds; Intel is only just getting to one-chip SoCs this year.

But those are actually in reverse order of current importance. The current ARM advantage comes primarily from 3) Intel's weak execution from ignoring this segment, secondarily from 2) the scaling mismatch, and only a minor, and shrinking, benefit comes from 1) the so-called x86 overhead penalty.

Where things are now in mobile is kind of like where Intel was when they realized that NetBurst was a mistake and they needed to commit to a new direction.

In a couple more cycles, scaling will be matched and execution from Intel will be much better, such that Intel's mobile CPUs will be extremely competitive on perf/watt.

The x86 overhead penalty is really a red herring compared to the real factors involved.

I would like to see the source for your first claim. I suspect we are talking about specific cases here, as it is quite pointless to do an average, since there will be so much spread in the data.
And what is "overhead" referring to precisely? Is it power consumption and thermal profile, or power per square inch of die space, or everything summed?

2. Not by as much as Intel's equivalent size/price.
It's hard to scale stuff that works well on one level, correct. But that is exactly the trouble Intel is in, ARM to a much lesser extent.
Much of the power consumption overhead in Intel processors stems from their reliance on huge caches. They take up a lot of die space that could be used for scratchpads, SIMD units, etc. And they burn a lot of power. ARM CPUs are by heritage not reliant on large L2 and L3 caches for speed.

3. I don't see your point there. Of course they are better at SoCs, and so what? That's a merit. We will just have to see if Intel can do anything that comes close. If it was straightforward and easy they would have done it years ago.
post #66 of 89
Quote:
Originally Posted by Smalltalk-80 View Post

I think it is pretty obvious to most people that the GPU as a separate unit will disappear. The advantages, such as bandwidth, latency, cost, coherency in architecture, etc., are numerous. I.e., Sutherland's rule of the hardware wheel of reincarnation proves true once again.

Intel has a huge, kludgy, bolted-together legacy tech that is only held up by Moore's law and the cache set-up allowed by the former.
ARM, on the other hand, has tech with quite different roots, which has been forced to stay lean and clean.

AMD today has powerful integrated graphics. This is theoretical, but if AMD focuses on lowering power rather than improving performance on their APU chips, could this not also become a much larger threat to Intel?

As most people have pointed out, Core 2 Duos and i3/5/7 cores are fast enough for most work. Meanwhile, using the APU would allow for more OpenCL (my understanding) and lower power consumption than today's MBPs have.

Imagine keeping the 13" MBP the same, but with a "real" (still integrated) graphics card.

PC means personal computer.  

i have processing issues, mostly trying to get my ideas into speech and text.

if i say something confusing please tell me!

post #67 of 89
Quote:
Originally Posted by Smalltalk-80 View Post

I would like to see the source for your first claim. I suspect we are talking about specific cases here, as it is quite pointless to do an average, since there will be so much spread in the data.
And what is "overhead" referring to precisely? Is it power consumption and thermal profile, or power per square inch of die space, or everything summed?

That is a good question! Even so, I think his numbers are either bogus or misleading. It is well known that Intel hardware can execute more instructions per cycle. One has to remember ARM is fairly simple in-order hardware.
Quote:
2. Not by as much as Intel's equivalent size/price.
It's hard to scale stuff that works well on one level, correct. But that is exactly the trouble Intel is in, ARM to a much lesser extent.
Much of the power consumption overhead in Intel processors stems from their reliance on huge caches. They take up a lot of die space that could be used for scratchpads, SIMD units, etc. And they burn a lot of power. ARM CPUs are by heritage not reliant on large L2 and L3 caches for speed.

I don't think you understand computer architecture. Caches exist to buffer the flow of data to much slower off-chip devices (in most cases memory). The cost of going off-chip is pretty significant for any processor. That includes ARM hardware.
Quote:
3. I don't see your point there. Of course they are better at SoCs, and so what? That's a merit. We will just have to see if Intel can do anything that comes close. If it was straightforward and easy they would have done it years ago.
post #68 of 89
Quote:
Originally Posted by wizard69 View Post

That is a good question! Even so, I think his numbers are either bogus or misleading. It is well known that Intel hardware can execute more instructions per cycle. One has to remember ARM is fairly simple in-order hardware.

I don't think you understand computer architecture. Caches exist to buffer the flow of data to much slower off-chip devices (in most cases memory). The cost of going off-chip is pretty significant for any processor. That includes ARM hardware.

Yes, more instructions per cycle, but also a much larger processor. With ARM you have the choice of a cheap power-saving processor, a larger ASIC, many more of the same cores on the same die or a huge chunk of eDRAM, or a little of all.

Caches are a wasteful, brute-force, general way to accelerate old/legacy or multi-platform software.
They use a lot of power and take up a lot of space.
There are many other, better ways to alleviate off-chip latency, one of the simplest and most immediately applicable to RISC architectures being eDRAM.
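
(A rough way to put numbers on the cache vs. off-chip argument in these last two posts: average memory access time is roughly hit time + miss rate x miss penalty, so the value of a big cache or a fast eDRAM depends entirely on how often you miss and how far away memory is. The latencies below are made-up round numbers, purely illustrative.)

def avg_access_time(hit_time_ns, miss_rate, miss_penalty_ns):
    # Classic AMAT approximation: cost per access given a cache hit time,
    # a miss rate, and the penalty of going off-chip to DRAM.
    return hit_time_ns + miss_rate * miss_penalty_ns

# Illustrative numbers only, not measured figures for any real chip:
dram_penalty = 60.0  # ns to go off-chip to main memory
small_cache = avg_access_time(hit_time_ns=1.0, miss_rate=0.10, miss_penalty_ns=dram_penalty)
large_cache = avg_access_time(hit_time_ns=2.0, miss_rate=0.02, miss_penalty_ns=dram_penalty)

print(round(small_cache, 2))  # 7.0 ns per access
print(round(large_cache, 2))  # 3.2 ns per access
# A fast, wide core issues far more accesses per second, so misses hurt it more -
# which is why high-performance parts carry big caches (or, as suggested above,
# fast eDRAM) despite the area and power they burn.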
post #69 of 89
Quote:
Originally Posted by Snowdog65 View Post

This is an ignorant but often-repeated opinion, dating from the 1980s RISC/CISC wars, held by people who understood little.

If you actually research a bit of what has been said by actual CPU architects like Mitch Alsup and Andy Glew, the x86 overhead vs. ARM is about 10-15% for simple in-order ARM chips like the A-8, but once you get to more complex chips like the A-15 that drops to only about a 5% penalty for x86 overhead. ...

You seem to forget that ARM chips are built with a 65nm feature size, versus Intel's 32nm feature size. This means that Intel's problems scale up by a factor of four in energy efficiency and performance. If you have read my previous posts you would also understand that ARM SoCs are actually on par - performance-wise - with the current Intel processors used, for example, in the MacBook Air. Next year ARM SoCs will blow the Intel processors away.

J.
post #70 of 89
Quote:
Originally Posted by jragosta View Post

Why couldn't iOS work in a laptop setting?

Picture an iPad Pro which has iOS and a touch screen and a keyboard. Just why wouldn't that work?

It might be very convenient for email and some other applications.

It seems pretty clear that touch screens are not a necessary feature for iOS, rather an ARM processor is -- e.g., AppleTV is not touch screen, but it is iOS running on ARM.

There is absolutely no reason that iOS couldn't work in a laptop setting, but there is absolutely no reason it has to be a touch screen laptop. On the other hand, it might be interesting to have an iOS device in an iMac-like form factor with keyboard and track pad, a couple of USB ports (for connecting cameras, etc)... and where the screen undocks and becomes an iPad.
post #71 of 89
Quote:
Originally Posted by jnjnjn View Post

You seem to forget that ARM chips are built with a 65nm feature size, versus Intel's 32nm feature size. This means that Intel's problems scale up by a factor of four in energy efficiency and performance. If you have read my previous posts you would also understand that ARM SoCs are actually on par - performance-wise - with the current Intel processors used, for example, in the MacBook Air. Next year ARM SoCs will blow the Intel processors away.

Don't bring a feather to a gun fight.

Intel's actual "SoC" product on the market is actually a two-chip 45nm/65nm design.

Current ARM SoCs for over a year now have been one-chip 40nm designs.

i5 processors in the MBA are beyond an order of magnitude more powerful than any available ARM SoC and not directly comparable.
post #72 of 89
Quote:
Originally Posted by Smalltalk-80 View Post

I would like to see the source for your first claim. I suspect we are talking about specific cases here, as it is quite pointless to do an average, since there will be so much spread in the data.
And what is "overhead" referring to precisely? Is it power consumption and thermal profile, or power per square inch of die space, or everything summed?

You can start with this thread containing some info from Alsup/Glew. After that, try a search engine.
http://www.groupsrv.com/computers/ab...5-0-asc-0.html

Thinking the ISA (Instruction Set Architecture) is responsible for big swings in perf/watt is just naive.

Quote:
Much of the power consumption overhead in Intel processors stems from their reliance on huge caches. They take up a lot of die space that could be used for scratchpads, SIMD units, etc. And they burn a lot of power. ARM CPUs are by heritage not reliant on large L2 and L3 caches for speed.

You are confused about why this is the case, and it is part of scaling, not ISA. If you run a slow CPU, it can easily be kept fed with instruction fetches from memory.

But when you run a very fast CPU, memory isn't fast enough to keep up, and you use those big caches.

If you built a very fast ARM CPU to challenge x86 on the desktop, it would likewise have to use big caches to keep it running at full clip.

This is why you can't compare perf/watt across any performance gulf. It is meaningless.

If you built a high-performance ARM part it would end up looking much like AMD/Intel designs, running on higher-power silicon (lower perf/watt), using huge caches (lower perf/watt), higher voltage (lower perf/watt), higher GHz (lower perf/watt), all driving up power use at a faster rate than performance.

Everything you do to increase toward desktop performance, costs you perf/watt.
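
(A toy model of that scaling point, for anyone who wants numbers: dynamic power goes roughly as C x V^2 x f, and chasing higher clocks usually means higher voltage, so performance per watt falls as you push toward desktop-class performance. The voltage/frequency pairs below are invented, purely to show the shape of the curve, and clock speed is used as a crude stand-in for performance.)

def dynamic_power(cap, volts, freq_ghz):
    # Rough dynamic-power model: P is proportional to C * V^2 * f.
    return cap * volts ** 2 * freq_ghz

# Hypothetical operating points for the same core design (made-up values):
points = [
    ("low-power SoC", 0.9, 1.0),   # volts, GHz
    ("laptop-class",  1.1, 2.5),
    ("desktop-class", 1.3, 3.5),
]

for name, volts, ghz in points:
    power = dynamic_power(cap=1.0, volts=volts, freq_ghz=ghz)
    print(name, "perf:", ghz, "power:", round(power, 2), "perf/watt:", round(ghz / power, 2))

# perf/watt drops at every step up - and that's before adding the bigger caches,
# wider issue and leakier process a real high-performance part also needs.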


Quote:
3. I don't see your point there. Of course they are better at SoCs, and so what? That's a merit. We will just have to see if Intel can do anything that comes close. If it was straightforward and easy they would have done it years ago.

Yes, this is a merit, but it isn't related to ISA. ARM has no magic pixie dust; it is just that Intel has never really given this a serious effort. Its Atom SoCs are like the neglected stepchild of the lineup. Intel is widely considered the process king, yet its Atom "SoC" in 2011 has been on a worse process than any ARM competitor. While Intel's desktop parts have been at 32nm for over a year, the Atom has been an archaic multi-chip 45nm/65nm design.

Do you really think that is something Intel can't remedy quickly? The real question is whether Intel has woken up and is ready to get serious about making a really competitive SoC.

I think they are getting serious now. This is why I think this is much like the NetBurst (a.k.a. Pentium 4) wake-up call.

At the same time that Intel starts building true one-chip SoCs and puts them on its leading silicon process, ARM designs will be getting more complex to try to take on laptops. They will likely end up very close in perf/watt in a similar performance envelope, and in that case both will essentially keep their legacy OSes with them.
post #73 of 89
Quote:
Originally Posted by Snowdog65 View Post

Don't bring a feather to a gun fight.

Intel's actual "SoC" product on the market is actually a two-chip 45nm/65nm design.

Current ARM SoCs for over a year now have been one-chip 40nm designs.

i5 processors in the MBA are beyond an order of magnitude more powerful than any available ARM SoC and not directly comparable.

You're not well informed. According to Intel's own specs, the current (2011) i5 CPU and GPU are on 32nm.
Look it up. Wikipedia confirms this but misses the latest info regarding the GPU.
It could be, of course, that Intel is lying ...
You're right about the ARM SoCs, I missed that. There's still a big efficiency and speed difference between 45nm and 32nm though.
But you're wrong again about 'an order of magnitude more powerful'; it's a factor of 5 (20 GFLOPS versus 100 GFLOPS), and that isn't an order of magnitude.
The current PowerVR Series 6 GPU (possibly used in the A6 next year) is a factor of 10 faster and therefore twice as fast as the current i5 GPU core.
So I would love to bring the ARM SoC to the gun fight, and I will win.

J.
post #74 of 89
Quote:
Originally Posted by jnjnjn View Post

You're not well informed. According to Intel's own specs, the current (2011) i5 CPU and GPU are on 32nm.

The i5 is a mainstream desktop/laptop part. It isn't an SoC.

Intel's "SoC" is Atom, and it is a dual-chip 65nm/45nm design, just as I said.

Also, I don't know where you are getting GFLOPS numbers.

The best ARM numbers I have seen are about 50 MFLOPS; the Intel i5 is about 50 GFLOPS.

That is MegaFlops vs GigaFlops. 1000 times better on i5. So three orders of magnitude. You simply can't compare.
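
(Since "order of magnitude" keeps getting thrown around in this thread: it is just the base-10 log of the ratio. A quick check of both sets of numbers, taken at face value and without endorsing either:)

from math import log10

# the earlier GPU comparison: 20 GFLOPS vs 100 GFLOPS
print(100 / 20, log10(100 / 20))        # 5.0x, about 0.7 orders of magnitude

# the CPU/FPU comparison in this post: ~50 MFLOPS vs ~50 GFLOPS
print(50e9 / 50e6, log10(50e9 / 50e6))  # 1000.0x, exactly 3 orders of magnitude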
post #75 of 89
Quote:
Originally Posted by majjo View Post

we're talking about Intel's SoC, which is Atom, not Core i5.
Atom is currently on a 5-year cadence on a 45nm process. It's not so much that Intel is unable to compete as it is Intel waking the heck up.
And Intel IS waking up. They are transitioning Atom to the tick-tock cadence with Silvermont and fast-tracking it to current process tech. Unfortunately the transition won't be complete until 2013, and who knows what the landscape will look like then.

Are they transitioning it to the same release cycle as their other CPUs, as in alternating a new architecture with a die shrink year after year? I'm asking because Haswell = 2013 at 22nm, with a die shrink to 14nm the following year. I do hope they examine the issue of component reliability as they continue to shrink things. Expensive, unreliable components suck if everything is completely integrated.
post #76 of 89
Quote:
Originally Posted by Snowdog65 View Post

The i5 is a mainstream desktop/laptop part. It isn't an SoC.

Intel's "SoC" is Atom, and it is a dual-chip 65nm/45nm design, just as I said.

Also, I don't know where you are getting GFLOPS numbers.

The best ARM numbers I have seen are about 50 MFLOPS; the Intel i5 is about 50 GFLOPS.

That is MegaFlops vs GigaFlops. 1000 times better on i5. So three orders of magnitude. You simply can't compare.

The i5 used in the MacBook Air has an integrated GPU, so that's an SoC.
And we are comparing the integrated GPU from ARM with the integrated GPU of Intel (i5) CPUs.

If you look at the GPU performance of the PowerVR GPU on the A5 chip, it's 5 times slower than the current integrated GPU of Intel's i5.
But the latest PowerVR GPU is 10 times faster, and thereby faster than the current integrated GPU of the i5.

The point is that CPU performance isn't the most important factor, especially for iOS and Mac OS X, because they offload most of the work to the GPU and can use OpenCL to do all kinds of ultra-fast calculations.
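
(For what "offload to the GPU with OpenCL" looks like in practice, here is a minimal vector-add sketch using the third-party pyopencl package; it assumes an OpenCL runtime is installed and isn't tied to any particular CPU or GPU vendor.)

import numpy as np
import pyopencl as cl  # third-party package; assumes an OpenCL driver is available

kernel_src = """
__kernel void vadd(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

a = np.random.rand(1000000).astype(np.float32)
b = np.random.rand(1000000).astype(np.float32)
out = np.empty_like(a)

ctx = cl.create_some_context()   # picks an available OpenCL device (GPU or CPU)
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prog = cl.Program(ctx, kernel_src).build()
prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)  # the add runs on the device
cl.enqueue_copy(queue, out, out_buf)

assert np.allclose(out, a + b)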

So, readjust your focus. Look at the GPU.
post #77 of 89
Quote:
Originally Posted by majjo View Post

we're talking about Intel's SoC, which is Atom, not Core i5.
Atom is currently on a 5-year cadence on a 45nm process. It's not so much that Intel is unable to compete as it is Intel waking the heck up.
And Intel IS waking up. They are transitioning Atom to the tick-tock cadence with Silvermont and fast-tracking it to current process tech. Unfortunately the transition won't be complete until 2013, and who knows what the landscape will look like then.

As for performance, I don't know where you're getting 20 GFLOPS from, but the last benchmarks I've seen pegged ARM at MFLOPS. Granted, that was a while ago on the A8 architecture:
http://www.brightsideofnews.com/news...ersus-x86.aspx

I am looking at the integrated GPU, not the CPU.
See my previous post.

J.
post #78 of 89
Quote:
Originally Posted by jnjnjn View Post

The i5 used in the MacBook Air has an integrated GPU, so that's an SoC.
And we are comparing the integrated GPU from ARM with the integrated GPU of Intel (i5) CPUs.

You really don't have the faintest notion what you are talking about. The i5 isn't an SoC; integrated graphics don't make it an SoC.

The GPU isn't a complete FPU solution. BTW the latest Atom SoCs will be using PowerVR GPUs, so they should be equal on GPU.
http://pcper.com/news/Processors/Int...R-GPUs-Planned

As far as actual CPUs go, the FPU inside ARM chips is laughable, several orders of magnitude behind Intel's.

Really, you are just spouting off a bunch of incorrect, uninformed opinions, and you have been proven wrong at every turn.

You don't actually know what an SoC is.
You made big claims about Intel SoCs being ahead on process, yet you were wrong about the process in use for both Intel and ARM and actually had it backwards.
You try to pretend GPUs compensate for weak ARM CPUs, even though they are incomplete for FPU duty.

At every turn you are high on opinions, while being completely wrong on facts.

If this is the case for a MacBook Air with ARM, it has ZERO probability.
post #79 of 89
Quote:
Originally Posted by Snowdog65 View Post

You really don't have the faintest notion what you are talking about. The i5 isn't an SoC; integrated graphics don't make it an SoC.

The GPU isn't a complete FPU solution. BTW the latest Atom SoCs will be using PowerVR GPUs, so they should be equal on GPU.
http://pcper.com/news/Processors/Int...R-GPUs-Planned

As far as actual CPUs go, the FPU inside ARM chips is laughable, several orders of magnitude behind Intel's.

Really, you are just spouting off a bunch of incorrect, uninformed opinions, and you have been proven wrong at every turn.

You don't actually know what an SoC is.
You made big claims about Intel SoCs being ahead on process, yet you were wrong about the process in use for both Intel and ARM and actually had it backwards.
You try to pretend GPUs compensate for weak ARM CPUs, even though they are incomplete for FPU duty.

At every turn you are high on opinions, while being completely wrong on facts.

If this is the case for a MacBook Air with ARM, it has ZERO probability.

A system on a chip (SoC) isn't an absolute definition; it could have more or fewer building blocks and still be called a system on a chip (as you should know).
So it isn't a problem to call a GPU/CPU combination an SoC.
But I'm not talking about definitions; if you don't call it an SoC, that's fine by me.

I'm interested in performance, and as I noted before it's interesting to see that an ARM CPU/GPU combination will be as good as an Intel i5 CPU/GPU combination in the near future.
And that's what counts from a consumer point of view: real-world performance.

I was right about the feature size: the Intel i5 is on 32nm (both GPU and CPU) and ARM is on 45nm (I already said I made a mistake on this one), and that's a big difference. (I assumed the difference was 32nm versus 65nm and it turned out to be 32nm versus 45nm, and the point was that there was a feature-size difference.)

But I see you don't dispute my claims about GPU performance, and you even claim that PowerVR GPUs will be included in future Atom SoCs (was that an SoC?), and that will make them comparable to ARM SoCs - so why use an Atom with an order of magnitude worse battery drain?

You have to learn to discuss a topic without losing your temper. Your social skills are lacking at least as much as my "uninformed opinions".

J.
post #80 of 89
Quote:
Originally Posted by jnjnjn View Post

A system on a chip (SoC) isn't an absolute definition; it could have more or fewer building blocks and still be called a system on a chip (as you should know).
So it isn't a problem to call a GPU/CPU combination an SoC.
But I'm not talking about definitions; if you don't call it an SoC, that's fine by me.


No one considers the i5 an SoC. Either you are trying to lie about it to make your case, or you are a fool. I haven't decided which.

But I don't suffer liars or fools gladly.

The i5 isn't an SoC, and since you stack your claims on that, everything from that point forward is wrong.

Intel has barely entered the SoC game, and all their SoCs are Atom-based. Every Atom SoC in products this year was a combo 45nm/65nm part. Far from Intel's state of the art.

But they will be moving to putting their SoCs on a leading process going forward: 32nm in 2012 and 22nm in 2013.

Intel is behind in SoCs, but it has had its P4 moment and is waking up.

By 2013 Intel's 22nm SoCs should be very competitive - the same timeframe in which ARM is claimed to be challenging in notebooks.

ARM CPUs still lack the general-purpose computing power to be taken as a competitor to Intel laptop chips, which have an order of magnitude more processing power.

A GPU does not make up for the weaknesses in the ARM CPU. GPUs can only help in specific tasks, and they need to be custom-coded. They are not a general-purpose computing solution for a weak CPU.