
Apple's A5 processor could pave the way for larger chip trend - Page 3

post #81 of 119
One of the big issues for Intel is that the company is set up to sell $150 CPUs. The ARM cores we are talking about cost more like $15-20. Intel needs to sell expensive chips because it carries the huge overhead of very state-of-the-art fabs. But the fact of the matter is that mainframes were more or less replaced by minicomputers, which were replaced by PCs, which were replaced by laptops and now tablets. By not playing in this market, Intel is making itself irrelevant. I think that in the next 2-4 years we will not have Intel-based machines at home, though Intel will hang on to the server market for much longer.
post #82 of 119
Quote:
Originally Posted by ash471 View Post

What you say is certainly true for the very near term (e.g., 2 years or so). However, what if we look further out? Will ARM be 64-bit? 6 cores? A 2 GHz clock? I think the point that is being made is that ARM is accelerating fast and the improvements in Intel chips are becoming less important. The issue isn't that Intel can't keep up (they can). Intel's problem is that their increases in horsepower are becoming increasingly irrelevant to people's computing needs. At some point we may cross a line where ARM provides sufficient horsepower and Intel's advantage is irrelevant. Keep in mind that despite Intel's prowess, it has struggled to reduce power consumption. It also doesn't have a terribly good track record with SoCs. Thus, there could very well be a time (maybe in the next 5 years) when ARM starts to replace chips in the mobile PC realm and potentially some day in consumer desktops.

I think the failure of ARM in netbooks had a lot to do with (1) premature use of ARM (it still isn't there), (2) the lack of an OS optimized to take advantage of ARM, and (3) Microsoft using its marketing power to push Windows XP. With MS out of the picture, an OS optimized for ARM, and more advanced ARM chips, we may see a different result.

Also, I think the major difference between iOS and OS X is touch vs. mouse optimization. You will never see OS X on a touch device and you will never see iOS on a Mac. However, there is no inherent reason why Apple will prefer one CPU architecture over another. Of course, switching isn't trivial, but Apple has shown that it can carry out a switch without a hitch. I think the real issue will always be a balance between computing power, power consumption, and cost. Right now Intel has such an advantage in (needed) computing power that it outweighs power consumption and cost in Macs. But things are changing fast.

Well, there are three big differences between switching from PowerPC to Intel and switching from Intel to ARM.
1 - The switch from PowerPC to Intel moved to a next-generation processor that was faster, so Rosetta could do a reasonable job emulating PowerPC apps. In this case the move wouldn't be to a more powerful processor, so emulation wouldn't work as well. This is a minus.
2 - Modern OS X apps are Cocoa, which is a lot easier to move to new architectures with universal binaries than Carbon apps were. This is a plus.
3 - Unlike prior architecture changes, this isn't being suggested as a complete migration, but rather as a move to supporting dual architectures, something Apple has just gotten away from by killing Rosetta in Lion. Again, this isn't as bad now that Carbon apps are pretty much history.

I'm not sure how much sense it makes for MacBooks as I see iPads filling that need. But perhaps there's a case for a less expensive iMac going this route as some people still need desktop machines with large screens without really needing faster CPUs than they have today.
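To make point 2 above concrete: a universal binary is just the same C/Objective-C source compiled once per architecture, with the slices glued into one fat Mach-O file. A minimal sketch follows; the ARM slice is hypothetical for a Mac and shown only for illustration.

/* arch_demo.c - reports which slice of a universal binary is running.
 * Build a fat binary with current tools, e.g.:
 *   clang -arch i386 -arch x86_64 -o arch_demo arch_demo.c
 * A hypothetical ARM slice would just be one more -arch flag (say,
 * -arch armv7 against a suitable SDK); the source itself doesn't change. */
#include <stdio.h>

int main(void) {
#if defined(__x86_64__)
    puts("Running the 64-bit Intel slice");
#elif defined(__i386__)
    puts("Running the 32-bit Intel slice");
#elif defined(__arm__)
    puts("Running an ARM slice");
#else
    puts("Running some other architecture");
#endif
    return 0;
}

You can check which slices ended up in the binary with lipo -info arch_demo; the loader picks the right one at launch.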
post #83 of 119
Quote:
Originally Posted by solipsism View Post

LOL, I'd love to see a water-cooled smartphone.

I do believe that all current cell phone models can be water cooled --- once.
post #84 of 119
I don't see Apple moving Macs to ARM in the near future. Several things will be needed for this to occur:

1. Single-threaded performance increases. ARM Cortex A9 is great for what it is, and outperforms an Atom at the same clock speed. But Atom isn't anything like a 3GHz+ Sandy Bridge. ARM Cortex A15 will be better of course, but not good enough - except maybe in a MacBook Air (or an iPad netbook type configuration - an iPad in MBA form factor with the ability to run Mac apps).

2. 64-bit. ARM Cortex A9 is strictly 32-bit all over. ARM Cortex A15 will extend the physical addressing range, but apps will still be 32-bit: systems will be able to have more than 4GB of memory, and each app will be able to have a full 4GB memory range, but full 64-bit capability isn't there, and ARM hasn't released any plans for it yet. MIPS, an ARM competitor, does have a 64-bit version, however, so I expect ARM is working hard on it internally, but it isn't a trivial thing extending an architecture to 64 bits.

3. Non-embedded style of CPUs. 128-bit memory buses and SO-DIMM support. PCIe integration. SATA 3. USB 3. And so on. At least Apple is in control of this - they have a 64-bit memory bus on the A5 (albeit internal, with layered DRAM). These designs will be a different branch completely from the AX 'embedded' chips, they will use more power and not be suitable for the iPad or below. I assume these will start appearing in around 2014/15.

Let's call them the E series, and assume the first generation will be for a cost-reduced MacBook Air type product (that might have a different branding too than Mac). The transistor budget will be around 1 billion transistors, and the die size could be up to 150mm^2. I could see that we could have a single, 20nm-ish chip. Four or Eight ARM Cortex A15+ cores, each with 1MB L2 cache. PowerVR 6 series graphics (let's assume SGX543MP16 performance at least, running at 4x the clock speed too - i.e., 32x faster than A5). It will have an external memory bus - 128-bit, driving DDR3+ memory. It may be that I/O will be delegated to an external chip, with PCIe3 integrated on board the CPU connecting to discrete graphics and the I/O chip.

It would be quite exciting to see the internal product plans at Apple. In the meantime we will just have to live with anticipation, releases and teardowns!

Please note that I believe the A5 was designed to drive a Retina Display iPad, hence the excess graphics power. The bonus is that the extra GPU capacity gives anti-aliasing for free in games, producing a more Retina-like effect anyway. I don't expect 2012's A6 to actually offer more graphics capability; it will be more about the die shrink to 32nm or 28nm. It could have four ARM cores, however.
post #85 of 119
Quote:
Originally Posted by wizard69 View Post

And running pathetically slow with little capability to tackle the bigger jobs. I just don't think you have an understanding of just how far behind ARM is performance wise, such machines would simply not meet the performance needs of many Mac users.

Performance of current ARM designs isn't actually that bad; the latest chips (including the A5) are faster than, for example, current Intel Atoms for many tasks, and nettops and netbooks with Atoms have been sold by the millions. The gap between a high-end x86 chip from Intel or AMD and these ARM designs is huge, I'm not denying that, but for the typical garden-variety computing that many people do, a competent ARM SoC would probably suffice and have a few advantages over faster x86 chips.

So while I don't see Apple actually switching to ARM for Macs, I don't rule out the possibility that they could introduce a MacBook Air type of laptop that runs on an ARM SoC. Maybe even use a dual-CPU solution that lets you boot your computer either on the ARM chip or on the x86, depending on your computing needs. This could also be an interesting idea for iOS application compatibility, by the way.
post #86 of 119
Quote:
Originally Posted by extremeskater View Post

Apple didn't build the A5 chip, Samsung built the A5.

"Designed in California, built in Korea"!
post #87 of 119
Quote:
Originally Posted by Ssampath View Post

One of the big issues for Intel is that the company is set up to sell $150 CPUs. The ARM cores we are talking about cost more like $15-20. Intel needs to sell expensive chips because it carries the huge overhead of very state-of-the-art fabs. But the fact of the matter is that mainframes were more or less replaced by minicomputers, which were replaced by PCs, which were replaced by laptops and now tablets. By not playing in this market, Intel is making itself irrelevant. I think that in the next 2-4 years we will not have Intel-based machines at home, though Intel will hang on to the server market for much longer.

I wrote this the other day and I think it fits perfectly:

http://mattrichman.tumblr.com/post/4.../intel-is-dead
post #88 of 119
Quote:
Originally Posted by notme View Post

I wrote this the other day and I think it fits perfectly:

http://mattrichman.tumblr.com/post/4.../intel-is-dead

Intel is treating its one entry in this field, the Atom, as a stepchild. The graphics on the chip are crippled, Intel places major restrictions on how people can use them, and it is still made on a 45nm process when most of Intel's other chips are at 32nm heading to 22nm. What works in one era does not work in another. It looks very much like Intel is going to lose this.
post #89 of 119
Quote:
Originally Posted by Ssampath View Post

Intel is treating its one entry in this field, the Atom, as a stepchild. The graphics on the chip are crippled, Intel places major restrictions on how people can use them, and it is still made on a 45nm process when most of Intel's other chips are at 32nm heading to 22nm. What works in one era does not work in another. It looks very much like Intel is going to lose this.

They probably will. x86 is way too bloated for use in a tablet or a phone. If they don't completely reboot their operations they're dead.
post #90 of 119
Quote:
Originally Posted by abarry View Post

"Designed in California, built in Korea"!

This is still preferable to "designed in California, built in China." SK is probably 50 years ahead of China in regard to societal development. Most people don't know this, but in the '80s SK had something of a silent revolution which brought in democracy and free-market policies. Since then it has been up and up.


Quote:
Originally Posted by notme View Post

They probably will. x86 is way too bloated for use in a tablet or a phone. If they don't completely reboot their operations they're dead.

This is an interesting thing. The x86 is a bloated, ugly architecture. However, it can be streamlined nicely, which is the way it has been implemented in every desktop/laptop chip since the first Centrinos. The problem is that this streamlining requires a _fixed_ number of gates for translation logic, so it can scale up well, but not down. In other words, I seriously doubt ARM has a chance to contend for the desktop/laptop market -- Intel is already so entrenched, and every innovation they make is effectively standardized and rolled into gcc before you have a chance to blink. By the same token, I seriously doubt x86 has a chance in the mobile market -- ARM is so entrenched, yadda yadda.

The A15 is, in my opinion, maybe not the best way for ARM to go. I am still green here, and I need to do more homework, but IMHO ARM's best opportunity is to focus more energy on scaling down. We are moving more computing power to the edge, and the first example here was the smartphone/tablet/Kindle revolution. Now, the wave-riders are basically just scaling these up, feeding the marketing machine for bigger-faster in the same way that the PC market used to do 10 years ago. But the big winners will be the people who start the next wave, and that is sure to be something smaller and lighter than before. What's next: smartcards ... err, super-smartcards. I'm serious. The technologies required to produce a credit-card-sized computing platform, with wireless I/O and even display & buttons, are all coming together this year. The fun part about this technology is that it's another 1000x reduction in the way you think about power: PCs are concerned with watts, phones milliwatts, and cards microwatts. For microwatt tech, the sweet spot is somewhere between 90 and 130 nm, and there are loads of fabs now at this spec.
post #91 of 119
Quote:
Originally Posted by Splinemodel View Post

The A15 is, in my opinion, maybe not the best way for ARM to go. I am still green here, and I need to do more homework, but IMHO ARM's best opportunity is to focus more energy on scaling down. We are moving more computing power to the edge, and the first example here was the smartphone/tablet/Kindle revolution. Now, the wave-riders are basically just scaling these up, feeding the marketing machine for bigger-faster in the same way that the PC market used to do 10 years ago. But the big winners will be the people who start the next wave, and that is sure to be something smaller and lighter than before. What's next: smartcards ... err, super-smartcards. I'm serious. The technologies required to produce a credit-card-sized computing platform, with wireless I/O and even display & buttons, are all coming together this year. The fun part about this technology is that it's another 1000x reduction in the way you think about power: PCs are concerned with watts, phones milliwatts, and cards microwatts. For microwatt tech, the sweet spot is somewhere between 90 and 130 nm, and there are loads of fabs now at this spec.

Interesting thoughts, but the problem with scaling down is that we're human beings with human-being-sized parts, and there is a natural lower limit to the size of things we can manipulate.

I think the cell phone is already as small as anyone wants to go with an integrated device, so the future there will be in packing more power into the same power/size envelope.

Even smaller devices that use wireless I/O don't seem to solve any problem. If it's about having a lot of computing power in a negligible package, why not just put that power on the display device, which is going to be necessary regardless? If it's about having your computing environment on your person, isn't that more likely to become cloud-based? And I certainly don't want to be carrying around all my data on a tiny card, which is going to be replicated on remote servers or my home storage anyway.

In other words, in a world moving towards ever more mobile thin clients, the emphasis is going to be on the display and interconnectivity, not on tiny key fobs that can't do anything without hooking up to something else, and can't do anything that a connected device with a display can't do just as well. At least until the display device becomes a contact lens.
post #92 of 119
Quote:
Originally Posted by Splinemodel View Post


This is an interesting thing. The x86 is a bloated, ugly architecture. However, it can be streamlined nicely, which is the way it has been implemented in every desktop/laptop chip since the first Centrinos. The problem is that this streamlining requires a _fixed_ number of gates for translation logic, so it can scale up well, but not down. In other words, I seriously doubt ARM has a chance to contend for the desktop/laptop market -- Intel is already so entrenched, and every innovation they make is effectively standardized and rolled into gcc before you have a chance to blink. By the same token, I seriously doubt x86 has a chance in the mobile market -- ARM is so entrenched, yadda yadda.

The A15 is, in my opinion, maybe not the best way for ARM to go. I am still green here, and I need to do more homework, but IMHO ARM's best opportunity is to focus more energy on scaling down. We are moving more computing power to the edge, and the first example here was the smartphone/tablet/Kindle revolution. Now, the wave-riders are basically just scaling these up, feeding the marketing machine for bigger-faster in the same way that the PC market used to do 10 years ago. But the big winners will be the people who start the next wave, and that is sure to be something smaller and lighter than before. What's next: smartcards ... err, super-smartcards. I'm serious. The technologies required to produce a credit-card-sized computing platform, with wireless I/O and even display & buttons, are all coming together this year. The fun part about this technology is that it's another 1000x reduction in the way you think about power: PCs are concerned with watts, phones milliwatts, and cards microwatts. For microwatt tech, the sweet spot is somewhere between 90 and 130 nm, and there are loads of fabs now at this spec.

I don't think ARM needs to be in the desktop/notebook market. Intel has those, but they will become smaller markets as the phone/tablet markets get bigger. In fact, I think this quarter, for the first time, more smartphones shipped than PCs.

From desktops to laptops to tablets, it looks like there is a 5-10x reduction in wattage at each step: roughly 100W for desktops, 20W for laptops, and about 2W for tablets. Will the next jump be to something that uses less than 0.5W? Each of these transitions has taken 5-10 years, and I am sure new technologies will come along to make this happen.
post #93 of 119
Quote:
Originally Posted by addabox View Post

... If its about having your computing environment on your person, isn't that more likely to become cloud based? ...

And I certainly don't want to be carrying around all my data on a tiny card, which is going to be replicated on remote servers or my home storage anyway. ...

In other words, in a world moving towards ever more mobile thin clients ...

Whether you think you like it or not, it is already starting. The cloud computing vision is mostly a marketing ploy by the IBMs, Ciscos, and HPs of the world. They want to sell big, expensive servers and crap like that. The problem is that cloud computing just doesn't scale, and it is horribly energy-inefficient even when it does work. Pushing queries down-down-down ends up working better and using, well, no grid power at all, because at the small level it is easy to scavenge energy.
post #94 of 119
Quote:
Originally Posted by Splinemodel View Post

Whether you think you like it or not, it is already starting. The cloud computing vision is mostly a marketing ploy by the IBMs, Ciscos, and HPs of the world. They want to sell big, expensive servers and crap like that. The problem is that cloud computing just doesn't scale, and it is horribly energy-inefficient even when it does work. Pushing queries down-down-down ends up working better and using, well, no grid power at all, because at the small level it is easy to scavenge energy.

I disagree. Powerful mobile devices plus ubiquitous connectivity are moving us in this direction. The smartphone is going to be the most widely deployed computing platform in the world soon enough, and it simply makes sense to use devices like these in conjunction with network services and data. No one is going to carry all of their documents on a phone, much less on a credit card, but it does make sense to use a small device to access remote data-- even if that data is on a server in your home.

And that still doesn't obviate the need for a display, which can't get smaller than it is now and remain useful. I could see a device as thin as a credit card, with the entire face a touch screen, but that's just an evolution of the current smartphone design.
post #95 of 119
Quote:
Originally Posted by alandail View Post

Well, there are three big differences between switching from PowerPC to Intel and switching from Intel to ARM.
1 - The switch from PowerPC to Intel moved to a next-generation processor that was faster, so Rosetta could do a reasonable job emulating PowerPC apps. In this case the move wouldn't be to a more powerful processor, so emulation wouldn't work as well. This is a minus.
2 - Modern OS X apps are Cocoa, which is a lot easier to move to new architectures with universal binaries than Carbon apps were. This is a plus.
3 - Unlike prior architecture changes, this isn't being suggested as a complete migration, but rather as a move to supporting dual architectures, something Apple has just gotten away from by killing Rosetta in Lion. Again, this isn't as bad now that Carbon apps are pretty much history.

I'm not sure how much sense it makes for MacBooks as I see iPads filling that need. But perhaps there's a case for a less expensive iMac going this route as some people still need desktop machines with large screens without really needing faster CPUs than they have today.

I find number 3 to be quite persuasive. I agree that it wouldn't be good for Apple to support two architectures with the same OS for anything but a temporary transition period. Even though Apple is the expert in ARM with mobile, ARM in desktops and servers is a whole different ball game. Mobile users will readily accept functionality restrictions in exchange for ultra portability. There is absolutely no way a Mac Pro user is going to accept those compromises. (which is why I own an iPad and a Mac Pro). I doubt Apple would try to make the switch to ARM on their own. If Apple switches to ARM in desktops, it will probably be a transition that a fair number of PC manufacturers take....which means such a change is iffy and certainly not happening anytime soon. I know the ARM community is excited about competing with Intel in desktop and server. But until we see how Intel performs with GPU integration and increased power efficiency and what ARM can do to increase clock speeds and computing power, there is no way to predict what will happen. Should be interesting....
post #96 of 119
To know whether an ARM chip would be useful in a MacBook Air, we need to consider how much battery weight is added to run an Intel chip instead of an ARM chip. I suppose there is a certain screen size where the power requirements for the screen begin to dwarf the power requirements for the CPU. That cutoff may be in the range of 11-13 inches. If that is the case, switching from Intel to ARM makes much less sense in MacBooks and almost no sense in desktops. I'm increasingly convinced that the switch to ARM in MacBooks doesn't make much sense unless the entire PC industry moves in that direction.
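A back-of-the-envelope version of that trade-off, with every number below an assumption chosen purely for illustration (real panel draw, CPU package power, and battery energy density all vary widely):

/* Illustrative battery-weight arithmetic for the screen-vs-CPU point above.
 * ALL figures are assumed values for the sake of the example, not
 * measurements of any actual Apple product. */
#include <stdio.h>

int main(void) {
    double screen_watts    = 5.0;   /* assumed 11-13" panel draw */
    double arm_cpu_watts   = 1.0;   /* assumed ARM SoC average draw */
    double intel_cpu_watts = 4.0;   /* assumed low-voltage x86 average draw */
    double target_hours    = 10.0;  /* desired battery life */
    double wh_per_kg       = 200.0; /* rough Li-ion energy density */

    double extra_wh = (intel_cpu_watts - arm_cpu_watts) * target_hours;
    double extra_kg = extra_wh / wh_per_kg;

    printf("Extra energy needed for x86 over ARM: %.0f Wh\n", extra_wh);  /* 30 Wh */
    printf("Extra battery mass for that energy:   %.2f kg\n", extra_kg);  /* 0.15 kg */
    printf("Screen alone over %.0f hours:         %.0f Wh\n",
           target_hours, screen_watts * target_hours);                    /* 50 Wh */
    return 0;
}

With assumptions like these the display term already dominates, which is the point: above a certain screen size, the CPU's share of the power budget stops deciding the battery's weight.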
post #97 of 119
Quote:
Originally Posted by fisherman View Post

I do believe that all current cell phone models can be water cooled --- once.

Having lost my very first Mac, a 17" TiPb, by kicking a can of Coke onto it while asleep, this post makes me very sad.

J/K

Quote:
Originally Posted by DominoXML View Post

The big advantage of Apple is the double take on optimization. Starting with the A4 they not only optimized the software for the chip but also began to optimize the SoC for the needs of their software.

So, this is going to be an incredibly stupid, uninformed question, but I'll ask anyways:

Does this have to do with how the APIs are developed for developers, given that Apple knows all the ins and outs of the hardware? Or am I missing the boat entirely?

Thanks.
post #98 of 119
Quote:
Originally Posted by Splinemodel View Post

The A15 is, in my opinion, maybe not the best way for ARM to go. I am still green here, and I need to do more homework, but IMHO ARM's best opportunity is to focus more energy on scaling down. We are moving more computing power to the edge, and the first example here was the smartphone/tablet/kindle revolution. Now, the wave-riders are basically just scaling these up, feeding the marketing machine for bigger-faster in the same way that the PC market used to do 10 years ago. But the big winners will be the people who start the next wave, and that is sure to be something smaller and lighter than before. What's next: smartcards ... err, super-smartcards. I'm serious. The technologies required to produce a credit card sized computing platform, with wireless I/O and even display & buttons, are all coming together this year. The fun part about this technology is that it's another 1000x reduction of the way you think about power: PC's are concerned with watts, phones milliwatts, and cards microwatts. For microwatt tech, the sweetspot is somewhere between 90 - 130 nm, and there are loads of fabs now, at this spec.

Mark of the beast!

I'm not all that religious, but your post got me thinking of how the "mark of the beast" described in the Book of Revelation could theoretically become possible in modern times with microwatt ARM technology. I first thought of the "super-smartcard" application that you described in a non-MOTB way, but what if you lose the card? Well, you wouldn't easily lose your smartcard if it's embedded in your arm! Sure, there are chips the size of a grain of rice that can be implanted into your house pet. However, the device simply spits out a number when a handheld reader is put to the pet's neck. With ARM SoC's, things can get really advanced compared to today's primitive remote embedded chip devices. For demonstrative purposes, we'll call the device "BeastChip".

And it wouldn't be super-hard to sell the concept to anyone that isn't a privacy or religious advocate: A GPS receiver built into the BeastChip's SoC would help find you if you were kidnapped, or just got lost like this child. Fugitives would be easier to track down and be arrested. If you had a teenage daughter, you could electronically access her location at any time with a smartphone or PC (imagine the peace of mind that could bring). A WiFi chip within the SoC would be used to assist in GPS location when a person is indoors or otherwise out of the GPS satellites' LOS (because your teenage daughter might be in the basement of a friend's house, after her curfew).

The WiFi chip would also double as a way to connect to "BeastNet", the new all-electronic payment system that, as the system is matured and phased in, will become the required form of payment for anything. And by anything, I mean it'll replace the bulky EZ-Pass toll booth collection device currently placed on your car's windshield. And since the BeastNet system will become required payment for everything, it'll eliminate the need for toll booths on toll roads. Currently, you need to slow down and wait in line, possibly for a long time, to pay a toll, right? Well, now you'll pass under a human-free enclosure at full speed, and the proper toll will be deducted from your BeastNet account. Think this "e-tollbooth" is crazy? The state of Maryland recently introduced this exact concept on their newest toll road (picture of an "e-tollbooth").

Additionally, entering the subway wouldn't require the need to print a farecard or use an old-fashioned credit-card-sized smartcard. Just walk through the designated "BeastWay" open gate, just like the e-tollbooth on the roads, and the proper fare is deducted from your BeastNet account.

Now how to power the device? Some scientists recently came up with the idea of creating power by harnessing internal body movements. Now it begins to make sense why Revelation would talk about how the mark must be located on the forehead or forearm. Both locations likely have a high enough concentration of blood vessels, whose movement created by one's pulse would be enough to constantly power a BeastChip-like device, even during sleep, while the head or arm might not be moving much. You wouldn't need a battery, eliminating the worry of how an old battery would affect performance over a potentially 100-year lifecycle. The WiFi antenna would serve as the needed antenna for all communications between the SoC and the outside world. No RFID chip is needed.

(Placing the device on one's chest wouldn't make sense, because the electronics would likely interfere with pacemakers. Even those who don't have a pacemaker one day might have one the next.)

And because there are billions of people on Earth, any BeastChip would have to be extremely mass-production friendly. If 90-130nm fabs are so common, then that makes it easy to mass-manufacture the technology quickly enough to globally roll it out in five years or less.

(No, I wouldn't want one put in me.)
post #99 of 119
Not sure this is worth getting so far off topic on, but I'll bite.

Quote:
Originally Posted by addabox View Post

I disagree...

Obviously you are free to disagree, but this push to cards is already in motion. Smartphones are not all the same, and of the total phone market, smartphones are a small percentage still. It is easier to distribute card computers to users, which are guaranteed to work with the system they are configured for, than to depend on smartphones. Big card issuing companies are betting their futures on better cards, because NFC is severely disrupting the way these companies have made money in the past.

Quote:
Originally Posted by mikemikeb View Post

I'm not all that religious, but your post got me thinking of how the "mark of the beast" described in the Book of Revelation could theoretically become possible in modern times with microwatt ARM technology...

A card is far from an implanted device. People used to carry around calculators, now phones. The number of cards in the wallet will probably stay constant, but some will be card computers. I don't see how a user-activated card is an invasion of privacy. E911 on your phone could be considered an invasion of privacy. A card with a button on it, probably not.
post #100 of 119
In terms of CPU size, I believe that the OMAP4430 (12mm x 12mm) is even larger than the Apple A5 (12mm x 10mm).

http://www.windowsfordevices.com/c/a...unveils-OMAP4/
post #101 of 119
Quote:
Originally Posted by AaronJ View Post

So, this is going to be an incredibly stupid, uninformed question, but I'll ask anyways:

Does this have to do with how the APIs are developed for developers, given that Apple knows all the ins and outs of the hardware? Or am I missing the boat entirely?

Thanks.

There's no such thing as stupid questions, only stupid answers. I hope mine is satisfying.

Apple has a long history in optimizing their OS for different hardware (PPC, Intel, ARM) by keeping it flexible for implementation of new hardware features and specialized chip units.

They have used tools like Shark and Instruments for many years, which enable a transparent view of the performance and runtime characteristics of executed code.

They share the gained knowledge in the form of documentation, presentations (WWDC), APIs and frameworks with 3rd party developers.

So yes, I'm sure they know "all the ins and outs" especially since they design their own, customized silicon. Apple is IMHO one of the few companies with the "full understanding".
post #102 of 119
Quote:
Originally Posted by ash471 View Post

I doubt Apple would try to make the switch to ARM on their own. If Apple switches to ARM in desktops, it will probably be a transition that a fair number of PC manufacturers take.

You think Apple cares about what other PC makers do?
post #103 of 119
Quote:
Originally Posted by samab View Post

In terms of CPU size, I believe that the OMAP4430 (12mm x 12mm) is even larger than the Apple A5 (12mm x 10mm).

http://www.windowsfordevices.com/c/a...unveils-OMAP4/

The 12x12 mm quote is not the die size. It's the quoted package size. (see: Package on package). The die size of the OMAP4430 is likely on the order of 60 to 80 mm^2 based on the stated features of the device.

There's no doubt about it, the A5 SoC die is comparatively big. But then again, it's got two PowerVR SGX543 cores in it. If it implements the NEON SIMD unit and pipelined FPU, that will make the A9 cores 30% bigger (I hear). And who knows how and why Apple floor-planned the A5 to be big. Ultimately, since Apple needs on the order of 60 to 80 million of them, it probably all had to do with getting the best yield at the lowest cost.
post #104 of 119
Quote:
Originally Posted by Logisticaldron View Post

Forgive me if this has been addressed already, but is the same nanometer process required for the entire package, or could Apple be saving total package size in other areas that are simply not cost-effective for their competition due to economies of scale?

Yes. Kind of weird to answer as it should be obvious from the word "package" or "package-on-package".

The "A5" as a package likely consists of 2 to 3 chips: the CPU-GPU-I/O chip (the SoC proper), and one or two DRAM chips. They are stacked on top of each other in the package. And they already use different vendors for the DRAM. And people suspect that Apple will farm out the CPU-GPU-IO chip to TSMC.
post #105 of 119
Quote:
Originally Posted by Hattig View Post

A 40nm version of the A5 would be around 86mm^2 (122/sqrt(2)) - although different foundries will have different transistor densities anyway, so you can't compare directly. A 32nm chip would be 61mm^2, a 28nm chip (most likely for A6) 43mm^2 (although scaling isn't perfect either!).

Hopefully, they'll move the A5 to 40 nm this year for iPhone and iPod touch. Because next year, they better move to a dual Cortex-A15 CPU and a PowerVR 545MP2/6-series GPU, and a CPU with those features needs a 32 nm process to fit inside a smartphone.

122 mm^2 is big enough that they may not be able to physically fit it inside a smartphone platform. The package is some 20% bigger than the A4 package. Maybe they'll have an iPhone/iPod touch-specific package design, but a 240 mm^2 package is pretty big. They'll have to take the iPad 3G GSM and CDMA logic board and squeeze it down to iPhone size. That's got to be tough.
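For anyone who wants to redo the shrink estimates quoted above: the idealized rule is that linear dimensions scale with the ratio of the process nodes, so area scales with the square of that ratio. A rough sketch of that arithmetic (the quoted 86/61/43 figures use the cruder divide-by-sqrt(2)-per-half-node convention instead; both are estimates, and as noted, real foundry densities differ anyway):

/* Idealized die-area scaling for the 122 mm^2 die reported at 45nm. */
#include <stdio.h>

static double scaled_area(double area_mm2, double from_nm, double to_nm) {
    /* Linear dimensions shrink by to/from, so area shrinks by (to/from)^2. */
    double ratio = to_nm / from_nm;
    return area_mm2 * ratio * ratio;
}

int main(void) {
    double a5_45nm = 122.0;  /* A5 die size at 45nm, mm^2 */
    printf("40nm: ~%.0f mm^2\n", scaled_area(a5_45nm, 45.0, 40.0)); /* ~96 */
    printf("32nm: ~%.0f mm^2\n", scaled_area(a5_45nm, 45.0, 32.0)); /* ~62 */
    printf("28nm: ~%.0f mm^2\n", scaled_area(a5_45nm, 45.0, 28.0)); /* ~47 */
    return 0;
}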

Quote:
Tegra 2 doesn't include Neon vector floating point. This is quite large, apparently. Tegra 3 will include it.

AnandTech quoted Nvidia as saying that it's a 30% penalty (to include the NEON SIMD unit)!

Quote:
Tegra 2 also doesn't have on-package memory, unlike the A5.

Actually, on the Atrix, it does. Not sure about the Xoom or other devices.
post #106 of 119
Quote:
Originally Posted by DominoXML View Post

There's no such thing as stupid questions, only stupid answers. I hope mine is satisfying.

Apple has a long history in optimizing their OS for different hardware (PPC, Intel, ARM) by keeping it flexible for implementation of new hardware features and specialized chip units.

They have used tools like Shark and Instruments for many years, which enable a transparent view of the performance and runtime characteristics of executed code.

They share the gained knowledge in the form of documentation, presentations (WWDC), APIs and frameworks with 3rd party developers.

So yes, I'm sure they know "all the ins and outs" especially since they design their own, customized silicon. Apple is IMHO one of the few companies with the "full understanding".

Thank you very much.

One of the things I love about this site is that I learn something every day.
post #107 of 119
Quote:
Originally Posted by notme View Post

You think Apple cares about what other PC makers do?

Are you crazy? Of course Apple cares if the PC market changes CPU architecture. Apple has like 8% market share in desktop PCs. Unless Apple is willing to produce its own CPU, Apple better care what the other 92% of the market is doing. The point I was trying to make is that Apple isn't qualified to make its own CPUs for desktops and servers. And, it has very very little incentive to acquire that expertise.
post #108 of 119
Quote:
Originally Posted by ash471 View Post

Unless Apple is willing to produce its own CPU...

Apple is VERY willing to do that. Stop giving them more ideas about it.

Quote:
And, it has very very little incentive to acquire that expertise.

It has tons of incentive.

P.A. Semi making desktop chips? Means obscenely low power, an architecture built the way Apple wants it, new proprietary programming languages that take advantage of these chips, and less heat output for even thinner machines.

That's just the superficial incentive.

post #109 of 119
Quote:
Originally Posted by ash471 View Post

I find number 3 to be quite persuasive. I agree that it wouldn't be good for Apple to support two architectures with the same OS for anything but a temporary transition period. Even though Apple is the expert in ARM with mobile, ARM in desktops and servers is a whole different ball game. Mobile users will readily accept functionality restrictions in exchange for ultra portability. There is absolutely no way a Mac Pro user is going to accept those compromises. (which is why I own an iPad and a Mac Pro). I doubt Apple would try to make the switch to ARM on their own. If Apple switches to ARM in desktops, it will probably be a transition that a fair number of PC manufacturers take....which means such a change is iffy and certainly not happening anytime soon. I know the ARM community is excited about competing with Intel in desktop and server. But until we see how Intel performs with GPU integration and increased power efficiency and what ARM can do to increase clock speeds and computing power, there is no way to predict what will happen. Should be interesting....

How is a 32-bit ARM architecture supposed to compete with Intel in desktops and servers? I have 12 GB of RAM in my machine, and certainly couldn't do that if it were based on ARM.
post #110 of 119
Quote:
Originally Posted by alandail View Post

How is a 32-bit ARM architecture supposed to compete with Intel in desktops and servers? I have 12 GB of RAM in my machine, and certainly couldn't do that if it were based on ARM.

Sure you could, if the OS was written appropriately. There are lots of techniques that can paper over design constraints like that. The problems are that application programmers often goof up using the APIs and make a mess of it, and the techniques run in software, which makes things a bit slower than you would like (see Windows NT for all of the above).

If you want to make a judgement, at least make it based on correct facts rather than naive bullcrap.
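For readers wondering what those "techniques" look like in numbers: Cortex-A15's large-physical-address extension, mentioned earlier in the thread, widens physical addressing to 40 bits while each process keeps a 32-bit virtual space. A minimal sketch of the arithmetic (illustrative only, not a claim about any shipping Mac):

/* Address-space arithmetic behind the ">4GB of RAM on 32-bit ARM" point. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t per_process_virtual = 1ULL << 32;  /* 4 GiB visible to each app */
    uint64_t lpae_physical_limit = 1ULL << 40;  /* 1 TiB of addressable RAM  */
    uint64_t installed_ram_bytes = 12ULL << 30; /* the 12 GB machine above   */

    printf("Per-process virtual space: %llu GiB\n",
           (unsigned long long)(per_process_virtual >> 30));
    printf("40-bit physical ceiling:   %llu GiB\n",
           (unsigned long long)(lpae_physical_limit >> 30));
    printf("12 GiB of RAM fits?        %s\n",
           installed_ram_bytes <= lpae_physical_limit ? "yes" : "no");
    /* The catch: no single 32-bit process can map more than 4 GiB at once,
     * which is where the OS and application work comes in. */
    return 0;
}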
post #111 of 119
Quote:
Originally Posted by Dirk Savage View Post

i posted twice..don't know how to delete.

Welcome to the forum.
post #112 of 119
Quote:
Originally Posted by Hiro View Post

Sure you could, if the OS was written appropriately. There are lots of techniques that can paste over design constraints like that. The problems are that applications programmers often goon using the APIs and make a mess of it, and the techniques run in software which makes things a bit slower than you would like (See WindowsNT for all of the above).

If you want to make a judgement, at least make it based on correct facts rather than naive bullcrap.

we're specifically talking about OS X, not some random OS. And we're talking about a low end mac that developers can target with a recompile, not something that introduces new programming rules. I don't recall a single instance, dating back to the original 680x0 based NeXT days, when OS X supported paged memory to allow more physical memory than the CPU could address. And I really don't see that changing.
post #113 of 119
Quote:
Originally Posted by alandail View Post

we're specifically talking about OS X, not some random OS. And we're talking about a low end mac that developers can target with a recompile, not something that introduces new programming rules. I don't recall a single instance, dating back to the original 680x0 based NeXT days, when OS X supported paged memory to allow more physical memory than the CPU could address. And I really don't see that changing.

You made a general comment about some mythical future CPU architecture & OS combination. You don't get to say "I have invented a future CPU, but I restrict it to running on an unadapted OS". That's pure idiocy.

So even if you are talking about OS X, you weren't talking about 10.6, or 10.7, but some future version just as mythical as the hardware you were discussing. Once that mythological line is crossed, every counter-argument you possibly have trying to restrict it is as broken as a dick that won't respond to Viagra or Cialis. Face it, your above post is flaccid and impotent.

I don't see the hardware changing to your mythological composition either, which makes your whinging about having to hold to consistent rules in your own mythology, given some past reality, even less relevant.
post #114 of 119
Quote:
Originally Posted by Hiro View Post

You made a general comment about some mythical future CPU architecture & OS combination. You don't get to say "I have invented a future CPU, but I restrict it to running on an unadapted OS". That's pure idiocy.

So even if you are talking about OS X, you weren't talking about 10.6, or 10.7, but some future version just as mythical as the hardware you were discussing. Once that mythological line is crossed, every counter-argument you possibly have trying to restrict it is as broken as a dick that won't respond to Viagra or Cialis. Face it, your above post is flaccid and impotent.

I don't see the hardware changing to your mythological composition either, which makes your whinging about having to hold to consistent rules in your own mythology, given some past reality, even less relevant.

changing CPUs for the mac wasn't my idea, and is something I said is unlikely to happen. I was simply pointing out an obstacle in Apple doing something like that.
post #115 of 119
Quote:
Originally Posted by alandail View Post

changing CPUs for the mac wasn't my idea, and is something I said is unlikely to happen. I was simply pointing out an obstacle in Apple doing something like that.

So your defense is "it wasn't my idea!" and you didn't think it through before you jumped into the thread? That's not much better...
post #116 of 119
Quote:
Originally Posted by Hiro View Post

So your defense is "it wasn't my idea!" and you didn't think it through before you jumped into the thread? That's not much better...

please read the thread. My first post on the topic said

Quote:
I'd be really surprised to see anything like that. Any gap between the current iPad and the 11" MacBook Air will be filled with more powerful iPads, not less powerful Macs that require all-new universal binaries.

It wasn't my idea, I wasn't in favor of it, I listed reasons why it would be a bad idea, you apparently agree with me, but spent your time arguing against one of my reasons against it.
post #117 of 119
Quote:
Originally Posted by alandail View Post

It wasn't my idea, I wasn't in favor of it, I listed reasons why it would be a bad idea, you apparently agree with me, but spent your time arguing against one of my reasons against it.

No, one of your supposed reasons was broken, I pointed that out. Then you tried to paper over it by claiming the hardware would change and dismissing the possibility of OS changes because ~we are talking about OS X~. Then when called on that fictitious cherry pick you rolled to ~it wasn't my idea~.

I don't agree with you; the reasons why things don't work are more important than blind, misguided statements. We aren't saying the same things just because you pulled a blind squirrel. Someday, as the tech world inexorably moves on, the reasons something didn't work very well will get fixed. But your broken reason can never get fixed, and it could incorrectly lead some who don't understand deeply to believe that something very possible isn't. That's not good.

I find your sloppiness in technical knowledge followed by attempted slipperiness to be of little value added, and don't see the need to let it sit without correction. Repeated excuses don't change anything, they just illustrate unwillingness to understand.
post #118 of 119
Quote:
Originally Posted by Tallest Skil View Post

Apple is VERY willing to do that. Stop giving them more ideas about it.

It has tons of incentive.

P.A. Semi making desktop chips? Means obscenely low power, an architecture built the way Apple wants it, new proprietary programming languages that take advantage of these chips, and less heat output for even thinner machines.

That's just the superficial incentive.

You said it way better than I could've. Good job.
post #119 of 119
Quote:
Originally Posted by alandail View Post

changing CPUs for the mac wasn't my idea, and is something I said is unlikely to happen. I was simply pointing out an obstacle in Apple doing something like that.

There is something you should know about the Mach kernel and the Yellow Box. Before becoming Mac OS X and iOS, NeXTSTEP was created on a 68030 @ 33MHz. When NeXT stopped making their 68040-based NeXT workstations, NeXTSTEP became OpenStep (the Yellow Box), which was essentially an application layer on top of Windows. Apple still uses the Yellow Box for porting their Objective-C apps like Safari and iTunes to Windows, and Apple could, if they wanted, port any Cocoa app into a native Windows app. Mach-O apps have the ability to pack multiple CPU architectures in the same executable. Besides, Apple has already gone beyond NeXT with LLVM and JIT executables, making it possible to share the same apps across many systems. The Yellow Box could really challenge the Java VM on its own turf.

I don't know Apple's long-term plans, but they have all the technology to handle any platform the future will bring. Apple's recent OS effort has been put into exponentially growing core counts and hardware diversity, with OpenCL and Grand Central Dispatch. Think of the power of 25 x 1-watt A5s vs. 1 x 25-watt Core i7. I'm sure the iOS kernel and frameworks are already using all the optimizations they can for the new multi-core GPU and A9 cores - something Google can't do on Android, because they don't fully control its kernel, drivers, and Java VM. This is why Honeycomb is performing so badly and Google has been forced to close the source code, to give themselves some time to clean up the big licensing mess they got into.
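On the "25 x 1-watt A5s vs. one 25-watt Core i7" point: Grand Central Dispatch is the piece that lets the same C/Cocoa code spread itself across however many cores a future chip happens to offer. A minimal sketch using the standard libdispatch API (nothing here is specific to any unannounced hardware):

/* gcd_demo.c - spread a trivially parallel loop across the available cores.
 * Build on OS X with: clang -o gcd_demo gcd_demo.c
 * (blocks and libdispatch ship with the system toolchain). */
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t n = 8;
    double *results = malloc(n * sizeof *results);

    /* dispatch_apply waits until every iteration finishes; GCD decides how
     * many run concurrently based on how many cores the machine has. */
    dispatch_apply(n, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t i) {
        results[i] = (double)i * (double)i;  /* stand-in for real per-core work */
    });

    for (size_t i = 0; i < n; i++)
        printf("results[%zu] = %.1f\n", i, results[i]);

    free(results);
    return 0;
}

The same source, recompiled, would spread the work across two A9 cores or eight of some future core equally happily; that is the portability argument the post is making.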

One day we shall see PCs as big clunky muscle cars.