
Apple's A5 processor could pave the way for larger chip trend - Page 2

post #41 of 119
Quote:
Originally Posted by ksec View Post

Well, regarding die size, I was saying that Apple doesn't care about die size as much as Nvidia does, due to cost issues.

TSMC's 40nm LP is actually much better than people think. But Samsung holds all the licenses needed to create Apple's SoC.

The cost of the A5 comes from other analysts and iSuppli, who predict somewhere in the $20 - $30 range. So I took the middle of the estimate as $25.

The thing is the A5 is a multichip module. If I remember correctly, it's a stack of three chips. This impacts pricing versus a single-die implementation. I would lean towards $40 for each assembled A5. Basically they sandwich 3 state-of-the-art chips in the A5 package.
post #42 of 119
Quote:
Originally Posted by EgisCodr View Post

I guess you guys don't realize that the Tegra 2 is last year's chip. You are comparing the A5 to a chip from 2010. The next chip will be out in products in June. Code-named Kal-El, it is the first quad-core processor on the market. Five times faster than the Tegra 2.

That is THIS YEAR's chip. Cannot wait to see how the A5 stacks up.

Apple would be better served buying the chips from Nvidia. By 2014, Nvidia will have chips 100 times faster than Tegra 2. This is an arena Apple will not be able to keep up in.

I guess just about everything has already been said by the people before me, but I just wanted to add that I can't believe you are falling for the same hot air NVidia is blowing with Tegra 3 again. It's like you are buying into the same PR bluff three times in a row.

First Tegra was going to be the killer SoC from Nvidia that would show the complete embedded establishment how it was done. After a year of posturing about how great it was, it finally ended up in just one mildly popular device from a big manufacturer (the Zune). It never delivered on any of its promises, because before everyone found out it wasn't so great after all, NVidia was already touting the Tegra 2 as the next big thing. This went on for another year, NVidia continually bragging about a chip no one could actually buy yet, leaving no opportunity unused to put the Tegra 2 name everywhere, even in their 'own' Tegra 2 app store. Now devices are shipping with this fabulous chip, and surprise surprise: the CPU part is nothing special, and the GPU part is severely underperforming; it's almost a year behind the curve compared to Imagination's offerings.

Now instead of blindly following the same 'NVidia is going to rock the ARM SoC world' mantra a third time, realize that there are companies that have a whole decade more experience designing ARM CPUs and GPUs. NVidia might be a big name in desktop GPUs, but embedded graphics is a whole different ballgame, with completely different design requirements and performance metrics; it's almost incomparable to desktop GPUs in terms of how you build them and what makes them fast and efficient. Desktop GPUs are mainly about brute force, throwing more hardware at the problem. Mobile GPUs are nothing like that.

When I was graduating I worked on a video decoding chip that was supposed to be used for 3D graphics somewhere in the future. Back then (10 years ago), at the company I was graduating at (it was bought by Intel a few months ago, by the way), everyone was looking at Imagination already, because even then they were the only credible mobile GPU company. TEN years ago, and they have been steadily improving all these years up to now. NVidia is just entering this market and hoping their desktop GPU experience will give them a wildcard for entry, but it won't. It will take them at least 5 years before they can close the gap with Imagination, if they ever do.

NVidia is not going to take over the world with their ARM SoCs; they should be happy if they are still even in that business in 5 years instead of quietly retreating and going back to what they are good at. I have to admit I'm impressed by the way NVidia has managed to spin public opinion (at least in the tech world) into believing they have a great product line-up and roadmap, because in reality, they are barely catching up.
post #43 of 119
FWIW, there's more than one flavor of Tegra 2/graphics combo. The fact that a specific Apple chipset on a specific device benchmarks better than a specific Tegra 2 in a specific device doesn't mean that all combinations will have the same results, does it? And Tegra isn't even the latest mobile chipset in use.

http://www.engadget.com/2011/04/02/q...ompetition-in/
post #44 of 119
Quote:
Originally Posted by EgisCodr View Post

I guess you guys don't realize that the Tegra 2 is last year's chip. You are comparing the A5 to a chip from 2010. The next chip will be out in products in June. Code-named Kal-El, it is the first quad-core processor on the market. Five times faster than the Tegra 2.

That is THIS YEAR's chip. Cannot wait to see how the A5 stacks up.

Apple would be better served buying the chips from Nvidia. By 2014, Nvidia will have chips 100 times faster than Tegra 2. This is an arena Apple will not be able to keep up in.

To be honest, I also believed that NVIDIA's Tegra chips would rule the mobile chip world, but by now I think it was mostly NVIDIA marketing buzz.

Looking back, they claimed that the first Tegra-equipped Zune HD would crush the iPod Touch in gaming performance. Unfortunately it wasn't able to show this superiority, and the Zune failed to attract developers and customers.

Then there was word that the Tegra 2 would crush the competition. But besides the lack of games for, e.g., the Xoom, it never showed a significant advantage over the A4 in graphics performance.
That's because it's not only raw calculation power that counts; things like memory bandwidth, fillrate, drivers and core system optimization also matter when it comes to decent performance.

Apple's big advantage is that the optimization runs both ways. Starting with the A4 they not only optimized the software for the chip but also began to optimize the SoC for the needs of their software.

I think the A5 is an impressive result of this synergistic, multi-level optimization strategy, which ensured that games like Real Racing 2 were able to unleash its power from day one.

Even if NVIDIA packs an impressive number of cores into the Tegra 3, it seems unlikely that devices like a Xoom 2 will soon be able to take advantage of them. Android might get better at multicore efficiency, but that OS will have a hard time catching up with Apple, which already has technologies like a GPU-accelerated UI, GCD and OpenCL in place.

The big advantage for Apple and their 3rd-party devs is that they can easily unleash the power of the A5 or an upcoming A6 chip, while Google and Android devs will have to cater to a multitude of SoCs.

Apple concentrates on improving the ARM Cortex-A and PowerVR SGX lines, which ensures that devs have a future-proof compatibility guarantee for optimized code and only have to adopt new SoC features when favorable.

Therefore I don't believe that the Tegra-Android combo will catch up anytime soon.
post #45 of 119
Apple can easily clock the A5 lower for iPod and iPhone use. As for space, I'm sure they can squeeze it in, that's what they do best.
post #46 of 119
Domino, Google is making chipset standardization a priority in its Android development. I think they've taken a cue from Apple and realized that having their software running on different chipset combos makes for an uneven user experience. Some see great results while others are so-so. Expect to see a more unified approach from Android over the next year, and that includes chipsets.
post #47 of 119
Quote:
Originally Posted by solipsism View Post

LOL I’d love to see a water-cooled smartphone.

Fandroids would love it!

Edit: Damn, pipped by ThePixelDoc
post #48 of 119
Quote:
Originally Posted by solipsism View Post

Excellent points, but could you explain this one in more detail? I'm wondering why fewer metal layers would create a larger chip.

Transistors must be connected together through wires for them to be called integrated circuits! The challenge, of course, is how do you wire hundreds of millions of transistors together?

The metal layers are the wiring for the transistors. The transistors are etched, drawn onto the silicon wafer. Then the wires are deposited in layers on top of the transistor layer. My armchair understanding of this (I'm no EE, btw) is that by using more metal layers, the CPU die can be made more compact.

So, at the 10,000 ft level, if you laid down the wires at the same level as the transistors, the various blocks in the CPU (fetchers, decoders, integer units, caches, registers, execution units, etc.) would have to be floorplanned in such a way as to minimize die size, as there would be millions of wires in between them. This would make for a pretty big die in any case.

Well, you can reduce the die size by taking some of the wiring off this first layer and depositing it in another layer above. This allows the die size to shrink some by removing the area occupied by the moved wiring. The blocks can then be replanned to occupy the space more compactly, in an optimization dance between the transistor floorplan and the wiring layers. By adding more metal layers, or wiring/interconnect layers, the manufacturer can further optimize die size. There is a point of diminishing returns, though, and other factors (yield) come into play.

In a 6-metal-layer CPU, you have 1 level of transistors, then 6 levels of wiring connecting the transistors. So a CPU is really a 3-dimensional construct, like a 7-story building where the first floor is the transistors and the remaining 6 floors are nothing but wiring.
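Here's a cartoon of that trade-off in code, just to show the diminishing returns. It treats the die as a transistor footprint plus routing demand divided across the metal layers; real floorplanning is vastly more complicated, and every number below is invented purely for illustration.

Code:
#include <stdio.h>

/* Cartoon die-size model: transistors occupy a fixed footprint, and the
 * wiring demand is spread across the available metal layers. All figures
 * are invented for illustration only. */
static double die_area_mm2(double transistor_mm2, double wiring_mm2, int layers) {
    /* The die must be at least big enough for the transistors, and each
     * metal layer above them has to fit its share of the wiring. */
    double wiring_per_layer = wiring_mm2 / layers;
    return (transistor_mm2 > wiring_per_layer) ? transistor_mm2 : wiring_per_layer;
}

int main(void) {
    double transistors = 60.0;   /* hypothetical transistor footprint, mm^2 */
    double wiring = 300.0;       /* hypothetical total routing demand, mm^2 */

    for (int layers = 1; layers <= 8; layers++)
        printf("%d metal layer(s): ~%6.1f mm^2 die\n",
               layers, die_area_mm2(transistors, wiring, layers));
    /* With few layers the wiring dominates the footprint; once the
     * per-layer wiring fits over the transistors, adding more layers
     * stops shrinking the die, i.e. diminishing returns. */
    return 0;
}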
post #49 of 119
Quote:
Originally Posted by Shrike View Post

That's just a little bit of exaggeration there, man. Tegra 3 is 5x faster than Tegra 2? And June? Nvidia ARM SoCs will be 100x faster in 2014?

How about this: Tegra 3 will be out in 2H 2011. The rumors are saying August, not June. There are no known design wins for it yet; they had better announce one soon if a product is going to ship soon. In system throughput, a 1 GHz Tegra 3 has the potential to be 2x faster than a 1 GHz Tegra 2, and has the potential to be 5x faster in GPU than Tegra 2.

In 2014, 3 years away, ARM SoCs will be at 18nm to 20nm process nodes, representing 4x to 5x the transistor count of Tegra 2. This will translate to about 2x to 3x the single-threaded performance (there isn't much low-hanging fruit for single-threaded performance after the A5) and 5x to 10x the multi-threaded performance (quad-core with 2-way SMT). Mobile GPUs will be much more power-consumption limited than desktop GPUs are, and I think 5x to 10x is the most one could expect there as well. You can likely have 100x GPU performance (desktop SLI rigs today are probably already there), but I doubt they are going to fit in handheld products.

Nvidia's press release roadmap is great and all, but it's a press release roadmap. You should inject some realism instead of propagandizing it.

Tegra 3 is going to be built on the same 40 nm process as Tegra 2. The thing has about 50% more transistors than Tegra 2. It'll need to operate at ~25% lower voltages and be power-gated up the wazoo to fit inside the same power envelope (~0.5 Watts) as the Tegra 2 in order to fit inside handhelds. People should be skeptical that they could do this, and quad-core SoCs probably won't fit until the 28nm to 32nm nodes. And that means 2012.

According to Nvidia's web site (http://blogs.nvidia.com/2011/02/tegr...ile-processor/)

Nvidia are claiming the quad core (Kal-El) is sampling now and launches in August 2011.

As to running in the same power budget, this will really depend on how they have exploited power gating, DVFS and the quad cores, and how the OS load-balances between the cores. No mean feat, I can tell you. However, for many use cases a quad-core A9 vs. a dual-core A9 should be far more power-efficient, as the four cores will be running at lower power more of the time, which will give less heat and less leakage (so better active time).
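For what it's worth, a quick back-of-the-envelope check on the power-envelope question above: dynamic power scales roughly as C x V^2 x f, so a voltage drop in the ballpark mentioned roughly cancels the extra switching capacitance of the added transistors. A toy sketch (the 50% and 25% figures are just the estimates quoted above, not Nvidia numbers, and it ignores leakage entirely):

Code:
#include <stdio.h>

/* Toy dynamic-power model: P_dyn is proportional to C * V^2 * f.
 * Treat switching capacitance as proportional to transistor count and
 * hold frequency constant; the absolute constant cancels in the ratio. */
int main(void) {
    double cap_scale  = 1.5;   /* assumed ~50% more transistors switching */
    double volt_scale = 0.75;  /* assumed ~25% lower supply voltage */
    double freq_scale = 1.0;   /* same clock for the comparison */

    double power_ratio = cap_scale * volt_scale * volt_scale * freq_scale;
    printf("relative dynamic power: %.2f of the old budget\n", power_ratio);
    /* Prints ~0.84, i.e. slightly under the old envelope, before
     * accounting for leakage, which lower voltage and power gating
     * also have to keep in check. */
    return 0;
}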

http://twitter.com/kevinmcintyre09
post #50 of 119
Quote:
Originally Posted by wizard69 View Post

There are several things the article didn't consider.

1.
The article didn't take into account Apple's FAST logic, which trades space for lower power and higher performance.

2.
The CPU is assembled in a module that stacks RAM on top of it. The chips need to line up properly to accomplish this.

This is common practice in the industry; most semicos I have dealt with offer this as an option.

3.
Nobody has a sound idea as to what is actually included on the A5. Chipworks can guess at possible functionality, but they won't get 100% of it. So the question is how much of this comparison is one-to-one.

The x-ray analysis or whatever it is called is pretty good at comparison. So they have a fair idea of A4 vs A5, and most of the major blocks are distinguishable.

4.
I will take a trade-off that gives me better performance and lower power any day. Customer satisfaction means more than chip size!!!!

Me too
post #51 of 119
Quote:
Originally Posted by Swift View Post

It showed at last year's CES. There are a couple of half-assed tablets on the market now with that chip. Not before this year, though they've told us all multiple times that it was coming. You can tell the difference between "shipping" and "in the design stages," can't you?

A quad chip? Will it finally run Flash?

My single-core 1.2GHz Droid 2 Global runs Flash smoothly. I watch the Daily Show on the official website with it. So I'm guessing a quad-core ARM Android device will amply run Flash. Very amply.

Now if the A5 and iPad are so fast, why isn't Apple allowing you to have Flash? That is the question.
"Overpopulation and climate change are serious shit." Gilsch
"I was really curious how they had managed such fine granularity of alienation." addabox
Reply
"Overpopulation and climate change are serious shit." Gilsch
"I was really curious how they had managed such fine granularity of alienation." addabox
Reply
post #52 of 119
Quote:
Originally Posted by kevin.mcintyre View Post

Quote:
Originally Posted by Shrike

[...]
How about Tegra 3 will be out in 2H 2011. The rumors are saying August, not June.
[...]

According to Nvidia web site (http://blogs.nvidia.com/2011/02/tegr...ile-processor/)

Nvidia are claiming the quad core (Kal-El) is sampling now and launches in August 2011.

[...]

Sounds like he's getting his info from the same place you are. Since you both have it, that tells me that the August launch for Tegra 3 is the most accurate. Now, this is for production of the chip, not for a shipping product utilizing the chip, right? So when are we to expect new devices utilizing Tegra 3? I assume by the end of the year we'll see a couple trying to claim the prize of being first, but that CES 2012 will be swarmed with another round of iPad-killers sporting Tegra 3.
post #53 of 119
Quote:
Originally Posted by Aquatic View Post

My single core 1.2ghz Droid 2 Global runs Flash smoothly. I watch the Daily Show on the official website with it. So I'm guessing a quad-core ARM Android device will amply run Flash. Very amply.

Now if the A5 and iPad are so fast, why isn't Apple allowing you to have Flash? That is the question.

I can watch the Daily Show on the official website on my iPhone, too. But I don't need to have Flash to watch it.
post #54 of 119
Quote:
Originally Posted by Aquatic View Post

My single core 1.2ghz Droid 2 Global runs Flash smoothly. I watch the Daily Show on the official website with it. So I'm guessing a quad-core ARM Android device will amply run Flash. Very amply.

Now if the A5 and iPad are so fast, why isn't Apple allowing you to have Flash? That is the question.

That was settled a long time ago, if you ever followed the various discussions, but you are too lazy to do that. It makes your point irrelevant. I have several browsers (Camino, Firefox, Chrome, Opera) that were set up differently, some with Flash, others where Flash is restricted.

Comparing the same website, e.g., the New York Times, I get a better viewing experience in the browser where Flash is restricted -- no stalling, much better battery consumption, etc.

One very big bonus: many ads and useless animations still use Flash -- I get to view the New York Times without having to suffer those ads and animations. Do you know how satisfying it is to be freed of most of those ads and animations? I have photos to prove this experiment.

The last observation alone is a big argument NOT to have Flash -- in my book.

One question that you should answer: can you name ten (10) very popular (most-visited) websites that do not yet have Flash alternatives? Do you even know what that is, a Flash alternative?

Do you see many here dying of envy with your Flash?

So, enjoy your Flash but do it in your own time. Or, have an orgy with Flash with other Droids, and Xooms. Oh wait, the Xoom owners do not yet have Flash.

Quote:
Originally Posted by sessamoid View Post

I can watch the Daily Show on the official website on my iPhone, too. But I don't need to have Flash to watch it.


And that is because any show, website, app creator or other endeavor that wants to reach a larger audience cannot ignore the roughly 200 million iOS devices already around (this number may reach around 250-300 million by the end of this year). The owners of these iOS devices are known to be willing to spend their money and buy stuff at a premium, not just use free apps.

No Flash on those iOS devices. The solution for creators: don't use Flash, or create Flash alternatives (e.g., HTML5) for those iOS devices. That is how YouTube works -- it has both Flash and non-Flash versions, the latter for iOS devices. Many others do the same. Others simply do not use Flash anymore. That is what I am seeing in many of the sites I visit myself.

I do not use Flash in any of the websites I created. Joomla (a very popular CMS used by millions of website creators all over the world) does not include Flash as a native component.

QED

CGC
post #55 of 119
Quote:
Originally Posted by Ssampath View Post

I think the question is what value-add Nvidia provides in the ARM food chain if an upstart (in the chip business) like Apple can outdo them with the A5. Also, there are at least four other companies doing ARM chips: Marvell, Samsung, TI and Qualcomm. It is not clear to me that this is going to be a great business for Nvidia.

For stock investors, I'd submit there is no value in investing in any of these companies. I wouldn't.

The market is fairly saturated today, so, there isn't a lot of opportunity for chip companies to make handsome profits, especially for $15 parts.

Theoretically there is a scenario where ARM becomes the de facto standard for computing: servers, desktops, laptops, handhelds and embedded. This would allow an increase in per-part revenues: desktop/laptop chips would be in the $100 to $500 range and server parts in the $3000 to $5000 range. A company could make handsome profits by selling into these markets and sweeping away the existing competitors in this space.

Maybe it is Nvidia, maybe it could be Qualcomm or TI. Maybe Intel will smarten up and license ARM and fab ARM chips again and it'll be them. That's an interesting investment scenario and if you choose the right company, you can get 10:1 returns. If only Intel would smarten up.

But.

ARM is a licensed architecture. It's not owned by a fab company. In such a scenario I can't see how one company can come to dominate (except for Intel). All companies are essentially at parity in terms of architecture and fabs, so how does one company dominate? (Intel is the exception. They can do it, but it looks like they let x86 cut their legs out from under them before they moved.)

Agree with you. Chip companies are a poor investment. As for why Nvidia would want to do it, well, you have to give them credit for wanting to dominate. It is more business for them and represents a growth revenue stream.
post #56 of 119
Quote:
Originally Posted by extremeskater View Post

Apple didn't build the A5 chip, Samsung built the A5.

Touché.
post #57 of 119
Quote:
Originally Posted by Aquatic View Post

My single core 1.2ghz Droid 2 Global runs Flash smoothly. I watch the Daily Show on the official website with it. So I'm guessing a quad-core ARM Android device will amply run Flash. Very amply.

Now if the A5 and iPad are so fast, why isn't Apple allowing you to have Flash? That is the question.

Your first answer:

Quote:
Originally Posted by sessamoid View Post

I can watch the Daily Show on the official website on my iPhone, too. But I don't need to have Flash to watch it.

The question for you: why do you think having standard H.264 video wrapped in a Flash wrapper whose only purpose is to force you to use proprietary, closed Flash is a good thing? Why do you think it's a good thing for Adobe to dictate release schedules for companies? How many years did it take Adobe to release a working Flash?

I think you'd be surprised at just how many sites there are that still serve Flash when you access them from a desktop computer or an Android device, but work just fine without Flash on an iOS device. Why do they still use Flash? They want you to install it so their Flash ads will run. I wish they would stop doing that, because Flash is the only thing that ever crashes on my computer, and it crashes daily.

So you have an "open" phone so you can extend the life of a needless proprietary wrapper for h.264 to get more ads and crash more computers. Is there even one thing that flash does that couldn't be accomplished without flash (aside from crashing computers)?
post #58 of 119
Apple bought PA Semi just for their chip design talent back in 2008. It cost Apple $278 million, and now their investment is finally starting to pay off. The A5 chip is the fastest mobile SoC on the market, it runs cool, it is economical with battery power, and it costs Apple far less than an off-the-shelf chip from, say, Intel.

And down the road, it's not out of the realm of possibility for Apple to unify its Mac and iOS device hardware to all use some quad- or 6-core ARM variant. Especially for portables like the MacBook Air. Of course, that would require porting Mac OS X to the AX chip of the future. But don't forget that Mac OS X ran on RISC chips from the start. Been there, done that transition.

If I were to make a silly wild-ass guess, I'd say that Apple is quietly developing the RISC-based AX version of Mac OS. They'll sit back and watch Microsoft botch their attempt to port Windows 8 to Tegra. It'll be buggy, the Tegra chip will run hot, and laptops running the combination will get poor battery life. And as we've seen, Microsoft has a poor track record of providing apps with backward compatibility. Two words: "XP Mode."

Apple could watch the Windows 8 + Tegra dumpster fire get out of control, then drop the bomb. Mac OS 11 running on 4- and 6-core AX chips, Grand Central Dispatch balancing the load perfectly, MacBooks running as cool as iPads with enormously long battery life, apps running as fat binaries just like they did in the 68k to PowerPC and PowerPC to Intel transitions.
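As an aside on the Grand Central Dispatch point: GCD already lets an app spread work across however many cores the chip happens to have without hard-coding a core count. A minimal sketch using the real dispatch API (the "image filter" workload and its sizes are made up for illustration):

Code:
#include <dispatch/dispatch.h>
#include <stdio.h>

/* Minimal GCD sketch: dispatch_apply spreads loop iterations across the
 * available cores via a global concurrent queue. Build on OS X / iOS
 * with: clang -fblocks demo.c
 * The "rows" workload stands in for any parallelizable task. */
#define ROWS 1024
#define COLS 1024

static float image[ROWS][COLS];

int main(void) {
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* GCD decides how many worker threads to use based on the hardware,
     * so the same binary scales from one core to many. */
    dispatch_apply(ROWS, q, ^(size_t row) {
        for (size_t col = 0; col < COLS; col++)
            image[row][col] = image[row][col] * 0.5f + 1.0f;  /* dummy filter */
    });

    printf("processed %d rows\n", ROWS);
    return 0;
}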

Should be fun to watch.

post #59 of 119
You guys feed the trolls too much.
post #60 of 119
Quote:
Originally Posted by Gatorguy View Post

Domino, Google is making chipset standardization a priority in it's Android development. I think they've taken a cue from Apple and realized that having their software running on different chipset combos makes for an uneven user experience. Some see great results while others are so-so. Expect to see a more unified approach from android over the next year, and that includes chipsets.

Sounds reasonable at first glance, but I think this move is pretty complicated.

Where to draw the line? Tegra is OK by now, but what about Qualcomm SoCs with Adreno GPUs, or Samsung's Hummingbird with the SGX 540 or better?
If this is already too much differentiation, should Samsung abandon their own chips for the Tegra in their new tablets?

The second problem is Google's system of electing launch partners, like they did with Moto and NVIDIA for Honeycomb. These partners invested a lot in the Xoom, and as compensation they seem to get exclusivity for some months.

Neither Moto nor NVIDIA is interested in providing their hard work for free to their competitors. Perhaps this is one of the reasons why the HC code is still closed and has to be "cleaned" a bit.

Why should Samsung, LG, NVIDIA and all the others invest heavily in a system over which Google is willing to exercise complete control?
I'm sure all these partners would prefer to optimize Android for their chips and systems.

And third: it seems that this "open always wins" and "don't be evil" is just the marketing buzz of an advertising company. While Android fans seem not to be much affected by this move, Google lost a lot of credibility inside the open source community and among their partners.
It was the broad support of the community and the huge variety of partners that enabled the success of Android.

Taking a cue from Apple and switching to a model of hand-picked partners is kicking the others in the a$$.

Which leads me to the fourth point:
Google somehow officially admits that HC was rushed to the market.
So the hard cuts seem necessary to keep pace.

We're talking about a different business model for Android 3.x than that of 2.x.

I'm not convinced that it will be equally successful because it starts with some displeased partners questioning Google's trustworthiness.
post #61 of 119
Quote:
Originally Posted by extremeskater View Post

Apple didn't build the A5 chip, Samsung built the A5.

Apple builds the A5; Samsung is just one of the contracted fabs. Contracted manufacturing labor, nothing more.
post #62 of 119
Quote:
Originally Posted by EgisCodr View Post

I guess you guys don't realize that the Tegra 2 is last year's chip. You are comparing the A5 to a chip from 2010. The next chip will be out in products in June. Code-named Kal-El, it is the first quad-core processor on the market. Five times faster than the Tegra 2.

That is THIS YEAR's chip. Cannot wait to see how the A5 stacks up.

Apple would be better served buying the chips from Nvidia. By 2014, Nvidia will have chips 100 times faster than Tegra 2. This is an arena Apple will not be able to keep up in.

Interesting how, when it was the A4 vs. Tegra and Snapdragon, it was "hey, you compare what you are shipping, the future is just so much vapor." But when the A5 comes out, it flip-flops to "I guess you guys don't realize that the Tegra 2 is last year's chip."

Opportunistic sour grapes is all your post is about.

And Nvidia has had a horrible time with power management. I guess you can take the engineers out of SGI, but you seemingly can't take the SGI out of the Nvidia lead engineers. I have no confidence they can maintain prominence in the marketplace; they have been showing dangerous engineering and business parallels to SGI for some time, and Huang's ego also cost them at least two to three years in the power management regime.
post #63 of 119
Quote:
Originally Posted by SockRolid View Post

And down the road, it's not out of the realm of possibility for Apple to unify its Mac and iOS device hardware to all use some quad-or 6-core ARM variant. Especially for portables like the MacBook Air. Of course, that would require porting Mac OS X to the AX chip of the future. But don't forget that Mac OS X ran on RISC chips from the start. Been there, done that transition.

So far, I am skeptical of Mac OS X AppKit and iOS UIKit being merged completely. However, to speculate more fancifully, if Apple ever builds a 4- or 6-core ARM SoC, I assume that GCD would make a significant impact within iOS. Another factor would be the increasing capability of LLVM to target different architectures, thus smoothing transitions between x86-64 and ARM. As mentioned above, the Mach-O file format allowed the switch between RISC and CISC CPUs in the past.

It is my belief that Apple keeps its options robust with regard to computer architectures.

post #64 of 119
Quote:
Originally Posted by solipsism View Post

LOL I'd love to see a water-cooled smartphone.

Here you go:

post #65 of 119
Quote:
Originally Posted by solipsism View Post

Excellent points, but could you explain this one in more detail? I'm wondering why fewer metal layers would create a larger chip.

Very simplified analogy: instead of building high-density multi-story apartment buildings connected by skyways, you build suburban townhouses and single-family homes. Same number of people, more streets and acreage necessary.
post #66 of 119
Domino, I agree with you. It's probably going to be a transition with more than a few bumps. And as you correctly hinted, "normal" consumers don't care a whit about open, closed, chipsets or even the OS, for that matter. The bumps and bruises will be with the handset/tablet manufacturers. But apparently Google feels some control needs to happen if Android is going to continue its somewhat surprising success. I think they're right. I don't know anyone who predicted they'd have this kind of mobile market presence this soon. Android was severely underestimated as a competitor. Still is. Google is a better developer than they've been given credit for, going by market success. And evil? I still don't see any real evidence of it.
post #67 of 119
Quote:
Originally Posted by SockRolid View Post

Apple bought PA Semi just for their chip design talent back in 2008. It cost Apple $278 million, and now their investment is finally starting to pay off. The A5 chip is the fastest mobile SoC on the market, it runs cool, it is economical with battery power, and it costs Apple far less than an off-the-shelf chip from, say, Intel.

We really don't know how much PA Semi tech made it into the A5. Fact is, we don't know much at all about the A5. It is, however, an obvious advantage for Apple.
Quote:
And down the road, it's not out of the realm of possibility for Apple to unify its Mac and iOS device hardware to all use some quad-or 6-core ARM variant.

I'd say that is extremely remote. What people seem to miss here are a few very important things that have made the Mac acceptable. One is x86 compatibility; like it or not, being able to run Windows in a VM is a valuable thing. Second, many Mac users are professionals and thus need fast machines, and ARM is pathetically slow when put up against an Intel implementation. Third, ARM is currently a 32-bit platform, and for many common Mac apps, 32-bit is becoming useless.

However, we should not dismiss consumer machines using ARM chips, though I can't see Apple calling these Macs.
Quote:
Especially for portables like the MacBook Air. Of course, that would require porting Mac OS X to the AX chip of the future. But don't forget that Mac OS X ran on RISC chips from the start. Been there, done that transition.

There is really little to port now. In any event, I would imagine Apple keeps Mac OS X running on several architectures as a normal part of keeping the code base solid. Not to mention the fact that the kernel and many of the libraries used on iOS are the same as those built for Mac OS X.

Sadly, people seem to think there is a huge gulf between the two OSes. That has really never been the case; each builds upon what is learned in the other. The differences are there to support entirely different usage patterns, security models and marketing.
Quote:

If I were to make a silly wild-ass guess, I'd say that Apple will be quietly developing the RISC-based AX version of Mac OS quietly. They'll sit back and watch Microsoft botch their attempt to port Windows 8 to Tegra. It'll be buggy, the Tegra chip will run hot, and laptops running the combination will get poor battery life. And as we've seen, Microsoft has a poor track record of providing apps with backward compatibility. Two words: "XP Mode."

While I suspect that Mac OS X already runs on ARM, I see zero incentive for Apple to try to market such devices. It is far easier to grow iOS in a different direction and keep the marketing simple. Right now the consumer has a clear distinction between iOS devices and Macs; Apple really shouldn't muddy the waters.
Quote:
Apple could watch the Windows 8 + Tegra dumpster fire to get out of control, then drop the bomb. Mac OS 11 running on 4- and 6-core AX chips, Grand Central Dispatch balancing the load perfectly, MacBooks running as cool as iPads with enormously long battery life, apps running fat binaries just like they did in the 68k to PowerPC and the PowerPC to Intel transitions.

And running pathetically slowly, with little capability to tackle the bigger jobs. I just don't think you have an understanding of just how far behind ARM is performance-wise; such machines would simply not meet the performance needs of many Mac users.

Again, though, I'm not dismissing the idea that future iOS devices might have capability beyond today's iPad and target the consumer with simpler needs. Rather, the point is: why would Apple muddy up the marketing between Macs and iOS devices? If or when Apple comes out with more iOS devices, I can't see them going after the Mac market with them.

Look at it this way: the iPad is a massive success, however it hasn't impacted Mac sales at all yet. The reason, in my mind, is that they serve different needs and markets.
Quote:

Should be fun to watch.

Well if it ever happens. The netbook market was a flash in the pan due to poor performance measured in a number of ways. I don't see ARM bringing anything special to this market.
post #68 of 119
Quote:
Originally Posted by wizard69 View Post

And running pathetically slow with little capability to tackle the bigger jobs. I just don't think you have an understanding of just how far behind ARM is performance wise, such machines would simply not meet the performance needs of many Mac users.

Again though I'm not dismissing that future iOS devices might have capability beyond today's iPad and target the consummer with simpler needs. Rather the point is why would Apple muddy up the marketing between the Macs and iOS devices. If or when Apple comes out with more iOS devices I can't see them going after the Mac market with them.

Look at it this way, the iPad is a massive success, however it hasn't impacted Mac sales at all yet. The reason in my mind is that they service different needs and markets.

Good points all, well made, but consider this: computer processing power has been outstripping software needs for a while now. Sure, there are edge cases where more grunt is always better (video rendering, heavy-duty image processing, 3D modeling, high-end visualization) but, increasingly, the average Intel desktop/laptop now has vastly more power than the majority of the users of such machines will ever want or need.

That, in fact, is the basis of the success of the iPhone and the market it spawned, and the accelerating success of the iPad -- these types of devices have reached the point where they can do most of what most people want to do with computers.

The iPad is really the poster child for this reality: with software optimizations it can do the work of a Mac from about five years ago, and about five years ago was when desktop/laptop machines started to be as powerful as most users actually needed (outside of the kinds of games that even then were moving largely to consoles).

So it appears to me that most of the processing advantages of desktop/laptop processors are being piled on top of what is already overkill for most users, and that the new mobile chipsets are now moving comfortably into the sweet spot of being able to provide plenty of performance for most of what folks are doing with their computer.

So if Apple were to start running OS X machines on the Ax platform, they wouldn't be crippled machines for most people because most people aren't really using the power of what they have now in their MacBook Pros or Airs or Minis.

So I could see a bifurcation of the Mac line into Mac Pros retaining the Intel architecture and being sold to actual professionals with an actual need for as much processing power as possible, with the Mac line (including the Mini, MacBook and Air and possibly new models) moving to the Ax architecture and becoming extremely cost competitive with vanilla Wintel boxes.

Plus, Apple is obviously going to keep working on merging the iOS and OS X platforms, bringing ever more cross-compatibility along the way. It's not hard to imagine a $300 Ax Mini (being even smaller) which runs OS X out of the box but has an iOS compatibility mode that includes drag and drop between apps. Or, even more likely, an Ax-based Air-like machine for $600 which satisfies the "I need a real OS" crowd while providing iPad size and weight and battery life.
post #69 of 119
I water cooled mine once!
post #70 of 119
i posted twice..don't know how to delete.
post #71 of 119
A 40nm version of the A5 would be around 86mm^2 (122/sqrt(2)) - although different foundries will have different transistor densities anyway, so you can't compare directly. A 32nm chip would be 61mm^2, a 28nm chip (most likely for A6) 43mm^2 (although scaling isn't perfect either!).
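For anyone who wants to redo that arithmetic, the usual back-of-the-envelope rule is that ideal area scales with the square of the feature-size ratio (the 1/sqrt(2) figure above treats 45nm-to-40nm as a half-node step). A quick sketch, using the roughly 122mm^2 figure for the 45nm A5; real shrinks land short of these numbers because densities differ between foundries and not everything scales:

Code:
#include <stdio.h>
#include <math.h>

/* Ideal (optical) area scaling: area shrinks with the square of the
 * feature-size ratio. Real chips never hit this exactly, since foundries
 * differ in density and some structures (pads, analog, SRAM) scale worse,
 * so treat these as upper-bound shrinks. Build with: cc demo.c -lm */
int main(void) {
    double a5_45nm_mm2 = 122.0;             /* approx. A5 die size at 45nm */
    double nodes[] = { 40.0, 32.0, 28.0 };  /* target process nodes, nm */

    for (int i = 0; i < 3; i++) {
        double scale = pow(nodes[i] / 45.0, 2.0);
        printf("%2.0fnm: ~%5.1f mm^2 (x%.2f)\n",
               nodes[i], a5_45nm_mm2 * scale, scale);
    }
    /* Prints roughly 96, 62 and 47 mm^2, in the same ballpark as the
     * 86/61/43 estimates above, which mix in half-node assumptions. */
    return 0;
}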

Tegra 2 doesn't include Neon vector floating point. This is quite large, apparently. Tegra 3 will include it.

A high yielding mature process can allow larger dies to be made than those on a lower yielding process, for the same price.

Tegra 2 also doesn't have on-package memory, unlike the A5.

Sorry if these points have already been made.
post #72 of 119
Forgive me if this has been addressed already, but is the same nanometer process required for the entire package, or could Apple be saving total package size in other areas that are simply not cost-effective for their competition due to economies of scale?
post #73 of 119
Quote:
Originally Posted by addabox View Post

Good points all, well made, but consider this: computer processing power has been outstripping software needs for a while now. Sure, there are edge cases where more grunt is always better (video rendering, heavy duty image processing, 3D modeling, high end visualization) but, increasingly, the average Intel desktop/laptop now has vastly more power than the majority of the users of such machine will ever want or need.

That, in fact, is the basis of the success of the iPhone and the market it spawned, and the accelerating success of the iPad-- these type of devices have reached the point where they can do most of what most people want to do with computers.

The iPad is really the poster child for this reality: with software optimizations it can do the work of a Mac from about five years ago, and about five years ago was when desktop/laptop machines started to be as powerful as most users actually needed (outside of the kinds of games that even then were moving largely to consoles).

So it appears to me that most of the processing advantages of desktop/laptop processors are being piled on top of what is already overkill for most users, and that the new mobile chipsets are now moving comfortably into the sweet spot of being able to provide plenty of performance for most of what folks are doing with their computer.

So if Apple were to start running OS X machines on the Ax platform, they wouldn't be crippled machines for most people because most people aren't really using the power of what they have now in their MacBook Pros or Airs or Minis.

So I could see a bifurcation of the Mac line into Mac Pros retaining the Intel architecture and being sold to actual professionals with an actual need for as much processing power as possible, with the Mac line (including the Mini, MacBook and Air and possibly new models) moving to the Ax architecture and becoming extremely cost competitive with vanilla Wintel boxes.

Plus, Apple is obviously going to keep working on merging the iOS and OS X platforms, bringing ever more cross compatibility along the way. It's not hard to imagine a $300 Ax Miini (being even smaller) which runs OS X out of the box but has an iOS compatibility mode that includes drag and drop between apps. Or even more likely an Ax based Air like machine for $600 which satisfies the "I need a real OS" crowd while providing iPad size and weight and battery life.

I'd be really surprised to see anything like that. Any gap between the current iPad and the 11" MacBook Air will be filled with more powerful iPads, not less powerful Macs that require all-new universal binaries.
post #74 of 119
It must suck for the Apple haters to have the cry of the "Apple Tax" removed from their repertoire.
post #75 of 119
Quote:
Originally Posted by EgisCodr View Post

Cannot wait to see how the A5 stacks up.

You do realize that this isn't 1995 and the integration of hardware and software needs to be taken as a whole to show real world performance?

Right?

Quote:
Apple would be better served buying the chips from Nvidia.

HAHAHAHA! That's rich... I have a sneaky suspicion Apple won't be switching their strategy any time soon.

And as an owner of a Late 2008 MBP that had a crappy Nvidia GPU that had to be replaced not just once, but twice, I won't miss them.

Quote:
By 2014, Nvidia will have chips 100 times faster than Tegra 2.

Ok - but how much will they cost? How much power will they use? How will they really perform? Since Apple owns the design of their chips, the more they make, the lower their per-chip cost gets, since the cost of the intellectual property/design is a fixed cost for them, not the per-unit cost it would be if they were buying from a provider like Nvidia.

They can also customize their design in ways that someone who is buying a simple commodity off-the-shelf chip simply can't.

Do you really think it's an accident that Apple is handily beating all of their tablet "competitors" on pricing?

Quote:
This is an arena Apple will not be able to keep up in.

From your keyboard to Jobs' screen. I guess time will tell. Something tells me Apple isn't worried.
post #76 of 119
Can someone with the proper knowledge comment on this size difference? The delta in size cannot be explained solely by the 45nm vs. 40nm process.

From my own limited knowledge, it would appear the only differences between the chips are:

1) dual-core PowerVR GPU in the Apple chip (vs. GeForce ULP)
2) Implementation of SIMD NEON engine (Tegra doesn't implement NEON)
3) Memory controller differences? Are they both single-channel LPDDR2?

What else would cause the huge size difference?
post #77 of 119
Quote:
Originally Posted by winterspan View Post

What else would cause the huge size difference?

Power management?
post #78 of 119
Quote:
Originally Posted by solipsism View Post

You can now buy the WiFi version of the 32GB Xoom for $599, matching Apple's 32GB iPad price point.
http://www.bestbuy.com/site/Motorola...;skuId=1946197

I suppose some will honestly feel that the slightly higher resolution display and 16:9 aspect ratio on a TN panel are more important than a more universal 4:3 IPS panel on a tablet, or that their cameras are more awesomer despite being cameras on a tablet, or that it's not locked to iTunes and you don't have to buy all your stuff from the iTunes Store, or that Android is open and free. I said feel, I didn't say they would think.



A couple weeks ago we were wondering if this larger chip could fit into the iPhone 4/G4 iPod Touch casing. I recall that it's only slightly larger on the short axis, making it possible if they maneuvered some other chips on the board.

sheff's remark that it could be for better cooling makes sense to me. Do we know how thick the A4 and A5 PoPs are compared to other chips? No one has yet made a logic board as dense as Apple's iPhone 4 board. They are stacking chips on either side, so perhaps thickness is more important than area in this case.
G1 iPhone Teardown
G4 iPhone Teardown. They've come a long way.

Memory size becomes a moot point with AirShare; I can access my entire iTunes library from a 16GB iPad and only need to carry around the essentials, for $499.

I managed to pick up an ex-demo iPad 16GB (first gen, with a dock and a case) for $249 from a store which didn't know its real value.

There's also the 16GB iPad plus 20GB of MobileMe storage; at the same price point you can have 36GB.
post #79 of 119
Quote:
Originally Posted by Hattig View Post

Tegra 2 doesn't include Neon vector floating point. This is quite large, apparently. Tegra 3 will include it.

This is a big deal. ARM VFP floating point SUCKS. NEON is quite good, as long as you stick with single precision, and for the demanding graphics apps where it counts, double precision is not generally used. I spent summer 2010 developing embedded image processing algorithms for an OMAP platform. NEON is about 3x faster than VFP even if you don't know what you're doing or have a rubbish compiler. If you do know what you're doing, it's more like 8x.
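To make the VFP-versus-NEON point concrete, here's a minimal sketch of the kind of single-precision loop that maps well onto NEON, using the standard arm_neon.h intrinsics. The 4-wide multiply-accumulate is the whole trick; the array sizes and data are arbitrary, and a real kernel would also handle leftover elements and alignment.

Code:
#include <arm_neon.h>
#include <stdio.h>

/* Single-precision multiply-accumulate, 4 floats per NEON instruction.
 * Compile for an ARMv7 target with NEON, e.g.:
 *   gcc -O2 -mfpu=neon -mfloat-abi=softfp demo.c
 * A scalar VFP version of this loop handles one float at a time. */
#define N 1024

int main(void) {
    static float a[N], b[N], acc[N];

    for (int i = 0; i < N; i++) { a[i] = i * 0.5f; b[i] = 2.0f; acc[i] = 1.0f; }

    for (int i = 0; i < N; i += 4) {
        float32x4_t va = vld1q_f32(&a[i]);   /* load 4 floats */
        float32x4_t vb = vld1q_f32(&b[i]);
        float32x4_t vc = vld1q_f32(&acc[i]);
        vc = vmlaq_f32(vc, va, vb);          /* acc += a * b, 4 lanes at once */
        vst1q_f32(&acc[i], vc);              /* store 4 floats */
    }

    printf("acc[10] = %f\n", acc[10]);       /* expect 1 + 10*0.5*2 = 11 */
    return 0;
}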
post #80 of 119
Quote:
Originally Posted by alandail View Post

I'd be really surprised to see anything like that. Any gap between the current iPad and the 11" MacBook air will be filled with more powerful iPads, not less powerful Macs that require all new universal binaries.

What you say is certainly true for the very near term (e.g., 2 years or so). However, what if we look further out? Will ARM be 64-bit? 6 cores? A 2 GHz clock? I think the point being made is that ARM is accelerating fast and the improvements in Intel chips are becoming less important. The issue isn't that Intel can't keep up (they can). Intel's problem is that their increases in horsepower are becoming increasingly irrelevant to people's computing needs. At some point we may cross a line where ARM provides sufficient horsepower and Intel's advantage is irrelevant. Keep in mind that despite Intel's prowess, it has struggled to reduce power consumption. It also doesn't have a terribly good track record with SoCs. Thus, there could very well be a time (maybe in the next 5 years) when ARM starts to replace chips in the mobile PC realm, and potentially some day in consumer desktops.

I think the failure of ARM in netbooks had a lot to do with 1) premature use of ARM (it still isn't there), 2) the lack of an OS optimized to take advantage of ARM, and 3) Microsoft using its marketing power to push WinXP. With MS out of the picture, an OS optimized for ARM, and more advanced ARM chips, we may see a different result.

Also, I think the major difference between iOS and OS X is touch vs. mouse optimization. You will never see OS X on a touch device and you will never see iOS on a Mac. However, there is no inherent reason why Apple would prefer one CPU architecture over another. Of course, switching isn't trivial, but Apple has shown that it can carry out a switch without a hitch. I think the real issue will always be a balance between computing power, power consumption, and cost. Right now Intel has such an advantage in (needed) computing power that it outweighs power consumption and cost in Macs. But things are changing fast.