
Rumor: TSMC to build quad-core 20nm chips for Apple by late 2013

post #1 of 67
Thread Starter 
Apple's future iOS devices may be powered by custom chips built by Taiwan Semiconductor Manufacturing Co., taking a key component away from rival Samsung, according to a new rumor.

The details come from research fellow J.T. Hsu of Citigroup Global Markets, who was quoted in a report published on Friday by Taiwan Economic News (via MacRumors). Hsu claimed that the 20-nanometer quad-core chips are most likely to show up in a future iPad, the rumored Apple television, or even a MacBook computer.

However, Hsu indicated that future iPhones are expected to continue to feature dual-core processors, due to power consumption issues.

"Apple began verifying TSMC's 20nm process in August this year and may begin risk production in November with the process," the report said. "Volume production is expected to start in the fourth quarter of 2013, raising the possibility that TSMC will hike capital expenditure to US$11-12 billion in 2013 and 2014."

Reports have linked Apple to TSMC for over a year now, but the company still relies on Samsung as its sole supplier of custom chips found in the iPhone, iPad, iPod touch and Apple TV. As such, a move to TSMC as Apple's sole supplier for a future chip would represent a major shakeup in Apple's supply chain.

A6


Friday's report claimed that TSMC has "unmatched" technological advancements in the 20-nanometer mobile chip space. As a result, Apple is said to have chosen TSMC's 20-nanometer process for its future products, presumably starting with a fifth-generation iPad in early 2014 based on Apple's current product release schedule.

The first indication came earlier this year, when TSMC was reported to be hoping to land orders for 20-nanometer chips from Apple as soon as 2014. But Friday's report suggests those orders could come even earlier, by the end of next year.

One rumor that surfaced in August claimed that Apple made an offer for around $1 billion that would have made TSMC a dedicated chip producer to Apple alone. The offer was allegedly rejected by TSMC, as the company was said to be interested in staying involved in the booming broader smartphone market.

The new iPhone 5 features an A6 processor designed by Apple with a dual-core CPU. The chip also features two graphics processing unit cores and a full gigabyte of RAM.

Friday's latest rumor comes on the heels of news this week that Apple has hired a noted engineer of both desktop and mobile processors away from rival Samsung. Apple began designing its own mobile chips starting with the A4 in the first-generation iPad in 2010.
post #2 of 67

I would LOVE to own an iPhone knowing that its CPU design comes from Apple. Just buy Sharp and some radio chip company and most components will be made in-house.
 

post #3 of 67
How late?

Late like in August late? Like in 'fall announcement date' late? Or December and 'February 2014 announcement' late?

The physics is pretty amazing. 20nm is pretty darn small for large-scale production runs... we're getting down to quantum tunneling levels (~15nm), thus approaching the end of Moore's Law at the single-core level. Will quality rates erode? Will Schrödinger's Cat YouTube videos be both alive and dead on an iPhone at the same time?
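For a rough sense of why ~15nm is where the tunneling talk starts: the textbook estimate for an electron tunneling through a rectangular barrier is P ≈ exp(-2d·sqrt(2mφ)/ħ). A quick back-of-the-envelope in Python, assuming a 1 eV barrier height (a round number for illustration, not a real gate stack):

import math

# P = exp(-2*d*sqrt(2*m*phi)/hbar) for an electron hitting a
# rectangular barrier of height phi (joules) and width d (meters).
m    = 9.109e-31   # electron mass, kg
phi  = 1.602e-19   # assumed 1 eV barrier, in joules
hbar = 1.055e-34   # reduced Planck constant, J*s

kappa = math.sqrt(2 * m * phi) / hbar   # decay constant, 1/m
for d_nm in (5, 2, 1, 0.5):
    p = math.exp(-2 * kappa * d_nm * 1e-9)
    print(f"{d_nm:>4} nm barrier -> tunneling probability ~{p:.1e}")

The probability climbs exponentially as the barrier thins (from ~1e-22 at 5nm to ~0.6% at half a nanometer), which is why leakage rather than lithography becomes the wall.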
post #4 of 67
Quote:
Originally Posted by ClemyNX View Post

I would LOVE to own an iPhone knowing that its CPU design comes from Apple. Just buy Sharp and some radio chip company and most components will be made in-house.
 


My understanding is that Apple does do all the designs for their mobile CPUs.  They license from ARM but still tweak the designs to their needs.

I would prefer that Apple move the fabrication of their chips away from Samsung.

post #5 of 67
Quote:
Originally Posted by TheOtherGeoff View Post

How late?
Late like in August late? Like in 'fall announcement date' late? Or December and 'February 2014 announcement' late?
The physics is pretty amazing. 20nm is pretty darn small for large-scale production runs... we're getting down to quantum tunneling levels (~15nm), thus approaching the end of Moore's Law at the single-core level. Will quality rates erode? Will Schrödinger's Cat YouTube videos be both alive and dead on an iPhone at the same time?

Intel has Broadwell at 14nm, which has been confirmed to work, slated for 2014 as the die shrink of Haswell (just publicly demoed), which itself arrives next year as the sequel to Ivy Bridge. It is going to be interesting to see what we'll be looking at come 2016. I think Intel has 10nm on the roadmap; call me a skeptic on that one...

I'm not a pessimist. I'm an optimist, with experience.
post #6 of 67

It will be interesting to see if TSMC uses their Camas, Washington plant to make the A7 (or whatever the quad-core Ax chip will be called). Or maybe they could open another fab in the US...

Sent from my iPhone Simulator
post #7 of 67
Originally Posted by thataveragejoe View Post
I think Intel has 10nm in the roadmap, call me a skeptic on that one...

 

Skylake and Skymont are 14 and 10nm, respectively. Why be skeptical? They also have plans for unnamed 7nm (2017) and 5nm (2019) processes. CEO says they'll have to stop using silicon for those.
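Those numbers follow the usual convention that each full node shrink targets roughly a 1/sqrt(2) (~0.7x) linear scale, which halves transistor area. A quick check against the roadmap figures above (a sketch; node names are marketing labels, not exact feature sizes):

# Successive Intel node labels mentioned above, in nm.
nodes = [22, 14, 10, 7, 5]
for a, b in zip(nodes, nodes[1:]):
    print(f"{a}nm -> {b}nm: linear x{b/a:.2f}, area x{(b/a)**2:.2f}")
# Each step is ~0.7x linear, ~0.5x area: one Moore's Law doubling.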

Originally Posted by helia

I can break your arm if I apply enough force, but in normal handshaking this won't happen ever.
post #8 of 67
Quote:
Originally Posted by thataveragejoe View Post

Intel has Broadwell at 14nm, which has been confirmed to work, slated for 2014 as the die shrink of Haswell (just publicly demoed), which itself arrives next year as the sequel to Ivy Bridge. It is going to be interesting to see what we'll be looking at come 2016. I think Intel has 10nm on the roadmap; call me a skeptic on that one...

Actually, Intel has a 7nm process on their roadmap for 2017 using exotic materials. (1)

At Research@Intel Day 2011, Intel's Chief Technology Officer, Justin Rattner, spoke about the 14nm node scheduled to debut in about two years, and also about 8nm, which he said was on track for about 18 months after the 14nm release. (2)



1. Brian Wang, "Intel Roadmap from June 2011 with 7nm node for 2017 and 10nm in 2015," Next Big Future, 16 June 2011. Retrieved 12 October 2012.

2. Mads Ølholm, "Intel starts talking about 8nm node: Also more exotic materials on the radar," SemiAccurate, 8 June 2011. Retrieved 12 October 2012.
post #9 of 67

Very nice!  Can't wait for this to happen.  Apple's about to put a hurtin' on Samsung.  Awesome.

post #10 of 67
Quote:
Originally Posted by Tallest Skil View Post

 

Skylake and Skymont are 14 and 10nm, respectively. Why be skeptical? They also have plans for unnamed 7nm (2017) and 5nm (2019) processes. CEO says they'll have to stop using silicon for those.

I frankly haven't looked THAT far ahead. I did see this http://www.tomshardware.com/news/intel-cpu-processor-5nm,17578.html but a lot can happen in 2-3 years in this business. What I originally said is what I know works. 

 

Quote:

Originally Posted by MacBook Pro View Post


Actually, Intel has a 7 nm process on their roadmap for 2017 using exotic materials. (1)
At Research@Intel Day 2011 Intel's Chief Technology Officer, Justin Rattner, spoke about the 14nm node that is scheduled to debut in about 2 years, but also about 8nm, that he said was on track about 18 months after the 14 nm release. (2)

1. Brian Wang, "Intel Roadmap from June 2011 with 7nm node for 2017 and 10nm in 2015," Next Big Future, 16 June 2011. Retrieved 12 October 2012.
2. Mads Ølholm, "Intel starts talking about 8nm node: Also more exotic materials on the radar," SemiAccurate, 8 June 2011. Retrieved 12 October 2012.

That's rather old and has a big "to be defined" label on it. I don't doubt Intel will march right on forward, but I do think it will get harder, especially as they move to 'new materials' and collapse further into an SoC.

I'm not a pessimist. I'm an optimist, with experience.
post #11 of 67
Quote:
Originally Posted by MacBook Pro View Post


Actually, Intel has a 7 nm process on their roadmap for 2017 using exotic materials. (1)

1. Brian Wang. 16 June 2011. Intel Roadmap from June 2011 with 7nm node for 2017 and 10 nm in 2015. Next Big Future. Retrieved 12 October 2012.
2. Mads Ølholm. Published 8 June 2011. Intel starts talking about 8nm node Also more exotic materials on the radar. Semi Accurate. Retrieved 12 October 2012.

 

I stand corrected: exotic materials ≠ CMOS (15nm is considered the quantum leakage zone for CMOS). And likely not cheap, as in dirt cheap... silicon is a pretty cheap material.

 

The underlying issue here is that Intel has always charged a premium for their chips. Can they pull an Apple and claim that the value is worth the markup? And if their production costs double when they move to exotic materials, what will their prices do?

 

Remember, while Intel may consider their chips a top-tier brand that can command premium prices, they are basically a commodity product. They are one step up from ball bearings in the computing industry (okay, more like engines vs. ball bearings... but they are not 'the car').

 

This is where I like Apple being able to customize their SoCs to be tuned to their 'current' OS. Intel can't do that. They have to build a stock engine. Apple can build an engine that matches the transmission, the fuel, and the air mix.

 

But it's these barriers that generate a quantum (no pun intended) leap in science and engineering. 

post #12 of 67
Quote:
Originally Posted by Tallest Skil View Post

Skylake and Skymont are 14 and 10nm, respectively. Why be skeptical? They also have plans for unnamed 7nm (2017) and 5nm (2019) processes. CEO says they'll have to stop using silicon for those.
Quote:
Originally Posted by TheOtherGeoff View Post

I stand corrected: exotic materials ≠ CMOS (15nm is considered the quantum leakage zone for CMOS). And likely not cheap, as in dirt cheap... silicon is a pretty cheap material.

The underlying issue here is that Intel has always charged a premium for their chips. Can they pull an Apple and claim that the value is worth the markup? And if their production costs double when they move to exotic materials, what will their prices do?

Remember, while Intel may consider their chips a top-tier brand that can command premium prices, they are basically a commodity product. They are one step up from ball bearings in the computing industry (okay, more like engines vs. ball bearings... but they are not 'the car').

This is where I like Apple being able to customize their SoCs to be tuned to their 'current' OS. Intel can't do that. They have to build a stock engine. Apple can build an engine that matches the transmission, the fuel, and the air mix.

But it's these barriers that generate a quantum (no pun intended) leap in science and engineering.

Not a correction. I was simply providing evidence to support what you already believed, though you had some right to skepticism, as 10nm has been touted as the "breaking point" for Moore's Law for more than a decade. Incorrectly, I believe. There is, however, something much larger at work than Moore's Law.

I believe Tallest Skil is essentially referring to the same ideas I am promoting:

"Exponential growth continued through paradigm shifts from vacuum tubes to discrete transistors to integrated circuits."

The resources underlying the exponential growth of an evolutionary process are relatively unbounded:

(i) The (ever growing) order of the evolutionary process itself. Each stage of evolution provides more powerful tools for the next. In biological evolution, the advent of DNA allowed more powerful and faster evolutionary “experiments.” Later, setting the “designs” of animal body plans during the Cambrian explosion allowed rapid evolutionary development of other body organs such as the brain. Or to take a more recent example, the advent of computer assisted design tools allows rapid development of the next generation of computers.

(ii) The “chaos” of the environment in which the evolutionary process takes place and which provides the options for further diversity. In biological evolution, diversity enters the process in the form of mutations and ever changing environmental conditions. In technological evolution, human ingenuity combined with ever changing market conditions keep the process of innovation going.


Ray Kurzweil predicts the meta-trend, that Moore's Law attempts to describe, will continue beyond 2020. The result in 2029 should be a computer 512 times more powerful than Watson. If the trend continues then by 2045 we will have computers 131,072 times more powerful than Watson.
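For what it's worth, the multipliers are self-consistent if you read them as one doubling per year through 2029 and one doubling every two years after that (my reading; the prediction doesn't spell out the cadence):

# Nine annual doublings, 2020 -> 2029
print(2 ** (2029 - 2020))               # 512

# Then sixteen years at one doubling every two years, 2029 -> 2045
print(512 * 2 ** ((2045 - 2029) // 2))  # 131072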


From The Law of Accelerating Returns
March 7, 2001 by Ray Kurzweil

It is important to note that Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing’s relay-based “Robinson” machine that cracked the Nazi enigma code, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer which I used to dictate (and automatically transcribe) this essay.

But I noticed something else surprising. When I plotted the 49 machines on an exponential graph (where a straight line means exponential growth), I didn’t get a straight line. What I got was another exponential curve. In other words, there’s exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year.

But where does Moore’s Law come from? What is behind this remarkably predictable phenomenon? I have seen relatively little written about the ultimate source of this trend. Is it just “a set of industry expectations and goals,” as Randy Isaac, head of basic science at IBM contends? Or is there something more profound going on?
Edited by MacBook Pro - 10/12/12 at 10:35am
post #13 of 67

I can only think of two (technical) reasons why Apple hasn't already migrated the MacBook Air to ARM:

 

Issue 1: Insufficient computing performance

Solution: A quad-core ARM SoC clocked reasonably fast (between 1GHz and 2GHz) might be fast enough for the average MacBook Air user.

 

Issue 2: OS X requires a 64-bit architecture and instruction set but ARMv7 is 32-bit

Solution: The ARMv8 spec, with 64-bit architecture and instruction set, was released almost a year ago.  A7 could be an ARMv8 design.
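The 64-bit point is easy to quantify: a 32-bit ISA like ARMv7 can address at most 2^32 bytes. A trivial illustration in Python:

import struct

print(2 ** 32 // 2 ** 30)        # 4   -> a 32-bit pointer tops out at 4 GiB
print(2 ** 64 // 2 ** 30)        # ~17 billion GiB under a 64-bit model
print(struct.calcsize("P") * 8)  # pointer width of the machine you run this on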

 

This would be a perfect opportunity for Apple to do for the Mac what they've already done for iPhone / iPad / iPod touch / Apple TV: optimize their hardware and software for each other, in a way that no other consumer electronics company can.

Sent from my iPhone Simulator
post #14 of 67
Quote:
Originally Posted by sflocal View Post


My understanding is that Apple does do all the designs for their mobile CPUs.  They license from ARM but still tweak the designs to their needs.

I would prefer that Apple move the fabrication of their chips away from Samsung.

 

The A4 is believed to be a tweaked Samsung/Intrinsity SoC using an ARM Cortex-A8.

The A5 and A5X are custom Apple SoCs; the CPU cores are ARM Cortex-A9.

The A6 is a custom SoC with what appear to be custom cores (not ARM reference designs) that implement ARM's ARMv7 ISA.

Disclaimer: The things I say are merely my own personal opinion and may or may not be based on facts. At certain points in any discussion, sarcasm may ensue.
post #15 of 67

LIKE AN APPLE: Microsoft will design their own ARM chip based on an ARM Mali graphics core. WinRT will run on a dedicated Microsoft Windows Metro Processor (No more Qualcomm, Nvidia fragmentation). TBA '13.

 
To be announced after Surface phone.
post #16 of 67
It's going to be frightening what the iPad 2014 will be able to do.

For 2012-2013.

The A6 puts the hurt on the A5/A5X in a big way. Smashes it in Geekbench.

...and I guess it isn't fully clocked at that due to it going into an iPhone.

Put the A6'X'(?) with more GPU cores and a higher-clocked CPU in the next iPad (4?) and it should be a very impressive performer. (New, new... ;) )

A 2014 iPad is going to be pretty amazing. Will anybody want to use low to mid end laptops by then?

I think Marv' posted a pretty impressive graph of the exploding performance of the iPhone since its debut.

Apple making their own chips? They're already doing it... and seem set to do it in an even bigger way in the future, with the former AMD chip designer lured away from Samsung.

Go Apple.

Lemon Bon Bon.

You know, for a company that specializes in the video-graphics market, you'd think that they would offer top-of-the-line GPUs...

WITH THE NEW MAC PRO THEY FINALLY DID!  (But you bend over for it.)
post #17 of 67
Originally Posted by eastofeastside View Post
To be announced after Surface phone.

 

Surface phone… so's that a 6" phone, then?

Originally Posted by helia

I can break your arm if I apply enough force, but in normal handshaking this won't happen ever.
post #18 of 67
Quote:
Originally Posted by eastofeastside View Post

LIKE AN APPLE: Microsoft will design their own ARM chip based on an ARM Mali graphics core. WinRT will run on a dedicated Microsoft Windows Metro Processor (No more Qualcomm, Nvidia fragmentation). TBA '13.

 
To be announced after Surface phone.

Apple and M$ going ARM?

 

Smelling the coffee, Intel?  

 

Lemon Bon Bon.

You know, for a company that specializes in the video-graphics market, you'd think that they would offer top-of-the-line GPUs...

WITH THE NEW MAC PRO THEY FINALLY DID!  (But you bend over for it.)
post #19 of 67
Quote:
Originally Posted by Tallest Skil View Post

 

Surface phone… so's that a 6" phone, then?

 

If it's anything like its 'big ass' touch table...it's going to be pretty hard to pick up next to your ear...

 

Lemon Bon Bon.

You know, for a company that specializes in the video-graphics market, you'd think that they would offer top-of-the-line GPUs...

WITH THE NEW MAC PRO THEY FINALLY DID!  (But you bend over for it.)
post #20 of 67
I can't wait for the inevitable 2nm, then 0nm. 'Course, leakage might be an issue for Intel (or anyone else) at that point, but I am sure they will find a way around it. /s

Seriously, how far are we from using light in a commercial apparatus? That would seem to blow anything metallic away (except that at some point there is that damn bottleneck where it needs to interface with the rest of the silicon/exotic-materials world).
post #21 of 67
Quote:
Originally Posted by eastofeastside View Post

LIKE AN APPLE: Microsoft will design their own ARM chip based on an ARM Mali graphics core. WinRT will run on a dedicated Microsoft Windows Metro Processor (No more Qualcomm, Nvidia fragmentation). TBA '13.

 
To be announced after Surface phone.

You made accounts here and MR today just to post this?

I see now you are trying to run a short game. http://finance.yahoo.com/mbview/userview/?&u=eastofeastside&bn=289f1fdb-c8cf-3d22-9664-863cc3adc5d8
post #22 of 67
Originally Posted by Damn_Its_Hot View Post
I can't wait for the inevitable 2nm, then 0nm. 'Course, leakage might be an issue for Intel (or anyone else) at that point, but I am sure they will find a way around it. /s

 

Bah. Call me when we're doing calculations in the quantum foam at the Planck scale.

Originally Posted by helia

I can break your arm if I apply enough force, but in normal handshaking this won't happen ever.
post #23 of 67
Quote:
Originally Posted by SockRolid View Post

I can only think of two (technical) reasons why Apple hasn't already migrated the MacBook Air to ARM:

Issue 1: Insufficient computing performance
Solution: A quad-core ARM SoC clocked reasonably fast (between 1GHz and 2GHz) might be fast enough for the average MacBook Air user.
 
Issue 2: OS X requires a 64-bit architecture and instruction set but ARMv7 is 32-bit
Solution: The ARMv8 spec, with 64-bit architecture and instruction set, was released almost a year ago.  A7 could be an ARMv8 design.

This would be a perfect opportunity for Apple to do for Mac what they've already done for iPhone / iPad / iPod touch / Apple TV:
optimize their hardware and software for each other.  In a way that no other consumer electronics company can.

What about the capability to run x86 code -- shouldn't that be an issue for using ARM on Mac?

As I understand it, today's x86 architecture is CISC, which is "transformed" into RISC-like operations for execution... and the "transformation" process is Intel IP.
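That's the right picture: the decoder cracks each CISC instruction into simpler RISC-style micro-ops. A toy sketch of the idea in Python (the mnemonics and micro-ops are invented for illustration; real x86 µop encodings are proprietary):

# Toy decoder: a memory-operand instruction becomes load/modify/store
# micro-ops, the way an x86 front end cracks complex instructions.
def decode(instr):
    op, dst, src = instr.replace(",", "").split()
    if dst.startswith("["):   # memory destination
        return [f"LOAD  tmp, {dst}",
                f"{op}   tmp, {src}",
                f"STORE {dst}, tmp"]
    return [instr]            # register-only ops pass through unchanged

for uop in decode("ADD [rax], rbx"):
    print(uop)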
"Swift generally gets you to the right way much quicker." - auxio -

"The perfect [birth]day -- A little playtime, a good poop, and a long nap." - Tomato Greeting Cards -
Reply
"Swift generally gets you to the right way much quicker." - auxio -

"The perfect [birth]day -- A little playtime, a good poop, and a long nap." - Tomato Greeting Cards -
Reply
post #24 of 67
I think it is great that they do not depend quite so much on another company.
It would be nice if they did not have to depend so heavily on Intel one day! There will be little real innovation until that monopoly is broken.
post #25 of 67
Quote:
Originally Posted by MacBook Pro View Post


Not a correction. I was simply providing evidence to support what you already believed, though you had some right to skepticism, as 10nm has been touted as the "breaking point" for Moore's Law for more than a decade. Incorrectly, I believe. There is, however, something much larger at work than Moore's Law.
I believe Tallest Skil is essentially referring to the same ideas I am promoting:
"Exponential growth continued through paradigm shifts from vacuum tubes to discrete transistors to integrated circuits."
The resources underlying the exponential growth of an evolutionary process are relatively unbounded:
(i) The (ever growing) order of the evolutionary process itself. Each stage of evolution provides more powerful tools for the next. In biological evolution, the advent of DNA allowed more powerful and faster evolutionary “experiments.” Later, setting the “designs” of animal body plans during the Cambrian explosion allowed rapid evolutionary development of other body organs such as the brain. Or to take a more recent example, the advent of computer assisted design tools allows rapid development of the next generation of computers.
(ii) The “chaos” of the environment in which the evolutionary process takes place and which provides the options for further diversity. In biological evolution, diversity enters the process in the form of mutations and ever changing environmental conditions. In technological evolution, human ingenuity combined with ever changing market conditions keep the process of innovation going.
[...]
But where does Moore’s Law come from? What is behind this remarkably predictable phenomenon? I have seen relatively little written about the ultimate source of this trend. Is it just “a set of industry expectations and goals,” as Randy Isaac, head of basic science at IBM contends? Or is there something more profound going on?

I agree there are chasm-crossing events.

 

But Moore's Law is based on observation by someone 'in the field.' The extrapolation was only of IC and MOS technologies. Eventually you reach the end of the line for a material... you can't make it purer, you can't make electrons smaller, and you can't make them hold more or less charge.

 

Current solid-state electronics basically hits the 'absolute wall' at around 3nm. At that point electrons can dance around any atom (silicon or otherwise) through quantum forces. 15nm is the theoretical minimum for undoped silicon. So, given what we currently know, getting below 7nm using 'any' material is getting into the fanciful physics of TAMO.

 

Do we start dealing with quantum 'events' like single-bit errors? Dunno. But at some point electrons will basically not 'flow' but 'change state' to points 'outside the box' (hence the leakage, and the concern about the cat). Can you build that into the atomic structure of the etch?

 

If not, then at that point you need to start changing the model. Like vacuum tubes to solid state, like copper to fiber, but even more so (quantum transputers?). When will computers move from 'atomic' scale to 'sub-atomic' scale, below the diameter of an atom? For reference, a silicon atom is ~0.23nm, or in other words 6 generations of Moore's Law (9 years) away.
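That '6 generations' figure checks out if each generation halves the linear dimension (an aggressive reading; the historical cadence halves area, not length):

import math
atom  = 0.23   # nm, rough diameter of a silicon atom
floor = 15.0   # nm, the undoped-silicon limit cited above
gens  = math.log2(floor / atom)         # halvings of linear size needed
print(round(gens), round(gens) * 1.5)   # ~6 generations, ~9 years at 18 months each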

 

Oh, and that 'c' thing: at what point will the ability to 'travel,' 'receive,' 'evaluate,' 'execute,' and 'transmit' be impacted? We are now starting to see the 'slew' of photons. At what point does the slowness of 'normal' electrons limit our ability to increase the gigahertz?
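The 'c' ceiling is easy to put numbers on: in one clock tick a signal can't have travelled farther than c/f, and on-chip signals move at a fraction of c. A rough cut (assuming ~0.5c propagation, a ballpark figure for on-die interconnect, not a measured value):

c = 2.998e8  # speed of light, m/s

for ghz in (1, 3, 10, 100):
    reach_cm = c / (ghz * 1e9) * 100
    print(f"{ghz:>3} GHz: light moves {reach_cm:5.2f} cm per cycle "
          f"(~{reach_cm/2:.2f} cm at 0.5c on-die)")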

 

I'm less concerned about that. I am noting that in a phone that fits in your hand (or the temple of your glasses?), at what point does performance per cubic cm stop improving? When do you need to start growing the package to keep your performance gains up (massively parallel SoC?)? Is that 10 years away? 5? 20?

 

Kurzweil is just a futurist, and sees the will of human science beating the system... and as a human-achievement guy, I say 'Yay!' As an engineer, I see some fundamental problems with achieving his goals, at least without a massive research effort on the scale of a Moon Race or a world war. And I definitely prefer the former, but don't rule out the latter.

post #26 of 67
Quote:
Originally Posted by Tallest Skil View Post

 

Bah. Call me when we're doing calculations in the quantum foam at the Planck scale.

Call you?  When we are there... we'll just embed the thought packet directly into your consciousness.

post #27 of 67
Quote:
Originally Posted by sflocal View Post


My understanding is that Apple does do all the designs for their mobile CPUs.  They license from ARM but still tweak the designs to their needs.

I would prefer that Apple move the fabrication of their chips away from Samsung.


So you would rather see jobs go from Texas to Taiwan?

post #28 of 67
Quote:
Originally Posted by Damn_Its_Hot View Post

I can't wait for the inevitable 2nm, then 0nm. 'Course, leakage might be an issue for Intel (or anyone else) at that point, but I am sure they will find a way around it. /s
Seriously, how far are we from using light in a commercial apparatus? That would seem to blow anything metallic away (except that at some point there is that damn bottleneck where it needs to interface with the rest of the silicon/exotic-materials world).

'Leakage will be an issue' and a little /s? You need to massively parallelize that /s to get to the scale of the issue ;-)^4

 

We are far from using light. Photons are slippery little things and currently require more power than electrons to lasso and evaluate. Their best use is still 'long haul' information transfer (like between blades in a massive compute frame).

 

But using photons in an integrated circuit instead of electrons... that's still a bit too far out to put into your product roadmap.

post #29 of 67
Quote:
Originally Posted by eastofeastside View Post

LIKE AN APPLE: Microsoft will design their own ARM chip based on an ARM Mali graphics core. WinRT will run on a dedicated Microsoft Windows Metro Processor (No more Qualcomm, Nvidia fragmentation). TBA '13.

 
To be announced after Surface phone.

 

Err.

Microsoft doesn't hold an ARM architectural licence at this time. Besides, it's worth noting Apple has more than 20 years of investment in ARM Holdings; the ARM6 was designed for the Newton in collaboration with Apple.

Besides, what is this Surface phone vaporware? Microsoft doesn't need another failed mobile ecosystem; look at WinMo 6.5, 7, 7.5, and 8. What Microsoft needs desperately is a good mobile development environment.

post #30 of 67
'The chip also features two graphics processing unit cores.' No it doesn't; it has three! I wish I knew which model GPUs they are. PowerVR's latest 6000 series maybe? Highly unlikely; probably just the same as those found in the A5X, running at a higher clock speed to compensate for the fact that there aren't four of them. Does anybody here know?
post #31 of 67
Neither of these is right. The problem is the customer and the need to support legacy hardware. x86 is extremely important for many users that need to support legacy software.

On the other hand, I could see Apple offering an alternative product that is a laptop but not a Mac. It would be a good platform for users that don't need x86, as long as it is 64-bit. It should be noted that Apple has access to cheaper x86 solutions than Intel's, yet they don't use them. It isn't always a cost thing.
Quote:
Originally Posted by SockRolid View Post

I can only think of two (technical) reasons why Apple hasn't already migrated the MacBook Air to ARM:

Issue 1: Insufficient computing performance
Solution: A quad-core ARM SoC clocked reasonably fast (between 1GHz and 2GHz) might be fast enough for the average MacBook Air user.
 
Issue 2: OS X requires a 64-bit architecture and instruction set but ARMv7 is 32-bit
Solution: The ARMv8 spec, with 64-bit architecture and instruction set, was released almost a year ago.  A7 could be an ARMv8 design.

This would be a perfect opportunity for Apple to do for Mac what they've already done for iPhone / iPad / iPod touch / Apple TV:
optimize their hardware and software for each other.  In a way that no other consumer electronics company can.
post #32 of 67
Quote:
Originally Posted by BigMac2 View Post

Err.

Microsoft doesn't hold an ARM architectural licence at this time. Besides, it's worth noting Apple has more than 20 years of investment in ARM Holdings; the ARM6 was designed for the Newton in collaboration with Apple.

Besides, what is this Surface phone vaporware? Microsoft doesn't need another failed mobile ecosystem; look at WinMo 6.5, 7, 7.5, and 8. What Microsoft needs desperately is a good mobile development environment.

Actually, Microsoft does have a developer's license for ARM. I don't have the specific type or the article on it in front of me. However, that type of license goes beyond the architecture itself (ARMv7, ARMv8, et al.); it also includes the ability to do custom designs for their specific applications. It is considered likely that they are using it in the next Xbox.
post #33 of 67
Quote:
Originally Posted by TheOtherGeoff View Post

I agree there are chasm-crossing events.

But Moore's Law is based on observation by someone 'in the field.' The extrapolation was only of IC and MOS technologies. Eventually you reach the end of the line for a material... you can't make it purer, you can't make electrons smaller, and you can't make them hold more or less charge.

Current solid-state electronics basically hits the 'absolute wall' at around 3nm. At that point electrons can dance around any atom (silicon or otherwise) through quantum forces. 15nm is the theoretical minimum for undoped silicon. So, given what we currently know, getting below 7nm using 'any' material is getting into the fanciful physics of TAMO.

Do we start dealing with quantum 'events' like single-bit errors? Dunno. But at some point electrons will basically not 'flow' but 'change state' to points 'outside the box' (hence the leakage, and the concern about the cat). Can you build that into the atomic structure of the etch?

If not, then at that point you need to start changing the model. Like vacuum tubes to solid state, like copper to fiber, but even more so (quantum transputers?). When will computers move from 'atomic' scale to 'sub-atomic' scale, below the diameter of an atom? For reference, a silicon atom is ~0.23nm, or in other words 6 generations of Moore's Law (9 years) away.

Oh, and that 'c' thing: at what point will the ability to 'travel,' 'receive,' 'evaluate,' 'execute,' and 'transmit' be impacted? We are now starting to see the 'slew' of photons. At what point does the slowness of 'normal' electrons limit our ability to increase the gigahertz?

I'm less concerned about that. I am noting that in a phone that fits in your hand (or the temple of your glasses?), at what point does performance per cubic cm stop improving? When do you need to start growing the package to keep your performance gains up (massively parallel SoC?)? Is that 10 years away? 5? 20?

Kurzweil is just a futurist, and sees the will of human science beating the system... and as a human-achievement guy, I say 'Yay!' As an engineer, I see some fundamental problems with achieving his goals, at least without a massive research effort on the scale of a Moon Race or a world war. And I definitely prefer the former, but don't rule out the latter.

Moore's Law is an observation about computing design densities but there are many more factors to consider. The exponential growth of computing power per unit cost is far more important and guided by numerous advances outside of computing design density.

There are multiple layers of innovation in computer design, e.g., pipelining, parallel processing, instruction look-ahead, instruction and memory caching, and many others. We can't say a system isn't eight times more powerful simply because process density hasn't increased eightfold. In fact, innovation outside of process shrink has been at least as important to information technology as process technology.
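Put differently, independent layers of improvement multiply. An 8x system-level gain doesn't require any single 8x factor (a toy decomposition; the numbers are invented for illustration):

process  = 1.6   # faster transistors from a node shrink
ipc      = 2.0   # microarchitecture: pipelining, look-ahead, caching
parallel = 2.5   # more cores actually exploited by the workload
print(process * ipc * parallel)   # 8.0x overall, with no single 8x jump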

Moore's Law was not the first but the fifth paradigm to provide exponential growth of computing. Kurzweil's belief is that once Moore's Law has been exhausted, a new paradigm will emerge to continue the meta-trend of exponential growth in computing. Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of "resources." Notice, however, that exponential growth continued through paradigm shifts from vacuum tubes to discrete transistors to integrated circuits.

Accordingly, some of the paradigm shifts we may see to accommodate the continued exponential growth of computing could be:

Single Atom Computing (Quantum Computing)
Terahertz Transistors
Organic Semiconductors
Racetrack Memory (non-volatile, solid-state memory with the capacity of hard drives but the durability and performance of flash drives)
Graphene Memory
Phase-change Memory
Self-programming Computers


Thus, while we may soon reach the scalability limits of silicon transistors, we will also likely move to a new paradigm.
post #34 of 67
Quote:
Originally Posted by TheOtherGeoff View Post

how late?
Late like in August Late? Like in 'Fall Announcement date' Late? or December and 'February 2014 Announcement Late?'
The physics is pretty amazing. 20nm is pretty darn small for large scale production runs... we're getting down to the quantum tunneling levels (~15nm), thus approaching the end of Moore's Law at the single core level. Will quality rates erode? Will Shroedinger's Cat Youtube Videos be both alive and dead on an Iphone at the same time?

20nm by late 2013 is nothing. At best TSMC is 18 months behind Intel. Not good! Pretty soon Intel will destroy ARM in power efficiency.

iPod nano 5th Gen 8GB Orange, iPad 3rd Gen WiFi 32GB White
MacBook Pro 15" Core i7 2.66GHz 8GB RAM 120GB Intel 320M
Mac mini Core 2 Duo 2.4GHz 8GB RAM, iPhone 5 32GB Black
post #35 of 67
Quote:
Originally Posted by MacBook Pro View Post


Moore's Law is an observation about computing design densities but there are many more factors to consider. The exponential growth of computing power per unit cost is far more important and guided by numerous advances outside of computing design density.
There are multiple layers of innovation in computer design, e.g., pipelining, parallel processing, instruction look-ahead, instruction and memory caching, and many others. We can't state that we aren't eight times more powerful simply because the core generation hasn't increased incrementally eight times. In fact, innovation outside of process shrink has been at least as important to information technology as process technology.
Moore's Law was not the first but the fifth paradigm to provide exponential growth of computing. Kurzweil's belief is that once Moore's Law has been exhausted, a new paradigm will emerge to continue the meta-trend of exponential growth in computing. Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of "resources." Notice, however, that exponential growth continued through paradigm shifts from vacuum tubes to discrete transistors to integrated circuits.
Accordingly, some of the paradigm shifts we may see to accommodate the continued exponential growth of computing could be:
Single Atom Computing (Quantum Computing)
Terahertz Transistors
Organic Semiconductors
Racetrack Memory (non-volatile, solid-state memory with the capacity of hard drives but the durability and performance of flash drives)
Graphene Memory
Phase-change Memory
Self-programming Computers
Thus, while we may soon reach the scalability limits of silicon transistors, we will also likely move to a new paradigm.

 

 

When you come to a point of discontinuity, the key word in your statement is 'may.' Implicit in 'may' is 'may not.' Growth did occur through the vacuum tube era (and before that the mechanical era, and before that, the room-full-of-accountants era). There was significant overlap in these (vacuum tubes existed for quite a while during the mechanical era... and solid state overlapped vacuum tubes for a good 10 years). I'm not seeing any candidate for overlap at this moment.

 

I agree that it was an observation, but within the framework of a solid-state space: all the dimensions (weight, volume, required energy in, waste energy out, speed) have improved together, and almost all of it due to the shrinkage of the package. Until we can get to single-atom computing (I'll opt for single-molecule transistors; quantum computers have been 30 years out for 30 years now), I strongly feel the only way to shrink a computer further will be integrating more into the SoC and massively parallelizing that. Not a bad thing, just a thing. But eventually we will reach a state where this technology runs into 'c', 'h', and eV as the barriers of construction, and performance growth will come at a heat/energy/size increase. Not in 10 years, maybe, but we'll start seeing the pinch soon.

 

Within this thread, I think you're seeing Apple moving to the obvious intermediate solution: custom SoCs optimized for the application, with each appliance manufacturer effectively becoming a fabless chip designer as well as a software design house. That will likely keep the cost/power/heat/size envelope as small as possible for the next few years.

 

As for self-programming computers, heck, my undergrad robotics project was that in 1980. Sometimes by accident, but mostly by design (foo on virtual memory; if you need more address space, just overwrite it with new code [actually I was modifying code as I bootstrapped from a cold start to fully operational: ugly, but damned efficient] ;-).

post #36 of 67
Quote:
Originally Posted by Aizmov View Post


20nm by late 2013 is nothing. At best TSMC is 18 months behind Intel. Not good! Pretty soon Intel will destroy ARM in power efficiency.

 

Then it's win-win for Apple: TSMC until Intel can build a chip for Apple that is cheaper and more efficient; then Apple switches.

post #37 of 67

Unless you all have missed it, quantum computing is years away from doing even the most rudimentary calculations presently expected of computer systems. Before those issues are resolved, materials science will provide better materials for silicon solutions and allow those nanometer processes to start being described in picometer differentials. It'll be in the high hundreds, but then again they won't call it a 0.1nm process; they'll call it a 100pm process.

 

Stuff like quantum computing is being described as follows: http://aaas.confex.com/aaas/2012/webprogram/Session4151.html

 

 

Quote:

Quantum Computing: Current Status and Future Prospects

Saturday, February 18, 2012: 8:30 AM-11:30 AM
Room 118 (VCC West Building)
Large-scale quantum computers, if and when they can be developed, will be capable of solving otherwise intractable problems with far-reaching applications to cryptology, materials science, and medicine. The quest for scalable quantum computers is a grand challenge for 21st century science and a highly interdisciplinary enterprise, drawing heavily from physical science, computer science, mathematics, and engineering. In this symposium, leading theorists and experimentalists will report on rapidly moving recent developments and assess the prospects for future progress. The speakers are major contributors to the subject who are also highly effective at conveying the excitement of the field to broad audiences. The symposium will address some of the central questions pursued in current research. What is the essential difference between quantum and classical information processing, and what is the source of a quantum computer's power? What can quantum computing teach us about fundamental physical law? Can a quantum computer operate reliably even though its elementary components are imperfect? What is the best way to construct a quantum processor, and how can we build the large systems needed to solve hard computational problems?
Organizer: John Preskill, California Institute of Technology
Speakers:
John Preskill, California Institute of Technology: The Entanglement Frontier
Michael Freedman, Microsoft Station Q: Topological Quantum Computing
Charles Marcus, Harvard University: Semiconductor Quantum Computing
John Martinis, University of California: Quantum Computing with Superconducting Circuits

 

We are nowhere near the adoption of quantum computing to replace our present and near-term CPU/APU/GPGPU designs.

 

Kurzweil is a windbag who constantly gets credit as a futurist on the strength of his Singularity book. His predictions keep getting moved forward into the future to cover up the fact that they are piss poor.

post #38 of 67
Quote:
Originally Posted by Tallest Skil View Post

Skylake and Skymont are 14 and 10nm, respectively. Why be skeptical? They also have plans for unnamed 7nm (2017) and 5nm (2019) processes. CEO says they'll have to stop using silicon for those.

I can't help but continue to be amazed at this.

It really wasn't all that long ago that we were worrying about whether CPU manufacturers could reliably get below 100 nm.
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
Reply
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
Reply
post #39 of 67
Originally Posted by jragosta View Post
I can't help but continue to be amazed at this.
It really wasn't all that long ago that we were worrying about whether CPU manufacturers could reliably get below 100 nm.

 

It's truly amazing. I wonder… is part of the drive toward this sort of exponential betterment the very existence of Moore's Law? Because you'll definitely see other industries stagnate when one company has such a marked advantage over the others as Intel does here. 

Originally Posted by helia

I can break your arm if I apply enough force, but in normal handshaking this won't happen ever.
post #40 of 67
Quote:
Originally Posted by Lemon Bon Bon. View Post

It's going to be frightening what the iPad 2014 will be able to do.
Actually the next iPad ought to be impressive. A doubling of performance would most likely remove the last performance glitches for general use.
Quote:

For 2012-2013.
The A6 puts the hurt on the A5/A5X in a big way. Smashes it in Geekbench.
...and I guess it isn't fully clocked at that due to it going into an iPhone.
I wouldn't be surprised to find it hitting 2GHz while maintaining a respectable power profile.
As it is now I wouldn't be surprised if it can go well beyond 2GHz if power use isn't critical. 😄
Quote:
Put the A6'X'(?) with more GPU cores and a higher-clocked CPU in the next iPad (4?) and it should be a very impressive performer. (New, new... ;) )
Does the iPad need more GPU cores?
Quote:

A 2014 iPad is going to be pretty amazing. Will anybody want to use low to mid end laptops by then?
The iPad already has me thinking long and hard about my need for a laptop. If I do go laptop in the future it most likely would be a high-end machine.
Quote:

I think Marv' posted a pretty impressive graph of the exploding performance of the iPhone since its debut.
Apple making their own chips? They're already doing it...and seem set to do it in an even bigger way in future with the former AMD chip designer lured away from Samsung.
Go Apple.
Lemon Bon Bon.

Honestly, I think it is time for Apple to build its own foundry. If not that, then a more robust partnership with an established foundry.