Rumor: TSMC to build quad-core 20nm chips for Apple by late 2013

24 Comments

  • Reply 21 of 66


    Originally Posted by Damn_Its_Hot View Post

    I can't wait for the inevitable 2 nm then 0 nm. 'Course, leakage might be an issue for Intel (or anyone else) on that, but I'm sure they will find a way around it. /s

    Bah. Call me when we're doing calculations in the quantum foam at the Planck scale.

  • Reply 22 of 66
    sockrolid wrote: »
    I can only think of two (technical) reasons why Apple hasn't already migrated the MacBook Air to ARM:

    Issue 1: Insufficient computing performance
    Solution: A quad-core ARM SoC clocked reasonably fast (between 1 GHz and 2 GHz) might be fast enough for the average MacBook Air user.
     
    Issue 2: OS X requires a 64-bit architecture and instruction set but ARMv7 is 32-bit
    Solution: The ARMv8 spec, with 64-bit architecture and instruction set, was released almost a year ago.  A7 could be an ARMv8 design.

    This would be a perfect opportunity for Apple to do for Mac what they've already done for iPhone / iPad / iPod touch / Apple TV:
    optimize their hardware and software for each other, in a way that no other consumer electronics company can.

    What about the capability to run x86 code -- shouldn't that be an issue for using ARM on Mac?

    As I understand it, today's x86 architecture is CISC on the outside: instructions are "transformed" into RISC-like micro-ops for execution... and that "transformation" process is Intel IP.
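
    To make that decode idea concrete, here's a toy sketch in Python (the instruction and micro-op names are invented for illustration; this is not Intel's actual decoder or its real micro-op set):

    # Toy model: one "CISC-style" read-modify-write instruction gets cracked
    # into simple RISC-like micro-ops that each do one thing.
    def decode(instruction):
        op, dst, src = instruction
        if op == "add_mem":                   # add [dst], src
            return [
                ("load",  "tmp", dst),        # memory -> temp register
                ("add",   "tmp", src),        # register-register ALU op
                ("store", dst,   "tmp"),      # temp register -> memory
            ]
        return [instruction]                  # simple ops pass through unchanged

    for uop in decode(("add_mem", "0x1000", "rax")):
        print(uop)
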
  • Reply 23 of 66
    ulfoaf Posts: 175 member
    I think it is great that they do not depend quite so much on another company.
    It would be nice if they did not have to depend so heavily on Intel one day! There will be little real innovation until that monopoly is broken.
  • Reply 24 of 66

    Quote:
    Originally Posted by MacBook Pro View Post

    Not a correction. I was simply providing evidence to support what you already believed but had some grounds to doubt, since 10 nm has been touted as the "breaking point" for Moore's Law for more than a decade. Incorrectly, I incidentally believe. There is, however, something much larger at work than Moore's Law.

    I believe Tallest Skil is essentially referring to the same ideas I am promoting:

    "Exponential growth continued through paradigm shifts from vacuum tubes to discrete transistors to integrated circuits."

    The resources underlying the exponential growth of an evolutionary process are relatively unbounded:

    (i) The (ever growing) order of the evolutionary process itself. Each stage of evolution provides more powerful tools for the next. In biological evolution, the advent of DNA allowed more powerful and faster evolutionary “experiments.” Later, setting the “designs” of animal body plans during the Cambrian explosion allowed rapid evolutionary development of other body organs such as the brain. Or to take a more recent example, the advent of computer assisted design tools allows rapid development of the next generation of computers.

    (ii) The “chaos” of the environment in which the evolutionary process takes place and which provides the options for further diversity. In biological evolution, diversity enters the process in the form of mutations and ever changing environmental conditions. In technological evolution, human ingenuity combined with ever changing market conditions keep the process of innovation going.

    [...]

    But where does Moore’s Law come from? What is behind this remarkably predictable phenomenon? I have seen relatively little written about the ultimate source of this trend. Is it just “a set of industry expectations and goals,” as Randy Isaac, head of basic science at IBM contends? Or is there something more profound going on?


    I agree there are chasm-crossing events.

    But Moore's Law is based on an observation by someone 'in the field', and the extrapolation covered only IC and MOS technologies. Eventually you reach the end of the line for a material... you can't make it purer, you can't make electrons smaller, or hold more or less charge.

    Current solid-state electronics basically hits an 'absolute wall' at around 3 nm. At that point electrons can dance around any atom (silicon or otherwise) through quantum effects. About 15 nm is the theoretical minimum for undoped silicon. So, given what we currently know, getting below 7 nm using 'any' material is getting into the fanciful physics of TAMO ('then a miracle occurs').
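
    To put rough numbers on why that wall is so hard: tunneling through a barrier falls off exponentially with thickness, so leakage that is negligible at 15 nm explodes a few nanometers down. A minimal sketch, assuming a crude rectangular 1 eV barrier and a free electron mass (a big simplification of a real transistor, for scale only):

    import math

    # Tunneling probability through a rectangular barrier, T ~ exp(-2*k*d),
    # where k = sqrt(2*m*phi)/hbar. Assumed numbers: free-electron mass,
    # 1 eV barrier height. Real devices differ; order-of-magnitude only.
    m = 9.109e-31      # electron mass, kg
    phi = 1.602e-19    # barrier height, J (1 eV)
    hbar = 1.055e-34   # reduced Planck constant, J*s
    k = math.sqrt(2 * m * phi) / hbar   # decay constant, 1/m

    for d_nm in (15, 7, 3, 1):
        T = math.exp(-2 * k * d_nm * 1e-9)
        print(f"{d_nm:>2} nm barrier: tunneling probability ~ {T:.1e}")
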
    Do we start dealing with quantum 'events' like single-bit errors? Dunno. But at some point electrons will basically not 'flow' but 'change state' to points 'outside the box' (hence the leakage, and the concern about the cat). Can you build that into the atomic structure of the etch?

    If not, at that point you need to start changing the model. Like vacuum tubes to solid state, like copper to fiber, but even more so (quantum transputers?). When will computers move from 'atomic' scale to 'sub-atomic' scale, below the diameter of an atom? For reference, a silicon atom is ~0.23 nm; in other words, about 6 generations of Moore's Law (9 years).
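
    Checking that back-of-the-envelope figure with a quick sketch (it assumes, as the post seems to, that each ~18-month generation halves the linear feature size; the industry convention is closer to a 0.7x linear shrink per node):

    # Halve a 20 nm feature size once per 18-month generation and count how
    # many generations it takes to reach the diameter of a silicon atom.
    size_nm = 20.0       # starting point: a 20 nm process
    atom_nm = 0.23       # rough diameter of a silicon atom
    generations = 0
    while size_nm > atom_nm:
        size_nm /= 2
        generations += 1
        print(f"gen {generations}: {size_nm:.4f} nm")
    print(f"about {generations} generations, roughly {generations * 1.5} years")

    Strict halving gives 7 generations rather than 6, but it lands in the same ballpark as the post's figure.
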
    Oh, and that 'c' thing. At what point will the ability to 'travel', 'receive', 'evaluate', 'execute', and 'transmit' be impacted by the speed of light? We are now starting to see the 'slew' of photons. At what point does the slowness of 'normal' electrons limit our ability to keep increasing the gigahertz?

    I'm less concerned about that. I am noting that in a phone that fits in your hand (or the temple of your glasses?), at what point does performance per cubic cm stop improving? When do you need to start growing the device to keep your performance gains up (massively parallel SoC?)? Is that 10 years? 5 years? 20?
    Kurzweil is just a futurist, and sees the will of human science beating the system... and as a human-achievement guy, I say 'Yay!'. As an engineer, I see some fundamental problems with achieving his goals, at least without a massive research effort on the scale of a Moon Race or a world war. I definitely prefer the former, but don't rule out the latter.

  • Reply 25 of 66

    Quote:
    Originally Posted by Tallest Skil View Post

    Bah. Call me when we're doing calculations in the quantum foam at the Planck scale.

    Call you? When we are there... we'll just embed the thought packet directly into your consciousness.

  • Reply 26 of 66
    cnocbui Posts: 3,613 member

    Quote:
    Originally Posted by sflocal View Post

    My understanding is that Apple does do all the designs for their mobile CPUs. They do license from ARM but they still tweak them to their needs.

    I would prefer that Apple move the fabrication of their chips away from Samsung.

    So you would rather see jobs go from Texas to Taiwan?

  • Reply 27 of 66

    Quote:
    Originally Posted by Damn_Its_Hot View Post

    I can't wait for the inevitable 2 nm then 0 nm. 'Course, leakage might be an issue for Intel (or anyone else) on that, but I'm sure they will find a way around it. /s

    Seriously, how far are we from using light in a commercial apparatus? That would seem to blow anything metallic away (except at some point there is that damn bottleneck where it needs to interface with the rest of the silicon/exotic-materials world).

    'Leakage will be an issue' and a little /s? You need to massively parallelize that /s to get to the scale of the issue ;-)^4

    We are far from using light. Photons are slippery little things and currently require more power than electrons to lasso and evaluate. Their best use is still to replace 'long haul' links (like between blades in a massive compute frame).

    But using photons in an integrated circuit instead of electrons... that's still a bit too far out to put on your product roadmap.

  • Reply 28 of 66
    bigmac2 Posts: 639 member

    Quote:
    Originally Posted by eastofeastside View Post

    LIKE AN APPLE: Microsoft will design their own ARM chip based on an ARM Mali graphics core. WinRT will run on a dedicated Microsoft Windows Metro Processor (No more Qualcomm, Nvidia fragmentation). TBA '13.

    To be announced after Surface phone.

    Err.

    Microsoft doesn't subscribe to an ARM architectural licence at this time. Besides, it's worth noting Apple has more than 20 years of investment in ARM Holdings; the ARM6 was designed for the Newton in collaboration with Apple.

    Besides, what is the Surface phone but vaporware? Microsoft doesn't need another failed mobile ecosystem; look at WinMo 6.5, 7, 7.5 and 8. What Microsoft needs desperately is a good mobile development environment.

  • Reply 29 of 66
    1983 Posts: 1,225 member
    'The chip also features two graphics processing unit cores.' No it doesn't; it has three! I wish I knew which model GPUs they are. PowerVR's latest 6000 series maybe? Highly unlikely; probably just the same as those found in the A5X, running at a higher clock speed to compensate for the fact that there aren't four of them. Does anybody here know?
  • Reply 30 of 66
    wizard69 Posts: 13,377 member
    Neither of these is right. The problem is the customer and the need to support legacy hardware. x86 is extremely important for many users that need to support legacy software.

    On the other hand, I could see Apple offering an alternative product that is a laptop but not a Mac. It would be a good platform for users that don't need x86, as long as it is 64-bit. It should be noted that Apple has cheaper options than Intel for x86, yet they don't use them. It isn't always a cost thing.
    sockrolid wrote: »
    I can only think of two (technical) reasons why Apple hasn't already migrated the MacBook Air to ARM:

    Issue 1: Insufficient computing performance
    Solution: A quad-core ARM SoC clocked reasonably fast (between 1 GHz and 2 GHz) might be fast enough for the average MacBook Air user.
     
    Issue 2: OS X requires a 64-bit architecture and instruction set but ARMv7 is 32-bit
    Solution: The ARMv8 spec, with 64-bit architecture and instruction set, was released almost a year ago.  A7 could be an ARMv8 design.

    This would be a perfect opportunity for Apple to do for Mac what they've already done for iPhone / iPad / iPod touch / Apple TV:
    optimize their hardware and software for each other, in a way that no other consumer electronics company can.
  • Reply 31 of 66
    bigmac2 wrote: »
    Err.

    Microsoft doesn't subscribe to an ARM architectural licence at this time. Besides, it's worth noting Apple has more than 20 years of investment in ARM Holdings; the ARM6 was designed for the Newton in collaboration with Apple.

    Besides, what is the Surface phone but vaporware? Microsoft doesn't need another failed mobile ecosystem; look at WinMo 6.5, 7, 7.5 and 8. What Microsoft needs desperately is a good mobile development environment.

    Actually, Microsoft does have a developer's license for ARM. I don't have the specific type or article on it in front of me. However, that type of license goes beyond the architecture itself (ARMv7, ARMv8, et al.); it also includes the ability to do custom designs for their specific applications. It is considered likely that they are using it in the next Xbox.
  • Reply 32 of 66
    I agree there are chasm-crossing events.

    But Moore's Law is based on an observation by someone 'in the field', and the extrapolation covered only IC and MOS technologies. Eventually you reach the end of the line for a material... you can't make it purer, you can't make electrons smaller, or hold more or less charge.

    Current solid-state electronics basically hits an 'absolute wall' at around 3 nm. At that point electrons can dance around any atom (silicon or otherwise) through quantum effects. About 15 nm is the theoretical minimum for undoped silicon. So, given what we currently know, getting below 7 nm using 'any' material is getting into the fanciful physics of TAMO ('then a miracle occurs').

    Do we start dealing with quantum 'events' like single-bit errors? Dunno. But at some point electrons will basically not 'flow' but 'change state' to points 'outside the box' (hence the leakage, and the concern about the cat). Can you build that into the atomic structure of the etch?

    If not, at that point you need to start changing the model. Like vacuum tubes to solid state, like copper to fiber, but even more so (quantum transputers?). When will computers move from 'atomic' scale to 'sub-atomic' scale, below the diameter of an atom? For reference, a silicon atom is ~0.23 nm; in other words, about 6 generations of Moore's Law (9 years).

    Oh, and that 'c' thing. At what point will the ability to 'travel', 'receive', 'evaluate', 'execute', and 'transmit' be impacted by the speed of light? We are now starting to see the 'slew' of photons. At what point does the slowness of 'normal' electrons limit our ability to keep increasing the gigahertz?

    I'm less concerned about that. I am noting that in a phone that fits in your hand (or the temple of your glasses?), at what point does performance per cubic cm stop improving? When do you need to start growing the device to keep your performance gains up (massively parallel SoC?)? Is that 10 years? 5 years? 20?

    Kurzweil is just a futurist, and sees the will of human science beating the system... and as a human-achievement guy, I say 'Yay!'. As an engineer, I see some fundamental problems with achieving his goals, at least without a massive research effort on the scale of a Moon Race or a world war. I definitely prefer the former, but don't rule out the latter.

    Moore's Law is an observation about computing design densities, but there are many more factors to consider. The exponential growth of computing power per unit cost is far more important, and it is guided by numerous advances outside of design density.

    There are multiple layers of innovation in computer design, e.g., pipelining, parallel processing, instruction look-ahead, and instruction and memory caching, among many others. We can't state that we aren't eight times more powerful simply because the process generation hasn't shrunk eight times over. In fact, innovation outside of process shrink has been at least as important to information technology as process technology itself. (A rough numeric sketch of this layering point follows at the end of this post.)

    Moore's Law was not the first but the fifth paradigm to provide exponential growth of computing. Kurzweil's belief is that once Moore's Law has been exhausted, a new paradigm will emerge to continue the meta-trend of exponential growth of computing. Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of "resources." Note, however, that exponential growth continued through paradigm shifts from vacuum tubes to discrete transistors to integrated circuits.

    Accordingly, some of the paradigm shifts that could accommodate continued exponential growth of computing are:

    Single Atom Computing (Quantum Computing)
    Terahertz Transistors
    Organic Semiconductors
    Racetrack Memory (non-volatile, solid-state memory with the capacity of hard drives but the durability and performance of flash)
    Graphene Memory
    Phase-change Memory
    Self-programming Computers


    Thus, while we may soon reach the scalability limits of silicon transistors, we will also likely move to a new paradigm.
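
    To put a number on the "multiple layers of innovation" point above: independent improvements multiply, so several modest per-layer gains compound into a large total. A minimal sketch; the speedup factors are invented for illustration, not measured:

    # Hypothetical per-layer speedup factors (illustrative numbers only).
    layers = {
        "process shrink":    1.6,
        "pipelining":        1.5,
        "superscalar issue": 1.4,
        "caching":           1.5,
        "parallel cores":    2.0,
    }

    total = 1.0
    for name, factor in layers.items():
        total *= factor
        print(f"after {name:<18} cumulative speedup = {total:.2f}x")
    # Five modest, independent gains compound to ~10x overall here.
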
  • Reply 33 of 66
    aizmov Posts: 989 member
    How late? Late like in 'August' late? Like in 'fall announcement date' late? Or 'December and February 2014 announcement' late?
    The physics is pretty amazing. 20 nm is pretty darn small for large-scale production runs... we're getting down to quantum tunneling levels (~15 nm), thus approaching the end of Moore's Law at the single-core level. Will quality rates erode? Will Schrödinger's Cat YouTube videos be both alive and dead on an iPhone at the same time?

    20 nm by late 2013 is nothing. At best TSMC is 18 months behind Intel. Not good! Pretty soon Intel will destroy ARM in power efficiency.
  • Reply 34 of 66

    Quote:
    Originally Posted by MacBook Pro View Post

    Moore's Law is an observation about computing design densities, but there are many more factors to consider. The exponential growth of computing power per unit cost is far more important, and it is guided by numerous advances outside of design density.

    There are multiple layers of innovation in computer design, e.g., pipelining, parallel processing, instruction look-ahead, and instruction and memory caching, among many others. We can't state that we aren't eight times more powerful simply because the process generation hasn't shrunk eight times over. In fact, innovation outside of process shrink has been at least as important to information technology as process technology itself.

    Moore's Law was not the first but the fifth paradigm to provide exponential growth of computing. Kurzweil's belief is that once Moore's Law has been exhausted, a new paradigm will emerge to continue the meta-trend of exponential growth of computing. Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of "resources." Note, however, that exponential growth continued through paradigm shifts from vacuum tubes to discrete transistors to integrated circuits.

    Accordingly, some of the paradigm shifts that could accommodate continued exponential growth of computing are:

    Single Atom Computing (Quantum Computing)
    Terahertz Transistors
    Organic Semiconductors
    Racetrack Memory (non-volatile, solid-state memory with the capacity of hard drives but the durability and performance of flash)
    Graphene Memory
    Phase-change Memory
    Self-programming Computers

    Thus, while we may soon reach the scalability limits of silicon transistors, we will also likely move to a new paradigm.

    When you come to a point of discontinuity, the key word in your statement is 'may.' Implicit in 'may' is 'may not.' Growth did occur through the vacuum-tube era (and before that the mechanical era, and before that the room-full-of-accountants era). There was significant overlap between these: vacuum tubes existed for quite a while during the mechanical era, and solid state overlapped vacuum tubes for a good 10 years. I'm not seeing any candidate for overlap at this moment.

    I agree that it was an observation, but within the framework of a solid-state space: all the dimensions (weight, volume, required energy in, waste energy out, speed) have decreased to the positive, and almost all due to the shrinkage of the package. Until we can get to single-atom computing (I'll opt for single-molecule transistors; quantum computers have been 30 years out for 30 years now), I strongly feel the only way to shrink a computer further will be integrating more into the SoC and massively parallelizing that. Not a bad thing, just a thing. But eventually we will reach a state where this technology runs into 'c', 'h', and eV as the barriers of construction, and performance growth will come at a heat/energy/size increase. Not in 10 years, maybe, but we'll start seeing the pinch soon.

    Within this thread, I think you're seeing Apple move to the obvious intermediate solution: custom SoCs optimized for the application, with each appliance manufacturer effectively becoming a fabless chip house as well as a software design house. That will likely keep the cost/power/heat/size envelope as small as possible for the next few years.

    As for self-programming computers, heck, my undergrad robotics project was that in 1980. Sometimes by accident, but mostly by design (foo on virtual memory; if you need more address space, just overwrite it with new code [actually I was modifying code as I bootstrapped from a cold start to fully operational; ugly, but damned efficient] ;-).
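
    For readers who haven't seen the trick, the flavor of a program installing a new version of one of its own routines at runtime can be shown in a few lines. A deliberately tame, hypothetical sketch, nothing like the bare-metal 1980 version described above:

    # A function the program will later rewrite.
    def step(x):
        return x + 1

    print(step(10))              # -> 11, original behaviour

    # Generate replacement source at runtime, compile it, and swap it in.
    new_source = "def step(x):\n    return x * 2  # rewritten behaviour\n"
    namespace = {}
    exec(new_source, namespace)  # 'self-modification', the safe way
    step = namespace["step"]
    print(step(10))              # -> 20, new behaviour
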

  • Reply 35 of 66

    Quote:
    Originally Posted by Aizmov View Post

    20 nm by late 2013 is nothing. At best TSMC is 18 months behind Intel. Not good! Pretty soon Intel will destroy ARM in power efficiency.

    Then it's win-win for Apple: TSMC until Intel can build a chip for Apple that is cheaper and more efficient; then Apple switches.

  • Reply 36 of 66


    Unless you all have missed it, quantum computing is years away from doing even the most rudimentary calculations presently expected of computer systems. Before those issues are resolved, materials science will provide better materials for silicon solutions and allow those nanometer processes to start being described in picometer differentials. It'll be in the high hundreds; they won't call it a 0.1 nm process, but instead a 100 pm process.
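
    The naming arithmetic is trivial but worth making concrete; a throwaway sketch (the threshold and format are my assumption, not an industry rule):

    def node_name(nm):
        # Below 1 nm, quote the size in picometers instead (1 nm = 1000 pm).
        return f"{nm} nm" if nm >= 1 else f"{nm * 1000:.0f} pm"

    for size in (20, 5, 1, 0.5, 0.1):
        print(size, "->", node_name(size))
    # 0.1 -> "100 pm": the same length, just a friendlier-looking number.
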

    Quantum computing is currently being described as follows: http://aaas.confex.com/aaas/2012/webprogram/Session4151.html

    Quote:

    Quantum Computing: Current Status and Future Prospects
    Saturday, February 18, 2012: 8:30 AM-11:30 AM
    Room 118 (VCC West Building)

    Large-scale quantum computers, if and when they can be developed, will be capable of solving otherwise intractable problems with far-reaching applications to cryptology, materials science, and medicine. The quest for scalable quantum computers is a grand challenge for 21st century science and a highly interdisciplinary enterprise, drawing heavily from physical science, computer science, mathematics, and engineering. In this symposium, leading theorists and experimentalists will report on rapidly moving recent developments and assess the prospects for future progress. The speakers are major contributors to the subject who are also highly effective at conveying the excitement of the field to broad audiences. The symposium will address some of the central questions pursued in current research. What is the essential difference between quantum and classical information processing, and what is the source of a quantum computer's power? What can quantum computing teach us about fundamental physical law? Can a quantum computer operate reliably even though its elementary components are imperfect? What is the best way to construct a quantum processor, and how can we build the large systems needed to solve hard computational problems?

    Organizer: John Preskill, California Institute of Technology

    Speakers:
    John Preskill (California Institute of Technology): The Entanglement Frontier
    Michael Freedman (Microsoft Station Q): Topological Quantum Computing
    Charles Marcus (Harvard University): Semiconductor Quantum Computing
    John Martinis (University of California): Quantum Computing with Superconducting Circuits

    We are nowhere near the adoption of quantum computing to replace our present and near-term CPU/APU/GPGPU designs.

    Kurzweil is a windbag who constantly gets credit as a futurist for his Singularity book. His predictions keep sliding forward into the future to cover up the fact that they are piss poor.

  • Reply 37 of 66
    jragosta Posts: 10,473 member
    Skylake and Skymont are 14 nm and 10 nm, respectively. Why be skeptical? They also have plans for unnamed 7 nm (2017) and 5 nm (2019) processes. Intel's CEO says they'll have to stop using silicon for those.

    I can't help but continue to be amazed at this.

    It really wasn't all that long ago that we were worrying about whether CPU manufacturers could reliably get below 100 nm.
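
    Those roadmap numbers imply a remarkably steady cadence. A quick sketch using only the nodes named above (no dates or nodes assumed beyond those in the post):

    # Node sizes from the roadmap above, in nm. A classic "full node" step
    # is ~0.7x linear, which roughly halves the area (0.7**2 ~ 0.5).
    nodes = [14, 10, 7, 5]
    for prev, nxt in zip(nodes, nodes[1:]):
        ratio = nxt / prev
        print(f"{prev} nm -> {nxt} nm: {ratio:.2f}x linear, {ratio ** 2:.2f}x area")
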
  • Reply 38 of 66


    Originally Posted by jragosta View Post

    I can't help but continue to be amazed at this.

    It really wasn't all that long ago that we were worrying about whether CPU manufacturers could reliably get below 100 nm.

    It's truly amazing. I wonder… is part of the drive toward this sort of exponential betterment the very existence of Moore's Law itself? Because you'll definitely see other industries stagnate when one company has as marked an advantage over the others as Intel does here.

  • Reply 39 of 66
    wizard69 Posts: 13,377 member
    It's going to be frightening what the iPad 2014 will be able to do.
    Actually the next iPad ought to be impressive. A doubling of performance would most likely remove the last performance glitches for general use.

    For 2012-2013:
    The A6 puts the hurt on the A5/A5X in a big way. Smashes it in Geekbench.
    ...and I guess it isn't even fully clocked, due to it going into an iPhone.
    I wouldn't be surprised to find it hitting 2GHz while maintaining a respectable power profile.
    As it is now, I wouldn't be surprised if it could go well beyond 2GHz if power use weren't critical.
    Put the A6 'X'(?) with more GPU cores and a higher-clocked CPU in the next iPad (4? New, new ;) and it should be a very impressive performer.
    Does the iPad need more GPU cores?

    A 2014 iPad is going to be pretty amazing. Will anybody want to use low- to mid-end laptops by then?
    The iPad already has me thinking long and hard about my need for a laptop. If I do go laptop in the future, it would most likely be a high-end machine.

    I think Marv' posted a pretty impressive graph of the exploding performance of the iPhone since its debut.
    Apple making their own chips? They're already doing it... and seem set to do it in an even bigger way in future, with the former AMD chip designer lured away from Samsung.
    Go Apple.
    Lemon Bon Bon.

    Honestly, I think it is time for Apple to build its own foundry. If not that, then a more robust partnership with an established foundry.
  • Reply 40 of 66
    iqatedo Posts: 1,823 member

    Quote:
    Originally Posted by MacBook Pro View Post

    Not a correction... There is, however, something much larger at work than Moore's Law.

    I believe Tallest Skil is essentially referring to the same ideas I am promoting:

    "Exponential growth continued through paradigm shifts from vacuum tubes to discrete transistors to integrated circuits."

    The resources underlying the exponential growth of an evolutionary process...

    But where does Moore's Law come from? What is behind this remarkably predictable phenomenon? I have seen relatively little written about the ultimate source of this trend. Is it just "a set of industry expectations and goals," as Randy Isaac, head of basic science at IBM, contends? Or is there something more profound going on?

    It's intriguing that a law which cannot be a law in any physical sense just keeps behaving like one, nearly 50 years after the original observation (Gordon Moore, 1965). What will always be frightening, however (or exciting, if one is a masochist), is a curve that still bends upward even on a logarithmic plot lol!!!
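
    For anyone who wants to see why that is the scary case: on a log scale, plain exponential growth is a straight line, so a series whose logarithm still curves upward is growing faster than exponentially. A minimal sketch with made-up series:

    import math

    years = range(10)
    plain_exp = [2 ** t for t in years]              # doubles every step
    super_exp = [2 ** (t * t / 5) for t in years]    # the doubling rate itself grows

    # Constant log10 step sizes = a straight line on a log plot (plain exponential);
    # growing step sizes = the curve bends upward (super-exponential).
    for name, series in (("plain", plain_exp), ("super", super_exp)):
        logs = [math.log10(v) for v in series]
        steps = [round(b - a, 2) for a, b in zip(logs, logs[1:])]
        print(f"{name:>5} exponential: log10 steps = {steps}")
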
