Apple begins receiving shipments of A-series processors from TSMC - report


Comments

  • Reply 41 of 55
    flybabo Posts: 2 member
    I'd like to point out that you can't directly compare Intel's 14nm technology with TSMC's and Samsung's 16/14nm technologies just by the numbers.
    TSMC's and Samsung's processes are not true 16/14nm technologies: they only shrink the transistors to 16/14nm, while all the back-end layers (the metal wires connecting the transistors) are still 20nm technology, whereas Intel's process shrinks everything to the 14nm scale.
    Also, this is TSMC's and Samsung's first attempt at productizing FinFET transistors, and there is significant risk in producing them with acceptable manufacturing yields.
    Intel is the only company to have delivered FinFETs to the mass market, with its 22nm Tri-Gate technology.
    The general consensus is that Intel leads the rest of the field by at least two years in manufacturing technology, and the gap is not narrowing.
  • Reply 42 of 55
    normm Posts: 653 member
    Quote:

    Originally Posted by melgross View Post



    What surprised me is that Intel is claiming on their production roadmap that not only will they have 10nm, but 7nm and even 5nm! I find this hard to believe, as they are presenting it as fact, even though most experts have significant doubts about 7nm, and many are skeptical about 5nm being achievable. We have to understand that the average atom is about 0.5nm in diameter, particularly the ones needed here, such as copper, silicon, etc. This means that a line that is 14nm wide is just 28 atoms wide; 10nm is 20, 7nm is 14, and 5nm is just 10 atoms wide.
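
    As a rough sketch of the arithmetic quoted above (assuming the ~0.5nm average atomic diameter it cites; real atomic sizes vary by element), in Python:

        # Rough atoms-per-linewidth estimate, using the ~0.5 nm figure from the quote above
        ATOM_DIAMETER_NM = 0.5

        for node_nm in (14, 10, 7, 5):
            atoms_wide = node_nm / ATOM_DIAMETER_NM
            print(f"a {node_nm}nm line is roughly {atoms_wide:.0f} atoms wide")
        # -> 28, 20, 14, and 10 atoms, respectively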

     

    It's worth mentioning that the reason the game has continued this long is that 2D lithography builds up 3D structures.  Capacitors that remember the bits (voltage levels) on DRAM chips have been made that are 100 times deeper than wide, so even at thin line widths they still contain lots of atoms (and electrons).  Entire memory structures have also been stacked, so counting merely by the 2D area per bit is misleading.  The only fundamental barrier keeping us from building processors with significant 3D depth is heat dissipation.  That can in principle be dealt with by not using all parts at once (which is how our brains deal with the heat issue---we really do only use 5% of our brain at a time), and by using a lot of thermodynamically (and logically) reversible structures that avoid heat production.
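
    A minimal sketch of that aspect-ratio point, with purely assumed, illustrative dimensions for a deep-trench capacitor that is 100 times deeper than it is wide:

        # Hypothetical deep-trench capacitor: the width and the ~0.5 nm atom size are assumptions
        ATOM_DIAMETER_NM = 0.5
        width_nm = 20.0                 # assumed trench width
        depth_nm = 100 * width_nm       # "100 times deeper than wide"

        atoms_across = width_nm / ATOM_DIAMETER_NM
        atoms_deep = depth_nm / ATOM_DIAMETER_NM
        print(f"~{atoms_across:.0f} atoms across, ~{atoms_deep:.0f} atoms deep")
        # Even a narrow feature still spans thousands of atoms along its depth.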

     

    We also have the example of biology, which routinely builds massive quantities of precise microscopic arrangements of atoms at low cost (biomolecules).  Atomically perfect structures can use their quantum nature as a feature, rather than a drawback (we don't need crazy-difficult large-scale coherence to build ordinary logic circuitry).  So I think it's hard to say that we're very near the end of the road!  It's possible progress might slow down for a while, but then again it might not!

  • Reply 43 of 55
    ksec Posts: 1,569 member

    While Digitimes is often wrong about anything Apple-related,

     

    Quote:


     Because of weaker-than-expected yields, high 22nm processor inventories, and poor PC demand, Intel has postponed 14nm processor production, which is planned to be conducted at its Fab 42 in Arizona, the US, the sources said.


     

    The highlighted part pretty much says it all.

  • Reply 44 of 55
    melgross Posts: 33,510 member
    indyfx wrote: »
    Your point on emulation is on the mark. Apple was able to do it before (Rosetta) because of the -HUGE- speed difference that had accumulated between the neglected PPC architecture and Intel's new Core. Because they had a 2- to 4-fold speed increase and some added array operators in the Core series (to speed PPC/AltiVec code execution), Apple -was- able to pull off the PPC->Intel transition (but certainly not without dipping into "a big bag of hurt" for a few years).

    Here, however (an A series in a MacBook), we don't have that. Any A-series chip emulating (RTI) an Intel Core i series would almost assuredly be horribly, unacceptably slow. Porting OS X would not be the problem (as a matter of fact, I'm fairly sure that already exists in one of Sir Jon's development labs somewhere in Cupertino). The big problem with an A-series Mac, IMHO, is not the processor or the OS; it is third-party software. They simply can't ask developers to again port their software and offer it in twin-binary format while offering a poorly performing emulator to bridge the gap. That just isn't going to fly (perhaps not much better than MS's horrid little baby Surface did).
    And... if you think the press is going to give Apple a "pass" on a total failure (like they did for MS's Surface fiasco), you would be -very- wrong. They would crucify Apple, and it would damage the Apple brand profoundly.
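
    To make the quoted emulation argument concrete with purely assumed numbers (the emulation penalty and native speed ratios below are illustrative, not measured):

        # Hypothetical numbers only: how native speed advantage interacts with emulation overhead
        EMULATION_PENALTY = 4.0   # assumed slowdown factor for emulated code

        def effective_speed(native_speed_ratio):
            # Perceived speed of emulated apps relative to the machine being replaced.
            # native_speed_ratio: new chip's native speed vs. the old chip (1.0 = parity).
            return native_speed_ratio / EMULATION_PENALTY

        # PPC -> Intel era: new chip assumed ~3x faster than the chip it replaced
        print(f"Rosetta-era case: {effective_speed(3.0):.2f}x of the old machine")
        # A-series vs. Core i: assumed rough parity at best
        print(f"A-series case:    {effective_speed(1.0):.2f}x of the old machine")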

    Well, at the bottom of my post I mention how Apple could do it. Now, there's a rumor that the new A8 will again be two cores, though perhaps at 2GHz or higher. That would give them the ability to do what I suggested, if they also do the rest of what I suggested.
  • Reply 45 of 55
    melgross Posts: 33,510 member
    normm wrote: »
    It's worth mentioning that the reason the game has continued this long is that 2D lithography builds up 3D structures.  Capacitors that remember the bits (voltage levels) on DRAM chips have been made that are 100 times deeper than wide, so even at thin line widths they still contain lots of atoms (and electrons).  Entire memory structures have also been stacked, so counting merely by the 2D area per bit is misleading.  The only fundamental barrier keeping us from building processors with significant 3D depth is heat dissipation.  That can in principle be dealt with by not using all parts at once (which is how our brains deal with the heat issue---we really do only use 5% of our brain at a time), and by using a lot of thermodynamically (and logically) reversible structures that avoid heat production.


    We also have the example of biology, which routinely builds massive quantities of precise microscopic arrangements of atoms at low cost (biomolecules).  Atomically perfect structures can use their quantum nature as a feature, rather than a drawback (we don't need crazy-difficult large-scale coherence to build ordinary logic circuitry).  So I think it's hard to say that we're very near the end of the road!  It's possible progress might slow down for a while, but then again it might not!
    Nah, it's thought that 7nm is likely the end; after that, new technology will be needed, such as carbon nanotubes or something else. The chance of getting to 5nm is very small.
  • Reply 46 of 55
    melgross Posts: 33,510 member
    ksec wrote: »
    While Digitimes is often wrong about anything Apple-related,


    The highlighted part pretty much says it all.

    Nonsense! They've been having a serious number of problems.
  • Reply 47 of 55
    normm Posts: 653 member
    Quote:

    Originally Posted by melgross View Post





    Nah, it's thought that 7nm is likely the end; after that, new technology will be needed, such as carbon nanotubes or something else. The chance of getting to 5nm is very small.

     

    I was really talking about the problem of continuing to put more logic in a given volume, not whether current CMOS technology will continue down to 5nm (I'm less pessimistic than most about that, though). Moore's law is about size and amount, not any particular technology. The size of a bit in a mercury delay line or a vacuum tube memory falls on the same straight line on a semilog graph vs time as modern chips -- the technology keeps changing but the amount of logic we can fit in a small region keeps growing geometrically.  If we eventually are able to make atomically perfect structures, we would have to say that the ultimate feature size is that of an atom.  But there'll be a slowdown as we get close to the 3D atomic limit: the true curve should be a logistic, not an exponential.
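
    A minimal sketch of that last point, comparing exponential growth with a logistic curve that flattens at an arbitrary, assumed ceiling:

        import math

        CEILING = 1e6      # hypothetical 3D atomic-density limit (arbitrary units)
        RATE = 0.5         # illustrative growth rate per time step

        def exponential(t, x0=1.0):
            return x0 * math.exp(RATE * t)

        def logistic(t, x0=1.0):
            # standard logistic solution with carrying capacity CEILING
            return CEILING / (1 + (CEILING / x0 - 1) * math.exp(-RATE * t))

        for t in range(0, 41, 10):
            print(f"t={t:2d}  exponential={exponential(t):14.0f}  logistic={logistic(t):12.0f}")
        # The two curves track each other early on; the logistic then flattens at the ceiling.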

  • Reply 48 of 55
    originalg wrote: »
    Looking forward to seeing how big an impact this will have on real day-to-day battery life. Can't wait!

    Try to; you have no choice.
  • Reply 49 of 55
    ksec Posts: 1,569 member
    Quote:

    Originally Posted by melgross View Post





    Nonsense! They've been having a serious number of problems.

     

    Like what? It isn't as if Intel, or heck, anyone, has ever gone to a new node without problems. Most of the time, Intel manages to fix them before or during production of the first batch.

     

    But this is the first time Intel is doing three node variations at once! Getting to a smaller node is hard; doing three variations makes things even harder.

    Not to mention they are now trying to work out custom manufacturing with other fabless companies.

     

    I am not arguing that the 14nm node doesn't have problems. Of course it does, like every other new node. It's just that this time around Intel doesn't have the urgency to actually fix them fast.

  • Reply 50 of 55
    melgross Posts: 33,510 member
    normm wrote: »
    I was really talking about the problem of continuing to put more logic in a given volume, not whether current CMOS technology will continue down to 5nm (I'm less pessimistic than most about that, though). Moore's law is about size and amount, not any particular technology. The size of a bit in a mercury delay line or a vacuum tube memory falls on the same straight line on a semilog graph vs time as modern chips -- the technology keeps changing but the amount of logic we can fit in a small region keeps growing geometrically.  If we eventually are able to make atomically perfect structures, we would have to say that the ultimate feature size is that of an atom.  But there'll be a slowdown as we get close to the 3D atomic limit: the true curve should be a logistic, not an exponential.

    Moore's law is all about process technology. The only reason they've been able to stuff so much more into the same space is the many process shifts to smaller sizes. Without that, it gets really difficult, because of the increased heat you get when squeezing more components of the same size into the same space. And they already squeeze in as many components as they can; there's no more room to do so.

    No, you misunderstand the technology, and the physics. They can't get to one atom, and most likely they can't get to ten atoms, according to the real experts in the field. It doesn't matter whether we could make perfect structures, which is unlikely because of the way the etching process works; physics will preclude making these very fine structures work. This is why other technologies are being investigated so heavily. Everyone knows we're close to the limit now. A couple more process shrinks, and we're done.
  • Reply 51 of 55
    tallest skil Posts: 43,388 member
    Originally Posted by melgross View Post

    They can't get to one atom, and most likely they can't get to ten atoms, according to the real experts in the field. It doesn't matter whether we could make perfect structures…

     

    I just had a thought.

     

    So stanene seems to superconduct up to the boiling point of water. And that's spectacular, because now we have a superconductor that can be used in electronic circuits.

     

    But being a superconductor, wouldn’t the minimum size of transistors utilizing it be increased? Because without inherent capacitance to hold back the electrons, they’d jump across larger gaps, yeah?

     

    Or maybe the fact that it’s a superconductor and clock speeds can be ramped up would render that restriction moot.

  • Reply 52 of 55
    melgross Posts: 33,510 member
    ksec wrote: »
    Like what? It isn't as if Intel, or heck, anyone, has ever gone to a new node without problems. Most of the time, Intel manages to fix them before or during production of the first batch.

    But this is the first time Intel is doing three node variations at once! Getting to a smaller node is hard; doing three variations makes things even harder.
    Not to mention they are now trying to work out custom manufacturing with other fabless companies.

    I am not arguing that the 14nm node doesn't have problems. Of course it does, like every other new node. It's just that this time around Intel doesn't have the urgency to actually fix them fast.

    No, this isn't the first time. The first major problem was way back at 90nm, as you likely remember, when Prescott was the fastest NetBurst CPU Intel made, topping out at 3.8GHz. That was when they severely underestimated the leakage problem, among a couple of others. They solved that, and 65nm and 45nm were fine. Problems arose again at 32nm, though they were easily solved, but at 22nm Intel had serious problems, which resulted in a six-month delay. Now, going to 14nm, their problems have been much worse, and we've seen several delays. Some want to ascribe some of that to a 22nm backlog, but that's not really the answer. Intel had a very good quarter; that backlog is more a myth than anything else.

    It's moving to a new node that keeps Intel ahead of everyone else. If we look at performance versus AMD, for example, and even against Apple's A7, we can see that much of Intel's performance advantage is due to being a node or more ahead. Wipe that out, and much of the performance advantage is wiped out as well. And Apple's A7 performs better than all of Intel's Bay Trail Atoms other than the top version. What would happen if Apple catches up in process tech?

    In addition, Intel can only advance performance so much within a particular node. For Intel to sell more processors, and for computer makers to sell more machines, there needs to be a minimum performance increase. A few percent a year won't do it; people will just wait longer before upgrading.
  • Reply 53 of 55
    melgross Posts: 33,510 member
    tallest skil wrote: »
    I just had a thought.

    So stanene seems to superconduct up to the boiling point of water. And that's spectacular, because now we have a superconductor that can be used in electronic circuits.

    But being a superconductor, wouldn’t the minimum size of transistors utilizing it be increased? Because without inherent capacitance to hold back the electrons, they’d jump across larger gaps, yeah?

    Or maybe the fact that it’s a superconductor and clock speeds can be ramped up would render that restriction moot.

    Right now, stanene isn't a real material. It's in the lab. But it isn't a superconductor all the way through. Only at the edges, I believe, from the report I've read. A problem is that as it's tin, it can't be used for transistors, only the lines that connect the dots, so to speak.
  • Reply 54 of 55
    tallest skil Posts: 43,388 member
    Originally Posted by melgross View Post

    Right now, stanene isn't a real material. It's in the lab. But it isn't a superconductor all the way through. Only at the edges, I believe, from the report I've read. A problem is that as it's tin, it can't be used for transistors, only the lines that connect the dots, so to speak.

     

    Right; still, using it to replace the wiring throughout logic boards (and wherever else applicable) would be beneficial to performance and waste heat, wouldn’t it? In fact, it’s probably better that it can’t be used as a transistor because of the edge restriction on its superconductivity. That would introduce another hard minimum size to building these things.

  • Reply 55 of 55
    melgross Posts: 33,510 member
    tallest skil wrote: »
    Right; still, using it to replace the wiring throughout logic boards (and wherever else applicable) would be beneficial to performance and waste heat, wouldn’t it? In fact, it’s probably better that it can’t be used as a transistor because of the edge restriction on its superconductivity. That would introduce another hard minimum size to building these things.

    I've now read more about this. It's not even in the lab. It's just a proposed possibility, a hypothesis. It's best to disregard concepts such as these until there is some testing done to show if there is any possibility for it working at all.

    Don't forget that a transistor can't be made from a conductor or superconductor. Semiconductors are required. So this material, even if possible, would not be capable of leading to transistors, or diodes.

    The major research is in nanotubes, which, with the right doping and other modifications, can act as semiconductors. It's interesting to note as well that the nanotubes being looked at for this have a diameter of about 10nm.