Rumor: TSMC now building quad-core 'A8' chips for Apple's next-gen iPhone


Comments

  • Reply 21 of 50
    mstone wrote: »
    Can someone explain why they manufacture rectangular chips on a round wafer?

    Mathematically, chip manufacturers have been able to "square the circle," a feat theoretical mathematicians had pursued up until the last century. Chasing Moore's Law has meant doing several things once considered impossible. Right now sub-20 nm chips are being produced, which is several orders of magnitude beyond what was once thought physically impossible. How was this accomplished, you may well ask? First off, we have long since left the world of macro-physics and entered into a quasi-Nano physical one. The next step beyond will be quantum physics. However, chip production has not got there yet, so let's contain this discussion to present-day quasi-Nano technology.

    In this qN environment some rules of mathematics and space have been breached (see the work of McKenzie and Peters for a complete overview). You may have wondered, "Why do chip manufacturers only use one side of a chip? They are struggling to fit more and more on one side, so why not use both sides? It would make so much sense." The answer is simple, although the reason doesn't make much sense to the layman: it all comes down to one of the properties of qN devices. Simply stated, there is no other side in the qN world.

    The nano-bit world handles the two states of ones and zeros in a uniquely different way from the macro-physics world. In qN physics, "ones" are represented by "being present" while "zeros" are represented by being "not present" or, to be more accurate, zeros are on the other "side" of the chip. Using this new physics, fabricators were able to double what could be placed on a chip. Since the move to qN physics in chip design, we've seen dual- and quad-core chips become common, and with only "ones" needing to be produced, power consumption has stayed the same and even dropped.
  • Reply 22 of 50
    bigmac2 Posts: 639member
    Quote:
    Originally Posted by WS11 View Post

     

    In the case of Intel, they'll be moving to 14nm production by the end of this year, even if the architecture is not as power efficient, the 14nm process will offer them a reasonable leap.

     

    NVIDIA's Kepler based K1 has improved efficiency by a considerable margin making it pretty close to PowerVR 6.  Assuming the jump to Maxwell based SoC is anything like the cores seen in the 750/750 ti, it should allow NVIDIA to overtake Imagination.


     

    ARM SoCs are already more efficient and cheaper to produce than Intel CPUs despite a worse fab process. This is a game Intel can't win that way; any fab process advantage Intel develops can eventually be applied to ARM SoCs.

     

    Nvidia is really shy about giving the TDP specs of their Tegra CPUs, and four generations in, nearly no phones and only a handful of tablets using Tegra processors have ever been produced, which proves they don't meet most manufacturers' expectations.

  • Reply 23 of 50
    tht Posts: 5,616member
    Quote:
    Originally Posted by WS11 View Post

     

    Suddenly quad-core CPUs are the best!


     

    I'll be sad if the quad-core rumor is true. It's a sign that Apple can't improve IPC much anymore, which isn't too surprising. Going the quad-core route won't benefit users as much as improving IPC. The last big low-hanging fruit now will be using an SSD-in-a-package for storage.

     

    If they can't improve IPC by around 50%, I hope they'll have a turbo that can up-clock 50% to 70% for a couple of seconds. That, combined with a 20% improvement in IPC, would get another 2x performance for most smartphone use cases.
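
    (Back-of-the-envelope, that 2x checks out if you assume the two gains simply multiply; a quick illustrative sketch, not a real benchmark:)

```python
# Single-thread performance scales roughly with IPC * clock frequency,
# so independent gains multiply. Illustrative numbers only.

ipc_gain = 1.20    # assumed 20% IPC improvement
turbo_gain = 1.70  # assumed 70% short-burst up-clock

print(f"combined burst speedup: {ipc_gain * turbo_gain:.2f}x")  # ~2.04x
```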

  • Reply 24 of 50
    just_me Posts: 590member
    Quote:
    Originally Posted by BigMac2 View Post

     

     

    True, no ARM CPU will ever catch Intel on raw performance because they don't play by the same basic rules.

     

    But it's also true that Intel will never catch ARM on efficiency or performance per watt.

     

    The same applies to the overhyped Nvidia Tegra: while it has a more powerful graphics processor than the Imagination PowerVR used by Apple, it can't compete on efficiency.


    Not according to recent benchmarks comparing devices with the same battery size:

    [Benchmark charts: Bay Trail vs. Tegra 4 battery life]

  • Reply 25 of 50
    sudonym Posts: 233member

    So long SammaySungy!   Apple don't need you NO MORE!

     

    Apple has been working on this for years, ever since Samsung became SameSham.

     

    SlimeShame is getting exactly what they deserve.  ScamScum bit the hand that feeds them.  That's why they are losing so much money.  All they make is crappy phones, not like the iPhone which is a lot better because the SlimeShame phone is all plastic.

  • Reply 26 of 50
    pdq2 Posts: 270member
    Quote:

    Originally Posted by Andysol View Post

     

     

    This is why I choose not to believe this rumor.  Fool me once (and twice)...


     

    I think one doesn't change a major supplier on a whim. If you think such a supplier is copying your products, you first politely and privately ask them to stop. Then you insist, privately, that they stop. The next step (sadly) is threatening to sue, then actually suing. And you start making noise about finding other suppliers.

     

    You win a lawsuit, and you hope that that major supplier finally sees the writing on the wall. Only after all that, with no change, do you go through the hassle of actually moving production to a new (to you) company.

     

    I'm no expert, but that's what I think happened. I think they were willing to give Samsung every chance to engage, reverse, and keep their business.  I think if the S5 doesn't sell like gangbusters (which I think it won't), and they lose the second lawsuit starting this month (which I think they will), and Samsung continues to miss revenue and earnings expectations (which I think they will), I think their CEO is going to start facing uncomfortable questions about their strategy these last few years in the face of this slow but very obvious evolution.

     

    Changing major suppliers was a big, big deal. And Apple avoided it as long as they could. It's going to hurt both companies, but I think it's going to hurt Samsung a lot more, given the events I just mentioned. But then I'm an Apple fan, so I'm not exactly unbiased.

  • Reply 27 of 50
    ws11 Posts: 159member
    Quote:
    Originally Posted by BigMac2 View Post

     

     

    ARM SoCs are already more efficient and cheaper to produce than Intel CPUs despite a worse fab process. This is a game Intel can't win that way; any fab process advantage Intel develops can eventually be applied to ARM SoCs.

     

    Nvidia is really shy about giving the TDP specs of their Tegra CPUs, and four generations in, nearly no phones and only a handful of tablets using Tegra processors have ever been produced, which proves they don't meet most manufacturers' expectations.


    Intel isn't only improving efficiency through fab; architecture will have a large impact.  The move from Airmont to Goldmont will still be 14nm to 14nm, but you will see an improvement in efficiency.  And the "eventually" is what matters: if Intel can get a 14nm fab into production before the competition, that is a real advantage while the competition is on 22nm and 28nm.  By the time the competition gets to 14nm, Intel will be onto 10nm, and so forth.  Always a step ahead.  Silvermont SoCs are very cheap, at $32 and $37 respectively, which is very reasonable considering they offer the highest CPU compute in an x86-64 package.
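
    To put rough numbers on the node lead: transistor density scales roughly with the inverse square of the feature size, so being a full node ahead compounds. A sketch under that idealized first-order assumption (real processes deviate from it):

```python
# Idealized first-order scaling: transistor density rises with the
# inverse square of the feature size. Real processes deviate from this.

def density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

print(f"22nm -> 14nm: ~{density_gain(22, 14):.1f}x density")  # ~2.5x
print(f"28nm -> 14nm: ~{density_gain(28, 14):.1f}x density")  # ~4.0x
```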

     

    NVIDIA isn't shy about giving TDPs; heck, in mid-2013 they let AnandTech test a mobile Kepler SoC a year before release.  Anand found that the pre-pre-production mobile Kepler offered ~5x the performance of the PowerVR inside the A6X at the same power consumption.  The reason behind the lack of design wins in smartphones (in North America) is the lack of a modem.  If you haven't noticed, Qualcomm has been dominating the SoC market in Android/Windows devices in North America due to their wireless capabilities.

     

    Both NVIDIA and Intel are finally setting their focus on the mobile market, and you would be a fool to brush them off in the manner you do.  Previous attempts with Tegra 4 and Atom Clover Trail used ancient designs, but that's no longer the case.  Mobile Kepler and Silvermont use the same technology found in their laptop/desktop equivalents.

  • Reply 28 of 50
    sockrolid Posts: 2,789member

    Originally Posted by mstone View Post

     

    Can someone explain why they manufacture rectangular chips on a round wafer?


     

    Because the ingot "growing" process naturally creates a cylindrical silicon crystal.

    It's kind of like dipping candles.  The cylindrical crystal is then sliced into wafers, which are then polished.

    Here: http://en.wikipedia.org/wiki/Silicon_wafer
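
    And because the rectangular dies are cut from the wafer in a straight grid, partial dies overlapping the round rim are scrap. A standard first-order estimate of gross dies per wafer (a rough approximation; the 100 mm^2 die size is just an A7-class ballpark, not a confirmed figure):

```python
import math

# Rough gross-dies-per-wafer estimate (standard first-order approximation).
# First term: wafer area / die area. Second term: approximate loss from
# partial dies that overlap the round edge.

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d ** 2 / (4 * s) - math.pi * d / math.sqrt(2 * s))

# Example: ~100 mm^2 die (A7-class ballpark) on a 300 mm wafer.
print(gross_dies_per_wafer(300, 100))  # ~640 gross dies
```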

  • Reply 29 of 50
    theothergeoff Posts: 2,081member
    Quote:
    Originally Posted by mjtomlin View Post

     

    True, but Apple could start designing their own x86_64 (AMD64) cores to use in Macs. 


    All risk, little reward.  Sometimes it's better to pass the risk to Intel and pay wholesale.  Just keep reminding them who the largest high-end laptop maker is, and that you have a few 'tweaks' you'd like them to make to speed up OS X, and occasionally let leak through back channels that there are MacBook Airs running on prototype A8x chips (they may be slow as molasses, but it tells Intel they can't stick it to one of their largest customers).

     

    The fact that Apple owns both the A-series design and the code of the only OS running on the chip, and controls the selection of all peripheral chips and interfaces, gives them an amazing edge: drive what is done best in hardware into hardware, and optimize the chip design for the code.  Doing the same in an x86_64-class chip just teaches Intel and AMD how to do it for everyone else... why be someone else's R&D (see: Samsung/Google and smartphones before 2007)?

  • Reply 30 of 50
    mjtomlin Posts: 2,687member
    Quote:

    Originally Posted by THT View Post

     

     

    I'll be sad if the quad-core rumor is true. It's a sign that Apple can't improve IPC much anymore, which isn't too surprising. Going the quad-core route won't benefit users as much as improving IPC. The last big low-hanging fruit now will be using an SSD-in-a-package for storage.

     

    If they can't improve IPC by around 50%, I hope they'll have a turbo that can up-clock 50% to 70% for a couple of seconds. That, combined with a 20% improvement in IPC, would get another 2x performance for most smartphone use cases.


     

    I agree that quad-core on a smartphone is overkill and as I posted above I think the next iPhone will stick with a dual-core design and focus on efficiency to increase battery life. Apple's Cyclone core is extremely efficient and fairly powerful. With the next generation we may see customized changes to the ISA to further refine and optimize it.

     

    However, I do believe they will go quad-core for the iPad. Why? The iPad has great battery life already. If they can push the performance while keeping the same battery life, they'll still be ahead of the competition.

  • Reply 31 of 50
    misa Posts: 827member
    mjtomlin wrote: »
    True, but Apple could start designing their own x86_64 (AMD64) cores to use in Macs.

    They could, but it would not be cost-effective, since Intel and AMD have 20 years of patents and cross-licensing under their belts.

    ARM, on the other hand, Apple actually helped start.
    bigmac2 wrote: »
    ARM SoCs are already more efficient and cheaper to produce than Intel CPUs despite a worse fab process. This is a game Intel can't win that way; any fab process advantage Intel develops can eventually be applied to ARM SoCs.

    Nvidia is really shy about giving the TDP specs of their Tegra CPUs, and four generations in, nearly no phones and only a handful of tablets using Tegra processors have ever been produced, which proves they don't meet most manufacturers' expectations.

    The most important factor for mobile is TDP, above everything. This is why Intel, Nvidia, and AMD don't make parts that belong in devices with small batteries. We only see "ultrabooks" and devices like the Microsoft Surface at all because of improvements in battery life, in addition to die shrinks that lower the TDP. But look at a laptop from 1999 versus one from 2009: with a 1999 model you were lucky to get an hour or two out of the battery, and that continued to be the case, every single time. There has been no game changer for laptops. Only Apple has ever produced a laptop that focused on battery life. Every other manufacturer would put the weakest parts in their laptops just to sell them; who cares how long they last.

    It's this hardware/software disconnect that Apple is in the best position to stay on top of. Only Microsoft itself could do better, since they're the only ones who could actually design hardware that works with their OS. Android is completely broken in this regard as every manufacturer producing a device only cares about selling hardware.

    I'm not even sure software written for Android can even take advantage of the threading on a quad-core CPU. There seems to be an intractable problem in software development in general: very little software is written with threads in mind, and developers instead insist on creating new processes (see Google Chrome) that require more RAM, rather than designing software that works efficiently.
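
    To make the threads-versus-processes point concrete, here is a minimal sketch (Python purely for illustration; Chrome's process-per-tab design also buys crash isolation and sandboxing, which this toy ignores). Threads share one address space, so there is one copy of the data; each process runs in its own address space and pays for its own copy:

```python
import multiprocessing
import threading

shared_data = list(range(1_000_000))  # one large in-memory structure

def worker(label: str) -> None:
    print(label, "sees", len(shared_data), "items")

if __name__ == "__main__":
    # Thread: reads the parent's shared_data in place -- one copy in RAM.
    t = threading.Thread(target=worker, args=("thread",))
    t.start(); t.join()

    # Process: separate address space; the data it touches is re-created
    # or copied (copy-on-write on Unix), costing extra RAM per worker.
    p = multiprocessing.Process(target=worker, args=("process",))
    p.start(); p.join()
```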
  • Reply 32 of 50
    mdriftmeyer Posts: 7,503member
    Quote:

    Originally Posted by Misa View Post





    They could, but it would not be cost-effective, since Intel and AMD have 20 years of patents and cross-licensing under their belts.



    ARM, on the other hand, Apple actually helped start.

    The most important factor for mobile is TDP, above everything. This is why Intel, Nvidia, and AMD don't make parts that belong in devices with small batteries. We only see "ultrabooks" and devices like the Microsoft Surface at all because of improvements in battery life, in addition to die shrinks that lower the TDP. But look at a laptop from 1999 versus one from 2009: with a 1999 model you were lucky to get an hour or two out of the battery, and that continued to be the case, every single time. There has been no game changer for laptops. Only Apple has ever produced a laptop that focused on battery life. Every other manufacturer would put the weakest parts in their laptops just to sell them; who cares how long they last.



    It's this hardware/software disconnect that Apple is in the best position to stay on top of. Only Microsoft itself could do better, since they're the only ones who could actually design hardware that works with their OS. Android is completely broken in this regard as every manufacturer producing a device only cares about selling hardware.



    I'm not even sure software written for Android can even take advantage of the threading on a quad-core CPU. There seems to be an intractable problem in software development in general: very little software is written with threads in mind, and developers instead insist on creating new processes (see Google Chrome) that require more RAM, rather than designing software that works efficiently.

     

    Apple can easily purchase AMD if it so chooses. Apple can easily become an OEM vendor for AMD, and with both holding ARM licenses [especially Apple, which has access to all ARM IP], they could start stamping out AMD Excavator-based APU solutions.

     

    The only caveat is THUNDERBOLT.

  • Reply 33 of 50
    bigmac2 Posts: 639member
    Quote:
    Originally Posted by WS11 View Post

     

    Intel isn't only improving efficiency through fab; architecture will have a large impact.  The move from Airmont to Goldmont will still be 14nm to 14nm, but you will see an improvement in efficiency.  And the "eventually" is what matters: if Intel can get a 14nm fab into production before the competition, that is a real advantage while the competition is on 22nm and 28nm.  By the time the competition gets to 14nm, Intel will be onto 10nm, and so forth.  Always a step ahead.  Silvermont SoCs are very cheap, at $32 and $37 respectively, which is very reasonable considering they offer the highest CPU compute in an x86-64 package.

     

    NVIDIA isn't shy about giving TDPs; heck, in mid-2013 they let AnandTech test a mobile Kepler SoC a year before release.  Anand found that the pre-pre-production mobile Kepler offered ~5x the performance of the PowerVR inside the A6X at the same power consumption.  The reason behind the lack of design wins in smartphones (in North America) is the lack of a modem.  If you haven't noticed, Qualcomm has been dominating the SoC market in Android/Windows devices in North America due to their wireless capabilities.

     

    Both NVIDIA and Intel are finally setting their focus on the mobile market, and you would be a fool to brush them off in the manner you do.  Previous attempts with Tegra 4 and Atom Clover Trail used ancient designs, but that's no longer the case.  Mobile Kepler and Silvermont use the same technology found in their laptop/desktop equivalents.


     

    You see, ARM SoCs and Intel SoCs aren't in the same league; ARM SoCs have been designed to run on battery-powered platforms ever since the architecture was born more than 20 years ago.  Everything in the A7, and in every ARM SoC, is designed with power preservation and efficiency in mind.  Where Intel's best mobile SoC TDPs are above 4 watts, the Ax series targets a TDP of around 1 watt.  The A7 at 28nm already beats any current 14nm Intel chip on performance per watt, and the next A8 will surely be on a 22nm fab with other tweaks.  BTW, $32 to $37 is pretty expensive compared to the A7 at around $19, which is already one of the most expensive ARM SoCs.
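
    (Performance per watt is just benchmark score divided by power draw; with made-up placeholder numbers, the comparison works like this:)

```python
# Performance-per-watt sketch. Scores and TDPs are hypothetical
# placeholders, NOT measured figures for any real chip.

chips = {
    "1 W mobile SoC": {"score": 1000, "watts": 1.0},
    "4 W mobile SoC": {"score": 2500, "watts": 4.0},
}

for name, c in chips.items():
    print(f"{name}: {c['score'] / c['watts']:.0f} points/W")
# The 4 W part wins on raw score but loses on points per watt.
```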

     

    If Nvidia isn't shy about efficiency specs, where are they? According to AnandTech's own testing, here is what they had to say about the Tegra 4:

    Quote:


    "Unfortunately compared to the Nexus 7 (2013) the Tegra Note lasts quite a bit less on battery."


     

    And about the modem misinformation, here is a quote from Nvidia's chief marketing officer from mid-2012:

    Quote:


     "Contrary to misinformation likely spread by our competitors, Tegra 3 does work with external LTE modems. Fujitsu will be shipping their Tegra 3-based Arrows X LTE phone starting July 20th, and more Tegra 3-based LTE phones from other vendors are coming later this year."


    That was two years ago, and it's still hard to find a Tegra phone today.

     

    Everyone is now focusing on today's fastest-growing market and leaving aging desktop computing behind; Nvidia and Intel aren't fools, and they want to protect their IP.  But taking the same old desktop technology, designed for raw power and for winning the GHz war, magically trying to negate all its legacy inefficiencies, and competing against an architecture designed for efficiency from day one is just doomed to fail.

  • Reply 34 of 50
    bigmac2 Posts: 639member
    Quote:
    Originally Posted by Just_Me View Post

     

    Not according to recent benchmarks comparing devices with the same battery size:

    [Benchmark charts: Bay Trail vs. Tegra 4 battery life]


     

    Can't say the Tegra 4 shines in terms of battery life. BTW, what is that PRISM thing it needs just to get average battery life?

     

    Here is a more disastrous Tegra Note graph from AnandTech:

     

    [Chart: 3D Battery Life - GLBenchmark 2.5.1]

  • Reply 35 of 50
    jexus Posts: 373member
    Yes, because Apple would totally trust the foundry that has botched multiple node transitions for both Nvidia and Qualcomm, including the current 22/20nm one. You keep on fantasizing there, Taiwan Commercial Times.
  • Reply 36 of 50
    ivabign Posts: 61member
    I can see Apple finally making the move to quad-core with a smaller process (like 20-22nm). The A7 cores are so far ahead of everyone else's ARM cores that I just don't see how Apple is going to be able to make their usual "double the performance of last year" claim with the A8 simply by sticking with two cores and the same clock speed. But a quad-core A8 (using A7 cores) would be a beast.

    I think Intel should be worried. Apple has been making huge improvements to their processors at a rate that surpasses Intel. When's the last time a new Intel processor doubled the performance of the previous version?

    My 2013 Air gets 13 hours with the Haswell chip in it. It may not have increased speed, but that was quite an accomplishment. If Intel wants to be relevant, they need to work on their graphics capabilities. There is MUCH room for improvement there.
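
    One caveat on doubling performance via cores: Amdahl's law caps the gain by the fraction of the workload that actually parallelizes, so a quad-core only approaches its ideal speedup on well-threaded code. A sketch with arbitrary illustrative fractions:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# parallelizes. The p values below are arbitrary illustrations.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"p={p:.2f}: 2 cores -> {amdahl_speedup(p, 2):.2f}x, "
          f"4 cores -> {amdahl_speedup(p, 4):.2f}x")
# Unless p is near 1, extra cores fall well short of a clean 2x.
```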
  • Reply 37 of 50
    just_me Posts: 590member
    Quote:

    Originally Posted by BigMac2 View Post

     

     

    Can't say the Tegra 4 shines in terms of battery life. BTW, what is that PRISM thing it needs just to get average battery life?

     

    Here is a more disastrous Tegra Note graph from AnandTech:

     

    [Chart: 3D Battery Life - GLBenchmark 2.5.1]


    That graph shows the Tegra 4 (Nvidia Tegra Note 7) performing almost the same as a Qualcomm ARM SoC (Nexus 7 2013) with almost the same battery capacity (15.01 Whr vs 15.17 Whr). Not a very "disastrous" graph.

  • Reply 38 of 50
    ash471 Posts: 705member
    ivabign wrote: »
    My 2013 Air gets 13 hours with the Haswell chip in it. It may not have increased speed, but that was quite an accomplishment. If Intel wants to be relevant, they need to work on their graphics capabilities. There is MUCH room for improvement there.
    Agreed. In fact, why doesn't Intel buy AMD? They should be able to pass the regulatory hurdle now; they certainly don't have market power anymore. I think an Intel-AMD merger makes more sense than the HP-Compaq merger did.
  • Reply 39 of 50

    In 6 months' time, when the new iPhone is released, this story will be debunked and we'll find out that Samsung designed the A8 as well.

  • Reply 40 of 50
    ws11 Posts: 159member
    Quote:
    Originally Posted by BigMac2 View Post

     

    1.

    You see, ARM SoCs and Intel SoCs aren't in the same league; ARM SoCs have been designed to run on battery-powered platforms ever since the architecture was born more than 20 years ago.  Everything in the A7, and in every ARM SoC, is designed with power preservation and efficiency in mind.  Where Intel's best mobile SoC TDPs are above 4 watts, the Ax series targets a TDP of around 1 watt.  The A7 at 28nm already beats any current 14nm Intel chip on performance per watt, and the next A8 will surely be on a 22nm fab with other tweaks.  BTW, $32 to $37 is pretty expensive compared to the A7 at around $19, which is already one of the most expensive ARM SoCs.

     

    2.

    If Nvidia isn't shy about efficiency specs, where are they? According to AnandTech's own testing, here is what they had to say about the Tegra 4:

     

    3.

    And about the modem misinformation, here is a quote from Nvidia's chief marketing officer from mid-2012:

    That was two years ago, and it's still hard to find a Tegra phone today.

     

    4.

    Everyone is now focusing on today's fastest-growing market and leaving aging desktop computing behind; Nvidia and Intel aren't fools, and they want to protect their IP.  But taking the same old desktop technology, designed for raw power and for winning the GHz war, magically trying to negate all its legacy inefficiencies, and competing against an architecture designed for efficiency from day one is just doomed to fail.


    1.

     

    ASUS T100 - Windows 8.1 - 31.0 Whr - battery life =  10:40

     

    iPad Air - iOS - 32.4 Whr - battery life = 13:45

     

    Source: Engadget

     

    The Bay Trail tablet running complete Windows 8.1 with a slightly smaller battery manages to be only 3:05 behind the iPad Air running iOS with a slightly larger battery.
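
    (Those figures also let you back out the average system power draw, which is the more direct efficiency comparison. Note these are whole-device numbers, screen and radios included, not just the SoC:)

```python
# Average system power implied by the quoted Engadget runtimes.
# Whole-device figures: display, radios, and OS included, not just the SoC.

devices = {
    "ASUS T100 (Bay Trail)": (31.0, 10 + 40 / 60),  # Whr, runtime in hours
    "iPad Air (A7)":         (32.4, 13 + 45 / 60),
}

for name, (whr, hours) in devices.items():
    print(f"{name}: {whr / hours:.2f} W average draw")
# ASUS T100: ~2.91 W;  iPad Air: ~2.36 W
```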

     

    Bay Trail is 22 nm, not 14 nm.  Cherry Trail will be 14 nm.

     

    2.

     

    The Tegra 4 in the Note 7 uses 4x Cortex-A15 cores; that's not exactly NVIDIA's doing, as they hadn't yet designed their own CPU cores the way they have with the 64-bit Denver.  There is also the performance gap between the Tegra 4 and the Snapdragon S4 Pro inside the Nexus 7, as well as software differences.

     

    Mobile Kepler is a very different design from Tegra 4's.  Again, you seem to have forgotten that NVIDIA is a powerhouse in GPUs, not CPUs.

     

    3.

     

    Why would a company want to spend extra on integrating a modem when they can use Qualcomm's superior modem in a single-chip SoC solution with comparable performance?

     

    Tegra 4i is only launching this year (quite late, mind you), but it is NVIDIA's first attempt at an integrated LTE solution in an SoC.

     

     

    4.

     

    All we've seen from both Intel and NVIDIA are some extremely impressive reductions in power consumption: 

     

    Ivy Bridge --> Haswell

     

    Kepler --> Maxwell

     

    I think you're stuck living in the past (at least your mindset is), a place where neither of these companies is bothering to spend its time dwelling.
