Rumor: TSMC now building quad-core 'A8' chips for Apple's next-gen iPhone


Comments

  • Reply 41 of 50
    poksipoksi Posts: 482member
    Quote:

    Originally Posted by EricTheHalfBee View Post

     

    I can see Apple finally making the move to quad core with a smaller process (like 20-22nm). The A7 cores are so far ahead of everyone else using ARM that I just don't see how Apple is going to be able to make its usual "double the performance of last year" claim with the A8 simply by sticking to two cores at the same clock speed. But a quad-core A8 (using A7 cores) would be a beast.

    I think Intel should be worried. Apple has been making huge improvements to their processors at a rate that surpasses Intel. When's the last time a new Intel processor doubled the performance of the previous version?


     

    This rumour might be completely false, of course. But if Apple does hypothetically put a quad-core chip into production, then we will most probably get 4x the graphics performance on larger-screen iPhones. That might be overkill for an iPhone, but if you look at iPads and mirroring to 4K screens, it does make sense.
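    The "4x graphics" estimate for 4K mirroring mentioned above is easy to sanity-check with simple pixel arithmetic (a rough sketch; it ignores scaling overhead and rendering resolution differences):

```python
# 4K UHD pushes exactly 4x the pixels of a 1080p panel,
# which is where a "4x graphics" requirement would come from.
uhd = 3840 * 2160   # 4K UHD pixel count
fhd = 1920 * 1080   # 1080p pixel count
ratio = uhd / fhd
print(ratio)        # 4.0
```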

  • Reply 42 of 50
    ws11ws11 Posts: 159member
    Quote:
    Originally Posted by EricTheHalfBee View Post

     

    I think Intel should be worried. Apple has been making huge improvements to their processors at a rate that surpasses Intel. When's the last time a new Intel processor doubled the performance of the previous version?


    The jump from Clover Trail to Bay Trail was between 3x~6x (depending on task).  That was in 2013.  

     

    Intel is moving faster than anyone right now, try to keep up!

  • Reply 43 of 50
    8thman8thman Posts: 31member
    Where are the CURRENT 64-bit apps??
    I want some bragging rights.
  • Reply 44 of 50
    bigmac2bigmac2 Posts: 639member
    Quote:

    Originally Posted by Just_Me View Post

     

    That graph shows the Tegra 4 (Nvidia Tegra Note 7) performs almost the same as a Qualcomm ARM SoC (Nexus 7 2013), which has almost the same battery capacity (15.01Whr vs 15.17Whr). Not a very "disastrous" graph.


     

    In case you've missed it, here is a quote from the Anandtech review: 

    Quote:


     "Unfortunately compared to the Nexus 7 (2013) the Tegra Note lasts quite a bit less on battery."


  • Reply 45 of 50
    mdriftmeyermdriftmeyer Posts: 7,503member
    Quote:

    Originally Posted by WS11 View Post

     

    The jump from Clover Trail to Bay Trail was between 3x~6x (depending on task).  That was in 2013.  

     

    Intel is moving faster than anyone right now, try to keep up!


     

    When you're dead last in the embedded space, jumps seem remarkable.

  • Reply 46 of 50
    mdriftmeyermdriftmeyer Posts: 7,503member
    Quote:
    Originally Posted by Jexus View Post



    Yes, because Apple would totally trust the foundry that has botched multiple node transitions, including the current 22/20nm, for both Nvidia and Qualcomm. You keep on fantasizing there, Taiwan Commercial Times.

     

    If you think Samsung exclusively stamps out A-series SoC's you're delusional.

     

    TSMC, Samsung, and GloFo are all partners with ARM, who together designed this 28nm, and now 20nm/16nm FinFET, process.

     

    SemiAccurate has a better presentation on what's going on.

     

    http://semiaccurate.com/2014/03/03/6-core-16nm-finfet-arm-cortex-a57-chips-spotted-wild/

  • Reply 47 of 50
    bigmac2bigmac2 Posts: 639member
    Quote:
    Originally Posted by WS11 View Post

     

    ASUS T100 - Windows 8.1 - 31.0 Whr - battery life =  10:40

     

    iPad Air - iOS - 32.4 Whr - battery life = 13:45

     

    Source: Engadget

     

    The Bay Trail tablet running complete Windows 8.1 with a slightly smaller battery manages to be only 3:05 behind the iPad Air running iOS with a slightly larger battery.

     

    Bay Trail is 22 nm, not 14 nm.  Cherry Trail will be 14 nm.


     

    Yes, I agree Intel SoCs will get better over time, but Apple and the other ARM partners won't sit on their hands waiting for Intel to pass them. My point is that a 28nm ARM SoC is more efficient than any current Intel SoC, and going to 14nm won't change the game. The next generation of ARM SoCs will be 22 or 14nm and will retain their efficiency edge over the overweight Pentium legacy architecture. BTW, have you ever heard of PWRficient and P.A. Semi?
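    The Engadget figures quoted above can be turned into a rough efficiency comparison: capacity divided by runtime gives the average power draw of each tablet (a back-of-the-envelope sketch; it assumes the rundown tests are comparable and continuous):

```python
def avg_watts(capacity_whr, hours, minutes):
    """Average power draw = battery capacity / runtime."""
    return capacity_whr / (hours + minutes / 60)

t100 = avg_watts(31.0, 10, 40)   # ASUS T100 (Bay Trail):  ~2.91 W
air  = avg_watts(32.4, 13, 45)   # iPad Air (A7):          ~2.36 W
print(round(t100, 2), round(air, 2))
```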

     

    Quote:

    The Tegra 4 in the Note 7 uses 4x Cortex A15 cores, so it's not exactly NVIDIA's doing, as they had not yet designed their own CPU cores the way they are with the 64-bit Denver.  There is also the performance gap between the Tegra 4 and the Snapdragon S4 Pro inside the Nexus 7, as well as software differences.

     

    Mobile Kepler is a very different design than Tegra 4.  Again you seem to have forgotten NVIDIA is a powerhouse in GPUs, not CPUs.



     

    I'm not forgetting anything. Like you said, Nvidia makes energy-hungry GPUs, and many Tegra tests show a big impact on battery life when running graphics benchmarks.  On video playback benchmarks Nvidia uses a trick called PRISM to get decent battery life. I've heard plenty of overhyped reviews of pre-release Tegra SoCs in the past, and none has made it into a good end-user product. In mobile SoCs, TDP is everything, and Nvidia doesn't want us to know about it. 

     

    Quote:

    Why would a company want to spend extra on integrating a modem when they can use Qualcomm's superior modem on a one solution SoC with comparable performance?

     

    Tegra 4i is only launching this year (quite late mind you), but it is NVIDIA's first attempt at an integrated LTE solution in an SoC.



     

    What prevents a Tegra device from using a Qualcomm modem, just like Apple does with the iPhone? The Tegra 4i integrates a new soft modem (remember the WinModem crap?).  Most current phones have a separate modem chip; the real issue with the Tegra is battery life.  

     

    Quote:

    All we've seen from both Intel and NVIDIA are some extremely impressive reductions in power consumption: 

     

    Ivy Bridge --> Haswell

     

    Kepler --> Maxwell

     

    I think you're stuck living in the past (at least your mindset is), a place that neither of these companies are bothering to spend their time dwelling. 

     



     

    I'm living in the present, thank you very much. And in this reality, ARM SoCs were designed to beat classic desktop architectures.  ARM is currently more power efficient than Intel x86.  Of course Intel is working hard to make things better, but you are blind if you think ARM SoCs won't evolve at the same pace. 

     

    BTW, I'm curious how you rank Nvidia against Intel.  Which one do you think gives more performance per watt: a Nvidia Tegra or an Intel mobile SoC?

  • Reply 48 of 50
    jexusjexus Posts: 373member
    Quote:
    Originally Posted by mdriftmeyer View Post

     

    If you think Samsung exclusively stamps out A-series SoC's you're delusional.

     

    TSMC, Samsung, and GloFo are all partners with ARM, who together designed this 28nm, and now 20nm/16nm FinFET, process.

     

    SemiAccurate has a better presentation on what's going on.

     

    http://semiaccurate.com/2014/03/03/6-core-16nm-finfet-arm-cortex-a57-chips-spotted-wild/

     

    I find it amusing how pseudo-defensive this sounds. You can scold me all you want; the article says the same thing, so I suppose the AI writers are potentially delusional too.

    "To date, Samsung has produced all of the mobile CPUs for Apple's iPhone and iPad, including the 64-bit A7 processor found in the iPhone 5s, iPad Air and iPad mini with Retina display."

    "Also suspect is the claim that TSMC will occupy most of the chip production capacity for Apple's iOS devices. While Samsung has proven capable of providing adequate silicon to Apple, there have been some concerns that TSMC may not be able to keep up with consumer demand for the iPhone and iPad."

    It doesn't take an industry analyst to realize that moving the bulk of orders from Samsung (as much as Apple needs to get away from them) to TSMC in one fell swoop is just asking for trouble. They lied about 40nm, their 32nm was basically nonexistent, their 28nm was delayed, and now both Nvidia and Qualcomm find themselves having to reorganize their product launches because TSMC has once again botched a node transition.

    GloFo has had its share of problems, but GloFo's primary customer is AMD, which requires fabrication for hardware of much greater complexity (CISC). Nvidia's GPUs are complex too, but TSMC's ARM intake is far higher than its CISC intake.

    You can claim a common partner and "but 20/16nm FinFETs" all you want; these things are facts.
    ---------------------------------------
    And I see you took the liberty of posting a Charlie article.

    Again, Charlie can say all he wants, and people can cite his industry connections. It doesn't change the fact that half of the news he reports on and half of his predictions turn out to be laughably false: "Intel Larrabee dead, AMD to soon overtake Intel!" "AMD Kabini out-muscles Intel Core, the year of the AMD chipset is upon us!"
  • Reply 49 of 50
    thttht Posts: 5,616member
    Quote:

    Originally Posted by mjtomlin View Post

     

    I agree that quad-core on a smartphone is overkill and as I posted above I think the next iPhone will stick with a dual-core design and focus on efficiency to increase battery life. Apple's Cyclone core is extremely efficient and fairly powerful. With the next generation we may see customized changes to the ISA to further refine and optimize it.

     

    However, I do believe they will go quad-core for the iPad. Why? The iPad has great battery life already. If they can push the performance while keeping the same battery life, they'll still be ahead of the competition.


     

    I think of it from the outside in. If they are going to put a quad-core CPU into an iOS device, it's because they are going to ship FCPX, Logic X, Safari 7, etc., on it, not because there is TDP headroom or a larger battery capacity. Adding another 2 cores (for a total of 4) only pushes performance for parallel or high-throughput applications. These sorts of applications typically run for a long time (like transcoding a video). Safari 7 (or Chrome) type apps, where each webpage tab is a separate process, may make use of the cores, but I'm not sure; I suspect they are much more memory-intensive than CPU-intensive.

     

    Non-parallel apps basically gain nothing going from two cores to four, and they are something like 90%, maybe 95%, of the apps available. Apps can be multi-threaded, but that doesn't make them truly parallel to me; most multi-threaded apps are dominated by a single thread, with child threads waiting on input.
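    The point above about serial versus parallel workloads is the classic Amdahl's law argument; a minimal sketch (the 10% and 95% parallel fractions are illustrative assumptions, not measurements of real apps):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup for a workload with the given parallelizable fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A mostly serial app (10% parallel) barely benefits from 2 extra cores...
print(round(amdahl_speedup(0.10, 4) / amdahl_speedup(0.10, 2), 3))  # ~1.027
# ...while a heavily parallel one (95%, e.g. video transcoding) nearly doubles.
print(round(amdahl_speedup(0.95, 4) / amdahl_speedup(0.95, 2), 3))  # ~1.826
```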

  • Reply 50 of 50
    haarhaar Posts: 563member
    My problem with this rumour is that it is supposed to be a quad-core processor... it's more important for an iPhone to last longer than it is to have more cores in the phone.
    My guess for the A8 is that it will use less power and have a stronger GPU...