iPhone 6 rivals by Samsung, LG, HTC suffering delays in Qualcomm's 64-bit Snapdragon answer to Apple


Comments

  • Reply 41 of 134
    tmaytmay Posts: 6,470member
    Quote:
    Originally Posted by tooltalk View Post

     

     

@JBDragon : Sure, and Qualcomm, Mediatek, Samsung, etc. will have their next-generation APs by the time the A9 is out.

Now, as for your multicore comment, there is a good reason why almost all processor makers stopped competing for higher clock frequency in favor of multi-core designs in the mid-2000s: power efficiency. Adding more cores reduces power consumption and heat dissipation for the same performance, not the other way around. According to an IEEE article by Philip Ross:

     


    Apple does not "own" all of its hardware stack, prominently lacking its own baseband chip and GPU (for now anyway), but as a licensee of the ARM architecture, it gains the advantage of optimizing to its ecosystem, and more specifically, for iOS and its development tools. It is apparent that these advantages have been extended with Apple's early 64-bit designs, especially as Apple is able to afford the die necessary for its power-efficient designs, something that would be a difficult sell for Qualcomm and Nvidia designs to price-sensitive OEMs.

     

    I haven't any idea how many cores Apple's A9 might have, but I would surmise that it will be optimized for the use case that Apple is creating for the next iPhone or iPad. I don't find that Qualcomm, Samsung or Nvidia are able to derive the same optimizations for the Android ecosystem, and there may also be die constraints that trade efficiency for cost.

     

    I don't buy that Qualcomm is in as much trouble as the story portrays, but it would be true that delivery delays into late spring would collide with rumors of the iPhone 6S and the A9, something the OEMs could ill afford.
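The power-efficiency point in the quoted passage (more cores at lower clocks beating one fast core) follows from the standard dynamic-power relation P ≈ C·V²·f, where a lower clock usually permits a lower supply voltage. A minimal sketch in Python; the capacitance, voltage, and frequency values are made-up round numbers, not measurements of any real chip:

```python
# Illustrative sketch of the multi-core power argument. Dynamic CPU power
# scales roughly as P = C * V^2 * f. All numbers below are hypothetical,
# chosen only to show the shape of the trade-off.

def dynamic_power(capacitance, voltage, frequency):
    """Approximate dynamic power of one core (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency

C = 1.0  # normalized switched capacitance per core

# One core at 2.0 GHz needs a higher voltage than two cores at 1.0 GHz each,
# even though both configurations deliver the same 2.0 GHz of total cycles.
single_core = dynamic_power(C, voltage=1.2, frequency=2.0)
dual_core = 2 * dynamic_power(C, voltage=1.0, frequency=1.0)

# The dual-core configuration draws less power for the same throughput.
assert dual_core < single_core
```

Real chips complicate this with leakage current and imperfect parallelism, but this is the core of the efficiency argument.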

  • Reply 42 of 134
    relicrelic Posts: 4,735member
    I've said it before and I'll say it again.

    Apple is so far ahead of everyone it's actually embarrassing.

    Even the Denver K1, which Nvidia hyped to no end, is still behind the Apple A7 from one year ago. PATHETIC.

    It's behind the A8X, but it beats both the A8 and A7, so why is that pathetic? Out of all of the ARM CPUs, it's the second fastest. Why does it have to beat the A8X to be considered a decent chip? You're also forgetting that this is Nvidia's first iteration; the quad-core version will be released in Q1 and will be just as fast as the A8X, if not faster, and the single-core benchmarks for the Denver show this. The only reason the A8X is faster is that it has an extra core. Why does it matter anyway? These chips are fast enough to run anything you throw at them; calling them pathetic is just silly fanboy rhetoric, my chip is better than yours. I'm constantly amazed at what I can do with my Nvidia K1 devices and iPad Air 2. I also hope that the largest, most powerful computer company in the world has the best mobile CPU.
  • Reply 43 of 134
    solipsismysolipsismy Posts: 5,099member
    relic wrote: »

    It's behind the A8X, but it beats both the A8 and A7, so why is that pathetic...

    The only reason it "beats" the A8 and A7 is that it's clocked much higher and uses a lot more power.
  • Reply 44 of 134



    This is all Apple holiday propaganda. I'm sure Samsung will be just fine. (cough, cough)

  • Reply 45 of 134
    relicrelic Posts: 4,735member
    This is something I've been dreaming about for a while.

    I would love that too, but I highly doubt we'll see anything like that for a while. Thankfully there are some really nice convertibles out there to fill that need.
  • Reply 46 of 134
    relicrelic Posts: 4,735member
    solipsismy wrote: »
    The only reason it "beats" the A8 and A7 is that it's clocked much higher and uses a lot more power.
    Regardless, it's still faster. I'm getting 7.5 to 8 hours of battery with my Nexus 9, so it's a moot point. It's not the 8.5 to 9 that I'm getting with my iPad Air 2, but it isn't bad either.
  • Reply 47 of 134
    solipsismysolipsismy Posts: 5,099member
    relic wrote: »
    Regardless, it's still faster. I'm getting 7.5 to 8 hours of battery with my Nexus 9, so it's a moot point. It's not the 8.5 to 9 that I'm getting with my iPad Air 2, but it isn't bad either.

    There is no "regardless," and you can't use a device's battery life to judge how advanced an SoC is or isn't.
  • Reply 48 of 134
    But wait... I thought 64-bit chips were just a gimmick.
  • Reply 49 of 134
    Quote:

    Originally Posted by tmay View Post

     

    Apple does not "own" all of its hardware stack, prominently lacking its own baseband chip and GPU (for now anyway), but as a licensee of the ARM architecture, it gains the advantage of optimizing to its ecosystem, and more specifically, for iOS and its development tools. It is apparent that these advantages have been extended with Apple's early 64-bit designs, especially as Apple is able to afford the die necessary for its power-efficient designs, something that would be a difficult sell for Qualcomm and Nvidia designs to price-sensitive OEMs.

     

    I haven't any idea how many cores Apple's A9 might have, but I would surmise that it will be optimized for the use case that Apple is creating for the next iPhone or iPad. I don't find that Qualcomm, Samsung or Nvidia are able to derive the same optimizations for the Android ecosystem, and there may also be die constraints that trade efficiency for cost.

     

    I don't buy that Qualcomm is in as much trouble as the story portrays, but it would be true that delivery delays into late spring would collide with rumors of the iPhone 6S and the A9, something the OEMs could ill afford.


     

     

    Apple's supposedly working on their own baseband chips, as they've bought some talent recently. And given that they modified IMGTech's design for the GXA6850 in the A8X, I think a custom GPU may not be far behind.

  • Reply 50 of 134
    Quote:

    Originally Posted by Relic View Post





    It's behind the A8X, but it beats both the A8 and A7, so why is that pathetic? Out of all of the ARM CPUs, it's the second fastest. Why does it have to beat the A8X to be considered a decent chip? You're also forgetting that this is Nvidia's first iteration; the quad-core version will be released in Q1 and will be just as fast as the A8X, if not faster, and the single-core benchmarks for the Denver show this. The only reason the A8X is faster is that it has an extra core. Why does it matter anyway? These chips are fast enough to run anything you throw at them; calling them pathetic is just silly fanboy rhetoric, my chip is better than yours. I'm constantly amazed at what I can do with my Nvidia K1 devices and iPad Air 2. I also hope that the largest, most powerful computer company in the world has the best mobile CPU.

     

    How dense can you be? I've CLEARLY explained this to you several times and yet you keep coming back with the same response. It's getting to the point of trolling when you REFUSE to look at the FACTS.

     

    When you adjust the processor for a baseline clock speed of 1.0GHz, the Denver K1 is even slower than the A7. Now some people claim you can't compare clock speeds, but you can since we're talking about the same architecture (ARMv8 vs ARMv8). Where you CAN'T compare clock speeds is looking at something like x86 vs ARM.

     

    But the most telling aspect of this is that it's Nvidia themselves who brought up the clock-speed argument when they hyped the K1 prior to launch. THEY'RE the ones bragging about how, clock for clock, the K1 is more efficient and faster than other ARM designs. I'm only using Nvidia's own argument against them.

     

     

    In the end the K1 gets its performance from a MUCH higher clock speed than the A7/8/8X, not through their LIES about the architecture being faster clock for clock.
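The clock-for-clock comparison described in this post amounts to dividing a benchmark score by the clock speed at which it was achieved. A minimal sketch; the chip names, scores, and clocks below are hypothetical placeholders, not actual benchmark results:

```python
# Normalize benchmark scores to a 1.0 GHz baseline, as in the
# clock-for-clock comparison above. Scores and clocks are hypothetical
# placeholders, not real measurements.

def score_per_ghz(score, clock_ghz):
    """Benchmark score normalized to a 1.0 GHz baseline clock."""
    return score / clock_ghz

# (single-core score, clock in GHz) for two hypothetical ARMv8 chips
chips = {
    "chip_a": (1400, 1.4),  # lower raw score, modest clock
    "chip_b": (1800, 2.3),  # higher raw score, much higher clock
}

for name, (score, clock) in chips.items():
    print(f"{name}: {score_per_ghz(score, clock):.1f} points/GHz")

# chip_b wins on raw score but loses per-GHz: its lead comes from clock
# speed, not from doing more work per cycle.
```

Normalizing this way only makes sense when both chips implement the same ISA, which is the caveat the post itself raises about x86 vs ARM.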

  • Reply 51 of 134
    solipsismysolipsismy Posts: 5,099member
    Apple's supposedly working on their own baseband chips, as they've bought some talent recently. And given that they modified IMGTech's design for the GXA6850 in the A8X, I think a custom GPU may not be far behind.

    Don't those Nortel patents Apple bought cover a lot of LTE-based solutions?
  • Reply 52 of 134
    tmaytmay Posts: 6,470member
    Quote:

    Originally Posted by TheWhiteFalcon View Post

     

     

     

    Apple's supposedly working on their own baseband chips, as they've bought some talent recently. And given that they modified IMGTech's design for the GXA6850 in the A8X, I think a custom GPU may not be far behind.


    I agree on the GPU; Apple has hired a number of ex-AMD guys, but I haven't a clue whether Apple has been able to accumulate enough IP, or licenses to IP, to compete with Qualcomm on the baseband chip.

  • Reply 53 of 134
    relicrelic Posts: 4,735member
    How dense can you be? I've CLEARLY explained this to you several times and yet you keep coming back with the same response. It's getting to the point of trolling when you REFUSE to look at the FACTS.

    When you adjust the processor for a baseline clock speed of 1.0GHz, the Denver K1 is even slower than the A7. Now some people claim you can't compare clock speeds, but you can since we're talking about the same architecture (ARMv8 vs ARMv8). Where you CAN'T compare clock speeds is looking at something like x86 vs ARM.

    But the most telling aspect of this is that it's Nvidia themselves who brought up the clock-speed argument when they hyped the K1 prior to launch. THEY'RE the ones bragging about how, clock for clock, the K1 is more efficient and faster than other ARM designs. I'm only using Nvidia's own argument against them.


    In the end the K1 gets its performance from a MUCH higher clock speed than the A7/8/8X, not through their LIES about the architecture being faster clock for clock.

    See, I honestly don't care; I look at the end results. Is the power there, is the battery life there? How they do it is of little concern to me. The K1 is the fastest ARM chip that I have ever used for my development projects, period. It absolutely leaves everything else in its dust, and I have had many. I can't compare the A8X because it's stuck in an iPad; that's fine, but you saying that the chip is crap just isn't correct. Show me something faster that I can use and I will, but there just isn't anything. I haven't underclocked the Denver yet, but I did underclock the 32-bit version to 1.5GHz and lost less than 8 percent from its top benchmark numbers, so the chip is running at its optimal clock speed and thermal limits. I'm assuming the Denver will be about the same; I can't test that yet because there aren't any kernels available yet that can underclock, but I will when they become available. Look, until you do what I'm doing with the K1 and see the real, tangible results that I'm getting, you're just being a silly fanboy, blowing steam. I believe what's in front of me; there is no amount of rhetoric that could change that belief. "See, it's crap because of this, this and this." But then why is it so fast, and why does it do what I need it to do better than anything I've had before? "Just take my word for it, it's crap, see I have these specs and..." That's nice, thanks.
  • Reply 54 of 134
    wizard69wizard69 Posts: 13,377member
    The underlying advantage is that, with fewer and fewer 32-bit apps installed on a person's 64-bit iPhone or iPad, there will be fewer contexts in which that person is running at least one 32-bit app, and therefore fewer times when the iPhone or iPad has to load the 32-bit libraries, resulting in overall better performance of the device.

    I have to wonder if Apple has long-term goals here. By pushing developers hard to move to 64-bit, they can then rely upon a large base of software that is 64-bit. At that point they can start to design 64-bit-only processors, further reducing complexity and thus transistor count. That leaves Apple free to reduce power further. This would give Apple very lean processor cores.
  • Reply 55 of 134
    wizard69wizard69 Posts: 13,377member

    Apple's supposedly working on their own baseband chips, as they've bought some talent recently. And given that they modified IMGTech's design for the GXA6850 in the A8X, I think a custom GPU may not be far behind.

    It is my understanding that Apple has hired more than a few AMD GPU designers, so you might get your wish. I suspect, though, that the GPU would be Imagination Tech compatible.
  • Reply 56 of 134

    This is what happens when you have to run to catch up.

  • Reply 57 of 134
    jfc1138jfc1138 Posts: 3,090member

    Hadn't they gone on record that 64-bit was simply an Apple gimmick? So this delay shouldn't be relevant to their products' performance or success. Nice for them. :)

  • Reply 58 of 134
    Quote:

    Originally Posted by Relic View Post





    See, I honestly don't care; I look at the end results. Is the power there, is the battery life there? How they do it is of little concern to me. The K1 is the fastest ARM chip that I have ever used for my development projects, period. It absolutely leaves everything else in its dust, and I have had many. I can't compare the A8X because it's stuck in an iPad; that's fine, but you saying that the chip is crap just isn't correct. Show me something faster that I can use and I will, but there just isn't anything. I haven't underclocked the Denver yet, but I did underclock the 32-bit version to 1.5GHz and lost less than 8 percent from its top benchmark numbers, so the chip is running at its optimal clock speed and thermal limits. I'm assuming the Denver will be about the same; I can't test that yet because there aren't any kernels available yet that can underclock, but I will when they become available. Look, until you do what I'm doing with the K1 and see the real, tangible results that I'm getting, you're just being a silly fanboy, blowing steam. I believe what's in front of me; there is no amount of rhetoric that could change that belief. "See, it's crap because of this, this and this." But then why is it so fast, and why does it do what I need it to do better than anything I've had before? "Just take my word for it, it's crap, see I have these specs and..." That's nice, thanks.

     

    For someone who "honestly doesn't care" you sure do write a lot on the subject. Over and over and over.

     

    Face it, Nvidia LIED when they promoted the Denver K1. It's only marginally faster than older 32-bit ARM architectures clock for clock, and is behind ALL of Apple's 64-bit processors clock for clock.

  • Reply 59 of 134
    Quote:

    Originally Posted by wizard69 View Post





    I have to wonder if Apple has long term goals here. By pushing developers hard to move to 64 bit they can then rely upon a large base of software that is 64 bit. At that point they can start to design 64 bit only processors further reducing complexity and thus transitor count. That leaves Apple free to reduce power further. This would give Apple very lean processor cores.

     

    Bingo. Developers of 64-bit Mac software are finding it easier to port that code over to iOS.

     

    Android completely lacks this. There's no "parent" OS with a long history of development and a huge software base to draw from, like you get with Windows or Mac. And before someone pipes up and says Android's parent is Linux, sorry, no. Android is a stripped-down version of Linux that is completely incapable of running actual Linux software.

  • Reply 60 of 134
    relicrelic Posts: 4,735member
    For someone who "honestly doesn't care" you sure do write a lot on the subject. Over and over and over.

    Face it, Nvidia LIED when they promoted the Denver K1. It's only marginally faster than older 32-bit ARM architectures clock for clock, and is behind ALL of Apple's 64-bit processors clock for clock.
    That's nice, thanks.