One year after Apple's A7, Nvidia announces first 64-bit ARM CPU for Android


Comments

  • Reply 81 of 114
    jungmark Posts: 6,927 member
    Whaaaat? Get out of here. Relic's cool.

    I know but I like teasing her.
  • Reply 82 of 114
    wizard69 Posts: 13,377 member
    tht wrote: »
    Yes. I know of his reputation as well as his history. He's a drama queen and has a sensationalist style wherever he works, all the way back to the Inquirer.
    That style does attract a certain type of person.
    But he's been in the tech hardware gossip rag business for over a decade, especially for CPUs, and knows the fundamentals of EE and processors. He does have moles in East Asia in regards to foundries and stuff like that.
    I've yet to see anything insightful, or any non-public knowledge, on his open web site. I'm not sure what he offers beyond the paywall. He does aggregate information well.
    His negativity towards Tegra has been right for 4 or 5 years now. Maybe he was lucky, but for that long? Nvidia has missed their promised Tegra roadmaps and perf/watt targets for basically 5 years now. They haven't had any significant OEM wins for 3 years. That's three hype-and-ship cycles there. They can't even get the 32-bit Tegra K1 shipping in a flagship from a major OEM. It's been nothing but Snapdragons for like 2 or 3 years running. Frankly, I'm getting bored with seeing nothing but Qualcomm SoCs in Android flagships. Even the non-flagships don't use Tegra SoCs.
    A lot of other people have been negative on NVidia's ARM initiatives too. The problem NVidia, and Intel for that matter, have is that they make the products they want to make, not the products their customers need. This is where Qualcomm has done wonders: they make highly tailored SoCs that are ideal for their customers.
    Now, you think Nvidia is going to ship a custom CPU arch, a VLIW one at that, that'll be competitive with ARMH cores or Qualcomm cores?
    The architecture is still ARM, at least in the sense of the native instruction set. How much VLIW processing is going on has yet to be seen, given that all CPUs these days crack instructions and translate them into lower-level formats. Well, the so-called application processors do, anyway.
    I don't, and I wouldn't be surprised if it is dead. I'm skeptical until they ship. Maybe they'll ship in embedded devices or servers, but the people on this board couldn't care less about that space.

    I see no reason to go after this chip when so many more promising designs exist.
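
    For anyone unfamiliar with the VLIW idea being thrown around here: the gist is that the compiler, not the hardware, finds the parallelism and packs independent operations into one wide instruction word. A toy sketch in Python (purely illustrative, and nothing to do with NVidia's actual design):

        def independent(a, b):
            # Op b can join a's bundle if it neither reads nor overwrites a's destination.
            return a["dst"] not in b["src"] and a["dst"] != b["dst"]

        def bundle(ops, width=7):
            # Greedily pack up to `width` mutually independent ops per bundle;
            # the hardware then executes each bundle's slots in the same cycle.
            bundles, current = [], []
            for op in ops:
                if len(current) < width and all(independent(p, op) for p in current):
                    current.append(op)
                else:
                    bundles.append(current)
                    current = [op]
            if current:
                bundles.append(current)
            return bundles

        program = [
            {"op": "add", "dst": "r1", "src": ["r2", "r3"]},
            {"op": "mul", "dst": "r4", "src": ["r5", "r6"]},  # independent of the add
            {"op": "neg", "dst": "r7", "src": ["r1"]},        # needs the add's result
        ]
        print([[o["op"] for o in b] for b in bundle(program)])  # [['add', 'mul'], ['neg']]

    The hardware stays simple and in-order; all the scheduling smarts live in software, which is why a weak compiler, or code the optimizer has never seen, kills VLIW performance.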
  • Reply 83 of 114
    wizard69 Posts: 13,377 member
    tundraboy wrote: »
    I have never ever heard anyone say or write "I really prefer Android, but the only smartphone I am able to, or am allowed to, purchase, and this is completely against my will, is an iPhone." That is what an iPhone 'captive market' would be.

    You really didn't follow what I was saying. Apple's market for the A7 is captive. As such it really doesn't make any sense to see this NVidia chip in any Apple context. Apple will never buy it! Apple will never sell A7s, A8s or anything else on the component market.

    In other words, if you want to discuss NVidia's new chip you really have to compare it to Intel and Qualcomm offerings.
  • Reply 84 of 114
    wizard69 Posts: 13,377 member
    blastdoor wrote: »
     My impression is that Denver is an evolution of VLIW -- that is, using the compiler to extract parallelism from the code and bundling those instructions together to be executed simultaneously. 

    Yes, I see a lot of discussion about that in this thread. To be honest I only read the AI article, and from it I did not get the impression that this was a VLIW design. It is common practice in processor design these days to take an industry-standard architecture and break its instructions down in various ways to facilitate processing what is encoded in the instruction. For example, in some places Intel breaks x86 instructions down into RISC-like micro-ops, or sometimes translates an instruction into a series of microcode instructions.
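
    To illustrate what I mean by breaking instructions down, here is a made-up decoder fragment (my own invention, not Intel's or NVidia's real scheme, just the shape of the idea):

        def crack(instr):
            # Split one architectural instruction into the simpler micro-ops
            # the execution core actually runs; simple ops pass through 1:1.
            if instr["op"] == "load_pair":        # e.g. ARM64 LDP x0, x1, [sp]
                return [
                    {"op": "load", "dst": instr["dst1"], "base": instr["base"], "offset": 0},
                    {"op": "load", "dst": instr["dst2"], "base": instr["base"], "offset": 8},
                ]
            if instr["op"] == "add_shifted":      # e.g. ADD x0, x1, x2, LSL #2
                return [
                    {"op": "shl", "dst": "tmp0", "src": instr["src2"], "amt": instr["shift"]},
                    {"op": "add", "dst": instr["dst"], "src": ["tmp0", instr["src1"]]},
                ]
            return [instr]

        print(crack({"op": "load_pair", "dst1": "x0", "dst2": "x1", "base": "sp"}))

    Once you accept that the ISA the programmer sees and the operations the pipeline runs are different things, "it executes ARM code" tells you very little about what the core looks like inside.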
  • Reply 85 of 114
    wizard69 Posts: 13,377 member
    cali wrote: »
    (......guys....there's a girl in here.......)

    Girls are young, shy individuals that haven't ventured far from mama. What we have here is a woman with a lot of spunk who isn't afraid to stand up for herself. This is something we need more of in this world.
  • Reply 86 of 114
    relic Posts: 4,735 member
    Quote:
    Originally Posted by jungmark View Post

    You are family. Sort of like that creepy aunt with lots of cats. Haha.

    Hahaha, can it be small yelping chihuahuas instead? I hate cats. Then I can invite people over for tea and have the dogs dressed in doll outfits while also sitting at the table. Then behind the tea table, on the mantel above the fireplace, will be my collection of taxidermied Chihuahuas in various funny poses to show off my wacky side to the kidnapped, uh..... visitors I'm currently entertaining. You know, my beloved doggies that died over the years, surprisingly quite a few by their own paws. Which of course will make for great tea conversation. Bootsy number 12, there, yeah, the brown one with the white spots, she managed to climb into the oven while I was making a chicken pot pie; poor thing was burnt to a crisp, but the dog was delicious.

  • Reply 87 of 114
    wizard69 Posts: 13,377 member
    Ok guys, I went to the horse's mouth here to see what this chip is all about. Frankly, NVidia just confirmed to me that they suck when it comes to documentation. In any event the blog is here: http://blogs.nvidia.com/blog/2014/08/11/tegra-k1-denver-64-bit-for-android/. From there you can find a link to "some" technical information.

    In any event, at no time did they mention VLIW processing. That could be to disassociate themselves from what many see as a failed technology. They mention translation to microcode, which of course could be VLIW-like. However it looks like much of the reordering and other optimization is targeting the ARM architecture. So maybe we have a mixed platform here.

    In any event the story just gets plain ugly. For one, the 128MB of microcode cache lives in main memory. Further, the 7 instructions per clock apparently only happen after translation to microcode. So I fully expect a whole bunch of phoney benchmarks that don't reflect real-world usage. Further, because the optimization and profiling run outside of the ARM ISA, this has to have a significant impact on power usage.

    There certainly is a strong flavor of Transmeta here, I'm just not sure exactly what they have produced. It could be a Transmeta-like VLIW core or something more optimized. The constant reference to microcode has me thinking that what Transmeta hatched has been morphed into something a bit different. There seems to be a much stronger marriage between the ARM architecture and the underlying microcode processing.
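
    To make the Transmeta comparison concrete, here is roughly the execute loop I picture from their description. A minimal sketch only: the names, the hot threshold and the promotion heuristic are my inventions, since NVidia documents none of this.

        HOT_THRESHOLD = 10       # invented number; the real heuristic isn't public
        translation_cache = {}   # stands in for the 128MB carved out of main memory
        exec_counts = {}

        def run_block(addr, interpret, translate):
            # Fast path: this ARM block was already translated to native microcode.
            if addr in translation_cache:
                return translation_cache[addr]()
            # Slow path: execute the ARM code directly, counting executions.
            exec_counts[addr] = exec_counts.get(addr, 0) + 1
            if exec_counts[addr] >= HOT_THRESHOLD:
                # Hot block: pay the one-time translation cost, then reuse it forever.
                translation_cache[addr] = translate(addr)
                return translation_cache[addr]()
            return interpret(addr)

        # Toy demo: after ten runs the block turns hot and gets a cached fast path.
        for _ in range(12):
            run_block(0x1000, lambda a: "interpreted", lambda a: (lambda: "native"))
        print(0x1000 in translation_cache)   # True

    If something like this is right, the 7 instructions per clock only apply on the fast path, which is exactly why benchmarks (tight, hot loops that translate once and replay forever) would look far better than bursty real-world code.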
  • Reply 88 of 114
    relic Posts: 4,735 member
    Quote:
    Originally Posted by wizard69 View Post

    Ok guys, I went to the horse's mouth here to see what this chip is all about. Frankly, NVidia just confirmed to me that they suck when it comes to documentation. In any event the blog is here: http://blogs.nvidia.com/blog/2014/08/11/tegra-k1-denver-64-bit-for-android/. From there you can find a link to "some" technical information.

    In any event, at no time did they mention VLIW processing. That could be to disassociate themselves from what many see as a failed technology. They mention translation to microcode, which of course could be VLIW-like. However it looks like much of the reordering and other optimization is targeting the ARM architecture. So maybe we have a mixed platform here.

    In any event the story just gets plain ugly. For one, the 128MB of microcode cache lives in main memory. Further, the 7 instructions per clock apparently only happen after translation to microcode. So I fully expect a whole bunch of phoney benchmarks that don't reflect real-world usage. Further, because the optimization and profiling run outside of the ARM ISA, this has to have a significant impact on power usage.

    There certainly is a strong flavor of Transmeta here, I'm just not sure exactly what they have produced. It could be a Transmeta-like VLIW core or something more optimized. The constant reference to microcode has me thinking that what Transmeta hatched has been morphed into something a bit different. There seems to be a much stronger marriage between the ARM architecture and the underlying microcode processing.

    Uh oh, this isn't sounding very good, is it? I friggen hated Transmeta CPUs, they were ssssllloooowwww. I know, I had a Sony PictureBook. Will it at least be faster than the 32-bit version? If not, what's the point?

  • Reply 89 of 114
    relic Posts: 4,735 member

    What the heck does all this mean:

     

    "One of Denver’s architectural quirks involves dynamic code optimization, a computing technique that hearkens back to the days of Transmeta, a high-profile startup that also used code interpretation and optimization in the early 2000s. But Transmeta failed in part because it simply couldn’t deliver the performance that customers expected, which Boggs ascribed to the “cliff” between natively executed code and what was interpreted. (Intel and AMD, among others, also quickly ramped up the performance of their own mobile chips to compensate.)     

    Denver solves that problem, Boggs said, by having a much more powerful hardware platform to run non-native code, however inefficiently. It also waits until a core sits idle to begin the code translation process, and it includes new low-power states beyond what the 32-bit K1 itself offers.”The issues you saw with Transmeta devices you won’t see with Tegra,” he said.

     

    The Denver chip includes a 128KB, 4-way level 1 instruction cache, a 64KB, 4-way level 2 data cache, and a 2MB, 16-way level 2 cache, all of which can service both cores. Denver also sets aside 128MB of main memory as an interpretation cache, that the main operating system won’t be able to see or access."
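
    As far as I can tell, the "waits until a core sits idle" part just means the expensive optimization work gets queued and drained when nothing else wants the core. Something like this, as a toy sketch (the actual scheduling policy isn't published):

        from collections import deque

        pending = deque()   # blocks noticed as hot but not yet optimized

        def request_translation(addr):
            pending.append(addr)   # cheap: just note it and keep executing

        def on_core_idle(translate):
            # Burn otherwise-wasted idle cycles on the expensive work,
            # keeping translation overhead off the critical path.
            while pending:
                translate(pending.popleft())

        request_translation(0x2000)
        on_core_idle(lambda addr: print("optimizing block", hex(addr)))

    That would be one answer to the Transmeta "cliff": you only pay the translation bill when the machine would otherwise be doing nothing.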

     

    Well, they're claiming it will be twice as fast as the 32-bit model, which is already a screamer in its own right. Twice as fast is unlikely, but if it reaches 1.5 times the speed, then they will definitely have a hit on their hands. We'll just have to wait until someone sticks it into something.

     

    Update: Aaaahhh, okay, now it's starting to make sense as to why they went in this direction.

     

    "As part of the Dynamic Code Optimization process, Denver looks across a window of hundreds of instructions and unrolls loops, renames registers, removes unused instructions, and reorders the code in various ways for optimal speed. This effectively doubles the performance of the base-level hardware through the conversion of ARM code to highly optimized microcode routines and increases the execution energy efficiency.

    The slight overhead of the dynamic optimization process is outweighed by the performance gains of already having optimized code ready to execute. In cases where code may not be frequently reused, Denver can process those ARM instructions directly without going through the dynamic optimization process, delivering the best of both worlds"

    - See more at: http://blogs.nvidia.com/blog/2014/08/11/tegra-k1-denver-64-bit-for-android/

     

     

    If it actually works then we're definitely looking at potential i3 speeds on an ARM chip; the current 32-bit model is already reaching Celeron territory, though it's the GPU 3DMark scores that are crazy awesome: 2722 overall. I ran the test myself, that's insane. Still skeptical about the 64-bit chip though, we need a development board. Let's go Nvidia, clock's ticking.
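
    And to picture what "unrolls loops ... removes unused instructions" buys you, here's a toy pass over a pretend instruction list (purely illustrative, nothing like Denver's real optimizer):

        def unroll(body, times):
            # Repeat the loop body so branch overhead is paid once per `times` iterations.
            return body * times

        def remove_dead(instrs, live):
            # Walk backwards, keeping only instructions whose results are ever read.
            kept = []
            for ins in reversed(instrs):
                if ins["dst"] in live:
                    kept.append(ins)
                    live |= set(ins["src"])   # inputs of a live op become live too
            return list(reversed(kept))

        body = [
            {"op": "add", "dst": "r1", "src": ["r1", "r2"]},  # result is used
            {"op": "mov", "dst": "r9", "src": ["r3"]},        # result never read
        ]
        optimized = remove_dead(unroll(body, 4), live={"r1"})
        print(len(optimized))   # 4 -- the dead r9 writes are gone

    Do that once for a hot loop, cache the result, and the same stretch of code runs leaner on every pass afterwards. That's apparently where the claimed doubling is supposed to come from.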

  • Reply 90 of 114

    I fail to see how this will impact my Netflix viewing and Waze nav-ing experiences, on any device. This hardware race is a bit silly for 99% of actual device usage.

  • Reply 91 of 114
    I fail to see how this will affect my Netflix viewing and Waze nav-ing experiences, on any device. This hardware race is a bit silly for 99% of actual device usage.

    You've got a fair point. If you represent the typical Android user, there's not much point in 64 bit if all you use is Netflix and Waze.

    Therein lies the quandary for Google. On the one hand, they are being forced out of the high-end by Apple, and have declared their focus on the low-end with Android L. On the other hand, they are attempting a race to 64 bit and dominance in performance. It's all going to end up in a clusterfuck. Perhaps that's why you need two hands to operate Android phones...
  • Reply 92 of 114
    tht Posts: 5,599 member
    Quote:

    Originally Posted by wizard69 View Post

    I've yet to see anything insightful, or any non-public knowledge, on his open web site. I'm not sure what he offers beyond the paywall. He does aggregate information well.

     

    That someone is wrong at times, or even often, and sensationalistic doesn't mean they are wrong all the time. You interpret information from all sources as fairly as possible and apply common sense with the proper filters. Turning a blind eye to someone who is wrong at times or is sensationalistic pretty much means you are ignoring any and all people you meet. Nobody bats a thousand. (I would be ignoring Mark Gurman if I took his prima donna attitude into account.) Like, would Demerjian know anything about Apple's software strategy or software design? No. But he knows the semiconductor business.

     

    If one thinks Demerjian doesn't know processor architecture or fab technology, that's being a bit delusional. I don't actually read SemiAccurate for the most part. I do follow him on Twitter (as well as Kantar, Moorhead, Anand, etc). You just have to filter out all the other crap and read the nuggets. He's been saying Apple is going to take all or most of TSMC's fab capacity for a while now. That appears to be coming true with reports of some TSMC customers looking elsewhere to get their chips fabbed. That sounds like someone with moles in Taiwan to me.

  • Reply 93 of 114
    Quote:
    Originally Posted by Benjamin Frost View Post

    You've got a fair point. If you represent the typical Android user, there's not much point in 64 bit if all you use is Netflix and Waze.

    Therein lies the quandary for Google. On the one hand, they are being forced out of the high-end by Apple, and have declared their focus on the low-end with Android L. On the other hand, they are attempting a race to 64 bit and dominance in performance. It's all going to end up in a clusterfuck. Perhaps that's why you need two hands to operate Android phones...

     

    I actually do a fair bit of remote desktop, spreadsheets, docs, Chrome, FB, and Gmail. None of those are screaming for processor speed, though. Neither are any of these:

     

  • Reply 94 of 114
    I fail to see how this will affect my Netflix viewing and Waze nav-ing experiences, on any device. This hardware race is a bit silly for 99% of actual device usage.

    You've got a fair point. If you represent the typical Android user, there's not much point in 64 bit if all you use is Netflix and Waze.
    That's the same for the average Apple user as well.
  • Reply 95 of 114
    tundraboy Posts: 1,908 member
    wizard69 wrote: »
    You really didn't follow what I was saying. Apple's market for the A7 is captive. As such it really doesn't make any sense to see this NVidia chip in any Apple context. Apple will never buy it! Apple will never sell A7s, A8s or anything else on the component market.

    In other words, if you want to discuss NVidia's new chip you really have to compare it to Intel and Qualcomm offerings.

    Yes, I misunderstood you. My bad, apologies.
  • Reply 96 of 114
    relic Posts: 4,735 member
    Quote:
    Originally Posted by THT View Post

    That someone is wrong at times, or even often, and sensationalistic doesn't mean they are wrong all the time. You interpret information from all sources as fairly as possible and apply common sense with the proper filters. Turning a blind eye to someone who is wrong at times or is sensationalistic pretty much means you are ignoring any and all people you meet. Nobody bats a thousand. (I would be ignoring Mark Gurman if I took his prima donna attitude into account.) Like, would Demerjian know anything about Apple's software strategy or software design? No. But he knows the semiconductor business.

    If one thinks Demerjian doesn't know processor architecture or fab technology, that's being a bit delusional. I don't actually read SemiAccurate for the most part. I do follow him on Twitter (as well as Kantar, Moorhead, Anand, etc). You just have to filter out all the other crap and read the nuggets. He's been saying Apple is going to take all or most of TSMC's fab capacity for a while now. That appears to be coming true with reports of some TSMC customers looking elsewhere to get their chips fabbed. That sounds like someone with moles in Taiwan to me.


    He's also been saying for the last four years that Apple will dump Intel. I guess only time will tell, but it's highly unlikely anytime in the near future; you can't expect Apple to use a 64-bit ARM chip in their new workstation or even their desktop models. Charlie is convinced Apple will also merge OS X and iOS into one closed platform, meaning Apple will remove every advantage OS X has over iOS, including the Finder, and then just use the A-series ARM processor to run their entire lineup. Sounds interesting, but the day that happens I will definitely be jumping ship for my desktop needs, and so will every other power user.

     

    May 5, 2011

    "Word has reached SemiAccurate that Apple (NASDAQ:APPL) is going to show Intel (NASDAQ:INTC) the door, at least as far as laptops are concerned. It won’t be really soon, but we are told it is a done deal."

     

    Sep 24, 2012

    "There were a lot of incredulous and dismissive comments when SemiAccurate said that Apple would be going Intel free in the next few years. Those sneers soon became begrudging acceptance as people dug in to the details, so we thought it was time for an update to the players and timetables."

  • Reply 97 of 114
    cropr Posts: 1,139 member
    Quote:

    Originally Posted by EricTheHalfBee View Post

     

    Here's the big problem for Nvidia:

     

    They built a powerful processor that isn't suitable for phones (tablets only due to power consumption). Apple (1st) and Samsung (2nd) have the high-end tablet market locked down. That leaves a very small market of potential customers that could actually use this processor.

     

    Nvidia should have been thinking of who they could sell a processor to instead of making a "beast" (we'll have to wait and see on this one) that has a very limited market potential.


     

    Acer has just announced two Chromebooks with the NVidia processor, so apparently it could have some sales in that area, as Acer Chromebooks don't sell that badly.

  • Reply 98 of 114
    dnd0ps Posts: 253 member

    Welcome, nVidia. Seriously

  • Reply 99 of 114
    relic Posts: 4,735 member
    Quote:
    Originally Posted by cropr View Post

     

     

    Acer has just announced two Chromebooks with the NVidia processor, so apparently it could have some sales in that area, as Acer Chromebooks don't sell that badly.


    Chromebook is a single word; we don't say Mac Book, it's MacBook, as it's the name of a product. Just being picky. I will definitely be buying one of the new Acer Chromebook 13s that you mentioned above: $380 for a powerful ARM Chromebook with the new Nvidia K1 32-bit CPU, a 1080p display, 4GB of RAM, 32GB of storage, 13 hours of battery and 2 years of 100GB Google Drive storage is just too cool for me to pass up. Plus I can install Ubuntu, Debian or Arch Linux if I want, and I want. Who knows, maybe someone will even figure out how to flash the BIOS to allow the installation of Android on it. Why Android, you might ask: running Android on a laptop might seem odd to a lot of people, especially the members here, but when done right it actually works quite well. An example of a good Android laptop would be HP's fairly recently released SlateBook 14; however, with a price tag of $430 it's definitely not cheap, especially now that we're getting into iPad money. The idea is still intriguing enough for me to want to experiment with it on the Acer Chromebook 13.

     

    Here is a review of the HP SlateBook 14, so you'll understand why it might be interesting for someone like me to pursue. (Fast forward to the 2:50 mark to see what Android on a laptop would be like.)

     


  • Reply 100 of 114
    timmydax Posts: 284 member

    Maybe stop lying, please?

    Except everything I said was true. Last year... let me make sure you got that... LAST YEAR it was a marketing gimmick.

    Of course it was. There were no apps taking advantage of it in 2013!

    I still have yet to find a game or any app that is in any manner noticeably different side-by-side on my 5S versus my old 5. Show me one. It wasn't about 2013.

    The genius idea is of course that they can sell devices with A7s for years to come, as they always have in the past with their other chips.

    It was about the future. That's not to say that the A7 isn't a significant step up and won't provide Apple a solid foundation for years to come. It's not to say that its performance can't be tapped to provide luscious visuals and massively more RAM. But it wasn't last year.

    None of this was logically unsound or "lies", so get over yourself, bud.