Apple's A8X powers iPad Air 2 graphics faster than the Nvidia Denver Tegra K1 in Google's Nexus 9


Comments

  • Reply 121 of 181
    Originally Posted by TechLover View Post

    Get this, I have this one light that is outdoors in a covered fixture that I leave on 24 hrs/365. This last time I replaced it with an LED. Supposed to last like 5 years. So we'll see. Where's that receipt anyway?


     

    Hey, I have a few strings of ambient LEDs I keep on top of the kitchen cupboards. Just the right amount of light for nighttime without blowing your eyes out and they’ve been up there for... Christmas will be three years now, 24/7.

     

    Have you ever heard of the Phoebus Cartel and the lightbulb conspiracy associated therewith? As with all conspiracy theories, there's a grain of truth in it. The short of it is that early incandescent lightbulbs were pushing 2500 hours until the cartel forcibly reduced lifespan to 1000 hours. Same with clothing, etc. You know that 100+ year lightbulb in that firehouse? Made before the cartel came into being.

     

    To this day, you can’t find a package of incandescent bulbs that will say they last more than 1000 hours each. But LEDs will say 23 years (at 3 hours per day). Hopefully there won’t be a cartel there.

  • Reply 122 of 181
    Quote:

    Originally Posted by Tallest Skil View Post

     

    You know that 100+ year lightbulb in that firehouse?

     

     


    Dude - I shit you not I have a few working antique light bulbs.  

     

    One of which is almost identical to the firehouse bulb and yes it still works.

     

    :)

  • Reply 123 of 181
    relic Posts: 4,735, member
    Quote:
    Originally Posted by Corrections View Post

     

     

    Don't worry, nobody is taking your K1 development devices.

     

    Why not? Oh, you're making a joke, well fair enough, it's just as well, most of you wouldn't know what to do with it anyway. If you don't mind I'm going to write my reply like this; I'm not in the mood to break up your comments. Yes, I'm being lazy, but I have a pretty good reason why: I'm high as a friggin' kite, new painkillers.



    However, it's interesting that you find it critically important on an ideological level that mobile devices be able to run an open OS (at least Nvidia's Linux) while you also find it compelling to learn about GPU development using Nvidia's proprietary CUDA graphics language. Sounds more like you're just an Nvidia fan, as revealed in your repeating the idea that the K1 has "192 cores," pure marketing nonsense as the article stated.

     

    I think you're missing my point; I don't find it critical that a device needs to run an alternative OS. I was trying to get the point across that the Nvidia Denver K1 isn't crap because of some arbitrary benchmark. I added examples of what else you can use this chip for to try and elevate its status a little. I have a real problem with comments that don't go into further detail as to why the device or component in question is complete garbage. They look at some benchmark test and then boldly declare, yep, see, I knew it, why do companies even try when you can clearly see that this device just got its ass handed to it. So until a chip beats an A8X they're all crap. This is simply not true; the things I can accomplish with some of these ARM chips just amaze me.

     

    My Nokia 2520, and I know I always bring this little guy up, is probably my favorite tablet, more so than any Android tablet I have owned, the iPad or even my Surface Pro 3. Strange too, as it's just a Windows 8.1 RT system running on a Qualcomm Snapdragon 800 ARM CPU with only 2GB of RAM. Here's the thing though, it's quick, real quick; I have yet to see it lock up or pause for a second to think, nothing, it just keeps on chugging. I have my apps that I like and Office, including Outlook. But the big reason I enjoy it so much is that it's a 10" tablet with a desktop browser and LTE, and it only costs 300 bucks. As I use mostly web apps nowadays it's the perfect web client. The point being, the Snapdragon 800 is nowhere near the performance of the A8X or K1, but it manages to run everything I throw at it with zero complaints. Just because the K1 failed to beat the A8X does not mean that it is worthless or that Nvidia failed to produce a fast chip, because they did, it's fast. Again, as a reminder, this is only the first in a series of 64-bit K1 chips to be released; the four-core version is coming next, sometime in Q1 2015, which will definitely be faster than the A8X, as the single-core benchmarks for the K1 suggest. The upcoming Surface 3 RT is hinted to have a K1 as well, which makes sense as the first two Surface RTs both contained Nvidia processors.

     

    Yes, I like Nvidia for many reasons, but I became a real fangirl in the last few months. I recently donated 40 devices, 20 Shield Tablets and 20 Shield Portables, to the children's ward of the hospital I am staying at. My doctor is a children's cancer specialist, but since he has been a friend of the family for many years he took on my case. This meant, however, that I would have to be placed in the children's ward as well. Being around these children profoundly changed me; I love them all. So as a surprise I contacted an Nvidia public relations person in Switzerland and they arranged for me to receive the devices. The woman I was in contact with even came out to visit the hospital and personally congratulated me, gave me a bunch of stickers, T-shirts, patches, posters, figurines from popular games, etc., for the kids, and even gave me a pre-release demo unit of the Shield Tablet. The hospital even received their order a week before it was actually released. I've also been invited out to their office when I finally get out of this hospital bed, and one of their help desk people helped me hunt down a demo version of the Lenovo ThinkVision 4K monitor that also contains a 32-bit K1 CPU and board; it's meant to be used as an entertainment system when your computer is off, very cool.

     

    That's all fine and dandy, but what first got me hooked on Nvidia was the amount of development material they have to support CUDA. Look, I'm also learning OpenCL as I think it's the future as well, so I agree with you on that; however, have you tried learning OpenCL? It's not easy. There are plenty of manuals and sample code, but my goodness, the CUDA world is just a lot more, gosh I don't know, let's just say mature. The Linux for Tegra development OS that ships with the Jetson K1 dev board is packed full of finished applications that you can take apart and easily make your own. I wrote two programs and a bunch of scripts in the first couple of days of playing around. Then there are the modules, classes and libraries; the libraries, holy crap, it's an endless buffet of fantastic code to play around with. Not just that, but the community is very helpful and gets very excited about the smallest things. I can go on their forums and describe what kind of program I want to write, come back 15 minutes later, and there are already 10 comments with suggestions, links to example code that may help me, and some people even offer to help write it using a cloud IDE that handles multiple code contributors.
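
    For a sense of scale, here's the kind of minimal first CUDA program most of those samples and tutorials start from (a sketch of my own for illustration, not one of the programs mentioned above; the kernel name and sizes are arbitrary): a SAXPY kernel launched over a grid of threads, using only standard CUDA runtime calls.

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // SAXPY: y[i] = a * x[i] + y[i], one thread per element.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;  // 1M elements
        float *x = nullptr, *y = nullptr;
        // Managed (unified) memory keeps the host and device views in sync for us.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();  // wait for the GPU before reading results

        printf("y[0] = %f (expect 5.0)\n", y[0]);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }
    ```

    Built with nvcc, that's the whole pattern: allocate, launch, synchronize, check a result.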

     

    Yes, OpenCL probably offers a similar experience, but I haven't found any vendor that not only has a development board for OpenCL but also has a community that even closely resembles what Nvidia has. There are even Jetson K1 clubs now, where people meet to discuss their current projects, be it a robot, security scanners that do facial recognition, building clusters, render machines, etc. I love this; it reminds me of the computer clubs I used to go to in the '90s as a kid with my grandpa. Sure, it was more of a pirated-software sharing group, but I learned a lot and always had the latest releases.

     

    Which is fine, but looking at how few people are using the K1 in shipping products, its mobile products aren't likely to remain around. So if you're really wanting to learn about mobile GPUs, it might be smarter to learn something that's actually open (like OpenGL ES) or something that's going to be commercially viable (Metal).

     

    This is where people get confused: the K1 isn't just a mobile CPU/GPU, it's an all-purpose embedded chip that can be found in everything from a TSA scanner to a car's LCD dashboard computer. I think it's smart to learn the things that make you the happiest, not what's popular or will make you money in the future; I'm done working, retired, and I haven't even reached 40 yet. I'm not interested in writing mobile apps, at least not at the moment; maybe when iOS finally allows apps to run in the background I will mess around with Metal and write a media encoder app that utilizes the GPU.

    Though I really don't like programming for the PowerVR. Before the Jetson K1, I bought an ODROID-XU development board with a PowerVR SGX544MP3 GPU and Exynos 5410 CPU. I was able to get Arch Linux up and running fairly easily, except for the PowerVR drivers, as they suck for Linux, installed Java for ARM so I could use NetBeans, compiled OpenCL from scratch and also installed PowerVR's SDK for ARM. Okay, now what? Code examples, I need code examples. I started to look for OpenCL projects that pertained to encoding, rendering and parallel computing utilizing the GPU in a cluster. Though I found a few things, got some to work and wrote a few programs, it was a very frustrating experience to say the least. Everything I wanted to do would have to be written from scratch, and most OpenCL/PowerVR forums I visited were mostly games-oriented. So I kind of lost interest after a month, put the board in its little plastic case, installed Chromium OS (the development version of Chrome OS) on it, using the build from the guys who made the pcDuino8 dev boards as it also uses a PowerVR, and connected it to the TV in my little studio, where it still sits today. It's basically a TV tuner; I use a site called Zattoo that streams every Swiss channel in HD, and the quality is actually better than our cable, so much so that we canceled our cable service and now use Chromeboxes throughout our house. No, I don't want an Apple TV; we use Logitech diNovo Minis as our remotes.

     

     

     

    Also, the idea of using a Nexus 9 for distributed rendering is completely silly. Despite all of Nvidia's marketing, the K1, like the A8X, Snapdragon/Adreno, etc, is a mobile chip optimized for energy efficient rendering of 2D/3D on relatively small displays with 1-4K pixel resolutions.

     

    Of course it's silly, that's why I'm doing it; using a little imagination and my programming skills to do something out of the ordinary is how we grow. This is how I learn, by doing. Do you know how much work and how many different technologies are involved in completing such a project? Enough that it will introduce me to a whole lot of new technologies that I wouldn't normally use. Nothing I do is done in vain; this is a major homework assignment for me, and when it's done I will come out of it knowing a whole lot more than I did going in, and in return I will put what I have learned toward more ambitious projects that I thought were out of my league. I never stop learning, especially because the thing I'm learning might not exist in two years. You know how fast technology moves; who's to say a newer, more powerful GPU-related language won't pop up and totally negate the use of OpenCL. Why do people still learn Latin, a dead language? Because it broadens your perspective, makes you think in ways that weren't there before and, not to mention, makes you a sharper person.

     

    The Nvidia/AMD graphics in a laptop or desktop completely blow away the graphics power being installed in mobile devices. Nvidia might like the idea of selling a string of proprietary devices with its Tegra K1, but that doesn't even make sense on any level outside of marketing fluff. More problematically, nobody is using the K1 apart from a couple demonstration tablets designed to look like an iPad rather than a conventional 16:9 Android tab.

     

    There is always something faster and better coming to the market. I have an Nvidia Quadro 6000 and a Tesla board in my workstation, but it wasn't until I got my Jetson K1 dev board that I started to understand what can actually be done on a GPU. It's a learning tool for me. Who cares if the Nexus 9 is just meant as a proof-of-technology product? Here's the thing though, it's the best Android tablet on the market, regardless of whether it only has 32GB; people who buy Samsung Galaxy S, Note and Tab tablets are foolish and misinformed buyers who should have done their homework. Why does something I want need huge sales to be considered good? Anyone, including the people on this board, who purchases a Nexus 9 will enjoy it, yes, you will, even if it's running Android. Yes, Android; vanilla Android 5 is quite good, especially when paired with hardware like the Nexus 9.

     

     

    And that really makes you wonder: why are Xiaomi and Google leaving the Honeycomb model and adopting Apple's 2010 format for tablets? The answer is all too obvious. 

     

    Also, when you say "I have zero doubt that the Nexus 9’s GPU performance will be on par if not faster than the Nvidia Shield or the Jetson Dev Board. In fact I would bet money on it," it also highlights what else you missed by not actually reading the article: it's not that the Nexus 9's K1 is slower than the Shield's K1, it's that it's doing more. The Shield has a stretched smartphone resolution; the Nexus 9 has 50% more pixels because it has the same resolution as an iPad. So the Nexus 9 (and iPad Air 2) are working on adult tasks while the Shield is doing kid stuff.
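
    For reference, the pixel arithmetic behind that claim (using the commonly quoted panel resolutions, so treat the exact percentages as illustrative figures rather than numbers from the article): the Nexus 9's 2048×1536 panel versus a 1920×1080 smartphone-style panel, and versus the Shield Tablet's 1920×1200 screen, works out to

    \[
    \frac{2048 \times 1536}{1920 \times 1080} = \frac{3{,}145{,}728}{2{,}073{,}600} \approx 1.52,
    \qquad
    \frac{2048 \times 1536}{1920 \times 1200} = \frac{3{,}145{,}728}{2{,}304{,}000} \approx 1.37
    \]

    so the "50% more" figure fits a 1080p-class baseline; against the Shield Tablet's actual screen the gap is closer to 37%.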

     

    That's fine, but it still doesn't prove that the K1 Denver isn't a decent chip. Again, this is just the first iteration and it's pretty darn good.

     

    In other benchmarks, the two K1 variants deliver slightly different CPU results (the 64-bit K1 has half the cores, so its two larger CPU cores beat the smaller quad CPU cores of the 32-bit K1 in single-core tests; the 64-bit K1 is about as fast as the A8X in single-core, but the A8X has three cores, so it's faster than the Nexus 9 in multi-core CPU tests). 

     

    But as Gatortroll notes, Google only makes Nexus products as a way to spoonfeed the third-rate hardware vendors who can't figure out how to effectively copy Apple on their own, so nobody--not even Google--expects the Nexus 9 to actually sell in meaningful quantities. The non-iPad tablets sold in 2015 will likely be just like those sold this year: some Lenovo Atoms, some Samsung Qualcomm/Exynos, and lots of MediaTek stuff that sells for $50 in China and on walmart.com.

     

    I answered this above.

     

    Apart from Google experiments, no mobile makers have expressed any real interest in Nvidia's 64-bit Tegra K1, which is unsurprising after four years of over-promising, under-delivering Tegra flops. 



    And really, Nvidia's Project Denver is looking a lot like another ambitious VLIW project that was similarly hyped to hell before it finally arrived with performance no better than its competitors: Intel's Itanium. Funny how history keeps repeating like that. 

     

    No better than their competition? The only chip that beats the K1 is the A8X; calling it underwhelming is ridiculous. What is it that you need such a fast CPU for? iOS can only run one app at a time, except for a few like iTunes. There isn't a program that the A7 couldn't run, and run well. The Nvidia chip is the fastest ARM CPU for Android devices available on the market. Under-delivering? Dude, they delivered and then some, especially when you use this CPU for things other than Angry Birds.


     

    This is getting old. The 32-bit and 64-bit K1 variants beat every single mobile ARM CPU on the market right now, even the A8, except for one, just one, CPU, that's it, the three-core A8X; yeah, it sucks, whatever. Listen, I really don't care, it's your opinion. I like the Nexus 9 and the CPU that powers it; it will give me much joy in the months to come and that's all that matters.

  • Reply 124 of 181
    solipsismx Posts: 19,566, member
    relic wrote: »
    The 32-bit and 64-bit K1 variants beat every single mobile ARM CPU on the market right now, even the A8...

    I haven't seen that. Where did it beat the A8 in performance using the same power envelope? Or are you saying that the K1 was clocked much higher and beat the A8 without any regard for power requirements, which makes it a pointless test when comparing mobile processors?
  • Reply 125 of 181
    relic Posts: 4,735, member
    Quote:
    Originally Posted by SolipsismX View Post





    I haven't seen that. Where did it beat the A8 in performance using the same power envelope? Or are you saying that the K1 was clocked much higher and beat the A8 without any regard for power requirements, which makes it a pointless test when comparing mobile processors?

     

    No, just normal running speed, GeekBench. This is also using 32-bit; the Nexus 9 should have higher results when Geekbench updates to 64-bit, same with the GFX graphics test. The Nexus 9 numbers are just preliminary at this point.

     

  • Reply 126 of 181
    solipsismx Posts: 19,566, member
    relic wrote: »
    No, just normal running speed, GeekBench.

    "Normal running speed" is a pointless metric for a chip. The only way that term is viable is referring to the Nexus 9 whose normal running speed is 2.4GHz and the iPhone 6/6+ whose normal running speed is 1.4GHz, but even just stating the clock rate is worthless without other metrics. In this case, Geekbench showing single and multi-core performance to show that the A8 has better performance at a much lower clock rate, but we still can't figure out the power cost for a given performance threshold. All we can do is use adductive reasoning to infer the A8 is likely far superior given what we know.

    This also shows that Denver is definitely a good contender for the Android market, but we still can't figure out certain aspects, again, power consumption for a given performance level, and we're hampered further when comparing the Nexus 9 to the Note 4 because there are at least two different options for the Note 4 (quad-core 2.7 GHz Krait 450 (SM-N910S) or quad-core 1.3 GHz Cortex-A53 & quad-core 1.9 GHz Cortex-A57 (SM-N910C)). Those quad-core chips simply aren't that impressive compared to a quality dual-core chip.
  • Reply 127 of 181
    relic Posts: 4,735, member
    Quote:

    Originally Posted by SolipsismX View Post





    "Normal running speed" is a pointless metric for a chip. The only way that term is viable is referring to the Nexus 9 whose normal running speed is 2.4GHz and the iPhone 6/6+ whose normal running speed is 1.4GHz, but even just stating the clock rate is worthless without other metrics. In this case, Geekbench showing single and multi-core performance to show that the A8 has better performance at a much lower clock rate, but we still can't figure out the power cost for a given performance threshold. All we can do is use adductive reasoning to infer the A8 is likely far superior given what we know.



    This also shows that Denver is definitely a good contender for the Android market, but we still can't figure out certain aspects, again, power consumption for a given performance level, and we're hampered further when comparing the Nexus 9 to the Note 4 because there are at least two different options for the Note 4 (quad-core 2.7 GHz Krait 450 (SM-N910S) or quad-core 1.3 GHz Cortex-A53 & quad-core 1.9 GHz Cortex-A57 (SM-N910C)). Those quad-core chips simply aren't that impressive compared to a quality dual-core chip.

     

    Fine, the speed the chip ships at; same difference though, you knew what I meant: an unmolested benchmark test.
  • Reply 128 of 181
    nagromme Posts: 2,834, member
    There's a big barrier to Metal games (or anything that makes use of the latest performance) being developed.

    Developers can't currently restrict which iOS devices their game can run on. They can only restrict by iOS version. That means, even if you limit your game sales to iOS 8, an ancient iPad 2 HAS to run it. Making your game adaptive for quality is doable... making it adapt to THAT wide a range is much more work, or even calls for changes to the game design itself. Yes, you can still use Metal in a limited way fairly easily. But there's more you could do with Metal that developers simply won't choose to do, thanks to all those iPad 2 buyers who will leave bad reviews (and justifiably so) if the game is unplayably slow. (Remember there's no simple trial/refund process on the App Store.)

    And Apple still sells the original iPad Mini which is pretty equivalent to an iPad 2. That's fine--except when Apple expects the latest iPad Air 2 Metal games to run on it.

    This is not an insurmountable barrier, but it will reduce the number of high-end GPU-intensive titles, and make it needlessly harder for small indie companies to build them.

    Until, that is, Apple adds a model-restriction feature to their App Store developer options. Right now all you get is app descriptions (which nobody reads) warning iPad 2 users away.

    Let the Mini 1 live on as a nice color e-Reader, video, and social media device... don't try to insist that it play ALL the latest games. Not realistic.
  • Reply 129 of 181
    Well, yeah. If you can afford it. But why not get the Toyota (e.g. Nvidia Shield) for $200 less?
  • Reply 130 of 181
    As the "Joel Hruska" quoted in your article, I take exception to your characterization of my opinion.

    While it's absolutely true that I wrote the article you link, I wrote multiple ones after. Including:

    http://www.extremetech.com/mobile/166790-iphone-5s-reviews-prove-apple-got-everything-right-including-64-bit-performance

    In which I openly acknowledge I was wrong about the 64-bit performance issue.

    Furthermore, my colleague Sebastian and I have repeatedly discussed the A7's excellent, best-in-class performance (at least for 2013-2014).

    http://www.extremetech.com/computing/179473-apples-a7-cyclone-cpu-detailed-a-desktop-class-chip-that-has-more-in-common-with-haswell-than-krait

    Since there may still be some confusion on this point, let me clear it up: The Apple A7/A8 family is currently the highest-performing single-thread mobile processor on the market. Whether NV can match it with a Denver-based Tegra product is unknown. Qualcomm's first set of 64-bit ARM cores will be based on ARM's Cortex-A57; we'll have to wait for their custom architecture to determine if they can beat it directly.

    I appreciate you linking my work, but would appreciate it if you'd update the story to include the later article.

  • Reply 132 of 181
    Dan_Dilger Posts: 1,584, member
    Quote:

    Originally Posted by nagromme View Post



    There's a big barrier to Metal games (or anything that makes use of the latest performance) being developed.



    Developers can't currently restrict which iOS devices their game can run on. They can only restrict by iOS version. That means, even if you limit your game sales to iOS 8, an ancient iPad 2 HAS to run it. Making your game adaptive for quality is doable... making it adapt to THAT wide a range is much more work, or even calls for changes to the game design itself. Yes, you can still use Metal in a limited way fairly easily. But there's more you could do with Metal that developers simply won't choose to do, thanks to all those iPad 2 buyers who will leave bad reviews (and justifiably so) if the game is unplayably slow. (Remember there's no simple trial/refund process on the App Store.)



    And Apple still sells the original iPad Mini which is pretty equivalent to an iPad 2. That's fine--except when Apple expects the latest iPad Air 2 Metal games to run on it.



    This is not an insurmountable barrier, but it will reduce the number of high-end GPU-intensive titles, and make it needlessly harder for small indie companies to build them.



    Until, that is, Apple adds a model-restriction feature to their App Store developer options. Right now all you get is app descriptions (which nobody reads) warning iPad 2 users away.



    Let the Mini 1 live on as a nice color e-Reader, video, and social media device... don't try to insist that it play ALL the latest games. Not realistic.



    That's a trivial problem to solve in the App Store. If it ever becomes a problem. 

     

    Developers are not going to say "hey we can't use Metal because there are iPad minis out there"

  • Reply 133 of 181
    Dan_Dilger Posts: 1,584, member
    Quote:
    Originally Posted by Relic View Post

     

     

    No, just normal running speed, GeekBench. This is also using 32bit, the Nexus 9 should have higher results when Geekbench updates to 32bit, same with the graphics test for GFX, the Nexus 9 numbers are just preliminary at this point.

     




    You're comparing a fringe tablet against mainstream phones. 

     

    One big problem for Nvidia is that it doesn't even pretend to participate in phones anymore, so the K1 will only ever appear in tablets, and so far, only in a show off tablet from Google that isn't going to sell more than a million units per quarter.

     

    That's not enough revenue to support Tegra development.

     

    Apple's A8 is shipping in the world's leading smartphones, and the A8X is shipping in a high-volume tablet, both using the same architecture. Meanwhile, two more leading middle-tier phones, last year's leading phones, and the rest of Apple's iPads all use the A7. That's a ton of money supporting A9 development.

     

    If Tegra K1 were significantly faster than Apple's A8/A8X, it probably wouldn't matter because there is not a real high volume demand for high end components running Android or Nvidia Linux anyway. But it's not really faster at all. The fact that Apple has stomped all over Tegra K1's supposed high tech lead with its mass market A8 chips means that Nvidia has nothing to sell in the mobile market. The PC market is growing stagnant. That's a huge problem for Nvidia in general (and why it's rather desperately looking at cars).

     

    The Tegra K1 is at a dead end, just like TI OMAP was when TI killed it. And having a dead end chip in your device means little more than that Google won't support it past 18 months with new Android releases, just like it killed updates for the <2 year old OMAP-based Galaxy Nexus.

     

    You can compare benchmarks of the K1 in a tablet against an iPhone 6 with an A8 if it makes you feel good, but it's really quite delusional. 

  • Reply 134 of 181
    relic Posts: 4,735, member
    Quote:

    Originally Posted by Corrections View Post

     



    You're comparing a fringe tablet against mainstream phones. 

     

    One big problem for Nvidia is that it doesn't even pretend to participate in phones anymore, so the K1 will only ever appear in tablets, and so far, only in a show off tablet from Google that isn't going to sell more than a million units per quarter.

     

    That's not enough revenue to support Tegra development.

     

    Apple's A8 is shipping in the world's leading smartphones, and the A8X is shipping in a high-volume tablet, both using the same architecture. Meanwhile, two more leading middle-tier phones, last year's leading phones, and the rest of Apple's iPads all use the A7. That's a ton of money supporting A9 development.

     

    If Tegra K1 were significantly faster than Apple's A8/A8X, it probably wouldn't matter because there is not a real high volume demand for high end components running Android or Nvidia Linux anyway. But it's not really faster at all. The fact that Apple has stomped all over Tegra K1's supposed high tech lead with its mass market A8 chips means that Nvidia has nothing to sell in the mobile market. The PC market is growing stagnant. That's a huge problem for Nvidia in general (and why it's rather desperately looking at cars).

     

    The Tegra K1 is at a dead end, just like TI OMAP was when TI killed it. And having a dead end chip in your device means little more than that Google won't support it past 18 months with new Android releases, just like it killed updates for the <2 year old OMAP-based Galaxy Nexus.

     

    You can compare benchmarks of the K1 in a tablet against an iPhone 6 with an A8 if it makes you feel good, but it's really quite delusional. 


     

    Okay, whatever you say.

  • Reply 135 of 181
    Dan_Dilger Posts: 1,584, member
    Quote:

    Originally Posted by Relic View Post

    Story about trying to use OpenCL on PowerVR

     

    The only vendor shipping PowerVR in quantity is Apple, and OpenCL (despite being invented by Apple) is not supported on iOS.

     

    So your story about trying to find support for OpenCL on PowerVR under Linux is quite bizarre.

     

    You were responding to my suggestion "if you're really wanting to learn about mobile GPUs, it might be smarter to learn something that's actually open (like OpenGL ES) or something that's going to be commercially viable (Metal)" about graphics. If you're doing GPGPU, you can now do that in Metal with similar low overhead compared to OpenGL ES. 

     

    Metal is super optimized for the A7/A8, which all use high-end PowerVR Series6 mobile GPUs. 

     

    But as I said, if you're just getting Nvidia gear for free have fun with it. My point is that Nvidia's mobile GPUs are a dead end. 

  • Reply 136 of 181
    Fake alert. The Shield Tablet (32-bit K1 with A15 cores) cannot be faster than the Nexus 9 (64-bit K1 with Denver cores).

    The benchmark numbers are therefore wrong. This probably means the Nexus 9 is actually faster than the iPad Air 2. We will only know once the iPad Air 2 ships.
  • Reply 137 of 181
    relic Posts: 4,735, member
    The only vendor shipping PowerVR in quantity is Apple, and OpenCL (despite being invented by Apple) is not supported on iOS.

    So your story about trying to find support for OpenCL on PowerVR under Linux is quite bizarre.

    You were responding to my suggestion "if you're really wanting to learn about mobile GPUs, it might be smarter to learn something that's actually open (like OpenGL ES) or something that's going to be commercially viable (Metal)" about graphics. If you're doing GPGPU, you can now do that in Metal with similar low overhead compared to OpenGL ES. 

    Metal is super optimized for the A7/A8, which all use high-end PowerVR Series6 mobile GPUs. 

    But as I said, if you're just getting Nvidia gear for free have fun with it. My point is that Nvidia's mobile GPUs are a dead end. 

    iOS 8 has OpenCL but it hasn't been exposed to developers yet, which sucks; not sure what they're waiting for. Even though Apple started OpenCL, they are no longer responsible for its development, the Khronos Group is. The developer board that I used for OpenCL and PowerVR was designed for Linux and Android, as seen here, so it's not bizarre at all; why did you even think it was, as there are many boards like this, and you do realize that iOS is based on BSD (Unix).

    I have no interest in Metal right now as it's limited to iOS and designed to be used with apps; too much overhead for me, I need direct access to the hardware without going through a bunch of APIs, especially closed-source ones. I'm designing a cluster to be used for rendering and encoding media files as well as other GPU-computing projects, and for that CUDA just has so much more to offer in terms of libraries, applications and code, with far better APIs and toolchains, which I can utilize to make my projects come to fruition quicker. I am not dismissing OpenCL, as I am learning it as well, but as CUDA offers much more with the hardware that I'm using it only makes sense, and I don't believe that Nvidia's mobile division is going anywhere soon. Besides, the K1 with its 192 CUDA cores is a monster and an absolute joy to use; the CUDA optimizations are fantastic. If Apple sold an A8X development board I would be using it, but they don't, so all of this is moot. Maybe one of these days when Apple starts selling MacBooks with ARM I will use it.
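
    To make the "192 CUDA cores" point concrete, here's a minimal device-query sketch (illustrative code using only standard CUDA runtime calls, not taken from any of the projects mentioned above): on a Tegra K1 it should report a single Kepler multiprocessor of compute capability 3.2, with the marketing number being the CUDA-core count of that one SMX.

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            // A Tegra K1 should show 1 Kepler SMX at compute capability 3.2;
            // "192 cores" is the CUDA-core count of that single SMX.
            printf("Device %d: %s\n", dev, prop.name);
            printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
            printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
            printf("  GPU clock:          %.0f MHz\n", prop.clockRate / 1000.0);
            printf("  Global memory:      %.0f MB\n",
                   prop.totalGlobalMem / (1024.0 * 1024.0));
        }
        return 0;
    }
    ```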
  • Reply 138 of 181
    Umm, did you look at the Shield Tablet in this review... Ask yourself why the 32-bit Shield Tablet smokes the A8X in the iPad Air 2 and yet the 64-bit is so far behind? You think maybe that has more to do with drivers and code optimization, or would you prefer to take these stupid claims, from an obviously biased source, at face value?

    Wait for better reviews from sources that can be confirmed.

    Oh and I'm typing all of this on my iPad Mini Retina because my iPhone is too small.
  • Reply 139 of 181
    GrangerFX wrote: »
    Nvidia first demonstrated their K1 processor at SIGGRAPH in 2013. Their strategy appears to be to show off next year's processor and compare its performance to this year's shipping processors and GPUs. In that light it appears to be extremely impressive, but by the time it ships in an actual phone or tablet, it is merely average. The K1 has decent performance, but I suspect it is being held back by Google's interpreted/JIT code. No matter how much Google tries, native compiled code will always run rings around it.
    Android Lollipop runs the new ART runtime; JIT is replaced with ahead-of-time compilation. According to other benchmarks the Nexus 9 is faster than the iPad Air 2; not sure why this article has been selective and failed to mention this.
  • Reply 140 of 181
    Dan_Dilger Posts: 1,584, member
    Quote:

    Originally Posted by Relic View Post





    iOS 8 has OpenCL but it hasn't been exposed to developers yet,

     

    Wrong, there is no OpenCL for iOS devices. Metal is Apple's GPGPU solution. Nobody is doing computation on an A5/A6.  
