Apple's new A8X-powered iPad Air 2 smokes new Android tablets, including Nvidia's Tegra K1 Shield Tablet


Comments

  • Reply 121 of 138
    misa Posts: 827, member
    relic wrote: »
    With the new 64-bit ARM chips from AMD (Opteron), Apple (A8X), and Nvidia (Denver), ARM performance is fast becoming on par with its x86 brethren.

    Nope nope nope. You're comparing a scooter to a car on the basis that they can both go highway speed.

    You're assuming:
    1. That because the CPUs run at 2GHz they are therefore desktop x86-64 class. Setting aside that desktop CPUs under 3GHz are not gaming machines, synthetic benchmarks only test the ALU or FPU. All the other features a CPU supports (e.g. MMX) don't have equivalents in ARM and may only be implemented as coprocessors.
    2. The GPU cores aren't even in the same ballpark as a desktop GPU.

    Sure, it's fine that the iPad pushes pixels at a higher resolution than most gamers' high-end gaming machines do, but the lack of memory (1 or 2GB), the small cache (a CPU's performance scales heavily with L2 cache) and the different register sizes tend to give a false sense of "OMG ARM IS SO MUCH BETTER".

    No, the story doesn't work that way at all.

    Geekbench's single-core score for the iPad Air 2 is 1484; the nearest non-ARM device at that score is the Intel Pentium E5300, a 2.6GHz dual-core x86 chip with only 2MB of cache, released in 2008. The chip that actually wound up in people's computers in 2008 was the Core 2 Duo E7300, which had 3MB of cache.

    So the current-model iPad Air 2 ranks with a device released 6 years ago. That's 6 CPU generations ago in Intel terms, so we'd have to be on the iPad 8 or iPhone 12 before we get to "desktop parity" at this rate.

    Fortunately for ARM, desktop CPUs and GPUs have hit a brick wall for the last 2 years because TSMC failed to get a die shrink out for Nvidia and AMD, which means the performance gap closed by at least one entire GPU generation. Mobile CPUs and GPUs, however, will never reach desktop parity because of thermal limitations. If there's only one die shrink left available (14nm), then mobile CPUs and GPUs hit a brick wall next year as well. The only way to catch up to desktop performance then is to trade off battery life and make the devices heavier so they can dissipate heat better.
  • Reply 122 of 138

    Hmm, so what do you guys think about the Exynos 5433 outscoring the A8X?

    And this is without the added benefit of AArch64 (it only needs a 64-bit kernel and Android L now; the boot loader already supports 64-bit). For a mobile SoC this is pretty darn impressive. The A8X does have the upper hand when it comes to the GPU, and the performance at such a low clock speed is staggering, but I'm not sure how well optimised these apps are for both operating systems (would the A8X score the same on Android as it does on iOS, and vice versa?).

  • Reply 123 of 138
    relic Posts: 4,735, member
    Quote:
    Originally Posted by Misa View Post

    Nope nope nope. You're comparing a scooter to a car on the basis that they can both go highway speed. ...



    Oh come on, you could have given me the benefit of the doubt that I'm not a complete idiot. I would have assumed that since Marvin and I were talking about Atom processors, you would have got what I meant. I wasn't comparing the A8X or the Nvidia Denver to an Intel i5 or i7; that would be ridiculous. I was referring to ARM's direct competition in the Intel world, which is the Atom line of CPUs. As we have seen in recent benchmarks, these new ARM CPUs are faster. The AMD Opteron will be in direct competition with low-powered Xeon chips, with 8 64-bit A57 cores, 4MB L2 and 8MB L3 cache to start, followed by a custom ARMv8, 16-core version. I think we will then start to see servers and even workstations based on these chips.

     



    I thought ARM's SIMD equivalent to MMX/SSE was NEON, with the NEON Advanced SIMD unit being the newer variant; yes, it's a coprocessor, but why does that matter? Anyway, for a laptop and even a desktop computer used for general computing, an ARM processor is more than enough. I shouldn't have said "on par", but ARM is still fast becoming a decent competitor to x86, despite not yet having models that directly compete with an i5 or i7; they will come. I buy a lot of ARM development boards. The latest one I have is an Nvidia Jetson K1 (32-bit), and I will later purchase the 64-bit variant, which uses the new Denver cores, when it becomes available. I really like this board, especially when running programs like Blender, as I have direct access to the GPU's 192 CUDA cores; rendering takes a fraction of the time it would if I did the same project on a MacBook Air i5, so in some areas this little ARM board is faster than the i5. Now, I'm sure I could do the same with the Intel HD 4000 that's currently in the Air, but I would have to write the program from scratch using OpenCL; there are already so many applications written for CUDA that doing this sort of thing is a breeze. In fact, the Jetson board comes packed with such utilities and code examples, perfect for rendering and encoding jobs. This of course has everything to do with offloading a lot of instructions to the GPU, but as with the iPad (A8X) and Metal, I think this is the answer to pushing ARM to the next level: smarter distribution of computing power between the components, including coprocessors like NEON. The CPU just doesn't need to do all of the work anymore, and in a lot of areas it really shouldn't, as the GPU is more efficient and faster at certain tasks.
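    To give a concrete idea of what that GPU offload looks like in CUDA, here is a minimal sketch; it is purely illustrative (not code from any project mentioned here), assumes the CUDA toolkit that ships with the Jetson, and the kernel name, buffer size and gain value are all made up:

    // Illustrative only: scale every pixel value on the GPU instead of the CPU.
    // Same copy-in / kernel / copy-out pattern a CUDA renderer or encoder uses
    // at much larger scale.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void brighten(float* px, int n, float gain) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per pixel
        if (i < n) px[i] *= gain;
    }

    int main() {
        const int n = 1 << 20;                           // ~1M pixel values (made up)
        std::vector<float> host(n, 0.5f);

        float* dev = nullptr;
        cudaMalloc(&dev, n * sizeof(float));
        cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        brighten<<<(n + 255) / 256, 256>>>(dev, n, 1.2f); // 256 threads per block
        cudaDeviceSynchronize();

        cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev);
        printf("first pixel after GPU pass: %f\n", host[0]);
        return 0;
    }

    Compile it with nvcc and the million-element loop runs across the CUDA cores instead of serially on the CPU; that's the basic trick behind the rendering and encoding speedups.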

     



    I'm not really sure what you mean by CPUs and GPUs stalling; in the Intel world maybe, but if you look at the performance charts from the last 3 years, ARM has been jumping up significantly quickly, and the same goes for GPUs. Have you seen what a dual-GPU Nvidia Titan can do, or a Quadro K6000, or even a Tesla card? My goodness, and they're going to get even faster with the next generation, when they start packing over 4,000 CUDA cores onto a single die. Even my little Jetson board with its puny 192 CUDA cores can do things that I simply didn't think were possible a couple of years back. I encode a lot of media files, as I still buy Blu-rays and then rip/encode/upload them as MP4s into the cloud to be shared amongst my family and multiple devices; encoding takes a fraction of the time it would on an i5 or even my dual-CPU Xeon HP Z workstation. It's mind-blowing when most of the waiting comes from the ripping, because mechanical drives can only spin so fast. The process is so much faster that I will never even consider using a CPU for that kind of work again; it's just a waste of time. I can't wait to see an OS mostly written in OpenCL or CUDA, can you imagine? So you're right, Intel chips are faster than ARM, but I have to say there aren't many times when I'm using my Jetson that I say to myself, gosh, I wish this was faster. In fact, if I took the entry-level Mac Mini, installed Ubuntu 14.04, connected both the Jetson and the Mac Mini to the same monitor, keyboard and mouse through a switch box, covered both of them up with small boxes labelled I and II, and then asked you to tell me which one was which, you would have a very hard time telling them apart. They both start up in about the same time, programs launch at about the same time, and it won't be until you start using programs like Blender that you will start to see differences. The UI is definitely a little faster on the i5, but when it comes to rendering the Jetson will wipe the floor with the Mac Mini. With things like OpenOffice, email, or surfing it's really hard to tell, even when watching a movie at 1080p.



    I'm definitely more excited about the world of ARM than I am about Intel, and it's just going to get better and better. There is no stalling; I think you just might be impatient.

  • Reply 124 of 138
    knowitall Posts: 1,648, member
    misa wrote: »
    Nope nope nope. You're comparing a scooter to a car on the basis that they can both go highway speed. ...

    MMX is just an extended instruction set, and ARM has lots of SIMD and 'media' instructions too. Apple uses super-fast coprocessors for some media-related code, but this could be ported to the ARM instruction set (the coprocessor is used because of energy constraints).

    The A7 of the iPad Air is as fast as my iMac of late 2009 (the Rogue graphics of the A7 are faster), so CPU performance is on par with hardware (in real use) from 4 years ago, not six.
    i5 performance is about 2 times higher, but the gap is smaller with the current A8 and even smaller with the A8X. When you stick two A8Xs on a board, performance beats that of the i5.
    iOS uses GCD (Grand Central Dispatch), and apps can use GCD and OpenCL to accelerate code; this makes it possible to beat i5 CPU performance easily (even with the A7).
    At a certain performance point, increased performance isn't useful most of the time, and my iMac of 2009 is fast enough even for 720p encoding, so a dual A8X (possibly on the same die) or an "A8XX" with double the number of cores is more than enough for a MacBook Air (and next year's A9 on 14nm will smoke the i5 and possibly the i7 at similar clock rates).
    The Air would be $200 cheaper in this setup, so that's a huge bonus.
  • Reply 125 of 138
    knowitall Posts: 1,648, member
    relic wrote: »
    ... I just got my iPad Air 2 today but in all honesty I'm looking forward to the Nexus 9 more. ...

    Anything is a disappointment after the iPad Air 2.
    You can use OpenCL instead of CUDA; the only difference is that it's easier to use and can be combined with GCD.
  • Reply 126 of 138
    melgross Posts: 33,510, member
    misa wrote: »
    Nope nope nope. You're comparing a scooter to a car on the basis that they can both go highway speed. ...

    You're making poor comparisons. Many notebooks use Intel's low-power or ultra-low-power i3 chips. The A8X beats most of those. It's well ahead of the top Atom line. Both of those are x86 lines. Are you deliberately forgetting them? Did you think Intel discontinued them?

    If Apple chose to put this chip in a bigger machine, with much bigger battery capacity and better cooling, they could possibly run it at 1700MHz rather than 1500MHz. That would add roughly another 13% in performance, which is not insignificant, as Intel increases performance by between 5% and 15% a year. That would put this chip's CPU performance 50% over last year's, and graphics performance about 2.7 times higher.

    I'm waiting to see the review from AnandTech for a final determination, but this looks really good to me.
  • Reply 127 of 138
    I find it suspicious that only one GPU benchmark was used... The Nvidia Shield Tablet beats the iPad Air 2 in 3DMark. Now granted, CPU performance is better on the iPad Air 2, but considering the base model is twice the price of the Nvidia tablet's base model, I feel the Android tablet still wins. http://www.laptopmag.com/reviews/tablets/apple-ipad-air-2
  • Reply 128 of 138
    relic Posts: 4,735, member
    Quote:
    Originally Posted by knowitall View Post

    Anything is a disappointment after the iPad Air 2.
    You can use OpenCL instead of CUDA, the only difference is that it's easier to use and can be combined with GCD.

     

    Saying OpenCL is easier than CUDA is subjective; I believe the latter to be easier, as CUDA has just so much in the way of code examples, code snippets, programs, etc. When I first got my Nvidia Jetson K1 board I had a project up and working the very day I got it. Not to say you couldn't do the things I'm doing with OpenCL, it would just take longer. The iPad is also not an ideal platform for GPU-computation development; though you could probably make a handy little media encoder app, it would be nothing on the level of what I could create with a platform as open as the Nexus 9. Where most see just another Android tablet, I see 192 CUDA cores just waiting to be tapped into. With the Nexus 9 I can install Nvidia's development platform, which is a Linux distribution based on Ubuntu 14.04. Within this dev OS you have hundreds of handy little applications and scripts that can be used to assist in rendering, encoding media files, clustering (connecting multiple Nvidia K1s into a computing grid), etc. Very exciting stuff. Since I already have a 32-bit Jetson K1 development board and a new Lenovo 4K touchscreen monitor that also contains a 32-bit Nvidia K1, the addition of the Nexus 9 would mean I can connect all 3 of them to my grid. When Nvidia finally releases a 64-bit Denver development board, I will be one of the first to put in an order for 3 boards, bringing my total up to 6, except I will probably make my Lenovo monitor the head node, with the 3 client nodes made up of the 64-bit variant; the diagram and pictures below will give you a better idea of what I am talking about. I will then use the Nexus 9 as a development platform and workstation, connected to a monitor, mouse and keyboard via the dock, and will add it to the cluster when I need the extra power. The 32-bit dev board will be used for another project that I have been wanting to do for a while: a roaming robot that can be controlled over the net, so I can be home even when I'm not.
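    As a side note, here is a minimal sketch of the kind of sanity check you might run on each board before adding it to such a grid. It's illustrative only, assumes the CUDA toolkit is installed, and the 192-cores-per-SM figure is simply the Kepler value that applies to the K1:

    // Illustrative only: list the CUDA devices a node exposes.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, d);
            // Kepler (compute capability 3.x, as on the Tegra K1) has 192 cores per SM.
            int coresPerSM = (p.major == 3) ? 192 : 0;   // 0 = unknown generation
            printf("device %d: %s, %d SM(s), ~%d CUDA cores, CC %d.%d\n",
                   d, p.name, p.multiProcessorCount,
                   p.multiProcessorCount * coresPerSM, p.major, p.minor);
        }
        return 0;
    }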

     

    Potentially, if three works out, I might extend the nodes to look something like this:

    Project Turbo, an actually quite simple design with all plans and code posted, and even parts available for sale so others can build their own. I will use a single rotating camera, though, instead of two fixed ones.

  • Reply 129 of 138
    [@]Relic[/@] - regarding your last 2 posts:

    Could these render boards, stackable server farms also be connected via Thunderbolt to a Mac? Is there an inexpensive Intel board with chipset that would facilitate this?

    I ask because to me, it was the promise of Thunderbolt not only for fast data servers, but also for offline GPU tasks that made it very interesting.

    You might have a product on your hands... or I'm not savvy enough to find something similar on the net without asking you to go into production.

    *** Your robot project is pure GEEK!.... Love it!... :D
  • Reply 130 of 138
    relic Posts: 4,735, member
    Quote:
    Originally Posted by ThePixelDoc View Post

    @Relic - regarding your last 2 posts: Could these render boards / stackable server farms also be connected via Thunderbolt to a Mac? ...

     

    Funny, I have also been researching this. I'm interested in using a Mini-ITX Z87 board as my head node; that way I can connect it directly to my Mac Pro via Thunderbolt. You have good timing too, as there is about to be an influx of such boards onto the market.

     

    Zotac will be an interesting one to watch out for, as their products are not only good but fairly reasonably priced.

     

    But the one I am interested in will be MSI's replacement for their very popular MSI Z87I series, which should also contain 2 Thunderbolt ports. As you can see, the kit includes a mini Nvidia 760GT, so cool and a great card for CUDA programming.

     

    Give me a little more time and I will come up with a good but decently priced system. Do you have any idea how many nodes you want to start out with? I recommend 3, plus the head. Are you more interested in GPU or CPU clustering?

  • Reply 131 of 138
    relic wrote: »
    Funny, I have also been researching this. I'm interested in using a Mini-ITX Z87 board as my head node; that way I can connect it directly to my Mac Pro via Thunderbolt. ...

    Actually, I was pointing out that it's a good idea, that you're mastering the details needed to create such a rendering box, and that it could be a viable product if there isn't already one on the market.

    As for me personally, it would be something I would like to recommend to clients (mostly photographers) that are starting to get into film production. They mostly have iMacs and MBPs of the TB generations, both 1 and 2. This is something, along with RAID servers, that I think many would go for rather than moving up to a Mac Pro. In fact, even though I *was* planning on getting a Mac Pro in January... I'm seriously considering a new fully loaded 5K iMac, and selling this year's model to a client. If these clients ever get serious about adding film services to their photo services and I need to help with post-production, then a render farm of the kind that you describe would be something I would seriously take a look at as well.

    About nodes: I wouldn't have the slightest idea and would surely need your consultation on the benefits of a given config. Also, I suppose as with most things tech, it would come down to price vs. performance gains for the majority of my clients. I would expect anything under €1k that doubles performance or more would be intriguing.

    Again, the most awaited advantage of TB tech for me was the ability to get clients to trust in iMacs and MBPs outfitted with TB, and to be able to extend the power of those machines with break-out boxes for whatever bottleneck they perceived in their workflow in the future, from GPUs to specialty processing cards. Unfortunately, everything has centered mostly around simple data-throughput devices, and *I'm* still waiting to make good on my promises of extending the actual power of the machines themselves. Heck, I even envisioned TB at some point coming to the iPad to turn it into a workhorse when you needed it, similar to your Nexus 9. Even though Apple hates connectors and the TB port is now almost thicker than an iPad, I thought they might find an elegant, unobtrusive way to integrate it for an iPad Pro version, something I'm still patiently waiting for. Maybe it's in Apple's labs now, who knows... but TB is being woefully underutilized at the moment by Intel, Apple, and the tech industry as a whole. It offers so much more than just USB.
  • Reply 134 of 138
    relic Posts: 4,735, member
    Actually, I was pointing out that it's a good idea, that you're mastering the details needed to create such a rendering box, and that it could be a viable product if there isn't already one on the market. ...

    I have a lot of great ideas about my render-box concept; in fact I have been acquiring quotes from numerous manufacturers that produce custom cases. To start out I would like a case that can accommodate 5 Nvidia boards, one of which will be fixed, with the other 4 in slide-out drawers similar to what you get in a RAID case, so people will be able to add more boards easily in the future or even upgrade them. I will use my MakerBot to build those, which is great, as I finally have a real project instead of just using it to replace missing battery covers for remotes and toys (that damn Furby one was almost impossible to make). The case will include one Nvidia Jetson K1 board, an Ethernet switch and of course a power supply. I plan on building 10 to start with, at a planned price of 400-500 each; I already have 7 people interested in one, which is why I'm making 10. That, and most of these case manufacturers won't build me anything without an order of at least 25; I figure I can sell the rest of the cases on the Nvidia board, though after seeing a fully stocked version, people might buy those instead. This isn't a get-rich scheme, I actually just want to break even, but if I lose a couple of grand it won't be a big deal either. This is strictly for fun and something I always wanted to do anyway.

    Never count on Apple to do anything for the professional community; they're mostly a consumer computer manufacturer now, even though they have a Mac Pro. Did you know Apple charges 3,500 dollars to upgrade from the 4-core Xeon E5-1620 to the 12-core E5-2697? The E5-2697 now retails for 2,300 dollars, so you would actually save 1,200 dollars if you bought the chip yourself. 1,200 dollars! Sometimes I think Apple just lives on another planet. In fact you can buy the new 14-core Xeon E5-2695 for 2,700 dollars and still save 800 dollars. As shown by this teardown, you can upgrade the CPU yourself, and now that I know how much I can save, I'm going to buy the quad-core unit, then the 14-core CPU separately, and put the 800 dollars I just saved towards the D700 graphics cards. Same with the memory: Kingston 64GB DDR3-1866 ECC costs 690 dollars vs. the 1,200 dollars Apple wants for theirs, which is probably the same RAM. So in the end I'm just going to buy the base unit with the upgraded graphics cards, purchase the memory and CPU myself, and will have saved some money and come out with a faster machine. Yes, the new Xeon E5-2695 is pin- and power-compatible; in fact I have no doubt that Apple will use it in the next update. I will also keep the SSD at 256GB; I have over 20TB of NAS storage, so internal storage isn't important to me.

    Sorry Doc, got sidetracked. The problem with Thunderbolt is that the external PCIe expansion boxes are just so outrageously expensive, almost to the point where it would actually be better to just buy a PC that has PCIe slots. For laptops, though, you don't have a choice, so in this case I would recommend an mLogic mLink R external Thunderbolt PCIe case; in terms of available power and price it's hard to beat. Then fill it with an NVIDIA Quadro K4000 with 3GB GDDR5. You can't go wrong with this card: it supports 10-bit video, something I am very passionate about, so much so that I have four 10-bit NEC monitors. Performance in Photoshop is great, but where this card will blow your mind is within Premiere, just fantastic. However, like I said before, this setup isn't cheap; expect to pay 1,100 dollars for it. For a superb mobile workstation I personally wouldn't have a problem buying one, and maybe your customers will feel the same way once they've seen how much of a difference it makes.

  • Reply 135 of 138
    relic wrote: »
    I have a lot of great ideas about my render-box concept; in fact I have been acquiring quotes from numerous manufacturers that produce custom cases. ...

    Actually, it's exactly the Mlogic above that I was hoping to avoid.

    I like your concept, with stackable board cases daisy-chained to a Thunderbolt controller board. Something similar to, yet far more advanced than, daisy-chaining FireWire, eSATA or (olden-day) SCSI devices. If it were needed I could see a PCIe adapter board being developed, but a case (or multiple single-use cases) for a single PCIe card is not what I thought would come out of this technological advance.
  • Reply 136 of 138
    knowitall Posts: 1,648, member
    relic wrote: »
    Saying OpenCL is easier than CUDA is subjective; I believe the latter to be easier, as CUDA has just so much in the way of code examples, code snippets, programs, etc. ...

    Sorry that I didn't respond earlier, but I didn't see your reply (kind of hard, considering the size).
    You're absolutely right that the iPad is restricted as a development board.
    Apple has a lot of work to do to make it comparable to other full-blown Unixes, and it has to remove the artificial restrictions.
    Maybe next year when the 13 inch iPad Pro sees the light of day.

    Success with your projects, they look very interesting (and impressive).
  • Reply 137 of 138
    knowitall Posts: 1,648, member
    [@]Relic[/@] - regarding your last 2 posts: Could these render boards / stackable server farms also be connected via Thunderbolt to a Mac? ...

    Excellent idea!
    My thoughts were along the same lines; I was thinking about using Thunderbolt to add new (main) memory to the new Mac minis (to get around the soldered RAM).
    But your idea is even better.
  • Reply 138 of 138
    Tegra K1 real benchmarks (source: Geekbench)
    http://browser.primatelabs.com/geekbench3/1142302

    single core 2034
    multi core 3553

    A8X

    single core 1796
    multi core 4450

    Remember, the Geekbench app is running in 32-bit mode on the K1, so in 64-bit mode we can expect to see about a 10% increase.

    Apple added that third core for nothing. iOS apps are not optimized to use that extra core, and developers aren't going to recode their apps just to support it.


    But the Apple tablet has more apps designed for use on a tablet compared to Android.
    Most of the popular games will work on both platforms.

    If you are planning to buy a new tablet to use a lot of apps, buy the iPad Air 2.

    For games either of them will work, but I recommend the iPad Air 2 again, because some games won't run on the K1 as of now.

    But if you are buying a tablet to watch movies, then buy the Nexus 9, because Android can play just about any file format and it's $100 cheaper than the iPad Air 2.