TSMC confirmed as manufacturer of Apple's 20nm A8 processor


Comments

  • Reply 101 of 112
    gtrgtr Posts: 3,231member
    Quote:
    Originally Posted by cnocbui View Post

     

    The situation in India appears to be a case of an Indian court getting on its high horse and basically summoning Samsung's CEO, even though the case does not appear to involve him directly. It is as if an Indian court were to issue a summons on Tim Cook because an Indian business claimed an Apple subsidiary in Oman owed them money. And it's not millions, it's $1.4 million, and it involves a subsidiary, Samsung Gulf Electronics, which in turn alleges it was the victim of a multi-million-dollar fraud.


     

    If you're attempting to convince anybody here that Samsung is a pure and innocent organisation then you're doing a terrible job.

     

    The situation in India is not just the case of "an Indian court" as you claimed. It's a case involving India's SUPREME COURT, a court that would not issue an arrest warrant without justification.

     

    Your post also fails to take into consideration the multitude of other times that Samsung has been caught engaging in similar behaviour.

     

    Samsung are just not trustworthy.

     

    And the sooner the businesses that deal with them move away, the better.

  • Reply 102 of 112
    Quote:

    Originally Posted by Relic View Post





    If that isn't crazy enough, that CPU clocks in at 2.5GHz and has twice the L1 cache and four times the L2 cache of Apple's A8. The real $64,000 question will be the battery life. Nvidia claims that it is better than the current 32-bit version; we will just have to wait and see. I will definitely be buying a development board. I love my current K1 dev board; I can encode an entire Blu-ray image using its 192 CUDA cores and Nvidia's dev apps in under 40 minutes.

     

    The reason the K1 has the larger caches is because it needs them in order to function (because of how it tries to optimize instructions on the fly). You cannot imply that Denver is somehow superior because it has a larger cache. A better way to put it: if it had the same size cache as the A8, it would take a serious performance hit. Denver REQUIRES larger caches simply to function normally.

     

    And battery life is the big issue. While your development board can encode your Blu-ray very quickly, have you ever bothered to monitor how much power it consumed over those 40 minutes? I bet it's so high that it could also kill a battery in that same 40 minutes. Tests of the Shield K1 show good battery life doing normal tasks, but short battery life when gaming (and those games didn't even stress the 192 cores to their limits).

     

    There's a reason Nvidia has a reputation for over-promising and under-delivering.

  • Reply 103 of 112
    Quote:

    Originally Posted by Relic View Post





    Thanks NHT, though I already discussed and stated that the Exynos 5433, the CPU used in the Samsung Note, was a 64-bit CPU before being neutered; it's now a 32-bit chip, which makes more sense as the 64-bit Android version hasn't been released yet. The 2 x 32-bit memory controller still remains, though, and after checking out the benchmarks for the Note 4 I have to say its performance is pretty stellar. The A8 is still faster, and I don't think the performance would have changed much if the Exynos 5433 had remained at 64-bit.

     

    What is fantastic, and what people who praise the K1, the Note 4 or whatever keep missing, is the efficiency of the A8 at a low clock speed and the claimed absence of throttling.

     

    That alone sets the A8 apart from all other chips. The K1 will have fantastic peak performance, but if it keeps it up for even a little while it will kill the battery. That's a fact. Current benchmarks seem fixated on very short tests and not enough on sustained tests. There's very little work that would require a CPU to run at very high intensity on many cores for more than a few minutes at a time (maybe video and photo processing, but that is probably better served by being offloaded to specialised hardware). Seemingly a lot of Apple's extra 1 billion transistors are used for video and photo processing.
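
    For what it's worth, here is the kind of sustained test I mean; a rough Python sketch (pure standard library, nothing board-specific, and single-threaded on purpose since the trend matters more than the absolute number): run a fixed chunk of work over and over for several minutes and log the throughput per 30-second window. On a chip that throttles, the later windows come in noticeably lower than the first.

        import time
        import hashlib

        CHUNK = b"x" * (1 << 20)      # 1 MiB of data to hash per unit of work
        WINDOW_SECONDS = 30           # report throughput every 30 seconds
        TOTAL_MINUTES = 10            # run long enough for thermals to matter

        def unit_of_work():
            # Fixed CPU-bound task: hash 1 MiB a few times.
            for _ in range(8):
                hashlib.sha256(CHUNK).digest()

        def sustained_benchmark():
            end = time.time() + TOTAL_MINUTES * 60
            window_end = time.time() + WINDOW_SECONDS
            count = 0
            while time.time() < end:
                unit_of_work()
                count += 1
                if time.time() >= window_end:
                    # Approximate rate for the window just finished.
                    print("%.1f units/s over the last window" % (count / float(WINDOW_SECONDS)))
                    count = 0
                    window_end += WINDOW_SECONDS

        if __name__ == "__main__":
            sustained_benchmark()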

     

    The main reason for having a fast CPU core and GPU that doesn't throttle on a mobile device is gaming, of course; right now I don't see another major use case, though there may be one.

     

    The inefficiency of Android chips is less of an issue on tablets; they'll compete much better there, though I suspect the A8 CAN go to 2GHz and really show what it can do there. We will see in October.

  • Reply 104 of 112
    relicrelic Posts: 4,735member

    Quote:
    Originally Posted by EricTheHalfBee View Post

     

     

    The reason the K1 has the larger caches is because it needs them in order to function (because of how it tries to optimize instructions on the fly). You cannot imply that Denver is somehow superior because it has a larger cache. A better way to put it: if it had the same size cache as the A8, it would take a serious performance hit. Denver REQUIRES larger caches simply to function normally.

     

    And battery life is the big issue. While your development board can encode your Blu-ray very quickly, have you ever bothered to monitor how much power it consumed over those 40 minutes? I bet it's so high that it could also kill a battery in that same 40 minutes. Tests of the Shield K1 show good battery life doing normal tasks, but short battery life when gaming (and those games didn't even stress the 192 cores to their limits).

     

    There's a reason Nvidia has a reputation for over-promising and under-delivering.

     


     

    The Dynamic Code Optimization, or DCO, doesn't use the L1 or L2 cache; it has its own 128MB cache. Denver could actually get away with using less cache because the two CPU cores use a 7-wide superscalar microarchitecture, meaning each can execute up to seven instructions per CPU cycle. The Nvidia Denver is the only ARM CPU that goes that wide; most of the rest are two or three wide. There are no benchmarks available as of yet, so we will have to wait to see if this has any effect on performance. The extra cache is there for the same reason Apple's A8 has cache: faster execution times for frequently used code. The DCO is one of the most beneficial features of the Denver SoC, though: it optimizes frequently used software routines at run time into dense, highly tuned microcode routines, which are then stored in a dedicated, 128MB main-memory-based optimization cache, which again means it doesn't need to use the L1 or L2 cache. Also note that the rest of the CPU can't access that cache. DCO, however, is not a required function of the CPU as you've suggested. In cases where code may not be frequently reused, i.e. wouldn't benefit from the DCO's process, Denver can process those ARM instructions directly, effectively bypassing the DCO altogether.
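
    If it helps to picture it, here is a toy software analogy of what the DCO does; purely illustrative Python, obviously nothing like the real hardware or microcode translation: cold code takes the plain path, and once a routine has been seen enough times it gets "translated" once and served out of a separate optimization cache from then on.

        # Toy model of a dynamic optimization cache (illustration only).
        HOT_THRESHOLD = 3          # how many runs before a routine counts as "hot"

        hit_counts = {}            # how often each routine has been seen
        optimization_cache = {}    # "translated" routines, the analogue of the 128MB cache

        def optimize(func):
            # Stand-in for the optimizer: pretend we produced a tuned version.
            def tuned(*args):
                return func(*args)
            return tuned

        def run(name, func, *args):
            cached = optimization_cache.get(name)
            if cached is not None:
                return cached(*args)                      # hot path: serve the tuned version
            hit_counts[name] = hit_counts.get(name, 0) + 1
            if hit_counts[name] >= HOT_THRESHOLD:
                optimization_cache[name] = optimize(func) # translate once, reuse afterwards
            return func(*args)                            # cold path: execute directly

        # Example: from the fourth call onward, "square" is served from the cache.
        for i in range(5):
            run("square", lambda x: x * x, i)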

     

    If you read my post, I'm using a desktop development board, not a mobile device; I'm interested in using the Denver K1 in the same manner as its 32-bit brother, you know, mad-scientisty things. The encoding process I used would have brought any battery to its knees, including an iDevice with the A8 processor. That's if the A8's PowerVR GPU had the same access to its GPU cores for computations.

    Right now the process to encode one of my Blu-rays is really complicated, so I'm currently working through the workflow and writing a program that will automate the entire process. I first have to log in to my little Nvidia Jetson K1 (still plugged in under my bed at home) using X server; I'm using my son's Nvidia Shield. I then launch an instance of the LXDE environment, open up a terminal from there, and log in to one of my Sony HES-V1000 200-disc Blu-ray changers. I have two, with about 360 Blu-rays (I didn't buy them new; I got one on eBay and the other from Ricardo, the Swiss version of eBay). The HES-V1000 can copy its physical media to a local internal 500GB drive, which I of course upgraded to 3TB. I then copy over a movie that I want to watch to an external hard drive (LaCie Cloud Drive) that is connected to my Nvidia Jetson, then start the encoding process using an old Python script that I wrote when I encoded all of my DVDs (I built a machine with six DVD drives; it used an AMD CPU that I overclocked). This time, though, I'm using Nvidia's encoder (the program that uses the GPU cores to encode); the process takes between 40 minutes and an hour (that's fast, guys; try encoding just a DVD using your MacBook and don't be surprised if it takes a few hours even with an i7). When finished, it automatically uploads the freshly created .mp4 file to OneDrive (we currently have four 1TB accounts, one for each family member), then deletes the copy (or the original, whichever way you think of it) from the external drive and sends me an SMS saying it's finished. Using any of my devices, including my little Nokia 1020, I just open up OneDrive, go to the movies folder, click on the movie and then fall asleep after 30 minutes.

    I'm not talking badly about the A8 here, so don't take it the wrong way; I would love to buy an A8 development board. For me the Nvidia K1 is the perfect CPU to play with; both the 32-bit and 64-bit versions are incredibly powerful. I'm not just talking about speed here: even if the 64-bit version doesn't manage to match or surpass Apple's A8, the things I will be able to do with it are simply geek magic, not only with the CUDA cores but because I can actually tell Denver how to execute my code; the possibilities are endless. Sorry for the exaggerated excitement, I just love really powerful tech that I have complete control over.
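
    The automation script itself is mostly just glue; something along these lines (a rough Python sketch with made-up paths, a stand-in encoder command — "my_gpu_encoder" here is only a placeholder — and stub upload/SMS functions, not my actual code):

        import os
        import shutil
        import subprocess

        # All of these paths and commands are placeholders for illustration.
        CHANGER_SHARE = "/mnt/hes-v1000"        # rip copied off the Blu-ray changer
        WORK_DIR = "/mnt/lacie/encode"          # external drive attached to the Jetson
        ENCODER_CMD = ["my_gpu_encoder", "-i", "{src}", "-o", "{dst}"]  # stand-in for the GPU encoder

        def encode(src, dst):
            # Run the GPU encoder as an external process and wait for it to finish.
            cmd = [part.format(src=src, dst=dst) for part in ENCODER_CMD]
            subprocess.check_call(cmd)

        def upload_to_onedrive(path):
            # Stub: in the real script this pushes the .mp4 to OneDrive.
            pass

        def send_sms(message):
            # Stub: notify me that the encode is done.
            pass

        def process_movie(title):
            src = os.path.join(CHANGER_SHARE, title + ".m2ts")
            local = os.path.join(WORK_DIR, title + ".m2ts")
            out = os.path.join(WORK_DIR, title + ".mp4")

            shutil.copy(src, local)        # 1. pull the rip onto the external drive
            encode(local, out)             # 2. GPU encode (the 40 min to 1 hr step)
            upload_to_onedrive(out)        # 3. push the finished .mp4 to the cloud
            for path in (local, out):      # 4. clean up the working copies
                os.remove(path)
            send_sms(title + " is ready")  # 5. tell me it's done

        if __name__ == "__main__":
            process_movie("some_movie")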

     

    I have an Nvidia Shield Tablet, so I am in the unique position of being able to discuss the battery life: it's definitely on the above-average side. I get between 9 and 10 hours with the display at 50%, Wi-Fi and Bluetooth disabled and just LTE enabled. Normal activities for me consist mostly of surfing, maybe watching a film or two, watching a few episodes of Hell's Kitchen; music is definitely being streamed almost all of the time, or played locally, using Spotify. I probably use the SSH terminal and X terminal the most, as I use them to do some light programming, either with vi or by starting an instance of NetBeans through X. The Shield works extremely well with X server. I have a white wall to the right of my bed, so I connect my little Vivitek Q7 (the smallest beamer I could find before going with a Pico) via the built-in HDMI, then sync up my MS Wedge keyboard and mouse through Bluetooth. I then have a pretty good little programming environment. When playing, say, Modern Combat 5, I get 4.5 to 5.5 hours of solid game play. So it's not iPad battery life, but it's decent.

     

    The Shield is going to my son; I'm waiting for the new Nexus 9, which is going to be the device that not only launches Nvidia's Denver but also brings 64-bit to the Android community with the release of Android 5, or as we know it, "L". Specs of the Nexus 9 are supposed to include a 64-bit Nvidia Denver K1, 4GB of RAM, HDMI out (it's for gaming) and Miracast wireless display, all built by HTC (Google went with HTC because they were impressed with the job they did on the Project Tango tablet); it will also be HTC's first consumer tablet in four years. I like HTC, they make pretty good hardware; hopefully we will see something a little nicer than what Asus did with the Nexus 7. The Nexus 10 is ugly as sin, but the build quality and hardware specs were, and still are, pretty decent.

     

    My plan for the Nexus 9 is to install the same Ubuntu image that is on the Jetson K1 development board and make it into a dual-boot system with Android as the other OS. With the built-in HDMI out it should make a pretty good little workstation. These chips are as powerful as the Haswell Celeron that is in my daughter's HP Chromebox, and that runs Ubuntu beautifully, so it's only natural that my 32-bit Jetson K1 runs Ubuntu without so much as a hiccup. Why all the fuss, you ask; are you crazy, Relic? Nope, GPU computing is cool. Using Cudamat and Theano in Python, I'm going to start rewriting some of my older applications, stuff mostly pertaining to trading and a lot of market data analysis, and I will create some new apps, like my reworked Blu-ray encoder that uses the GPU as a CPU. Some of my market data analyzers take a very long time to calculate, create reports and create graphs; I'm hoping to speed them up at least tenfold. I was instantly convinced the first time I used the GPU to encode a movie; holy crap was it fast. I also want to learn more about load balancing, distributed computing and clustering using multiple Jetsons to make compiling, rendering, encoding, etc. faster and faster and faster. At 180 bucks a pop they're a steal, and the CUDA community is also awesome to work with; they love helping out new people and creating community projects. I just need to keep my mind busy and active guys, I think it's the only way I'm going to beat this f*cking breast cancer.
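
    To give you an idea of the kind of speed-up I'm after: the heavy part of something like a covariance calculation over daily returns is one big matrix product, and that's exactly the piece that moves to the GPU. A minimal sketch, assuming cudamat is installed on the board and using its usual cublas_init / reformat / CUDAMatrix / dot calls as in its examples (the returns matrix here is just random made-up data):

        import numpy as np
        import cudamat as cm

        def gpu_covariance(returns):
            # returns: (days x assets) NumPy array of daily returns.
            demeaned = returns - returns.mean(axis=0)   # centre each column on the CPU
            days = demeaned.shape[0]

            cm.cublas_init()                            # set up the CUDA/CUBLAS context
            a = cm.CUDAMatrix(cm.reformat(demeaned.T))  # (assets x days) on the GPU
            b = cm.CUDAMatrix(cm.reformat(demeaned))    # (days x assets) on the GPU
            product = cm.dot(a, b)                      # (assets x assets), computed on the GPU
            cov = product.asarray() / (days - 1)        # copy back and finish on the CPU
            cm.cublas_shutdown()
            return cov

        if __name__ == "__main__":
            fake_returns = np.random.randn(2500, 200)   # made-up data: ~10 years x 200 instruments
            print(gpu_covariance(fake_returns).shape)   # (200, 200)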

  • Reply 105 of 112
    relicrelic Posts: 4,735member
    Quote:
    Originally Posted by foggyhill View Post

     

    What is fantastic, and what people who praise the K1, the Note 4 or whatever keep missing, is the efficiency of the A8 at a low clock speed and the claimed absence of throttling.

    That alone sets the A8 apart from all other chips. The K1 will have fantastic peak performance, but if it keeps it up for even a little while it will kill the battery. That's a fact. Current benchmarks seem fixated on very short tests and not enough on sustained tests. There's very little work that would require a CPU to run at very high intensity on many cores for more than a few minutes at a time (maybe video and photo processing, but that is probably better served by being offloaded to specialised hardware). Seemingly a lot of Apple's extra 1 billion transistors are used for video and photo processing.

    The main reason for having a fast CPU core and GPU that doesn't throttle on a mobile device is gaming, of course; right now I don't see another major use case, though there may be one.

    The inefficiency of Android chips is less of an issue on tablets; they'll compete much better there, though I suspect the A8 CAN go to 2GHz and really show what it can do there. We will see in October.
    In no way do I want to come off as an a-hole, as I like you; I think you're a smart cookie and probably look good in a tuxedo. Sorry, I just finished watching a James Bond film. Do you honestly believe that a mobile CPU in 2014, one that took five years to complete, wouldn't have CPU throttling? Not only does the Nvidia Denver have low-latency power-state transitions and power gating, it also has dynamic voltage and CPU clock scaling, which of course is based on workload. Just like the A8. Now, Apple's solution is probably better, more elegant, etc., but devices that utilize the 64-bit K1 will all have above-average battery life, just like its 32-bit sibling. The Shield, for instance, can easily reach nine-plus hours of normal usage. Yes, gaming does tend to eat up the battery quickly, but five hours of play time with games like Nova 3, Dead Trigger and Call of Duty: Strike Team isn't so bad. It will also most definitely get better after Google releases the finished Android 5; my Nexus 5 gained an hour and 30 minutes of battery time with the "L" preview. Like you said, let's wait to see what kind of battery life the Nexus 9, which will be utilizing the new 64-bit Denver, offers before jumping to conclusions about terrible battery life.
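
    If you want to actually watch that scaling happen on the K1 (or any Linux ARM board), the kernel already exposes it through the standard cpufreq sysfs files; a quick-and-dirty poller like this is enough (the paths are the stock Linux ones, so whether they are all present depends on the device and kernel):

        import time

        # Standard Linux cpufreq sysfs files; exact availability depends on the kernel.
        CUR = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"
        MAX = "/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq"

        def read_khz(path):
            # cpufreq reports frequencies in kHz as plain text.
            with open(path) as f:
                return int(f.read().strip())

        def watch(seconds=60, interval=1.0):
            max_khz = read_khz(MAX)
            for _ in range(int(seconds / interval)):
                cur_khz = read_khz(CUR)
                print("%4d MHz (%3d%% of max)" % (cur_khz // 1000, 100 * cur_khz // max_khz))
                time.sleep(interval)

        if __name__ == "__main__":
            watch()

    Run it in one terminal while you load the cores in another and you can see the governor ramp the clocks up and back down as the workload and temperature change.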

    Was I too mean? You can spank me if you want; aaaahhhh, damn hormone shots.
  • Reply 106 of 112
    nhtnht Posts: 4,522member
    Quote:

    Originally Posted by Relic View Post

     

    That's if the A8's PowerVR GPU had the same access to its GPU cores for computations.


     

    You probably can via Metal but I have not looked.

     

    Quote:

    The Shield is going to my son; I'm waiting for the new Nexus 9, which is going to be the device that not only launches Nvidia's Denver but also brings 64-bit to the Android community with the release of Android 5, or as we know it, "L". Specs of the Nexus 9 are supposed to include a 64-bit Nvidia Denver K1, 4GB of RAM, HDMI out (it's for gaming)

     


     

    Mmmm... I was going to go for the Nvidia Shield Tablet rather than wait for the Nexus. What do you think of the build quality of the Nvidia? I guess I might as well wait until October.

     

    Quote:


    At 180 bucks a pop they're a steal, and the CUDA community is also awesome to work with; they love helping out new people and creating community projects. I just need to keep my mind busy and active guys, I think it's the only way I'm going to beat this f*cking breast cancer.


     

    Huh...I didn't know it was that cheap.  Maybe I'll get one but honestly I don't have the time to play. G'luck on the latter.

  • Reply 107 of 112
    nhtnht Posts: 4,522member
    Quote:
    Originally Posted by Relic View Post



    Do you honestly believe that a mobile CPU in 2014, one that took five years to complete, wouldn't have CPU throttling?

     

    Seems Apple spent a lot of the 20nm die-shrink benefits on thermal headroom and extra GPU cores. It'll throttle eventually, but I suspect not for normal iOS gaming.

  • Reply 108 of 112
    hill60hill60 Posts: 6,992member

    Well that's ten million A8 processors sold in 3 days that Samsung missed out on.

  • Reply 109 of 112
    relicrelic Posts: 4,735member
    Wow, kick them while they're down. What's next, sleep with Samsung's wife, call the homeowners' association and complain about their novelty post box? Because, you know, it clearly states in the manual, under home improvements, page 14, paragraph 8: "Mail boxes are to be of one unifide design; if this does not meet the motif or design of the house, the homeowner may ask for a special session to be voted on." Samsung wrote the manual, didn't he? Yeah, why? I can tell; the guy couldn't spell if a gun was pointing at his prized beagle, Baily Foster Samsung. It's unified, not unifide. This isn't one of your silly forums, Terry; you know Samsung has dyslexia, don't be a jerk.
  • Reply 110 of 112
    relicrelic Posts: 4,735member
    nht wrote: »
    You probably can via Metal but I have not looked.

    As Metal finally brings low-level GPU compute to iOS, yes, well, in theory you can; it's just never been done before. Keep in mind, though, that using Swift and Metal to create even a simple media-encoder iOS app that utilizes the GPU in the same fashion as the K1 will be one hell of a daunting project. There just isn't much in the way of information, or even sample code, on using the GPU in, say, an iPad for complex computations. At least nowhere even close to the literally thousands of open-source libraries, code templates, scripts and entire programs, or the awesome online support, that Nvidia offers for free through its forums and email. OpenCL, though paraded around as the next big thing by Apple, has seen a very slow uptake from them and still isn't exposed on iOS; with iOS 8 and Metal finally opening the GPU up to compute, it's only a matter of time before an iPad could possibly be used in conjunction with, say, Photoshop to shave time off applying filters or rendering.

    Mmmm... I was going to go for the Nvidia Shield Tablet rather than wait for the Nexus. What do you think of the build quality of the Nvidia? I guess I might as well wait until October.

    You might as well, but you can't go wrong with the Shield either. The build quality is actually fairly nice for an all-plastic tablet; it's a lot like my Nokia 2520 in that it feels very solid, and I especially like the grippy back. I'm a huge fan of devices that have built-in HDMI: when the Shield is connected to a monitor it looks like any other normal computer desktop, and with the power of the K1 it definitely runs like one as well. The new Nexus, I'm sure, will also be a fairly decent tablet, if only for the power alone; 4GB of RAM and that new monster CPU are going to be a great start for 64-bit computing on Android. I'm actually looking forward to installing Arch Linux and ChromeOS on it; the Nexus has an open bootloader, making the installation of other OSes a fairly trivial thing to do. There will also be a keyboard case available for it out of the gate, so it will be great for those who liked the idea of a UMPC but were always frustrated with the power they offered. I still have my OQO, maybe one of the most attractive mini computers ever made; what I would give to have the new K1, 4GB of RAM and a 1080p screen inserted into that design.

    Huh...I didn't know it was that cheap.  Maybe I'll get one but honestly I don't have the time to play. G'luck on the latter.

    For what it is and does, I think so, more so now that it's given me something really interesting to do while I'm still stuck in this bed. I actually chose OpenCL as the GPU language to learn first. In fact, before I even considered the Jetson or even knew of its existence, I tried purchasing a development board from PowerVR. Well, the board wasn't from them; it's made by a company by the name of HardKernel. Strange name, but even so their development boards are actually very nice and come highly recommended. PowerVR worked very closely with HardKernel (man, that name sounds like a horny teenage nerd's AOL handle from the '90s) in designing this board. It contained an Exynos 5410, which is the same CPU that Samsung uses for the Galaxy S4, a PowerVR SGX544MP3 and a slew of inputs, including USB 3.0. It wasn't the most powerful setup, but I wanted it to play around with PowerVR's GPU compute efforts, which are similar to Nvidia's. Unfortunately the board was dead on arrival, and when I sent it back I decided to cancel my order and buy Nvidia's Tegra 4 development board instead. I'm glad I did, as a really important thing for me when learning a new language is code templates: framework code that I can build upon. Now, OpenCL of course has lots of learning materials and code to use, but my God, CUDA has just an overwhelming amount of it, making it extremely easy to get my projects up and running. In comparison, my OpenCL code was just taking too long to yield positive results. I of course blame this solely on myself; when things aren't going my way, or when I'm frustrated, I get bored and lose interest, and I'm sure if I had spent more time reading and learning I could have gotten over that initial unproductive hump. CUDA, though, gave me such great results so quickly that it was hard to ignore. The Nvidia boards also come with so many great tools that simply don't exist on the OpenCL side; open source is great, but there's also something to be said for elegantly made development tools from a determined company.
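
    To give you an idea of how low the barrier to entry is on the CUDA side, this is roughly what a first GPU kernel looks like through PyCUDA; a sketch along the lines of PyCUDA's own introductory example, assuming PyCUDA and a working CUDA toolchain are installed on the board:

        import numpy as np
        import pycuda.autoinit                  # creates a CUDA context on import
        import pycuda.driver as drv
        from pycuda.compiler import SourceModule

        # The kernel itself is plain CUDA C, compiled at runtime by nvcc.
        mod = SourceModule("""
        __global__ void multiply_them(float *dest, float *a, float *b)
        {
            const int i = threadIdx.x;
            dest[i] = a[i] * b[i];
        }
        """)
        multiply_them = mod.get_function("multiply_them")

        a = np.random.randn(400).astype(np.float32)
        b = np.random.randn(400).astype(np.float32)
        dest = np.zeros_like(a)

        # drv.In/drv.Out handle the host <-> device copies for us.
        multiply_them(drv.Out(dest), drv.In(a), drv.In(b),
                      block=(400, 1, 1), grid=(1, 1))

        print(np.allclose(dest, a * b))         # True if the GPU result matches NumPy

    That is essentially the whole program; getting to the same point in raw OpenCL host code takes a lot more boilerplate before you ever launch a kernel.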

    The Nvidia Jetson K1 board is a blast to own, and I highly recommend getting one. Even if it turns out that you simply don't have the time to use it, you can still use it as a computer, a server or an entertainment system for your TV.
  • Reply 111 of 112
    nhtnht Posts: 4,522member
    I haven't done GPGPU coding, but I have looked at the docs and tutorials and I agree about CUDA vs OpenCL in terms of developer support (code examples, tutorials, tools, etc.). My visceral reaction was kind of like my feelings about DirectX vs OpenGL: sure, one is proprietary and stuck on one platform and the other is multi-platform, but I just liked it better.