A5X chip in Apple's new iPad doubles RAM to 1GB - report


Comments

  • Reply 101 of 114
    dfiler Posts: 3,420 member
    Quote:
    Originally Posted by Hiro View Post


    Scheduling overhead is a myth. Multi-core schedulers aren't any more complex than non-trivial single core schedulers.



    The place where multi-core does not get the true n-times linear increase in performance has to do with accessing shared OS resources. So the apps do get linear increase in performance for their non-sharing code, but the probability an app will need to wait for access to its own shared data or the OS to be ready to do something when the app makes an API call goes up. As OSes get better designed with more granular division of capabilities and lock-free accesses this also reduces the frequency and length of waits.



    Point being though, for most tasks there normally isn't a full doubling of performance associated with a doubling of cores. This discussion started with speculation if clock speed had remained constant or if we could infer that by a graph loosely depicting a doubling of performance. Personally, I don't think there is enough evidence yet to make a declaration either way.
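    A rough way to see why a second core rarely yields a full 2x is Amdahl's law; here's a quick illustrative calculation (the 90% parallel fraction is an assumed figure for the sketch, not a measurement):

    ```python
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Amdahl's law: overall speedup is capped by the serial fraction."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / cores)

    # If 90% of the work parallelizes, two cores give ~1.82x, not 2x,
    # and no number of cores can push past 10x.
    print(round(amdahl_speedup(0.90, 2), 2))     # 1.82
    print(round(amdahl_speedup(0.90, 1000), 2))  # 9.91
    ```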
  • Reply 102 of 114
    wizard69 Posts: 13,377 member
    It really looks like Apple beat a bunch of bugs into submission, as that old iPad 1 is running better than it ever has.



    Quote:
    Originally Posted by dfiler View Post


    Wow! 4 or 5 times an hour? What sites are you visiting that result in so many crashes?



    It wasn't site specific. Sometimes I'd go for a long time and then get a couple of crashes almost immediately. Other times the crashes were just plain random.

    Quote:

    If it isn't specific web sites, you definitely have defective hardware.



    I don't think so; everything else ran fine.

    Quote:

    Guess you'll just have to upgrade.



    Well, hopefully that upgrade will be here soon. However, I'm very much impressed with iOS 5.1 on this iPad 1. The whole machine runs much better and Safari is far more stable (I've been stress testing). If this is any indication of how the new iPad will run, then I think I will be extremely pleased when it arrives.
  • Reply 103 of 114
    wizard69 Posts: 13,377 member
    GPUs get almost 100% of any additional cores added as processing for this need is inherently parallel. Apps running on a CPU are an entirely different story.



    Quote:
    Originally Posted by dfiler View Post


    Point being though, for most tasks there normally isn't a full doubling of performance associated with a doubling of cores. This discussion started with speculation if clock speed had remained constant or if we could infer that by a graph loosely depicting a doubling of performance. Personally, I don't think there is enough evidence yet to make a declaration either way.



    Well, it should be obvious that the actual hardware needs to be investigated. Frankly, it is a little disappointing not to see even a small gain in clock rate for either the GPU or CPU. Faster is always nice.
  • Reply 104 of 114
    Hiro Posts: 2,663 member
    Quote:
    Originally Posted by dfiler View Post


    Point being though, for most tasks there normally isn't a full doubling of performance associated with a doubling of cores. This discussion started with speculation if clock speed had remained constant or if we could infer that by a graph loosely depicting a doubling of performance. Personally, I don't think there is enough evidence yet to make a declaration either way.



    That's being awfully pessimistic. Typical average performance will be in the 180-190% range with the second core. If the apps are well written, with no need to share data or OS functionality, non-linear increases yielding performance over 200% are possible, because fewer forced context switches are needed when requesting OS services. That happens because the second core removes the need for a context switch while the OS performs some high-protection internal operation such as a memory management task. OSes are getting better at this kind of service provision. One of the most public examples is Apple's GPU-related graphics performance gains over the past couple of years, which come from OpenGL driver optimizations like this. Grand Central Dispatch is all about this too.



    Without a very carefully calibrated way of measuring CPU use, a user simply can't tell the difference between those small variations around the 200% threshold.
  • Reply 105 of 114
    dfiler Posts: 3,420 member
    Quote:
    Originally Posted by Hiro View Post


    That's being awfully pessimistic. Typical average performance will be in the 180-190% range with the second core.



    Not pessimistic at all. In fact that 180 to 190% is what I was getting at. If the performance did indeed double, it could suggest a minor bump in clock speed. We really won't know either way until people get their hands on them next week. Apple's graph wasn't intended for such precise analysis. I brought this up in response to speculation that the clock speed had remained the same.



    Edit:



    Oops, just saw that the later part of your post I didn't quote was pertinent. Indeed, it will be interesting to see the benchmarks on what the additional GPU cores achieve in iOS.
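    dfiler's back-of-the-envelope reasoning can be made concrete (both numbers below are assumptions for illustration: an exact doubling read off Apple's graph, and a 1.85x midpoint of the 180-190% range):

    ```python
    observed_gain = 2.0   # assumed: the graph shows an exact doubling
    core_scaling = 1.85   # assumed: midpoint of typical dual-core scaling

    # Whatever the second core doesn't account for would have to come
    # from somewhere else, e.g. a clock speed bump.
    implied_clock_bump = observed_gain / core_scaling
    print(f"{(implied_clock_bump - 1) * 100:.1f}%")  # 8.1%
    ```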
  • Reply 106 of 114
    jnjnjn Posts: 588 member
    Quote:
    Originally Posted by dfiler View Post


    Is evidence to illustrate that assertion about an exact doubling of performance with a doubling of cores. Even when hardware has very efficient scheduling, there is still overhead and software considerations.



    I don't understand your first sentence, but I believe you are confusing GPU performance with CPU performance. GPUs perform tasks that are inherently parallel: the rendering of one pixel can be performed without information about the rendering of any other pixel (*). Further, OpenGL instructions carry very little data (except, of course, when bitmaps are transferred), so this isn't a bottleneck. They are instructions, after all, and the GPU issues the parallel execution.

    So scheduling isn't relevant in this situation.

    According to AnandTech (http://www.anandtech.com/show/5663/a...new-apple-ipad), the A5X GPU has the same frequency as the A5. They use the same line of reasoning I do and add info about the benchmarks of the Tegra 3 and A5 processors. They also conclude that Apple's info correlates to 333MHz Tegra 3 GPUs versus 250MHz A5 GPUs. This makes it highly probable that the A5X GPU has the same clock frequency.



    J.



    (*) Note that this is a simplification, of course, and ignores the rendering setup, etc.
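    The pixel-independence jnjnjn describes is what makes GPU work so parallel; a toy software shader sketches the idea (the shade function is purely illustrative):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    WIDTH, HEIGHT = 8, 4

    def shade(x: int, y: int) -> int:
        # Each pixel depends only on its own coordinates, never on a
        # neighbouring pixel -- so pixels can be computed in any order.
        return (x * 31 + y * 17) % 256

    def render_row(y: int) -> list:
        return [shade(x, y) for x in range(WIDTH)]

    # Rows can be farmed out to workers with no locks and no coordination.
    with ThreadPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(render_row, range(HEIGHT)))

    serial = [render_row(y) for y in range(HEIGHT)]
    assert parallel == serial  # identical image regardless of scheduling
    ```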
  • Reply 107 of 114
    dfiler Posts: 3,420 member
    Quote:
    Originally Posted by jnjnjn View Post


    I don't understand your first sentence, but I believe you are confusing GPU performance with CPU performance. GPUs perform tasks that are inherently parallel: the rendering of one pixel can be performed without information about the rendering of any other pixel (*). Further, OpenGL instructions carry very little data (except, of course, when bitmaps are transferred), so this isn't a bottleneck. They are instructions, after all, and the GPU issues the parallel execution.

    So scheduling isn't relevant in this situation.

    According to AnandTech (http://www.anandtech.com/show/5663/a...new-apple-ipad), the A5X GPU has the same frequency as the A5. They use the same line of reasoning I do and add info about the benchmarks of the Tegra 3 and A5 processors. They also conclude that Apple's info correlates to 333MHz Tegra 3 GPUs versus 250MHz A5 GPUs. This makes it highly probable that the A5X GPU has the same clock frequency.



    J.



    (*) Note that this is a simplification, of course, and ignores the rendering setup, etc.



    I meant to say "is there evidence?"



    But you're still fixating on scheduling, a minor aside and a distraction from the main point, which was that we don't know yet whether the clock speed has stayed the same or been upgraded. The doubling seen in the graph isn't really evidence of it remaining the same.



    As for perfectly parallelizable computation, that's definitely an interesting topic, and I don't want to get stuck in a debate about it. But while GPU operations are generally much more parallel, they aren't completely so. It's that _almost_ 100% that was the point.
  • Reply 108 of 114
    jnjnjn Posts: 588 member
    Quote:
    Originally Posted by dfiler View Post


    I meant to say "is there evidence?"



    But you're still fixating on scheduling, a minor aside and a distraction from the main point, which was that we don't know yet whether the clock speed has stayed the same or been upgraded. The doubling seen in the graph isn't really evidence of it remaining the same.



    As for perfectly parallelizable computation, that's definitely an interesting topic, and I don't want to get stuck in a debate about it. But while GPU operations are generally much more parallel, they aren't completely so. It's that _almost_ 100% that was the point.



    Another interesting point from AnandTech is the possibility that the A5X feature size is still 45nm.

    That would explain why the frequency is still the same (250MHz) rather than raised to something like 333MHz.

    I would expect the GPU frequency to scale with the feature size if the chip design isn't otherwise changed.



    J.
  • Reply 109 of 114
    Quote:
    Originally Posted by Relic View Post


    Would be nice if there was a developer option to turn on true multitasking though. Sometimes I'm running a web application that gets all screwed up just because I switch over for a sec to change the music.



    If that were an issue of iOS not having "true multitasking", I wouldn't also see it all the time on my Galaxy S phone. It's a memory issue, not anything to do with multitasking.
  • Reply 110 of 114
    jragosta Posts: 10,473 member
    Quote:
    Originally Posted by Relic View Post


    Would be nice if there was a developer option to turn on true multitasking though. Sometimes I'm running a web application that gets all screwed up just because I switch over for a sec to change the music.



    Then complain to the people who wrote the web application. Sounds like an application problem and not an OS problem.
  • Reply 111 of 114
    gatorguy Posts: 24,212 member
    Here's one I didn't see coming. The latest benchmark results for the new iPad show the "old" iPad 2 has better on-screen graphics performance.



    http://www.ign.com/articles/2012/03/...vidias-tegra-3



    "For graphics testing, we used GLBenchmark 2, a cross-platform app for both Android and iOS. The app offers an array of detailed tests, both on-screen and off-screen, including two notable 3D tests: GLBenchmark 2.1 Egypt and GLBenchmark 2.1 Pro. While the on-screen tests allow users to watch the tests in realtime, the app currently lacks the ability to manually set the output resolution, thus limiting our ability to standardize the test across all four devices. Instead, the app ran at the native resolution of whatever device it was running on. Interestingly, the iPad 2 outperformed both the iPad 3 and Transformer Prime, while the Samsung Galaxy Tab 10.1 fell far below the others."
  • Reply 112 of 114
    Marvin Posts: 15,320 moderator
    Quote:
    Originally Posted by Gatorguy View Post


    Here's one I didn't see coming. The latest benchmark results for the new iPad shows the "old" ipad2 has better on-screen graphics performance.



    http://www.ign.com/articles/2012/03/...vidias-tegra-3



    That was to be expected. They increased the resolution by 4x and the GPU by 2x, so running tests at native resolution will show the new one to be slower. The last test, where they all run at 720p, shows the iPad 3 to be faster (50-100% faster than the iPad 2 and 100-200% faster than Tegra 3).



    Just like the iPhone, only the next revision will improve performance at native resolution.
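    Marvin's arithmetic can be checked directly with relative numbers (the 2x GPU and 4x pixel figures are taken from the posts above, not measured here):

    ```python
    gpu_throughput = 2.0  # iPad 3 GPU vs iPad 2, per the thread
    pixel_count = 4.0     # 2048x1536 vs 1024x768 = 4x the pixels

    # On-screen at native resolution: each frame costs 4x, GPU is only 2x faster.
    native_fps_ratio = gpu_throughput / pixel_count
    print(native_fps_ratio)  # 0.5 -> the new iPad looks slower on-screen

    # Off-screen at a fixed 720p: per-frame cost is equal, raw GPU speed wins.
    fixed_fps_ratio = gpu_throughput / 1.0
    print(fixed_fps_ratio)   # 2.0 -> the new iPad pulls ahead
    ```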
  • Reply 113 of 114
    relic Posts: 4,735 member
    Quote:
    Originally Posted by Apple ][ View Post


    So you're basically confirming what I claimed, that Java sucks. You're making excuses for poor performance on Android.



    You can compile and use C++ code with ease on Android; you don't have to use Dalvik. The laggy experience that people had with Android 3.02 is virtually gone in Android 4.03. I do think that 3.02 was a rushed job to try to improve the tablet experience, and it failed. Ice Cream Sandwich, on the other hand, is headed in the right direction and is much better. It might not be for everyone, especially the iPad user who just wants something simple and fast. Android is for those who want a little more control and customization in their OS; there is room for both to coexist in harmony.



    If you don't like it, then don't use it. Most of the negative comments are from people who have never used Android 4.03 or are just following everyone else's cue.
  • Reply 114 of 114
    relic Posts: 4,735 member
    Quote:
    Originally Posted by jragosta View Post


    Then complain to the people who wrote the web application. Sounds like an application problem and not an OS problem.



    It only happens in iOS; OS X and Android run them perfectly. It's the pausing of apps that iOS does: it destroys the app's state when you switch away.