'A5X' CPU featured on purported Apple 'iPad 3' logic board


Comments

  • Reply 101 of 146
    Quote:
    Originally Posted by DrDoppio View Post


    Either that, or some ideas are so obvious that many realize them soon enough



    I prefer to think of us dipping our toes in the stream of universal consciousness.
  • Reply 102 of 146
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by SolipsismX View Post


    I don't think the iPad uses VRAM, but DRAM. Since the IGP needs to allocate RAM from the system RAM I'd think 512MB would be too little for pushing 4x as many pixels.



There is an advantage in providing dedicated VRAM for the GPU, especially if integrated on the SoC. It has the potential of controlling bandwidth demands on main memory, lowering power if on-chip, and freeing up cache.



    Just because the GPU is on the same die as the CPU, that does not mean that one has to design it to use main memory for the frame buffer.



All that being said, it would be foolish of Apple not to upgrade the RAM in the iPad 3. Many apps are crying out for more RAM.
  • Reply 103 of 146
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by cy_starkman View Post


    What has that got to do with the device? Nothing



    He was talking about the need for more RAM. If you don't understand the relationship Safari has with RAM and how the lack of RAM impacts Safari on the iPad then be quiet.



    [insult removed]
  • Reply 104 of 146
    Quote:
    Originally Posted by Technarchy View Post


    You can always get an ASUS Transformer Prime.



    Sure it is laggy balls, but at least it is quad core.



    The "paper specs mean everything" disease of android needs to stay there.



    The Tegra 3 that is in the Transformer Prime is _the_ reason I'd rather see a higher-clocked A5 with a better GPU and more RAM, than an A6 quadcore part that's comparable with Tegra 3.



Looking at early benchmarks, Tegra 3 doesn't even scale linearly with clock speed, i.e. a 1.4 GHz Tegra 3 is less than 40% faster than a 1.0 GHz Tegra 2. The GPU in the Tegra 3 is not even faster than the one in the Apple A5.



    So rather than being first-to-market (ie: the NVidia strategy) with a chip that has 4x the same cores as the current dual-cores and only a 'somewhat better' GPU, I'd rather wait for a 'real' quad-core A6 with Cortex-A15 cores and PowerVR 6 series GPU.
  • Reply 105 of 146
    Quote:
    Originally Posted by wizard69 View Post


There is an advantage in providing dedicated VRAM for the GPU, especially if integrated on the SoC. It has the potential of controlling bandwidth demands on main memory, lowering power if on-chip, and freeing up cache.



    Just because the GPU is on the same die as the CPU, that does not mean that one has to design it to use main memory for the frame buffer.



For embedded GPUs dedicated VRAM is much less necessary, since apart from the Tegra GPUs all of them use tile-based rendering. This means there is no external Z-buffer, and the framebuffer is always written in tiles, allowing for very efficient caching; all rendering operations are done to on-chip tile memory that is orders of magnitude faster than external DRAM. In general, you can't really extrapolate anything you know about desktop GPUs to mobile GPUs, because they are based on completely different technology, with completely different performance characteristics and constraints.
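A rough sketch of the arithmetic behind that point (the 32x32 tile size is an assumption for illustration; real tile dimensions vary by GPU):

```python
# Illustrative only: a tile-based GPU rasterizes into a small on-chip
# tile buffer, while an immediate-mode GPU streams the whole
# framebuffer through external DRAM.

BYTES_PER_PIXEL = 4  # 32-bit colour

def tile_buffer_bytes(tile_w=32, tile_h=32):
    # On-chip memory needed per tile: independent of screen resolution.
    return tile_w * tile_h * BYTES_PER_PIXEL

def framebuffer_bytes(width, height):
    # Full-screen framebuffer that lives in DRAM.
    return width * height * BYTES_PER_PIXEL

on_chip = tile_buffer_bytes()            # 4 KB, regardless of display size
in_dram = framebuffer_bytes(2048, 1536)  # 12 MB for a retina-class screen
```

The per-tile working set stays tiny no matter how big the display gets, which is the whole appeal of tile-based rendering on bandwidth-starved mobile parts.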



    Quote:
    Originally Posted by wizard69 View Post


    All that being said it would be foolish of Apple to not upgrade RAM in iPad 3. Many apps are crying out for more RAM.



    Agreed, anything less than 1 GB would be a huge disappointment.
  • Reply 106 of 146
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by Aizmov View Post


    A) I'm a computer science major (graduate student) doing research on parallel computing. Heck, look at my sig.

    B) I'm not a troll.



    It is pretty clear he doesn't have a clue as to how computers work in general nor how Safari works specifically.

    Quote:

    The 512MB in the iPad 2 is just not enough. How do I know? Try opening multiple tabs in Safari. My RAM complaint with the iPad is legitimate.



    Yep. Plus your explanations are fact based and rational.

    Quote:

    Why are you trying to label it as trolling? I had the original iPad and it was slooooooooow so I replaced it immediately with the iPad 2. Much better; but with lots of tabs open it does get slow at times.



    Plus it significantly impacts your data usage. Reloads are not free if you are on a capped connection. Even working with big files, PDFs, hi res pics and other large data sets sucks on iPad.

    Quote:

    Maybe I have become spoiled by very fast machines but I'm not trolling. I will get the iPad 3, 1GB of RAM or not, because there will be performance improvements. But I sincerely hope that it has at least 1GB.



    That is pretty much my position. It likely won't be on release day but iPad 3 looks to be very nice. So unless Apple really screws it up I will likely have one by summer.

    Quote:

    Having pages reload just because I have lots of tabs open completely ruins the experience, especially if you had some forms filled in some tab and switched to another tab and switched back just to see the page reloading.



I'd actually like to see Apple jump to 2GB of RAM. By the time the video buffer, the extra RAM used by bitmaps, and extra system functions like Siri are subtracted, we will have just about the right amount of RAM.
  • Reply 107 of 146
    drdoppiodrdoppio Posts: 1,132member
    Quote:
    Originally Posted by d-range View Post


    The Tegra 3 that is in the Transformer Prime is _the_ reason I'd rather see a higher-clocked A5 with a better GPU and more RAM, than an A6 quadcore part that's comparable with Tegra 3.



Looking at early benchmarks, Tegra 3 doesn't even scale linearly with clock speed, i.e. a 1.4 GHz Tegra 3 is less than 40% faster than a 1.0 GHz Tegra 2. The GPU in the Tegra 3 is not even faster than the one in the Apple A5.



    So rather than being first-to-market (ie: the NVidia strategy) with a chip that has 4x the same cores as the current dual-cores and only a 'somewhat better' GPU, I'd rather wait for a 'real' quad-core A6 with Cortex-A15 cores and PowerVR 6 series GPU.



Tegra 3 was delayed substantially and may not offer as great an improvement as some would have liked; however, it's up to 3x faster than the Tegra 2 and currently the most advanced SoC of its class. It is good enough that Audi selected it as the application and graphics processor for its in-vehicle infotainment systems and digital instrument display.



    Also, it doesn't have "4x the same cores as the current dual-cores", since that would make it 8-core, right?
  • Reply 108 of 146
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by sunilraman View Post


    AFAIK, GPU RAM requirements don't scale like you mention... 4x the pixels does not mean 4x the RAM required.



Someplace in the machine there has to be storage for every pixel, a frame buffer if you will, so that increases by 4x. The rest of the RAM allocated to video usage is made up of data that may or may not see an increase in space, depending upon exactly what that data is.



Beyond that, all system- and app-wide bitmaps move to 4x the space requirements. Again this is variable and app-dependent, but clearly the machine will require more RAM to drive retina displays.

    Quote:

    Let's take the framebuffer itself. In theory, even a 8MB video card can handle 1920x1080 at 32bit colour:



Let's not. Instead, look at what iOS needs.
    Quote:



    So, why do we need 256MB video cards? Well, because of how 3D is implemented by GPUs ~ this was a hot topic back when beyond-720p resolutions started to be supported by PCs.



Video RAM needs vary with the apps and the system's ability to use that RAM; no big deal there.

    Quote:

    For example, in Oblivion, at 0xAA, even at 1600x1200, only just over 200MB VRAM was being used. And, at 640x480, ~almost~ 200MB VRAM was still being used. Memory requirements jump when implementing 2xAA and 4xAA, but not in any linear fashion:




    In this context that means nothing. Not every use of a video card is 3D, plus higher resolution displays impact RAM usage far outside the video card.





In any event this stuff is nice to dream about. I'm actually hoping that Apple debuts a CPU and GPU complex using a single address space, combined with a frame buffer embedded in the SoC. This should positively impact memory bandwidth, power usage and flexibility. They could beat AMD to the punch here.
  • Reply 109 of 146
    Quote:
    Originally Posted by sunilraman View Post


    But that's not how Apple rolls. What advantage could a quad iPhone 5 offer over dual ARM with a great GPU?






    Faster performance and better battery life.
  • Reply 110 of 146
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by d-range View Post


For embedded GPUs dedicated VRAM is much less necessary, since apart from the Tegra GPUs all of them use tile-based rendering. This means there is no external Z-buffer, and the framebuffer is always written in tiles, allowing for very efficient caching; all rendering operations are done to on-chip tile memory that is orders of magnitude faster than external DRAM.



    Eventually that frame buffer has to go out to the video device.

    Quote:

In general, you can't really extrapolate anything you know about desktop GPUs to mobile GPUs, because they are based on completely different technology, with completely different performance characteristics and constraints.



You can't dismiss the need for more RAM simply because the technology is different in mobile hardware. Nor can you dismiss the fact that a 4x increase in bitmap sizes affects system bandwidth. In any event, we will see how similar the new GPU is to the current ones fairly soon. I'm fairly certain that Apple will have to address GPU performance demands with the move to a retina screen.



The architecture of SoC processors is changing rapidly. If Apple goes to a unified address space, where both the GPU and CPU are equals on the memory bus, I suspect that structures used by the GPU would become more accessible to the CPU. A bit of a moving target if you will.

    Quote:

    Agreed, anything less than 1 GB would be a huge disappointment.



Yes, this is true; even 512MB on the 2 was a disappointment. It would also be nice to see a significant performance increase; it is too bad you can't squeeze a GB of high-performance video RAM into the machine for both CPU and GPU.



Even though much of this thread has focused on raw performance, I'm convinced that the iPad's biggest flaw is the lack of RAM. That flaw comes close to tying the lack of a USB port as the iPad's biggest frustration.
  • Reply 111 of 146
    Quote:
    Originally Posted by wizard69 View Post


Someplace in the machine there has to be storage for every pixel, a frame buffer if you will, so that increases by 4x. The rest of the RAM allocated to video usage is made up of data that may or may not see an increase in space, depending upon exactly what that data is.



    Yeah, but in terms of the frame buffer for storing just bitmap information for each pixel, that's:



    Memory in MB = (X-Resolution * Y-Resolution * Bits-Per-Pixel) / (8 * 1,048,576)



    For 1024x768

    that's 1024*768*32 / (8*1048576)

    = only 3MB



    For 2048x1536

    that's 2048*1536*32/(8*1048576)

    = only 12MB



    4x yes, but the difference is only 8MB.
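For anyone who wants to plug in other resolutions, the formula above as a small Python helper (assuming 32-bit colour, as in the post):

```python
def framebuffer_mb(x_res, y_res, bits_per_pixel=32):
    # Memory in MB = (X * Y * bits-per-pixel) / (8 * 1,048,576)
    return (x_res * y_res * bits_per_pixel) / (8 * 1_048_576)

standard = framebuffer_mb(1024, 768)   # 3.0 MB for the iPad 1/2 screen
retina   = framebuffer_mb(2048, 1536)  # 12.0 MB for the rumoured screen
```

Exactly 4x the pixels, exactly 4x the framebuffer, but still only a 9MB absolute difference.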



    Quote:
    Originally Posted by wizard69 View Post


Beyond that, all system- and app-wide bitmaps move to 4x the space requirements. Again this is variable and app-dependent, but clearly the machine will require more RAM to drive retina displays.



    Indeed, but my point is 4x is only in terms of the 12MB compared to 3MB. All other use of memory for the GPU is dependent on the implementation.



    Quote:
    Originally Posted by wizard69 View Post


    In this context that means nothing. Not every use of a video card is 3D, plus higher resolution displays impact RAM usage far outside the video card.



    Fair enough, but it will not be 4x the "VRAM" needed by any stretch of the imagination...



Like you say, we're just guessing, but my bone of contention is that there's no 4x requirement, outside of reserving 12MB instead of 3MB for the framebuffer (in the purest sense of the word) only.
  • Reply 112 of 146
    Quote:
    Originally Posted by wizard69 View Post


You can't dismiss the need for more RAM simply because the technology is different in mobile hardware. Nor can you dismiss the fact that a 4x increase in bitmap sizes affects system bandwidth. In any event, we will see how similar the new GPU is to the current ones fairly soon. I'm fairly certain that Apple will have to address GPU performance demands with the move to a retina screen.



    I think Apple is pretty much on the ball here. Like I said, Quartz as a concept is nothing to sneeze at. Even on my iPad 2, 1000x1000 pixel lossless PNGs are easily animated in several layers in an app. Certainly Retina places a lot of demand on the GPU, but I think Apple would have it figured out for 2D. As I've said in the past, the iOS render engine is phenomenal. There is a 4x increase in ~final output resolution~ but we'll have to see how Quartz handles a lot of the other aspects. For example, if we have an app that can zoom in on something, that means when you zoom out, you could be looking at 4000x4000 pixel bitmaps in several layers to be rescaled and output to 2048x1536.



Certainly this will need 256MB of GPU RAM at a very, very rough estimate, but it may need no more than around that amount, say 300MB at most.



    I think there should be 1GB of RAM in the iPad 2X, with the GPU taking up ~100MB to 300MB of that RAM at the most.



3D games will be limited to 1024x768 with specialised, extremely low-performance-impact upscaling to 2048x1536 ~ given it's pure 2x upscaling, they can optimise Lanczos or equivalent for that easily.
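The post mentions Lanczos; as a simpler toy illustration of why an exact 2x ratio is the cheap case (this is nearest-neighbour doubling in plain Python, not how iOS actually scales), consider:

```python
def upscale_2x(pixels, width, height):
    # Nearest-neighbour 2x upscale of a row-major list of pixel values.
    # An exact integer ratio means every source pixel maps to a clean
    # 2x2 block, so there is no fractional resampling to compute.
    out = []
    for row in range(height):
        src_row = pixels[row * width:(row + 1) * width]
        doubled = [p for p in src_row for _ in (0, 1)]  # double columns
        out.extend(doubled)  # emit the doubled row twice to double rows
        out.extend(doubled)
    return out

# A 2x2 image becomes 4x4: each source pixel turns into a 2x2 block.
big = upscale_2x([1, 2,
                  3, 4], 2, 2)
```

A quality filter like Lanczos replaces the pixel duplication with a weighted kernel, but the pixel-to-block mapping stays just as regular at a pure 2x ratio.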



    2D stuff, ballpark, 300MB at most, I don't think by design Apple would want the GPU to eat up any more than that on a 1GB RAM device.



    Quote:
    Originally Posted by wizard69 View Post


The architecture of SoC processors is changing rapidly. If Apple goes to a unified address space, where both the GPU and CPU are equals on the memory bus, I suspect that structures used by the GPU would become more accessible to the CPU. A bit of a moving target if you will.



    In this area I have to defer to you because I'm not informed on SoC RAM addressing by CPU and GPU. I would imagine a unified address space is the best way to go since it's all one die, plus allowing a flexible use of system RAM like on a PC/Mac IGP? Dedicated memory for the GPU, or "less flexible" memory for the GPU wouldn't be a good way to go because the GPU might only need, say, less than 100MB of RAM in very basic apps, but certainly more for 2D games, let alone 3D.
  • Reply 113 of 146
    wizard69wizard69 Posts: 13,377member
Seriously guys, Cortex-A15 will be a stretch in portable devices, mainly because it is likely to be far more power-hungry than Cortex-A9. ARM is targeting servers with that chip!



    Quote:
    Originally Posted by d-range View Post


    The Tegra 3 that is in the Transformer Prime is _the_ reason I'd rather see a higher-clocked A5 with a better GPU and more RAM, than an A6 quadcore part that's comparable with Tegra 3.



The only thing that counts is what Apple delivers; it really makes no sense to damn an unannounced chip based on the results of somebody else's implementation of an ARM core. In the literature there has been a wide range of reports on Cortex-A9s done on the new processes; just because Nvidia didn't go for performance doesn't mean that Apple won't.

    Quote:

    Looking at early benchmark, Tegra 3 doesn't even scale linearly in clock speed, ie: a 1.4 Ghz Tegra 3 is less than 40% faster than a 1.0 Ghz Tegra 2. The GPU in the Tegra 3 is not even faster than the one in the Apple A5.



One big issue with processors is the wait on RAM as clock rates increase. As such, performance will be a problem unless the bandwidth problem is addressed.

    Quote:

    So rather than being first-to-market (ie: the NVidia strategy) with a chip that has 4x the same cores as the current dual-cores and only a 'somewhat better' GPU, I'd rather wait for a 'real' quad-core A6 with Cortex-A15 cores and PowerVR 6 series GPU.



Frankly, I'd rather see Apple stay with Cortex-A9 until ARM delivers a viable 64-bit core. A15 just has too many trade-offs in my mind. By carefully adjusting the architecture, Apple should be able to double performance every year for the next two or three with Cortex-A9 hardware.
  • Reply 114 of 146
    Quote:
    Originally Posted by wizard69 View Post


Seriously guys, Cortex-A15 will be a stretch in portable devices, mainly because it is likely to be far more power-hungry than Cortex-A9. ARM is targeting servers with that chip!



The only thing that counts is what Apple delivers; it really makes no sense to damn an unannounced chip based on the results of somebody else's implementation of an ARM core. In the literature there has been a wide range of reports on Cortex-A9s done on the new processes; just because Nvidia didn't go for performance doesn't mean that Apple won't.



One big issue with processors is the wait on RAM as clock rates increase. As such, performance will be a problem unless the bandwidth problem is addressed.



Frankly, I'd rather see Apple stay with Cortex-A9 until ARM delivers a viable 64-bit core. A15 just has too many trade-offs in my mind. By carefully adjusting the architecture, Apple should be able to double performance every year for the next two or three with Cortex-A9 hardware.



Yeah, that's why I've also said A15 will very likely not be in iOS devices until big.LITTLE, i.e. most likely dual A7 with dual A15, doing the intelligent power-switching thingy. A pure dual-A15 is outside the scope of iOS devices, at least in 2012. Even ARM doesn't want to put that in phones...!
  • Reply 115 of 146
    hattighattig Posts: 860member
    Quote:
    Originally Posted by cy_starkman View Post


    Really? I must be special then cause I can have many tabs open all with complex pages and it all goes nicely. Unless... the Internet is made up of varying connection speeds, with varying connection speeds to the websites I am on and a varying number of people accessing those websites.



    Consider the case when all apps are using 4x as much memory for their graphics and display as they are now... 1GB is an effective minimum for a tablet with a retina display.



I'm sure a developer can chip in with real figures, but the majority of an application these days is its digital assets, including art. I wouldn't be surprised if the average application doubled or tripled its memory requirements when moving to the iPad 3. Some won't; many are data-structure limited rather than graphics limited (e.g. Minecraft).
  • Reply 116 of 146
    Quote:
    Originally Posted by Hattig View Post


    Consider the case when all apps are using 4x as much memory for their graphics and display as they are now... 1GB is an effective minimum for a tablet with a retina display.



I'm sure a developer can chip in with real figures, but the majority of an application these days is its digital assets, including art. I wouldn't be surprised if the average application doubled or tripled its memory requirements when moving to the iPad 3. Some won't; many are data-structure limited rather than graphics limited (e.g. Minecraft).



Er, may I interject: I get what you're saying, but there's no 4x as much memory; it doesn't scale that way. The only 4x we know for sure is in the final output to the framebuffer, i.e. 12MB required instead of 3MB. But everything up to that point does not necessarily mean 4x the memory required.



I do still, however, think something over 500MB is the minimum RAM, with 1GB being just nice. It's hard to imagine the GPU needing less than 100MB (though not directly 4x the RAM that the iPad 2 GPU uses).
  • Reply 117 of 146
    Quote:
    Originally Posted by DrDoppio View Post


Tegra 3 was delayed substantially and may not offer as great an improvement as some would have liked; however, it's up to 3x faster than the Tegra 2 and currently the most advanced SoC of its class. It is good enough that Audi selected it as the application and graphics processor for its in-vehicle infotainment systems and digital instrument display.



    Also, it doesn't have "4x the same cores as the current dual-cores", since that would make it 8-core, right?



    Oh my, Audi selected it? For their in-vehicle system?



    I mean, car makers are just known for using the best tech for their infotainment systems.
  • Reply 118 of 146
    hattighattig Posts: 860member
    Quote:
    Originally Posted by d-range View Post


    The Tegra 3 that is in the Transformer Prime is _the_ reason I'd rather see a higher-clocked A5 with a better GPU and more RAM, than an A6 quadcore part that's comparable with Tegra 3.



    Looking at early benchmark, Tegra 3 doesn't even scale linearly in clock speed, ie: a 1.4 Ghz Tegra 3 is less than 40% faster than a 1.0 Ghz Tegra 2. The GPU in the Tegra 3 is not even faster than the one in the Apple A5. .



The problem with Tegra 3 is memory bandwidth. For some reason NVIDIA didn't use a 64-bit memory bus, but a 32-bit bus. This is quite crippling in many situations. The A5 uses a 64-bit bus, IIRC.



On the other hand, Tegra 3 is 80mm^2 on 40nm, and the A5 is over 120mm^2 on 45nm. A side effect of the smaller die is that the Tegra 3 GPU isn't as powerful as the larger one in the A5.
  • Reply 119 of 146
    Quote:
    Originally Posted by Hattig View Post


The problem with Tegra 3 is memory bandwidth. For some reason NVIDIA didn't use a 64-bit memory bus, but a 32-bit bus. This is quite crippling in many situations. The A5 uses a 64-bit bus, IIRC.



On the other hand, Tegra 3 is 80mm^2 on 40nm, and the A5 is over 120mm^2 on 45nm. A side effect of the smaller die is that the Tegra 3 GPU isn't as powerful as the larger one in the A5.







    Apple's chips will be better than any multicore chip.

    Apple's current iPad screen is better than any alternative.

    No LTE. The iPad is better without it.



    Even if the iPad 2S were basically the same as the iPad 2, with just a somewhat faster clock speed, everybody and his brother would buy it.
  • Reply 120 of 146
    hattighattig Posts: 860member
    Quote:
    Originally Posted by sunilraman View Post


    Indeed, but my point is 4x is only in terms of the 12MB compared to 3MB. All other use of memory for the GPU is dependent on the implementation.



All applications will allocate memory for their UI. It is likely that each app's display is double buffered. However, since applications are suspended in the background, background apps may not be double buffered; I would need an iOS developer to confirm that.



    6MB for a double buffered display is now 24MB, an 18MB difference. With ten apps running that would be 180MB more memory required for application interfaces alone. That's before the art resources the apps also use. A full-screen backdrop? Another 9MB * 10 apps. What about an app with a dozen different full screen bitmap UIs (Garageband has fancy drumsets, keyboards, etc)? Another 100MB gone.



    Games? Oddly enough games can make use of (real time) compressed textures far better than a 2D UI application can. Even so, higher resolution textures will require more RAM - but you may not require them in your game.



Ultimately, RAM is cheap, even low-power RAM packaged PoP like the A5's. It's also dropped massively in price in the past year. So if the iPad 3 has less than 1GB it would be really frustrating. Stick 2GB on the top-end 128GB iPad as a bonus.
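The double-buffering arithmetic in this post can be sanity-checked quickly (assuming 32-bit colour and two full-screen buffers per app, as the post does):

```python
MB = 1_048_576  # bytes per megabyte

def double_buffered_mb(width, height, bits_per_pixel=32, buffers=2):
    # Memory for N full-screen buffers at the given resolution.
    return buffers * width * height * (bits_per_pixel // 8) / MB

ipad2  = double_buffered_mb(1024, 768)     # 6.0 MB per app today
retina = double_buffered_mb(2048, 1536)    # 24.0 MB per app at retina
extra_for_ten_apps = 10 * (retina - ipad2) # 180.0 MB, matching the post
```

That 180MB is just the suspended UI surfaces, before any art assets or textures, which is why 512MB starts to look tight.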