
Epic game developer calls iPad 2 graphics leap "astonishing," doubts Android can compete - Page 2

post #41 of 69
Quote:
Originally Posted by solipsism View Post



The first test had the Xoom within an hour of the iPad (less than 10%). The second was a lot lower (4 hours). I'm not sure why they didn't have a video test in the Xoom review. Other links I gave showed similar results between the two platforms (they lasted similar times) when comparing video playback, so it would be interesting to see what the difference was. Were they using a third-party player on the Xoom? And I've read of people getting B-frames to work fine on the Xoom (some using third-party players there).

While others have the Xoom pegged at a 10-hour battery life: http://news.consumerreports.org/elec...tery-life.html (the iPad 2 is at 12, still higher, but not as big a gap)


And you're right, display tech is different. But so are screen resolution, web load times, contrast ratio, backgrounding capabilities, etc. You can't have totally "equal" comparisons because they're running different hardware AND software.
post #42 of 69
Quote:
Originally Posted by Menno View Post

The first test had the Xoom within an hour of the iPad (less than 10%). The second was a lot lower (4 hours). I'm not sure why they didn't have a video test in the Xoom review. Other links I gave showed similar results between the two platforms (they lasted similar times) when comparing video playback, so it would be interesting to see what the difference was. Were they using a third-party player on the Xoom? And I've read of people getting B-frames to work fine on the Xoom (some using third-party players there).

While others have the Xoom pegged at a 10-hour battery life: http://news.consumerreports.org/elec...tery-life.html (the iPad 2 is at 12, still higher, but not as big a gap)


And you're right, display tech is different. But so are screen resolution, web load times, contrast ratio, backgrounding capabilities, etc. You can't have totally "equal" comparisons because they're running different hardware AND software.

I do not know if you actually used the Xoom and the iPad 2 long enough to compare them and form reliable anecdotal impressions. Your arguments are all over the place: when it suits you, you use information that compares the Xoom to the original iPad (but conveniently ignore the evidence comparing the Xoom to the iPad 2); see the links you originally provided, before they were challenged with relevant links comparing the Xoom and the iPad 2.

That is also true with reviews from various sources; you focused on the one with the least difference between the two devices. Did you even take the time to read more carefully about the nuances of each comparison?

To rely solely on reviews and what you read on the internet as the basis of your conclusions, and then cast doubt on the statements of others, i.e., the game developer in question, because you believe they "have a stake in it" (which may or may not be true), is rather disingenuous, if not outright ridiculous.

It may help if you buy a Xoom, and then show us your results.

But then again, how can we be sure that you will not do a DaHarder act? He claimed to own both the Xoom and the iPad*** but preferred the Xoom, with the iPad gathering dust (or something to that effect). His credibility can be questioned when he claims he owns so many Apple products, yet the overall impression you get from his posts is that he does not like Apple products much and is always touting a better product from some other company.

I do not know about you or anyone else, but if I tried a company once or twice over the years and they continued to give me a bad experience, why the heck would I keep on buying their products?

CGC

***The flaw there, again, was a comparison between the Xoom and the original iPad. One's credibility may be called into question if one claims to have actually been comparing it with the iPad 2, because if memory serves me, the post was made before the iPad 2 went on sale. And even if the iPad 2 had already been on sale, why would someone line up to buy one after a not-so-good experience with the original iPad, let alone an overall mediocre experience with Apple products?
post #43 of 69
Quote:
Originally Posted by Menno View Post

I refuse to pay for Consumer Reports' paywall, so I don't know what methodology they used. I do know that for their tests Cnet used a 720p version of a movie with a third-party movie player on the Xoom, and an iPad-optimized version of the movie on the iPad; pretty sure that will carry significant weight when it comes to battery life. It's been a while since I read Mossberg's review, but I'm sure I saw other reviewers comment on his review saying his battery results were not what they were seeing.

Battery life tests based on video playback do not give a good appreciation of CPU consumption. As you point out, an optimized codec limits the CPU's role in decoding, and playback is a mostly lightweight task for the GPU, since it only needs to process a steady stream with relatively simple functions.

This is not about Xoom vs iPad or Android vs iOS; it's all about the CPU. I still can't find anything on the net giving real proof of Tegra 2's "awesomeness". None of the tests I've seen so far give a direct appreciation of Tegra 2's performance and efficiency, which is very strange for an off-the-shelf product. Every chip maker besides Nvidia publishes a TDP. If the TDP of Tegra 2 is in the 4-5 watt range, it is more in the Intel Atom league than in the sub-1-watt league of Samsung, Qualcomm, and Apple.
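The watt-range argument can be made concrete with a back-of-the-envelope runtime model. All figures below (battery capacity, base system draw) are illustrative assumptions, not measured specs of any device:

```python
# Rough sketch: how an SoC's average power draw translates into battery
# runtime. Numbers are illustrative assumptions, not measured figures.

def runtime_hours(battery_wh, soc_watts, rest_of_system_watts):
    """Naive runtime model: battery energy / total average draw."""
    return battery_wh / (soc_watts + rest_of_system_watts)

# Assume a ~25 Wh tablet battery and ~2 W for screen, radios, RAM, etc.
BATTERY_WH = 25.0
BASE_W = 2.0

atom_class = runtime_hours(BATTERY_WH, 4.5, BASE_W)    # a 4-5 W class SoC
mobile_class = runtime_hours(BATTERY_WH, 0.8, BASE_W)  # a sub-1 W class SoC

print(f"4.5 W SoC: {atom_class:.1f} h")   # ≈ 3.8 h
print(f"0.8 W SoC: {mobile_class:.1f} h") # ≈ 8.9 h
```

Under these assumed numbers, a 4-5 W part more than halves runtime versus a sub-1 W part, which is why the Atom-league vs mobile-league distinction matters.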
post #44 of 69
Quote:
Originally Posted by tipoo View Post

An ARM Cortex A8 at 1GHz equivalent to a single 360 processor core? I'm giving that a [citation needed]


http://en.wikipedia.org/wiki/Instruc...ons_per_second

The whole tri-core processor outputs 19,200 MIPS. An A8 like the iPad 1's outputs 2,000 MIPS at peak. And it's 6 instructions per cycle vs 2. Even if you divide the former score by three, you're nowhere close.

And yes, I do know MIPS aren't a perfect indicator of performance, but they should give you a general sense of where things are.

I know many posters before me have tried to dismiss your statement as uninformed, saying that they'd rather trust Epic and that these numbers don't tell the whole story, but IMO you are absolutely right. There is NO WAY a Cortex A8 at 1 GHz is equivalent to a single (3 GHz) core like the one in the 360. No way AT ALL, not even nearly so. I don't know why someone at Epic is saying something like this, but a single core of the 360 CPU is in a whole different league of performance. It has 3x the clock speed, a much faster memory bus, a much more sophisticated, application-controllable cache architecture, much higher floating-point performance, and much higher IPC; it's faster in every way imaginable. Even a single 360 core running at 1 GHz would beat the pants off a Cortex A8 in terms of CPU speed. The CPUs in current consoles, even though they are already a few years old, are actually pretty damn fast. The gap with PCs is still mostly in GPU performance.

Maybe he was making some kind of imaginary, nonsensical comparison of the theoretical performance of an A8 core including the GPU part, just adding FLOPS together to get a number that compares with the performance of 1/3rd of an Xbox 360, but that makes no practical sense at all, since a 360 also has a GPU, and it also has faster RAM and most likely a much better SDK and compiler that generate more efficient code than Apple's gcc on iOS (gcc-generated ARM code has never been great, and until Apple makes LLVM the default compiler backend, it can't hold a candle to the code generated by Microsoft's compilers in terms of efficiency).

I'd say a dual-core Cortex A9 starts getting close to a single core of a 360 CPU, but I'd estimate it would still be slower. We're talking about an ARM SoC that's almost twice as fast as a typical Cortex A8.

I'm not a professional game developer myself, so I'm not pretending to be an expert, but I do have a background in processor design, and I did do my fair share of research on the architecture of the 360 and the PS3, and I'm _one hundred_ percent sure that a Cortex A8 at 1 GHz is NOWHERE near the performance of 1/3rd of the CPU in a 360.
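For what it's worth, the per-core arithmetic in the quoted MIPS figures (which are themselves rough, vendor-style peak numbers) works out as follows:

```python
# Back-of-the-envelope check of the per-core MIPS comparison quoted above.
# Both figures are the rough peak numbers cited in the thread.

XENON_TOTAL_MIPS = 19_200   # whole tri-core Xbox 360 CPU (claimed peak)
XENON_CORES = 3
A8_MIPS = 2_000             # ~1 GHz Cortex-A8 (claimed peak)

per_core = XENON_TOTAL_MIPS / XENON_CORES
ratio = per_core / A8_MIPS

print(f"Per Xenon core: {per_core:.0f} MIPS")  # 6400 MIPS
print(f"Xenon core vs A8: {ratio:.1f}x")       # 3.2x
```

So even on the peak-MIPS yardstick alone, a single Xenon core comes out roughly 3x an A8, before any argument about real-world efficiency.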
post #45 of 69
Quote:
Originally Posted by Menno View Post

The LG Optimus line has shipped devices with Tegra 2. The Atrix is a Tegra 2 device (the battery issues there are related to Blur, not the processor). The Bionic should be out within a month, as will other devices.

There is a LOT of stuff out there demonstrating the capabilities of Tegra devices, both in battery life and in graphics processing. Again, I don't know where you've been reading otherwise.

If you're looking for a closer comparison to the iPad, you'll have to wait for the Asus Transformer, because that is also an IPS-display device (though with higher pixel density).

But again, I'm not seeing what you're trying to say here. On one hand you're saying that there's next to no information out there on what a Tegra device can do, and on the other you're dismissing Tegra as inferior. You cannot hold both positions.

We have very good information as to what its capabilities are in a real design, namely the test comparison between the Xoom and the iPad 2 on Anandtech, where the Tegra 2 design got thoroughly pounded by the iPad 2, and was even given a run for its money in some areas by the iPad 1.

The problem for the Tegra 2 was that all of its good press came from the specs Nvidia gave out, and the ASSUMPTION that since Nvidia is primarily a graphics chip design firm, its graphics capabilities would be better than anything else around. These reporters conveniently forgot about Imagination. Not any more. The Tegra's graphics are now acknowledged to be much inferior to Imagination's, an inferiority that isn't believed to be surmountable this year. Of course, Imagination isn't sitting still either.
post #46 of 69
Quote:
Originally Posted by d-range View Post

There is NO WAY a Cortex A8 at 1 GHz is equivalent to a single (3 GHz) core like the one in the 360. No way AT ALL, not even nearly so. I don't know why someone at Epic is saying something like this, but a single core of the 360 CPU is in a whole different league of performance. It has 3x the clock speed, a much faster memory bus, a much more sophisticated, application-controllable cache architecture, much higher floating-point performance, and much higher IPC; it's faster in every way imaginable. Even a single 360 core running at 1 GHz would beat the pants off a Cortex A8 in terms of CPU speed.

It'll depend on the load. The PowerPC-based PPE core inside Xenon and Cell basically traded away good branch prediction (not to mention OOOE) for a deep pipeline to enable the high GHz. In other words, for streaming media loads it's great. For AI and gameplay code, it sucks.

So, from a game developer's perspective (people who have to develop FPS-style games that require good gameplay and AI), I can definitely see a 1 GHz Cortex-A8 being about equivalent to a 3.2 GHz PPE core. The Cell PPE is a pretty imbalanced architecture.

Developers were complaining at the time that their code was running at 1/3 to 1/10 of the expected speed!

The A8 doesn't have OOOE or a huge branch predictor either, but its branch misprediction penalties are smaller too. It has 2 ALUs. It's fairly clean. The Cell PPE, on the other hand, has a lot of weird latencies in the architecture that can hamstring it: a 2-cycle latency on instruction issue, 64-bit precision stalling the pipeline by 6 cycles, penalties associated with branch mispredictions.

So don't buy into the marketing hype that IBM, Sony, and MS built here (3.2 GHz, XXX GFLOPS, etc). They knew the PPE would suck at certain types of load.
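A toy effective-CPI model shows how this plays out: once every branch is charged its share of pipeline-flush cycles, a deep-pipeline 3.2 GHz core's clock advantage over a short-pipeline 1 GHz core shrinks dramatically on branchy code. Every parameter below is an illustrative assumption, not a measured PPE or A8 spec:

```python
# Toy model: effective throughput = frequency / cycles-per-instruction,
# where CPI includes branch misprediction stalls. All parameters are
# illustrative assumptions chosen to show the shape of the effect.

def effective_mips(freq_mhz, base_cpi, branch_frac, mispredict_rate, penalty_cycles):
    """Million instructions/s after charging branches their flush cycles."""
    cpi = base_cpi + branch_frac * mispredict_rate * penalty_cycles
    return freq_mhz / cpi

# Deep-pipeline 3.2 GHz in-order core: slow issue, long flush, on branchy code.
ppe_like = effective_mips(3200, base_cpi=2.0, branch_frac=0.2,
                          mispredict_rate=0.15, penalty_cycles=23)

# 1 GHz dual-issue short-pipeline core with a much smaller flush penalty.
a8_like = effective_mips(1000, base_cpi=0.9, branch_frac=0.2,
                         mispredict_rate=0.15, penalty_cycles=13)

print(f"deep-pipeline 3.2 GHz: {ppe_like:.0f} effective MIPS")
print(f"short-pipeline 1 GHz:  {a8_like:.0f} effective MIPS")
```

Under these assumptions the 3.2x clock edge collapses to roughly 1.5x, which is the sense in which "comparable" can be a defensible word for branchy gameplay code.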
post #47 of 69
Quote:
Originally Posted by melgross View Post

We have very good information as to what its capabilities are in a real design, namely the test comparison between the Xoom and the iPad 2 on Anandtech, where the Tegra 2 design got thoroughly pounded by the iPad 2, and was even given a run for its money in some areas by the iPad 1.

The problem for the Tegra 2 was that all of its good press came from the specs Nvidia gave out, and the ASSUMPTION that since Nvidia is primarily a graphics chip design firm, its graphics capabilities would be better than anything else around. These reporters conveniently forgot about Imagination. Not any more. The Tegra's graphics are now acknowledged to be much inferior to Imagination's, an inferiority that isn't believed to be surmountable this year. Of course, Imagination isn't sitting still either.

Yup. There are still Android fans holding on to Nvidia's marketing of Tegra 2. Remember the 16 hours of HD video claims? Hehe.

Seriously, it's pretty much proven that Tegra 2 runs last or second to last in CPU and GPU performance among the 2011 ARM SoCs: Apple A5, Samsung Exynos, TI OMAP 44x0, and Qualcomm Snapdragon MSM8xx0. It's a really tight grouping, though. The Apple A5's SGX543MP2 is the only clear winner as the best GPU of the bunch. CPU-wise, it's going to be a pretty tight grouping.
post #48 of 69
Quote:
Originally Posted by Shrike View Post

It'll depend on the load. The PowerPC-based PPE core inside Xenon and Cell basically traded away good branch prediction (not to mention OOOE) for a deep pipeline to enable the high GHz. In other words, for streaming media loads it's great. For AI and gameplay code, it sucks.

So, from a game developer's perspective (people who have to develop FPS-style games that require good gameplay and AI), I can definitely see a 1 GHz Cortex-A8 being about equivalent to a 3.2 GHz PPE core. The Cell PPE is a pretty imbalanced architecture.

Developers were complaining at the time that their code was running at 1/3 to 1/10 of the expected speed!

The A8 doesn't have OOOE or a huge branch predictor either, but its branch misprediction penalties are smaller too. It has 2 ALUs. It's fairly clean. The Cell PPE, on the other hand, has a lot of weird latencies in the architecture that can hamstring it: a 2-cycle latency on instruction issue, 64-bit precision stalling the pipeline by 6 cycles, penalties associated with branch mispredictions.

So don't buy into the marketing hype that IBM, Sony, and MS built here (3.2 GHz, XXX GFLOPS, etc). They knew the PPE would suck at certain types of load.

Even with all these penalties and pipeline stalls, a 1 GHz A8 is not going to come close to a single Xenon core or Cell PPE, not even on heavily branched code such as AI. The difference in clock speed, FPU throughput, cache architecture, and memory bandwidth is simply too big. I know the PPC cores in the 360 are no match for, e.g., a G5, but we're comparing against an ARM core about as fast as a single-core Atom at 1 GHz, which is hideously slow.

I'm really the first person to admit ARM designs are making huge inroads in performance, and a dual-core Cortex A9 is starting to look very interesting compared to low-end x86 chips, but a Cortex A8 beating a dual-threaded chip running at 3x the clock speed, with pretty crazy FPU performance, at its own game? That's really a bridge too far.
post #49 of 69
Quote:
Originally Posted by d-range View Post

Even with all these penalties and pipeline stalls, a 1 GHz A8 is not going to come close to a single Xenon core or Cell PPE, not even on heavily branched code such as AI. The difference in clock speed, FPU throughput, cache architecture, and memory bandwidth is simply too big. I know the PPC cores in the 360 are no match for, e.g., a G5, but we're comparing against an ARM core about as fast as a single-core Atom at 1 GHz, which is hideously slow.

I'm really the first person to admit ARM designs are making huge inroads in performance, and a dual-core Cortex A9 is starting to look very interesting compared to low-end x86 chips, but a Cortex A8 beating a dual-threaded chip running at 3x the clock speed, with pretty crazy FPU performance, at its own game? That's really a bridge too far.

You seem to have missed one important thing.

"Last year's A4 CPU used in the iPhone 4 and iPad is roughly "comparable to a single Xbox 360 core" Sweeney estimated."

He never mentioned the Cortex A8; he was talking about the whole system-on-a-chip, which makes all your arguments meaningless.
post #50 of 69
Quote:
Originally Posted by d-range View Post

Even with all these penalties and pipeline stalls, a 1 GHz A8 is not going to come close to a single Xenon core or Cell PPE, not even on heavily branched code such as AI. The difference in clock speed, FPU throughput, cache architecture, and memory bandwidth is simply too big. I know the PPC cores in the 360 are no match for, e.g., a G5, but we're comparing against an ARM core about as fast as a single-core Atom at 1 GHz, which is hideously slow.

I'm really the first person to admit ARM designs are making huge inroads in performance, and a dual-core Cortex A9 is starting to look very interesting compared to low-end x86 chips, but a Cortex A8 beating a dual-threaded chip running at 3x the clock speed, with pretty crazy FPU performance, at its own game? That's really a bridge too far.

One thing is for sure: an SoC can beat any conventional motherboard architecture on latency. It can easily beat it on RAM and CPU-GPU bandwidth too. I haven't seen any spec for the real memory bandwidth of the A5 vs the Xbox 360 CPU, but in terms of cache architecture and memory bandwidth, the Cell PPE may be a stream-processing beast, yet there is nothing to prevent a current-generation SoC from getting similar memory bandwidth and better latency by sitting the GPU and the RAM on top of the CPU. The Xbox 360 is still a six-year-old design.
post #51 of 69
Quote:
Originally Posted by InsideOut View Post

You seem to have missed one important thing.

"Last year's A4 CPU used in the iPhone 4 and iPad is roughly "comparable to a single Xbox 360 core" Sweeney estimated."

He never mentioned the Cortex A8, he was talking about the whole system-on-a-chip which makes all your arguments meaningless.

He's talking about 'the CPU'; to me, that means the CPU, and not the CPU plus the GPU plus the RAM. Comparing a CPU core to a complete SoC doesn't make sense anyway; that would be along the lines of saying a car can do 300 mph because the engine can run it at 150 mph and the metal bits around it can do 150 mph when pushed off a cliff. Talk about meaningless.
post #52 of 69
Quote:
Originally Posted by BigMac2 View Post

One thing is for sure: an SoC can beat any conventional motherboard architecture on latency. It can easily beat it on RAM and CPU-GPU bandwidth too. I haven't seen any spec for the real memory bandwidth of the A5 vs the Xbox 360 CPU, but in terms of cache architecture and memory bandwidth, the Cell PPE may be a stream-processing beast, yet there is nothing to prevent a current-generation SoC from getting similar memory bandwidth and better latency by sitting the GPU and the RAM on top of the CPU. The Xbox 360 is still a six-year-old design.

No, that's not true; RAM latencies don't depend on how close the RAM is located to the CPU. The 360 runs its RAM at 3 GHz, by the way, so it's going to be much faster just because of that. Also, the 360 CPU has a much more flexible cache architecture; for example, it allows developers to lock cache lines and use them as ultra-fast micro-memory. All in all, most architectural aspects of the 360 CPU are more advanced: everything runs at higher clocks, with faster RAM and higher FPU throughput.

Edit: I double-checked because I wasn't sure, but the 360 has 700 MHz GDDR3 RAM, so it's not running at the CPU clock like the PS3's. Still a lot faster than the RAM on the A4, though.
post #53 of 69
Quote:
Originally Posted by d-range View Post

No, that's not true; RAM latencies don't depend on how close the RAM is located to the CPU. The 360 runs its RAM at 3 GHz, by the way, so it's going to be much faster just because of that. Also, the 360 CPU has a much more flexible cache architecture; for example, it allows developers to lock cache lines and use them as ultra-fast micro-memory. All in all, most architectural aspects of the 360 CPU are more advanced: everything runs at higher clocks, with faster RAM and higher FPU throughput.

The 360 has a conventional north-south bridge motherboard, so every read from RAM needs an extra step compared to Intel's newer Core architecture with the RAM controller inside the CPU; an SoC gets the same benefit. As for cache locking, every modern multi-core CPU with cache sharing between cores needs that feature, so we can assume any dual-core ARM has it by now. The 360 CPU was designed back in 2003, so there is nothing exceptional here.

BTW, check your specs again: the 360 runs its GDDR3 RAM at 700 MHz, not at the CPU frequency; neither does any system on the market.
ref: http://hardware.teamxbox.com/article...cifications/p1
post #54 of 69
Quote:
Originally Posted by d-range View Post

He's talking about 'the CPU'; to me, that means the CPU, and not the CPU plus the GPU plus the RAM. ...

Agree with you here. Sweeney is strictly talking about the CPU proper in the two devices. There is no other way to interpret the quote.
post #55 of 69
Quote:
Originally Posted by cgc0202 View Post

I do not know if you actually used the Xoom and the iPad 2 long enough to compare them and form reliable anecdotal impressions. Your arguments are all over the place: when it suits you, you use information that compares the Xoom to the original iPad (but conveniently ignore the evidence comparing the Xoom to the iPad 2); see the links you originally provided, before they were challenged with relevant links comparing the Xoom and the iPad 2.

That is also true with reviews from various sources; you focused on the one with the least difference between the two devices. Did you even take the time to read more carefully about the nuances of each comparison?

To rely solely on reviews and what you read on the internet as the basis of your conclusions, and then cast doubt on the statements of others, i.e., the game developer in question, because you believe they "have a stake in it" (which may or may not be true), is rather disingenuous, if not outright ridiculous.

It may help if you buy a Xoom, and then show us your results.

But then again, how can we be sure that you will not do a DaHarder act? He claimed to own both the Xoom and the iPad*** but preferred the Xoom, with the iPad gathering dust (or something to that effect). His credibility can be questioned when he claims he owns so many Apple products, yet the overall impression you get from his posts is that he does not like Apple products much and is always touting a better product from some other company.

I do not know about you or anyone else, but if I tried a company once or twice over the years and they continued to give me a bad experience, why the heck would I keep on buying their products?

CGC

***The flaw there, again, was a comparison between the Xoom and the original iPad. One's credibility may be called into question if one claims to have actually been comparing it with the iPad 2, because if memory serves me, the post was made before the iPad 2 went on sale. And even if the iPad 2 had already been on sale, why would someone line up to buy one after a not-so-good experience with the original iPad, let alone an overall mediocre experience with Apple products?

I'll make this simple for you, OK? The links I posted were for Xoom reviews. When the Xoom came out, the iPad 2 wasn't around for testing, thus its absence from the data I linked. The links solipsism posted were for iPad 2 reviews. Since I have zero interest in owning an iPad, I wouldn't have seen those stats. It's really not a hard concept. Why would I link to an iPad review if someone asked me for information about Xoom battery life?

I know people who do have both (they're tech reviewers/junkies), and they say that both can easily get them through a day or two. That's what I base my "practically comparable" statement on.

Do you have a Xoom? I thought not. Don't tell me not to rely on what's read on the internet when that is ALL that you are doing. I think Apple makes some of the best computers around; I just don't like iOS. I do have an iPod touch (over 2 years old), so yes, I'm familiar with the OS.

For the record, I don't own an iPad or a Xoom. If I get a tablet it will most likely be an Android device, because that's what all my apps are wrapped up in and I prefer the more desktop-like experience it gives. I have used both the Xoom and the iPad 2 (though nowhere near long enough to test the battery).

I know this is hard for you to believe, but if you look at it, ALL MY LINKS WERE TO XOOM REVIEWS. And even in material comparing the Xoom to the iPad 2, the only place it seems to fall short is 720p video playback. Browsing and standard definition are very comparable (to the point where the average user would NOT notice a difference).
post #56 of 69
Quote:
Originally Posted by Shrike View Post

Agree with you here. Sweeney is strictly talking about the CPU proper in the two devices. There is no other way to interpret the quote.

True, and while it looks like I defend the A5 a lot, I still consider the 360 CPU to be much more powerful than any mobile SoC.

But from what I can see, and what this article is all about, SoCs will be the future of mobile and gaming platforms for the next few years, and at some point could be the next generation of desktop computing.
post #57 of 69
Quote:
Originally Posted by BigMac2 View Post

The 360 has a conventional north-south bridge motherboard, so every read from RAM needs an extra step compared to Intel's newer Core architecture with the RAM controller inside the CPU; an SoC gets the same benefit.

It's not as simple as that. Either way, it doesn't matter much anyway; from a practical point of view, the RAM in the 360 is much faster than the LPDDR1 on the A4.

Quote:
As for cache locking, every modern multi-core CPU with cache sharing between cores needs that feature, so we can assume any dual-core ARM has it by now.

Maybe; I'm not sure, as I don't know much about the newer dual-core ARMs. I do know that the A8 pretty much lacks any form of cache control that could help multithreaded performance. He was talking about the A4, and that's a single-core A8.

Quote:
The 360 CPU was designed back in 2003, so there is nothing exceptional here.

I agree with you there. Nothing exceptional, just brute force. You can't compare it to a ~10W mobile part, even though it's 7 years old.

Quote:
BTW, check your specs again: the 360 runs its GDDR3 RAM at 700 MHz, not at the CPU frequency; neither does any system on the market.
ref: http://hardware.teamxbox.com/article...cifications/p1

As you can see, I already corrected myself right before you mentioned it. The PS3's RAM does run at the CPU clock, though.
post #58 of 69
Quote:
Originally Posted by d-range View Post

As you can see, I already corrected myself right before you mentioned it. The PS3's RAM does run at the CPU clock, though.

True, the PS3's XDR RAM runs at the CPU clock, and the VRAM at 700 MHz. Strangely enough, the specs say the memory bandwidth is 25 GB/s for the XDR and 22 GB/s for the VRAM, which puzzles me given the big frequency difference between the VRAM and the XDR RAM on the PS3.
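The puzzle largely disappears once bus width is factored in alongside clock: peak bandwidth is effective transfer rate times bus width. The bus widths and transfer rates below are commonly cited figures for the PS3, used here as assumptions:

```python
# Peak bandwidth = effective transfer rate (MT/s) x bus width (bytes).
# Assumed figures: XDR at 3.2 GT/s effective on a 64-bit bus, GDDR3 VRAM
# at 700 MHz double-pumped (1.4 GT/s) on a 128-bit bus.

def peak_gbs(effective_mtps, bus_bits):
    """MT/s times bytes per transfer, reported in decimal GB/s."""
    return effective_mtps * (bus_bits / 8) / 1000

xdr = peak_gbs(3200, 64)        # narrow but very fast
gddr3 = peak_gbs(700 * 2, 128)  # slower clock, twice the width

print(f"XDR:   {xdr:.1f} GB/s")    # 25.6 GB/s
print(f"GDDR3: {gddr3:.1f} GB/s")  # 22.4 GB/s
```

So a narrow-and-fast bus and a wide-and-slower bus land within a few GB/s of each other, matching the 25 vs 22 GB/s figures in the specs.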

And you're right about the brute force of the 50+ watt 360 CPU; for now it would still be pretentious to compare Cell and ARM CPUs. But if you consider the whole system, mobile platforms like the iPad have some strengths that put them within reach of true gaming platforms, and that even exceed current gaming consoles and desktop computers in some limited ways (e.g., NAND flash storage gives them SSD-class performance). In a way, the iOS ecosystem looks a lot like a gaming console's: Apple has the same level of control over its devices, and over the developers for them, as Sony, Nintendo, and Microsoft do, which appears to have been very successful for the whole gaming industry.
post #59 of 69
Quote:
Originally Posted by d-range View Post

Even with all these penalties and pipeline stalls, a 1 GHz A8 is not going to come close to a single Xenon core or Cell PPE, not even on heavily branched code such as AI. The difference in clock speed, FPU throughput, cache architecture, and memory bandwidth is simply too big. I know the PPC cores in the 360 are no match for, e.g., a G5, but we're comparing against an ARM core about as fast as a single-core Atom at 1 GHz, which is hideously slow.

I'm really the first person to admit ARM designs are making huge inroads in performance, and a dual-core Cortex A9 is starting to look very interesting compared to low-end x86 chips, but a Cortex A8 beating a dual-threaded chip running at 3x the clock speed, with pretty crazy FPU performance, at its own game? That's really a bridge too far.

Sweeney didn't say that. He said "comparable to a single Xbox 360 core." That doesn't mean beating. It may even mean a little slower (10%, 20%, maybe even 30% slower).

I think you're putting too much credence in the "marketing" specs of the PPE. For single-threaded, branchy code, performance can be as "bad" as, or worse than, a 1 GHz Cortex-A8. Developer rumors put it at 1/10 the performance. The penalties can be that bad. Instructions can only be issued every other clock cycle! A 1-1.5 GHz PowerPC G4 was comparable to a 3.2 GHz PPE. No wonder they implemented 2-way SMT. For streaming loads, a 3.2 GHz PPE would obviously be 2x or 3x faster on single-threaded code.

On aggregate, over a game with a mixed load of instructions, something a developer like Sweeney would see, comparable may be the right word. With 3 cores and the other hardware in the Xbox 360, it obviously has more horsepower, 2x-3x, than a Cortex-A8 system. But for core-to-core comparisons, I think it makes sense.

In retrospect, the era that produced NetBurst, Cell, and the PPE was a market-driven dead end. Atom is similar, and is suffering for it. Even EPIC has a philosophical similarity (I'll get to this). Back then, clock frequency was the number one parameter in the performance and marketing of CPUs. It got to the point that they reached the economic and practical limits of power consumption for personal computing. Single-threaded, spaghetti-code performance was sacrificed for streaming performance because the GHz, GFLOPS, and GB/s were so much more marketable.

Cell and the PPE were perhaps the zenith of the imbalance. These processor architectures were essentially hoping that the compiler would save them, that somehow software designers would make their code beneficially multithreaded. The PPE and Atom essentially require multiple threads to keep their pipelines fed with instructions. EPIC went even lower level, expecting compilers and software design to extract parallelism out of the instruction stream itself, let alone requiring multithreaded code.

That was a bad bet, because parallelism is a decades-long problem, while CPU architecture can cycle every four years and gradual improvements in single-threaded performance could be delivered every 2 years. We're caught in the same trap with multi-core CPUs: after the Cortex-A15, single-threaded performance improvement is going to slow down while core counts ramp up, but software that takes advantage of the added cores will be scarce.
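The "more cores won't help most software" point is just Amdahl's law, which caps speedup by the serial fraction of the work. A quick sketch:

```python
# Amdahl's law: for a workload whose parallelizable fraction is p,
# running on n cores, speedup = 1 / ((1 - p) + p / n).

def amdahl_speedup(p, n):
    """Overall speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with half the work parallelized, piling on cores barely doubles
# throughput: the serial half dominates.
for cores in (2, 4, 8):
    print(f"p=0.5, n={cores}: {amdahl_speedup(0.5, cores):.2f}x")
# p=0.5 gives 1.33x on 2 cores, 1.60x on 4, 1.78x on 8
```

That asymptote (2x at best for p = 0.5, no matter the core count) is why ramping cores faster than software parallelism is a losing trade.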
post #60 of 69
Quote:
Originally Posted by tipoo View Post

Ehh? Why would I count the GPU when I'm specifically talking about the CPU comparison he made? He made a per-core processor comparison, not a GPU one. I'm not talking about Apple's 9x graphics-performance claim.

Now you are the one making the bogus claim: Sweeney said A4, YOU said Cortex-A8 core. An A4 includes the GPU, a memory controller, and on-package memory. So it's an A4 versus one of the Xbox Xenon cores, a core with very little cache and no onboard memory controller.

Is the A4 "fast" by a MIPS benchmark? No. Could the A4, with all of the designed-in synergy and latency mitigation of on-package and on-die placement, be as computationally powerful in practice? That's not such a big stretch.

Most non-CS types have no idea how pathetic typical CPU utilization is, how much of a chip's computational power is wasted by less-than-wonderful systems engineering and programming. The ten years since the original PPC POWER4 core shipped, and eventually morphed into the derived Xenon core, have allowed a lot of total-systems engineering to happen.

It is quite naive to discount those ten years of engineering, even if a MIPS snapshot of a single operating mode on both CPUs doesn't survive a division by three. The problem is that the MIPS benchmark is specifically designed to ignore as much of that systems engineering as possible! MIPS is just a single component of a benchmarking suite that needs serious interpretation to get right; cherry-pick any one component and you are guaranteed to get the big picture wrong.
post #61 of 69
Quote:
Originally Posted by d-range View Post

I'm really the first person to admit ARM designs are making huge inroads in terms of performance, and a dual-core Cortex-A9 is starting to look very interesting compared to low-end x86 chips. But a Cortex-A8 beating, at its own game, a dual-threaded chip running at 3x the clock speed with pretty crazy FPU performance? That's really a bridge too far.

Tried to find a good benchmark. There are none. The best I could find is this.

Linux Dhrystone 2 benchmarks


Machine                 Result                    Index
G4/1250 (Mac mini)      3896391.2                 174.2
Thunderbird/1400        3161216.2                 141.3
Xbox 360 ("Xenon")      3015837.2 (estimated)     134.8
Celeron M/1500          2759615.6                 123.4
iBook G3/700            1537968.9                  68.8
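For a sense of what Dhrystone actually measures, here's a toy Dhrystone-flavored kernel in C (purely illustrative; this is not the real Dhrystone 2 source, and the function name is my own). Each iteration does a string copy, a string compare, and a bit of integer arithmetic, which is roughly all the benchmark exercises: no FPU work, no memory-system stress, no branch-heavy game logic:

```c
#include <string.h>

/* Toy Dhrystone-style kernel: string ops plus integer ALU work per
 * iteration, returning a deterministic checksum so the compiler
 * can't optimize the whole loop away. */
unsigned toy_dhrystone(unsigned iterations) {
    char src[] = "DHRYSTONE PROGRAM, SOME STRING";
    char dst[sizeof src];
    unsigned checksum = 0;
    for (unsigned i = 0; i < iterations; i++) {
        strcpy(dst, src);                    /* string copy */
        if (strcmp(dst, src) == 0)           /* compare, always equal */
            checksum += (i * 3 + 7) & 0xFF;  /* small integer math */
    }
    return checksum;
}
```

A score from a loop like this says little about cache behavior, memory bandwidth, or branchy code, which is why a single integer benchmark needs careful interpretation.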


My memory vaguely recalls the discussions at the time around the IBM PPE in the Cell and in Xenon/Waternoose. Remember, in that 2003-2005 time frame there was a lot of noise about what Apple was going to do as the PPC 970fx/970mp ran out of steam and reached a bad perf/watt state. There was a lot of angst about this on Apple boards and forums, and one of the wishful thoughts at the time was that IBM could adapt the PPE for Apple Macs.

My vague recollection was that, after many posts about the architectural differences, a 4 GHz PPE would be about like a 2 GHz 970fx, maybe even a 2 GHz PowerPC G4, core to core, leaving Apple none the better.

The simplistic single-threaded integer benchmarks above basically show this. And once you take into account spaghetti, branchy code, things got ominous. Good multithreaded code, which these kinds of architecture are good at running, is still obviously decades off except for the embarrassingly parallel problems (solving differential equations, streaming ops, multimedia, etc.).

Obviously, going Intel was the best move.
post #62 of 69
Quote:
Originally Posted by d-range View Post

Even with all these penalties and pipeline stalls, a 1 GHz A8 is not going to come close to a single Xenon core or Cell PPE, not even on heavily branched code such as AI. The difference in clock speed, FPU throughput, cache architecture, and memory bandwidth is simply too big. I know the PPC cores in the 360 are incomparable in performance to, e.g., a G5, but we're comparing against an ARM core about as fast as a single-core Atom at 1 GHz, which is hideously slow.

I'm really the first person to admit ARM designs are making huge inroads in terms of performance, and a dual-core Cortex-A9 is starting to look very interesting compared to low-end x86 chips. But a Cortex-A8 beating, at its own game, a dual-threaded chip running at 3x the clock speed with pretty crazy FPU performance? That's really a bridge too far.


The Cell is a very good chip, but time does wonders to new technologies. Now that the Cell is over 5 years old, and has had no upgrades, it isn't as astounding as it once was. The same thing is true of console graphics.

While those devices as a whole are still better than the current iPad 2, they aren't that far ahead. Another interesting fact is that many of the vaunted features of current high-end graphics boards aren't even noticed during gameplay. We don't have the ability to see in motion graphics many of the defects we notice in stills of the same scene. This is very well known in the motion picture and TV industries, and it holds true for video games.

Once we got past a certain level, better effects simply aren't noticed. I know that gamers would disagree, but it's true nevertheless. I'm not talking about obvious things such as real-time rendering of plant movement in the wind, or flags flapping and such; I'm talking about things such as anisotropic filtering. At the resolution of most of today's monitors, the details are too fine to notice the effect of much of this.

At any rate, game consoles sit still as far as technology goes while the world rapidly moves by. Now we're hearing rumors that MS is looking for beta testers for some unannounced upgrade to the 360. This is interesting as a comparison to the past: if it's true at all, it's a very low-key way of doing it.

In the past, no later than three years into a console's run, and typically about two years in, we would be hearing about the new chip designs and architectures the companies had in R&D. A very big deal would be made of new CPUs, GPUs, memory architectures, and the like. Then, after a very visible process, including write-ups in various game and computer magazines and sites, we would get interviews and then presentations at gaming expos. All of this would happen a year before the new consoles were expected to arrive.

It's been about five years since the last batch, and we've heard—nothing!

Meanwhile, there will be an iPad 3 early next year at the latest. It will likely be much more powerful, and will probably sport the fabled 2000 x 1500 screen. With what we're already seeing, gameplay on your monitor or TV using the iPad as a controller, or a phone or two as 3D controllers, how will consoles compete when the iPad lets you take it with you anywhere and a console doesn't?

The big three will have to scrap whatever they've been working on and confront this. And what exactly, if anything HAVE they been working on?
post #63 of 69
This is exactly what I was thinking as I was reading. The consoles' hardware was amazing years ago, but little has changed since, while ARM is improving by leaps and bounds every six months.

Quote:
Originally Posted by melgross View Post

The Cell is a very good chip, but time does wonders to new technologies. Now that the Cell is over 5 years old, and has had no upgrades, it isn't as astounding as it once was. The same thing is true of console graphics.
post #64 of 69
Quote:
Originally Posted by TenoBell View Post

This is exactly what I was thinking as I was reading. The consoles' hardware was amazing years ago, but little has changed since, while ARM is improving by leaps and bounds every six months.

My thinking is that both MS and Sony are burned out by both the costs of bringing the last consoles to market, and the big losses they sustained during years of sales. They aren't in a rush to start that whole thing again before they've had a chance to make some of those losses back, particularly MS.

Sony was canny with the PS3. It cost them more than 360 development cost MS, but MS had just a standard console to show for it. Sony used the PS3 to fund part of the R&D on the Blu-ray design and laser development, which is why it was 10 months late and $100 per device more costly than expected at launch. But Sony has gotten billions as a result, and killed HD DVD as well, so they've come out well ahead. MS, on the other hand, has less to show for it, plus that $1.3 billion in an escrow account to pay for 360 service from the red-ring-of-death defect.
post #65 of 69
Quote:
Originally Posted by melgross View Post

The big three will have to scrap whatever they've been working on and confront this. And what exactly, if anything HAVE they been working on?

I think that, in the short term, the iPad menaces "the big three" more than it does the PC world. The iPad is only missing a credible controller to be the first true mobile game console, and it shouldn't be very hard for Apple to enable Bluetooth HID frameworks in iOS. Apple has everything in place to trigger a game-industry crash like the one in the '80s, when Atari, Coleco, and Intellivision all crashed together in the face of new game-oriented personal computers like the C64. Right now, ask any kid which they would rather have: a TV-attached game console or an iPad.

Think about it: the iPad 2's internals in a $99 Apple TV, and Apple would still make a profit. New consoles are generally sold below cost.
post #66 of 69
Quote:
Originally Posted by BigMac2 View Post

I think that, in the short term, the iPad menaces "the big three" more than it does the PC world. The iPad is only missing a credible controller to be the first true mobile game console, and it shouldn't be very hard for Apple to enable Bluetooth HID frameworks in iOS. Apple has everything in place to trigger a game-industry crash like the one in the '80s, when Atari, Coleco, and Intellivision all crashed together in the face of new game-oriented personal computers like the C64. Right now, ask any kid which they would rather have: a TV-attached game console or an iPad.

Think about it: the iPad 2's internals in a $99 Apple TV, and Apple would still make a profit. New consoles are generally sold below cost.

What you're saying is true to a great extent. What makes the iPad such a dangerous weapon is that it threatens BOTH the gaming industry's way of operating AND the computer industry's future path. The most dangerous devices are the ones that can replace more than one thing, and do it well enough that people prefer it to the two or more things they were using until the new thing came out.

That's why the iPhone was so disruptive. I read all the arguments of those who insisted that they didn't want all of this in a phone, but that was just a small minority. We see what happened.

The same thing will be true for the iPad, in a somewhat different direction. If Apple can get good controllers for this, then it's all over.
post #67 of 69
Quote:
Originally Posted by melgross View Post

The most dangerous devices are the ones that can replace more than one thing, and do it well enough so that people prefer it to the two or more things they were using until this new thing came out.

That's why the iPhone was so disruptive.

And I would add to this: unifying multiple devices in a way Nintendo would not even dream of. Let's dream a little and imagine a $600+ iPhone, a $500 iPad, a $250 iPod touch, and a $99 Apple TV, all with the same hardware capabilities and a Steam-like approach ("buy once, play on anything, free reinstallation"). You'd have the most formidable armada out there to crush the gaming-console industry. Apple is sitting on something really big; I don't know if they will handle it that way.
post #68 of 69
Quote:
Originally Posted by BigMac2 View Post

And I would add to this: unifying multiple devices in a way Nintendo would not even dream of. Let's dream a little and imagine a $600+ iPhone, a $500 iPad, a $250 iPod touch, and a $99 Apple TV, all with the same hardware capabilities and a Steam-like approach ("buy once, play on anything, free reinstallation"). You'd have the most formidable armada out there to crush the gaming-console industry. Apple is sitting on something really big; I don't know if they will handle it that way.

It seems as though Apple is migrating to this model as much as they can. There is the disconnect between the keyboard model and the touch model that they've got to work through with the aTV. But if they can work through that with the other devices, it could work. If they do, they only need the aTV as a passthrough; there's no need to have apps on it if you use your iDevice to run them.

The reinstallation is pretty much here, except for the aTV. Getting iOS apps working on the Mac would help as well. If, somehow, they could get them working on a PC, that would cause problems for MS's model, and Google's as well.
post #69 of 69
Quote:
Originally Posted by melgross View Post

Getting iOS apps working on the Mac would help as well. If, somehow, they could get them working on a PC, that would cause problems for MS's model, and Google's as well.

Apple still has many jokers up its sleeve, many of which most people don't know about. Remember, before being bought by Apple, NeXT's main product was OpenStep, an application layer that could sit on top of Windows (the "Yellow Box"). This is how Apple could easily port Objective-C-based apps like Safari and iTunes to Windows: on Windows, each of those apps carries its own Yellow Box runtime embedded. Apple could have challenged the Java VM with what they call Cocoa; that was NeXT's master plan at first. With the new LLVM compiler and its JIT runtime, Apple can easily bridge between iOS and Mac OS. Just look at how many iOS ports back to the Mac populate the Mac App Store.