Originally posted by ciparis
...and I played the X-Men 3 trailer in 1080p, zoomed to fit width (it was too large otherwise).
CPUs were evenly loaded, and ranged between 25-50% usage while all this was going on. No frame drops were visible.
Cool. You can see the actual fps through the info window.
Your results are perhaps a little better than those on a dual 2.0 GHz Power Mac, where CPU usage during 1080p decoding generally runs between 90-110%, while yours is 50-100% (out of a total of 200%). Of course there are some factors to consider, such as the new GPU, which is really decent, and the particular trailer (they are not all exactly the same size).
The X1600 in the new iMac can decode h.264, but I am not sure whether that extends to 1080p. So, to see if Apple provides the appropriate drivers, I would suggest playing a 720p trailer while keeping an eye on the CPU monitor. If CPU usage is suspiciously low (though I cannot say how low), then hardware 720p decoding is probably enabled.
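To make the "suspiciously low" comparison concrete, you could sample overall CPU usage during playback and compare it against what software decoding of the same clip costs on a known machine. This is only a sketch: the `software_baseline` and `threshold` values below are assumptions for illustration, not figures from Apple.

```python
def looks_like_hardware_decode(cpu_samples, software_baseline=90.0, threshold=0.5):
    """Guess whether decoding is hardware-assisted.

    cpu_samples: per-second total CPU percentages taken during playback
    software_baseline: what pure software decode of the same clip costs
                       (assumed here; measure it on a reference machine)
    threshold: fraction of the baseline below which usage counts as
               'suspiciously low' (an arbitrary assumption)
    """
    avg = sum(cpu_samples) / len(cpu_samples)
    return avg < software_baseline * threshold

# e.g. a 720p clip averaging ~30% against a ~90% software baseline
print(looks_like_hardware_decode([28, 32, 30, 29]))  # True under these assumptions
```

The threshold is deliberately coarse; the interesting signal is a large drop versus the software baseline, not any particular number.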
Unkie Steve said on Tuesday that OS X's bundled apps, iLife and iWork are all Universal, so iTunes must be Universal. I imagine, though, that iTunes, as probably the least processor-intensive app in iLife, was the last one made Universal. It probably ended up being rushed.
Steve wouldn't allow something like that to slip on products as important as the iMac / MacBook Pro ... I imagine we're going to see a patch soon. I mean, a lot of switchers will come because of the Intel transition - this is too important a part of Apple's history for something like that to slip through on Steve's watch.
Remember that this is a very new technology for Macs - it will get better as time passes.
There seem to be a few people not reading this thread properly.
The 4.5x ripping speed quoted at the beginning of the thread was an anomaly. iTunes is a universal binary, and apparently there's nothing wrong with it. The MacBooks actually rip at around 20x (go back and read the thread again, paying particular attention to Xool's posts); presumably the iMacs are a bit faster.
Okay, I currently have some Google Maps mashups open in Safari (we're doing real-estate searches), with iTunes streaming CBC Radio 3, and I played the X-Men 3 trailer in 1080p, zoomed to fit width (it was too large otherwise).
CPUs were evenly loaded, ranging between 25-50% usage while all this was going on. No frame drops were visible. Scrubbing was smooth. I don't have QT Pro.
I have a question for the person who has the new Intel iMac. Could you run it through some applications like GarageBand or Photoshop - anything that will push the processor near its maximum for at least a couple of minutes? I am VERY curious about how loud the fan is. I currently have a revision A iMac G5 in a music studio and it is really unusable, because the fans run so loudly that it sounds almost like a vacuum cleaner. I've taken it to the Apple Store and they've run tests, etc., and apparently it is within the spec of the machine to have the fans running at 4400 rpm with a processor temperature of 144 degrees F. You would be doing me a huge favor if you could run some tests of this nature! Thanks a lot!
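For readers who think in Celsius, that 144 °F processor reading works out to about 62 °C. The conversion is simple arithmetic:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

print(round(f_to_c(144), 1))  # 62.2
```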
Correct about people not reading this forum properly! The results in the initial post have been shown to be false!
What I suspect may have happened is that the original version of iTunes on the show floor was the PPC codebase compiled as a Universal Binary (and thus not optimised for Intel). For something as CPU-intensive as ripping, Apple would have maintained a separate Intel library, and it wasn't until day two that they released a version with this ripping code merged into the Intel part of the Universal build.
Just a guess, but it seems feasible given Xool's results (before and after).
Well, I don't know about the MacBook Pro, but my now three-year-old Sony VAIO with a P4 2.4 GHz and 768 MB of RAM, using iTunes, rips CDs at 14x.
So it definitely isn't the chip, as others have pointed out too.
iTunes on a P4 is actually quite a bit quicker than iTunes on the Pentium M. I'd expect the Core to narrow that gap, though, and most likely surpass the P4.
It's also important to run the tests with the CPU at full power. I'd guess the 4.5x speed was because the CPU was throttled back to low-power mode. I have a 1.8 GHz Dothan-based Windows laptop that only gets about 6-7x encoding speed on battery and about 14x on mains power. For comparison, a 1.8 GHz G5 gets about 16-18x.
I'd expect the Core to match the G5 at the same clock and, since there are two of them and a big cache, beat it.
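Those x-figures translate directly into wall-clock time, since ripping speed is quoted as a multiple of real-time playback: a 60-minute album at 4.5x takes over 13 minutes, while at 20x it takes 3. A quick sketch of the arithmetic:

```python
def rip_minutes(album_minutes, speed_x):
    """Time to rip an album when the drive/encoder sustains speed_x
    times real-time playback."""
    return album_minutes / speed_x

# the speeds mentioned in this thread, for a 60-minute album
for x in (4.5, 14, 20):
    print(f"{x}x -> {rip_minutes(60, x):.1f} min")
```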
Originally posted by Frogmella
What I suspect may have happened is that the original version of iTunes on the show floor was the PPC codebase compiled as a Universal Binary (and thus not optimised for Intel).
I wouldn't have thought so. iTunes has been a Universal Binary since 10.4.3.
It is possible to force universal binaries to run their PPC code through Rosetta on Intel Macs. Someone could have ticked the "run in Rosetta" box to check performance. We'll never know for sure why Xool's first test was so out of whack.
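Incidentally, you don't have to guess whether a binary is Universal: `lipo -info` or `file` will tell you, and the check is mechanical, since a fat (Universal) Mach-O file begins with the big-endian magic 0xCAFEBABE followed by a count of per-architecture records (cputype 18 is PowerPC, 7 is i386). A minimal parser sketch; the sample header below is synthetic, not bytes from a real iTunes binary:

```python
import struct

FAT_MAGIC = 0xCAFEBABE
CPU_NAMES = {7: "i386", 18: "ppc"}  # cputype values from <mach/machine.h>

def fat_architectures(data):
    """Return architecture names found in a fat Mach-O header,
    or [] if the data is not a fat (Universal) binary."""
    if len(data) < 8:
        return []
    magic, nfat = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        return []
    archs = []
    for i in range(nfat):
        # each fat_arch record: cputype, cpusubtype, offset, size, align
        (cputype,) = struct.unpack_from(">I", data, 8 + i * 20)
        archs.append(CPU_NAMES.get(cputype, hex(cputype)))
    return archs

# synthetic two-arch header (ppc + i386) standing in for a real binary
fake = struct.pack(">II", FAT_MAGIC, 2)
fake += struct.pack(">IIIII", 18, 0, 4096, 1000, 12)  # ppc slice
fake += struct.pack(">IIIII", 7, 3, 8192, 1000, 12)   # i386 slice
print(fat_architectures(fake))  # ['ppc', 'i386']
```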
Check this out, folks. It has Photoshop tests under Rosetta. You may be surprised by the results. But I would wait for more detailed benchmarks.
Wow. That would be impressive, if Rosetta benchmarks across the board like that. By my reckoning, it means the 1.83 GHz iMac was faster running Photoshop through Rosetta than the previous G5 was natively. Which seems a little too good to be true.
More benchmarks needed.
Well, that IS what Apple's claiming: that the iMac is 200% as fast, and that Rosetta runs at 60-80% of native speed. So Rosetta apps should run at a minimum of 120% of the speed of, well, whatever it is Apple is comparing these numbers to.
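For what it's worth, the arithmetic behind that claim is just multiplication, taking Apple's marketing figures at face value:

```python
def rosetta_relative_speed(native_speedup, rosetta_efficiency):
    """Expected speed of a PPC app under Rosetta on the new machine,
    relative to running natively on the old machine."""
    return native_speedup * rosetta_efficiency

# Apple's claims: ~2x native speed, Rosetta at 60-80% of native
low = rosetta_relative_speed(2.0, 0.6)   # 1.2
high = rosetta_relative_speed(2.0, 0.8)  # 1.6
print(low, high)
```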
Well, that's an overly simplistic way of looking at it. Apple's claim of around 200% performance is strictly from a SPECint and SPECfp perspective. Those benchmarks are synthetic, but they reflect general-purpose integer and floating-point performance. These are areas where the G4 and the G5 have trailed the Intel-based processors a little lately.
Bear in mind that SPEC is optimized for multiprocessors, so the dual-core nature of the Core Duo automatically gives it a leg up over the single G5 in the previous iMac.
Where the G4 and the G5 did well, and why Photoshop was still very much viable on the PowerPC, was vector performance - specifically, AltiVec instructions. AltiVec was much more elegant and much faster than Intel's equivalent, SSE, and I do not expect Rosetta emulation of AltiVec to be nearly as fast as real AltiVec hardware.
So what does that mean in practice? For something like application startup, I would expect it to be a wash. App launching doesn't use AltiVec, so you won't see much of a slowdown there. Where you will see a significant slowdown is when you start to apply AltiVec-optimized filters in Photoshop.
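That reasoning can be sketched as a simple weighted-slowdown model: total emulated runtime depends on how much of a workload is scalar versus vector code, with each part paying its own penalty. Every factor below is a made-up assumption purely to illustrate the shape of the argument, not a measured Rosetta number:

```python
def emulated_time(scalar_fraction, scalar_slowdown, vector_slowdown):
    """Relative runtime under emulation, where a workload spends
    scalar_fraction of its native time in scalar code and the rest
    in AltiVec code, each emulated at its own slowdown factor."""
    vector_fraction = 1.0 - scalar_fraction
    return scalar_fraction * scalar_slowdown + vector_fraction * vector_slowdown

# app launch: almost entirely scalar, so the vector penalty barely matters
print(emulated_time(0.99, 1.5, 5.0))
# an AltiVec-heavy Photoshop filter: the vector penalty dominates
print(emulated_time(0.30, 1.5, 5.0))
```

The point is structural: the same emulator looks nearly transparent on scalar-bound tasks and painfully slow on vector-bound ones.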
Originally posted by Gee4orce
Well, that IS what Apple's claiming - that the iMac is 200% as fast, and that Rosetta runs 60-80% as fast as native - so Rosetta apps should run a minimum of 120% the speed of - well - whatever it is Apple is comparing these numbers to
Well, dual processors alone, with at least partly multiprocessor-optimised programs like Photoshop, should speed things up (how well Rosetta itself is optimised for multiple processors is another question, but I'd guess it is).
For me, it would make more sense to compare the dual-core Pentium M (aka Core Duo) with a dual-core G5.
That would be step one; step two is marvelling at the possibility of having dual processors in both the iMac and the PowerBook (or MacBook Pro, if you prefer).
Originally posted by aegisdesign
iTunes on a P4 is actually quite a bit quicker than iTunes on the Pentium M. I'd expect the gap to narrow on the Core though and most likely surpass it.
My guess is that the SSE3 instructions and a better FPU give the P4 its superior performance.
Merom, in 2H '06, will inherit both the SSE3 instructions and probably the better FPU. Core at the moment is an extension of the P-M; Merom is the evil lovechild of the P-M and the P4.
Originally posted by noirdesir
For me, it would make more sense to compare dual-core Pentium M (aka Core Duo) with a dual-core G5.
Sure. But for people wondering whether Photoshop on the iMac/MacBook will run as quickly as on their old iMac/PowerBook, the answer would seem to be positive from that one benchmark.
If a few more benchmarks show Photoshop running that quickly under Rosetta, many people who do fine with laptops and iMacs will be quite happy to upgrade before a native Photoshop arrives.
Originally posted by PB
Cool. You can see the actual fps through the info window.
Your results are perhaps a little better than those on a dual 2.0 GHz Power Mac, where the CPU usage in 1080p decoding is generally between 90-110%, while yours is 50-100% (for a total of 200%). Of course there are some factors to consider, like the new GPU which is really decent and the trailer in question (not all exactly the same size).
The X1600 in the new iMac can decode h.264, but I am not sure if this applies to 1080p format. So, to see if Apple provides appropriate drivers for that, I would suggest to play a 720p trailer, having always an eye on CPU monitor. If CPU usage is suspiciously low (but I cannot tell how much), then probably the hardware 720p decoding is enabled.
Mind you, the new QuickTime update also optimised h.264 performance.
Before, I wasn't able to play 720p on my PowerBook G4 1.5 GHz; now it plays without a frame drop.
I picked up a mini-DVI cable today and hooked up my Gateway 21" widescreen display. This image (crappy cellphone cam - sorry) shows the iMac spanning, with both displays set to 1680 x 1050 (native for each).
Comments
(First post)
Originally posted by webmail
Yup, agreed: iTunes under OS X on Intel is either not native or has some other problem; it's not the same.
iTunes is a universal binary as of 1/10/06. It was mentioned that all of the iLife '06 apps were universal and they were shipping Tuesday.
Originally posted by ciparis
Okay, I currently have some Google Maps mashups open in Safari (we're doing real-estate searches), with iTunes streaming CBC Radio 3, and I played the X-Men 3 trailer in 1080P, zoomed to fit width (it was too large otherwise).
CPUs were evenly loaded, and ranged between 25-50% usage while all this was going on. No frame drops were visible. Scrubbing was smooth. I don't have QT Pro.
Can you play 2 HD trailers at the same time?
Originally posted by Electric Monk
My guess is that the P4 with the SSE3 instructions and a better FPU gives it superior performance.
Merom in 2H '06 will inherit both the SSE3 instructions and probably the better FPU. Core at the moment is the extension of the P-M, Merom is the evil lovechild of the P-M and the P4.
Wrong. Yonah (the Core Duo) supports SSE3 instructions right now.
http://en.wikipedia.org/wiki/Intel_Core
Merom will introduce a more advanced four-issue core with 64-bit support.