Apple's Benchmarks misleading? - Page 4

post #121 of 179
Quote:
PS: sorry there are so many people who take everything a company says as gospel.

S'funny. 98% of PC weeners have been buying the Intel 'mhz' is god crap fer years.

And they readily accept Spec benches which are optimised on Intel's terms.

Sorry, I thought I smelt a line of crap fer a second there...

Machine vs Machine. The G5 whooped the Xeon's ass. And that's the best Intel has got.

Pixar's President didn't say the Xeon was the fastest for RenderMan. Or the bloated Pentium 4...nope. The G5! Read and weep. I'll take Pixar's President on RenderMan over any armchair PC hack on these boards.

And yer 'AMD' net loophole...well...if the Pentium 4 trashes 'XP' Athlons...and the Pentium 4, and more significantly the Xeons, got trashed...then the G5 says, 'So what to AMD...' AMD haven't been able to do proper MHz fer a few years now. They've been playing with numbers. Intel fudged the numbers when they launched the 1.4 gig Pentium 4, which was slower than a 1 gig Pentium 3! Yeesh, the hypocritical spew being ejaculated by threatened PC whiners is discordant to say the least.

And the much-lauded AMD 64...by the time they get that out the gate and into the hands of PEECEE whiners...then IBM and Apple will be waiting for them with 2.5 gig 0.13 970s. Dual. Poor AMD should be stuck on their 1.8 gig Opteron for a while. Heard they had problems getting their MHz up. 'kin take some Viagra fer that...



Lemon Bon Bon

I AM GOING TO BUY A G5 AS SOON AS THEY START SHIPPING! TOP END. DOG'S B*LL*CKS!
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #122 of 179
Thread Starter 
Quote:
I AM GOING TO BUY A G5 AS SOON AS THEY START SHIPPING! TOP END. DOG'S B*LL*CKS!

A-Rupert-Grinting-Men.

Die, PC Weenies, Die.
J.C. Corbin, Apple Certified Technical Coordinator
Member, Apple Consultants Network
www.ro3.com
post #123 of 179
Quote:
Die, PC Weenies, Die.



(Hey, gotta get back into the Powertower Pro 225 mhz Pentium smashing spirit.... After years of Intel's mhz crap...I'm just having fun getting the last four years off my chest...)

Now where was I?

Oh yes..., 'Die, PC Weenies, Die!'

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #124 of 179
Heh...I hope to make Apple's benchmarks even more 'misleading' when I stick 4 gigs of RAM and an ATI 9800 Pro in my dual G5...

(erughK...cr-eek...opens can of Pentium whooop-ass...)

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #125 of 179
OK, now that LBB has had a chance to finally get something off of his chest...


Anyways....

Powerlogix issued an informative press release regarding G5. http://www.powerlogix.com/press/rele...03/030625.html

This Technical Note from Apple also lists some technical details of the G5 vs. the G4, e.g. there is 0 (zero!) MB of L3 cache on the G5. I don't like that. http://developer.apple.com/technotes/tn/tn2087.html

I was pretty sure that I was going to get a G5 but may now wait for RevB or pick up the G4 on the cheap.

And as I have always said, no real comparison can be made re G5 until it ships...
One iMac G5, one iPod, many PCs.
post #126 of 179
Quote:
Originally posted by klinux
This Technical Note from Apple also lists some technical detail of G5 vs G4 e.g there is 0 (zero!) MB of L3 cache on G5.

And?
JLL

95 percent of the boat is owned by Microsoft, but the 5% Apple controls happens to be the rudder!
post #127 of 179
Steve Jobs mentioned that there was no L3 cache on the G5 in his keynote. You should watch it... he basically says the processor bus is so fast that it doesn't need one, or something to that effect.
A Fair and Balanced Liberal

John Kerry for President
post #128 of 179
How much do you want to bet that Rev B or later will have an L3 cache in there? The rest of the industry (Intel, AMD, IBM, etc.) is going toward more L3 cache, not less.
One iMac G5, one iPod, many PCs.
post #129 of 179
Quote:
Originally posted by klinux
How much do you want to bet that Rev B or later will have an L3 cache in there? The rest of the industry (Intel, AMD, IBM, etc.) is going toward more L3 cache, not less.

JLL

95 percent of the boat is owned by Microsoft, but the 5% Apple controls happens to be the rudder!
post #130 of 179
One iMac G5, one iPod, many PCs.
post #131 of 179
From what I can tell, Joswiak's response thoroughly debunked 90% of the haxial site and its kin.

The only remaining issue, as far as I can see, is whether they should have used the gcc compiler all around. It seems to me that there are two possibilities:

1. use the same compiler on both machines in order to hold the compiler constant so only the hardware is tested (this is what Apple did)
2. use the most optimized compiler on each machine in order to obtain the peak performance of the chip (this appears to be what the PC people want)
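
To make option 1 concrete, here's roughly what a "hold the compiler constant" harness looks like. This is an editorial sketch only, not Apple's or VeriTest's actual setup: bench.c is a hypothetical stand-in workload, and the SIMD flags shown are mainline gcc's.

    # Sketch of option 1: same compiler, same flags on both machines,
    # with vendor SIMD (SSE2, AltiVec) only as separately-labelled variants.
    import subprocess, time

    BUILDS = {
        "baseline":    ["gcc", "-O3", "bench.c", "-o", "bench"],              # identical on G5 and Xeon
        "x86_sse2":    ["gcc", "-O3", "-msse2", "bench.c", "-o", "bench"],    # x86-only variant
        "ppc_altivec": ["gcc", "-O3", "-maltivec", "bench.c", "-o", "bench"], # PPC-only variant
    }

    def build_and_time(name):
        subprocess.check_call(BUILDS[name])    # same source, same optimisation level everywhere
        start = time.time()
        subprocess.check_call(["./bench"])     # run the workload once
        return time.time() - start

    print("baseline:", round(build_and_time("baseline"), 1), "seconds")

Option 2 would swap that gcc line for each vendor's preferred compiler (Intel's own compiler on the Xeon, something like VisualAge on the PPC side), which is exactly where the two camps part ways.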

I don't know enough about the technical issues to say which one is best, but #1 certainly has a logic to it. I've read people like Hannibal at Ars say that hardware and compiler are inextricable and it doesn't really make sense to divorce the compiler from the hardware. OK. But then what if Apple did just use Intel's SPEC scores and some similarly proprietary Apple SPEC scores?

Example: IBM's initial SPEC scores were considerably higher for the 1.8GHz G5 than Apple's 2GHz keynote SPEC scores. This seems to suggest that using some other method of getting SPEC scores can substantially influence the scores. I'd be interested in seeing those - let Apple do whatever it can to squeeze out the highest SPEC scores, and then compare those to the Intel/Dell numbers that are out there. At the least you'd have to agree that both companies were able to cheat equally.

It is also my understanding that those SPEC scores don't take advantage of Altivec, which I think most people agree is a strong suit for the G4/G5 machines compared to Intel, and has been very important to Apple up to this point.
post #132 of 179
Quote:
Originally posted by Lemon Bon Bon


(Hey, gotta get back into the Powertower Pro 225 mhz Pentium smashing spirit.... After years of Intel's mhz crap...I'm just having fun getting the last four years off my chest...)

post #133 of 179
Thread Starter 
Ahh, the good old days.

As for the L3 cache -- Intel etc. are moving to it for the same reason that Apple moved to it years ago: it alleviates the delays caused by system buses that cannot feed the processor and other systems fast enough, by caching oft-used items close to the processor.

Quite simply, if your L3 cache can't feed the processor appreciably faster than the system bus can, you don't need L3 badly enough for it to be cost-effective.

A 0.8-1.0 GHz system bus and 0.4 GHz DDR RAM can apparently feed the chip fast enough, at least in Apple's opinion.
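
A quick back-of-the-envelope check on that claim, using Apple's advertised figures for the top-end model rather than anything measured (rough sketch only):

    # G5 frontside bus: two unidirectional 32-bit (4-byte) links at 1 GHz
    fsb_gbs = 2 * 4 * 1_000_000_000 / 1e9   # ~8.0 GB/s aggregate
    # Main memory: dual-channel DDR400 (PC3200), 8 bytes per transfer per channel
    ram_gbs = 2 * 400_000_000 * 8 / 1e9     # ~6.4 GB/s
    print("bus: %.1f GB/s, RAM: %.1f GB/s" % (fsb_gbs, ram_gbs))

Main memory can very nearly saturate the bus on its own, which is the whole argument for skipping L3 here.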
J.C. Corbin, Apple Certified Technical Coordinator
Member, Apple Consultants Network
www.ro3.com
post #134 of 179
Quote:
Originally posted by Fluffy

Yeah sluggo, I still have that poster somewhere. That was one of the best MacWorlds ever.
post #135 of 179
The other reason for L3 cache is reduced latency, which is why you see it so much in servers. I'm sure at some point Apple will probably re-add L3 cache, but I wouldn't count on it being in Revision B or even the near future.
"When I was a kid, my favourite relative was Uncle Caveman. After school, wed all go play in his cave, and every once and awhile, hed eat one of us. It wasnt until later that I discovered Uncle...
Reply
"When I was a kid, my favourite relative was Uncle Caveman. After school, wed all go play in his cave, and every once and awhile, hed eat one of us. It wasnt until later that I discovered Uncle...
Reply
post #136 of 179
From www.accelerateyourmac.com

Quote:
"Hey everyone, I've been at Apple's developer conference and had a chance to install and try out After Effects on a new G5.
I ran the Night Flight file that has come to be the standard for AE benchmarking. Since I didn't want to sit there and watch it render for hours, I ran just the first 10 interlaced frames from the project's pre-set render queue...
http://www.aefreemart.com/tutorials/...ghtflight.html
Here are my results for this test on the three computers I have available to me:

1 x 1.0 GHz G4 PowerBook 17" - ~30 minutes (3 min/frame)
2 x 2.66 GHz Pentium Xeon from Boxx - 11 min, 39 sec (1.2 min/frame)
2 x 2.0 GHz PowerMac G5 - 6 min, 1 sec (0.6 min/frame)

I ran the Xeon test on a couple different identical machines to make sure mine wasn't just running slowly, but got identical results.
Of course my Mac bias is well-documented, but I'm sure many people here can vouch for me as an honest person. If the results had gone the other way, I'd just keep my mouth shut and let someone else break the bad news.
Other observations about this test that may ultimately work in the Mac's favor:

1) The machine was not running 64-bit Panther, but only a tweaked version of 32-bit Jaguar. Likewise, AE is obviously not yet compiled to take advantage of the G5 chip in any way. Both of these situations will automatically be rectified in the future.

2) Night Flight is very CPU-intensive, but not very disk I/O intensive. I think the 1 GHz system bus and other details on the G5 will provide greater gains for typical projects that rely more heavily on I/O."
==============================================

All pretty interesting!!!!
DAVID S.
pixelcraft studios "

DAMN! Twice as fast as a dual Xeon with one of the worst-coded apps for dual Macs. It's been interesting to see the response of the PC fans. I sense a sort of insecurity that's cropped up. Absolute advantage in speed was theirs for the past couple of years at least. That seems to have evaporated. We as Mac users don't really require the fastest hardware...we just need to stay close. We already know we have the superior OS. Be afraid...be VERY afraid.
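
The arithmetic behind that "twice as fast" holds up; a quick check using nothing but the times quoted above:

    # Night Flight, first 10 interlaced frames, times as reported in the post
    times_s = {
        "1 x 1.0 GHz G4 PowerBook": 30 * 60,       # ~30 minutes
        "2 x 2.66 GHz Xeon (Boxx)": 11 * 60 + 39,  # 11 min 39 s
        "2 x 2.0 GHz PowerMac G5":  6 * 60 + 1,    # 6 min 1 s
    }
    g5 = times_s["2 x 2.0 GHz PowerMac G5"]
    for machine, t in times_s.items():
        print("%s: %.1f s/frame, %.2fx the G5's time" % (machine, t / 10.0, t / float(g5)))

The Xeon comes out at about 1.94x the G5's render time on this unoptimised test.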
He's a mod so he has a few extra vBulletin privileges. That doesn't mean he should stop posting or should start acting like Digital Jesus.
- SolipsismX
post #137 of 179
Quote:
Originally posted by BRussell
Example: IBM's initial SPEC scores were considerably higher for the 1.8GHz G5 than Apple's 2GHz keynote SPEC scores. This seems to suggest that using some other method of getting SPEC scores can substantially influence the scores.

Does anybody know the story behind the original IBM SPEC scores? It could be that they were just coded better. However, it could also be that the final version was slower, or that OS X is slower than whatever was used in the original test. Until these numbers came out, it had been assumed around here that the IBM numbers were conservative, but maybe they were not conservative enough.
post #138 of 179
Quote:
Originally posted by JBL
Does anybody know the story behind the original IBM SPEC scores? It could be that they were just coded better. However, it could also be that the final version was slower, or that OS X is slower than whatever was used in the original test. Until these numbers came out, it had been assumed around here that the IBM numbers were conservative, but maybe they were not conservative enough.

I believe they were compiled using VisualAge. Definitely not GCC. You know... I'm pretty much fed up with SPEC scores. The premise behind SPEC is noble, but the execution is poor. Bring on the real-world tests!
He's a mod so he has a few extra vBulletin privileges. That doesn't mean he should stop posting or should start acting like Digital Jesus.
- SolipsismX
post #139 of 179
Quote:
Originally posted by JBL
Does anybody know the story behind the original IBM SPEC scores? It could be that they were just coded better. However, it could also be that the final version was slower, or that OS X is slower than whatever was used in the original test. Until these numbers came out, it had been assumed around here that the IBM numbers were conservative, but maybe they were not conservative enough.

The original IBM SPEC scores were estimates, not actual results. Any actual benchmarking would probably have used IBM's extremely nice VisualAge compiler, but I haven't seen any concrete results from that (not that I've looked, really, since the whole question seems academic to me).

On another topic, the above-linked PowerLogix white paper has at least one glaring error:

Quote:
By the way, anyone remember when the PowerMac G4 733 with the 7450 CPU first began shipping? Apple touted it as the best thing since...well....the PowerMac G4 with PPC 7400...but the previous 533 model was actually a better performer than the 733! This was primarily due to the pipeline architecture of the 7450 compared to the 7400. Apple needed to get the clock speed up for marketing purposes (then and now.) Will history repeat itself? How close will the previous Dual 1.42 PowerMac G4 come to the new 1.6GHz or 1.8GHz G5s, for example?

This is a terrible example, because it ignores the fact that the 7450 is a significantly different chip from the 7400. Yes, it was less efficient than the 7400 when running code compiled for the 7400. Once applications started targeting the 7450, the 733 was faster - in particular, 7450 AltiVec performance shot through the roof (and then hit the MaxBus bandwidth limitation) once people started adapting their code to it, because it's a much better AV unit than the 7400's. But Mot's CPUs are very sensitive to instruction ordering, so if you don't make an effort to write for a particular strain of the G4 you're not going to get impressive results from it.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #140 of 179
Again the dual G5 kicks the crap out of a dual Xeon on an unoptimised Adobe app... Bring on Panther and Adobe optimisation...

Saaay. Where's that smart-ass Digital Video site that was largin' it with their 'Dell twice as fast as powerMac G4' smugness? They didn't mind dishing it...can they take it?

Bet they aint so smug now...maybe they can try their Adobe digital video bench again on a dual 2 gig G5.

Wipe the smile off their 'face like slapped arse'.

Lemon Bon Bon

We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #141 of 179
Quote:
Originally posted by Lemon Bon Bon
Saaay. Where's that smart-ass Digital Video site that was largin' it with their 'Dell twice as fast as powerMac G4' smugness? They didn't mind dishing it...can they take it?

Oh he's still dishing it.
Quote:
The G5 is not the fastest chip for personal computers, workstations, desktops, ducks or whatever. No matter how you cheat it, according to AMD its Opteron has the G5 beat by a country mile. DMN has obtained SPEC benchmark data from AMD, a company that I believe has considerably more credibility than Apple, showing its numbers on the exact same tests run by Apple. Take a good look at the table below. Well, well. Who's the fastest now?
[graph showing Opteron 50% faster than G5.]
post #142 of 179
I've been thinking about something since reading about all the PC people complaining that Apple skewed the tests because they didn't use the compiler that produced the *highest* SPEC scores, not to mention that said compiler isn't even the most commonly used.

To me, when you're testing the HARDWARE and *only* the HARDWARE, you must try to keep everything else the same as much as possible. That means using the same compiler for both machines. Sorta like testing automobile performance... You'd test the car using the SAME DRIVER and the SAME MODEL AND MANUFACTURER of the TIRES on the SAME TRACK. That means you must run these SPEC tests and see which machine produces the higher scores. VeriTest obviously did this, but the kicker is that they even went further to see if the slower (Intel) scores could be boosted using the special features that might be available. Again, VeriTest did this... They tested with SSE extensions turned *on*. Since SPEC doesn't test AltiVec on the PowerPCs, I say that this test was MORE than fair -- even to the extent that I'd call it biased in favor of Intel! All that aside (and I'd really like some feedback on my speculation), is it at all possible that Apple issued these results with this one specific (particular?) benchmark knowing full well that it would cause such an uproar?

I'm beginning to wonder if they singled out SPEC in order to draw attention to it -- showing that it really is pretty useless in terms of what's important *computationally* these days, in personal computing or otherwise. I think they did it to show that SPEC can no longer be *gospel* regarding system performance. There are too many tricks and tweaks that companies can use to show higher-than-usual SPEC marks. Apple may have done this to show that SPEC's results are misleading and somewhat flawed, and that its results have become *limited* in terms of what they tell you about a particular system. To be honest, I think it makes the PC side look bad, not Apple. Anyway, just something to think about.

--
Ed
post #143 of 179
Yeah. I'd like to wipe that guy's smug smile off his web-page...

Speaking of the Opteron...hardly a personal computer. Costs an absolute fortune.

We'll see if the Opteron is 50% faster when Altivec, Panther, and the G5 are all optimised and IBM's 0.09 970s process their AMD ass... Plus, Opterons still run on a crap OS. XP is crap. I'm using it. I KNOW what I'm talking about.

You can get the dual 2 gig G5 for a reasonable £2,300 (compared to last year's top-end tower...). That's damn good for what you're getting.

That's what I'm getting!

As for the SPEC scores, Ed M, I think you're right. They do seem biased in Intel's favour if anything. As above...wait till Apple get the whole caboodle sorted by the Fall. Panther. Dual G5s optimised to run the OS. And real-world apps to bench...not some x86 crony SPEC test bed. Pixar said it was the highest-performing machine. If they say it, I'll take their word.

I think the G5 will meet all comers...Opteron included...and even if an Opteron/Prescott came out slightly ahead...then I'm okay with that knowing IBM are moving 'swiftly' to 0.09 with the 970. How's AMD going to do with their 0.09? Wish them luck...

We'll have Panther. They won't.

Lemon Bon Bon

PS. Besides, for once, I don't care what Wintel has. The G5 is more than competitive. It's much cheaper than Intel's Xeon box. Kick ass G5 on Kick ass Panther...that's all I've been asking for...during these last four years. We NOW have hardware that can go toe-to-toe. That will do for me. And we've still got the Altivec factor. Can't wait to see it on that bus on real world apps... Nasty...
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #144 of 179
...and besides...the G5 had the Xeon aced on price by over a grand.

When was the last time Apple could say that?

I can't whinge about the price. That's why I'm getting one...

C'mon, Apple...ship it already!

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #145 of 179
It does look like machines with Intel chips are way overpriced and underperforming compared to Apple's G5s, but what about AMD's Opteron?

Are there Opteron systems out there that are competitive with the G5?
post #146 of 179
At first blush, the Opteron seems to be in the G5's ballpark performance-wise, although it seems to land in a higher price bracket.

It'll be hard to say more than that until a G5 actually ships, though. When people - including thorough, independent benchmarkers like Mike Breeden - can get their grubby mitts on the new PowerMac, we can start talking in more specific terms.

In the meantime, I wish AMD all the best against Intel.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #147 of 179
Quote:
Originally posted by Ed M.

To me when you're testing the HARDWARE and *only the HARDWARE, you must try and keep everything the same as much as possible. That means using the same compiler for both machines.

Am I the only one who doesn't buy this argument? The PPC and x86 code generation components of GCC were written by entirely different groups of people. They are not the same in any way, shape or form with the exception of the name. The parser and semantic analysis may be the same, but that seems to me to be largely irrelevant. It's not that I agree with the winlots on this one (I couldn't care less about the SPEC scores), but I still don't think the "same compiler" argument is valid (unless there's something about the compiler that I'm missing that makes it similar across platforms).

All in all I think Apple would have been better off sticking with real-world apps.
post #148 of 179
Also keep in mind, guys, that not only are the Opteron and Xeon systems MORE expensive (and underperforming, in at least the Xeon's case), but the Opteron will likely be running 32-bit apps for some time to come. It's not as if shrink-wrap consumer/professional application developers are coming over in droves for the Opteron. Again, I'm not aware of any developers, Microsoft included, saying that they would port their wares to AMD's x86-64 for the desktop. If it's gonna happen, it will happen exactly like Microsoft said it would... no 64-bit Windows on personal consumer desktop systems until the end of the decade (or was that Intel? Or was it both?). Anyway... the point is, I'm not sure there is a quick and efficient way of getting those x86 apps over in time. Think of all the bugs they will have to deal with, all while trying to maintain backward compatibility - or compatibility in general - between all the many, many versions of Windows that are out there. Talk about the Windows OS *forking*... Sheesh!

Consumer confusion at an all-time high! Migration nightmares for the Wintelon crowd indeed. That's probably why they said by the end of the decade. There would be little incentive for Wintelon developers to code for these systems at this time, since the Opterons and Xeons are not aimed at the everyday-joe-blow-consumer space, and the 32-bit versions are claimed to run just fine through a 64-bit OS - and since they're 32-bit, it's unlikely that they will be able to address more than the 4 GB limit. Yep, Apple has the upper hand right now, and their timing couldn't have been better given the current market conditions.

Hey, what's with the new developer tools that have been released that supposedly make it a breeze to port existing apps over to the new Panther/G5 systems? Can someone explain this a little better. Amorph? Programmer?

By the way... Where is StagflationSteve...? That character must be beside himself since the G5s were announced... Let's face it, how is he going to explain that Apple's new G5 systems are much cheaper and better performing than the best the Wintelon crowd has to offer?

--
Ed
post #149 of 179
Quote:
Originally posted by Fluffy
Am I the only one who doesn't buy this argument? The PPC and x86 code generation components of GCC were written by entirely different groups of people. They are not the same in any way, shape or form with the exception of the name. The parser and semantic analysis may be the same, but that seems to me to be largely irrelevant. It's not that I agree with the winlots on this one (I couldn't care less about the SPEC scores), but I still don't think the "same compiler" argument is valid (unless there's something about the compiler that I'm missing that makes it similar across platforms).

All in all I think Apple would have been better off sticking with real-world apps.

I agree that the conceit that any of this measures pure hardware is silly; pure hardware can't do anything but idle and heat your apartment. I certainly don't buy it. The conceit that using GCC across platforms somehow levels the playing field is equally bizarre, although at least it's no more absurd than claiming that any compiler or compilers could accomplish that goal. But if this is true, then comparing the "same" application across platforms is an unreliable measure of "raw" system performance as well, because applications are built with compilers.

It seems to me that if any use is to be made of SPEC, it should be admitted that what is being measured is a target platform, and then a target platform - that is, a combination of compiler (+ settings), OS and hardware that developers would actually target their applications for - is what should be tested. By the conventional wisdom of benchmarking, it would be almost useless to compare, say, GCC 3.3 on a 2x2GHz PowerMac G5 running OS X 10.3 to MS Visual C++ on a 2x3GHz Dual Xeon running Windows XP Professional, because so many things are different - but no matter what you do, too many things are going to be different. So you can either abandon the errand, or accept that what SPEC essentially is is a suite of application benchmarks (gzip is part of it, and that's certainly an application) and so it can't be used to test anything more finely grained than an application platform. Given that, the only platforms of any interest are the ones that people are actually going to use, so the people interested in comparos should get a couple of retail boxes, install the most common developer toolsets leaving the defaults in place, build SPEC, run it, and report the results. The results from this test will most closely mirror the actual capabilities available to the applications that people will use, and the methodology will make it difficult to tweak for benchmark results - if Intel wants better results on SPEC, they'd have to change their systems in a way that would be noticeable to anyone picking up an Intel-powered PC in retail, and in that case the higher SPEC score would be relevant in real terms.

I understand that this is a radical and outrageous thesis to most systems benchmarkers, but it's the only way I can imagine a cross-platform comparison that's in any way meaningful.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #150 of 179
Quote:
Originally posted by Ed M.
Also keep in mind, guys, that not only are the Opteron and Xeon systems MORE expensive (and underperforming in at least the Xeon's case), but also the fact that the Opteron will likely be running 32bit apps for some time to come. Not like the shrink-wrap consumer/professional applications developers are coming over in droves for the Opteron. Again, I'm not aware of any developers, Microsoft included, saying that they would port their wares to AMD's x86-64 for the desktop. If it's gonna happen, it will happen exactly like Microsoft said it would... No 64bit Windows on the personal consumer desktop systems until the end of the decade (or was that Intel? Or was it both?) Anyway.. the point is, I'm not sure there is a quick and efficient way of getting those x86 apps over in time. Think of all the bugs they will have to deal with all while trying to maintain backward compatibility or compatibility in general between al the many, many versions of Windows that are out there. Talk about the Windows OS *forking*... Sheesh!

I believe MS has announced an intention to release a version of Windows for AMD's x86-64, but I don't know more than that. If MS only releases a server edition of Windows for x86-64, that would help keep Opteron-based machines out of the PowerMac's market; but don't forget the UNIX-likes.

At any rate, the Opteron has none of the Itanium's difficulty running 32-bit applications. It doesn't do the job as elegantly as the 970 does, but it's not inelegant either. It looks to be an impressive performer, and for the sake of AMD's continued presence in the high-end CPU market (and for the sake of raising Intel's blood pressure) I hope it's everything AMD promises. The less room the Itanium has between the Opteron and the 970, the happier I'll be.

Quote:

Hey, what's with the new developer tools that have been released that supposedly make it a breeze to port existing apps over to the new Panther/G5 systems? Can someone explain this a little better. Amorph? Programmer?

As neither of us was sent to WWDC, I don't think we could muster any more information between us than is available to anyone else here.

The 970 supports the exact same instruction set that the 74xx does (the privileged and processor state instructions are different, but that's only relevant to a few parts of the kernel and the CHUD tools), so I can't imagine that "porting" means anything more than a recompile in all but the most pathological cases. Changes to the system frameworks shouldn't require recompilation in the overwhelming majority of cases - that's the big advantage of a framework. As usual, some Carbon apps will have to have code added to explicitly add support for some of the new features, but unless Apple takes this opportunity to deprecate the old event model and similar legacy cruft (please, Apple, please!), even the Bad Carbon Ports(TM) will run just fine as is.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #151 of 179
Amorph writes:

Quote:
but don't forget the UNIX-likes

If OS X isn't the UNIX by which all others are judged, then it likely will be very soon. It already has the best UI of any UNIX in history. Bar none. And it covers all users from the extreme geeks right on through to first-time computer users. The Opteron will likely be facing a drought of native Windows apps, especially if Microsoft doesn't release a "consumer" version of Windows-64 for it. We'll see though.

And regarding Amorph's comments on SPEC... This only goes to show that SPEC may very well have outlived its usefulness as the main benchmark people look to when trying to surmise which platform is fastest. I just think there are way too many variables that can be manipulated to skew the test one way or the other, depending on how you look at it. SPEC is supposed to test the *entire* system. That must include every aspect of the CPU. If it doesn't test SIMD (AltiVec) or perhaps some other new CPU feature that might exist, then it IS NOT evaluating the *entire* system, so its results provide less than optimal information.

Amorph writes:

Quote:
At any rate, the Opteron has none of the Itanium's difficulty running 32-bit applications. It doesn't do the job as elegantly as the 970 does, but it's not inelegant either.

OK, so then what's the catch? That users will "have to know" or "be aware of" what types of apps they are running? Will the 32-bit apps run smoothly alongside the 64-bit native apps simultaneously? If so, how much of a system slowdown is expected? Then there is the OS question again. The applications/developer support questions, the driver and hardware support questions, the end-user-confusion questions, etc., etc., etc., eeeeeesh... Not something I'd be looking forward to. And in the end, Intel and Windows folk are likely to be stuck with the x86 architecture for some time to come if AMD's chips take off.

--
Ed
post #152 of 179
Quote:
Originally posted by Tuttle
It does look like machines with Intel chips are way overpriced and underperforming compared to Apple's G5s, but what about AMD's Opteron?

Are there Opteron systems out there that are competitive with the G5?

I went to www.boxxtech.com (a web site that crashes my current build of Mozilla, by the way) and priced a dual Opteron 244 system that matched the base dual 2.0 G5 as best as I could. The price came out to about US $4200 -- the same price range as the dual Xeon the G5 was put up against in the keynote.

It would be interesting to see this system in a bake-off with the G5 with real apps.

Does anyone know of any such tests run between Xeon and Opteron?

[Edit: I'd thought that "Opteron 244" meant 2.44 GHz. 244 is just a model number. The 244 runs at 1.8 GHz.]
We were once so close to heaven
Peter came out and gave us medals
Declaring us the nicest of the damned -- They Might Be Giants          See the stars at skyviewcafe.com
post #153 of 179
I didn't really finish reading this thread because I got too pissed. Of course the PC hardware junkies are going to go right for the SPEC tests, trying to say Apple cheated. Look, Apple used an independent firm to do the testing; I'm quite sure they wouldn't have if they were going to cheat. Second, if Apple cheated, then why did the Mac perform so much better in all the bake-offs? Look at Mathematica: they couldn't even show its full potential because the PC couldn't handle it. It's all in the apps, baby. Unless you run SPEC for a living.

That's my "I bought the marketing" point of view. I'm too tired to give my own arguments right now. I'll do so later.
''i'm an extremist, i have to deal with my own extreme personality and i walk the fine line of wanting to die and wanting to be the ruler of all.''
post #154 of 179
I just found this Opteron vs. Xeon article: Duel of the Titans: Opteron vs. Xeon

It's a mixed bag of results with no clear overall winner. In these tests, the Opteron system falls behind the Xeon in the kinds of things Apple wants to emphasize, like audio, video, and 3-D processing.

So, SPEC numbers aside, I get the impression that Apple's bake-off might have been just as, if not more impressive against an Opteron-based system.
We were once so close to heaven
Peter came out and gave us medals
Declaring us the nicest of the damned -- They Might Be Giants          See the stars at skyviewcafe.com
post #155 of 179
Level 3 cache on the G4 gets more bang for buck than on the G5, mainly because the G4 is so bandwidth starved (especially in Altivec). Although it might make some difference to the G5, it wouldn't be worth it (furthermore, the L3 cache would have to be faster and even more expensive to have a worthwhile effect on a faster, hungrier processor).

Do any Intel/AMD mainboards actually have L3 cache? The POWER4 does, but it's not a personal computer CPU. The last x86 CPU I can remember with L3 cache was the AMD K6-III.

Quote:
Saaay. Where's that smart-ass Digital Video site that was largin' it with their 'Dell twice as fast as powerMac G4' smugness? They didn't mind dishing it...can they take it?

Sensing some glee and smugness
Stoo
post #156 of 179
That 2.5 gig 970 IBM PR page. That was on 0.13?

Could the 2.5 yields be stockpiling now?

To coincide with the Panther release...at an even higher price bracket? Uber-Uber-Unix Workstation?

It suddenly occurred to me...that a Rev B PowerMac G5 will get you an extra (assumption by LBB here...) 500 MHz of 970-style clock. Dual that. Yer talking 1 gig of 970 oomph over the current top of the line. On FPU performance, that would be like having a 3 gig G4 from such a mere bump! Or a 1.7 gig G4 on integer? Or a 2 gig G4 overall. They don't make them yet! And if the bus stretches with each MHz bump...you get more bandwidth from a 970 '04 San Fran' speed bump than you have in the current top-of-the-line G4 tower. Hmmm. Whatever, that is a handy extra 'oomph' to have. It should run the Opteron a lot closer if the Opteron IS ahead.

If Apple don't ship the 970 towers until a little later due to shipping schedule crushing demand...then I may wait until Rev B turns up.

I've watched the keynote rebroadcast 3 times now...and I'm still in shock. That machine is a work of art. Panther is so much more the complete article...and I can't wait to reclaim my soul as a Mac user...

All this SPEC bull is a little churlish in my eyes. A lot of analysts have swallowed Intel's 'MHz' bull for years, seemingly unwilling to challenge those speed claims. The most blatant 'cheat' being the 1 gig Pentium 3 to 1.4 gig Pentium 4 transition. I often wonder...if the Pentium 4 was actually a Pentium 3...what would that performance really be? 2.5? 2.3? I suppose that's 'ifs' and 'maybes.'

Even IF the 'G5' is a 'catch up' machine, it's one hell of a catch up. A grand cheaper than its competition. Certainly the Boxx boxes. It looks ahead to me in real-world apps. That's where it counts. And instinct says it won't be long before we have some kind of speed bump. I doubt we'll have to wait 9 months. 12. Or 18! And I'm sure the bump won't just be 250 MHz! Each bump you get with a 970 gets you double yer G4 progress! So 50% may not sound that much...to some...but to me...1 gig of 970 MHz improvement actually gives you the equivalent of 1.7 gig of G4 integer, 2 gig of G4 overall, or 2.7 gig of G4 fpu performance in 1 year! Put like that, it IS IMPRESSIVE!

Taken over a year between the last available G4 tower and the 3 gig 970 available this time next year...it's just a whole 'nuther ball game. The G4 will seem like the dark ages.

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #157 of 179
Quote:
Originally posted by Lemon Bon Bon
That 2.5 gig 970 IBM PR page. That was on .13?

Could the 2.5 yields be stock piling now?

I think that the 2.5GHz 970 was a typo since it had a 900MHz bus.
JLL

95 percent of the boat is owned by Microsoft, but the 5% Apple controls happens to be the rudder!
post #158 of 179
Quote:
I just found this Opteron vs. Xeon article: Duel of the Titans: Opteron vs. Xeon

It's a mixed bag of results with no clear overall winner. In these tests, the Opteron system falls behind the Xeon in the kinds of things Apple wants to emphasize, like audio, video, and 3-D processing.

So, SPEC numbers aside, I get the impression that Apple's bake-off might have been just as, if not more impressive against an Opteron-based system.

Yeah. When I looked at the app benches at Tom's Hardware, I came away distinctly unimpressed by the Opteron in single-CPU form. And in dual? It seemed to be hanging out with the Xeon. So what's the story there...?

I suppose we'll have to wait for real world performance.

But for those creative pros who had to leave the Mac for performance reasons but who loved the OS...Panther and G5 will probably bring them home! And they can buy all their 'X' versions of their software with the £1000 they saved over buying the Xeon workstations...

Lemon Bon Bon
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #159 of 179
Quote:
Originally posted by Ed M.
If OS X isn't the UNIX by which all others are judged, then it likely will be very soon. It's already has the best UI of any UNIX in history. Bar none. And it covers all users from the extreme geeks right on through to the first-time computer users. Opteron will likely be facing a drought of native Windows apps; especially if Microsoft doesn't release a "consumer" version of Windows-64 for it. We'll see though.

Apple has some work to do before OS X is the UNIX by which all others are judged, and they might never do some of that work: It's not really necessary for OS X to have either the bulletproof uptime or the robust enterprise-grade toolset available on, say, AIX or OSF/1 (or whatever they're calling it now - Compaq UNIX?). Of course, it won't have their price tags, either.

Quote:
[re: the Opteron]

OK, so then what's the catch, that users will "have to know" or "be aware of" what types of apps they are running. Will the 32-bit apps run smoothly alongside the 64-bit native apps simultaneously? If so, how much of a system slowdown is expected? Then there is the OS question again. The applications/developer support questions, the driver and hardware support questions, the end-user-confusion questions, etc., etc., etc., eeeeeesh... Not something I'd be looking forward to. And in the end, Intel and Windows folk are likely to be stuck with x86 architecture for some time to come if AMD's chips take off.

I'm not recalling the exact nature of AMD's solution - I believe the processor has to switch modes, which it can do on the fly (say, between context switches). It's definitely refined enough that end users won't have to care whether they're running 32 or 64 bit applications, or both at once. Note that applications compiled for the Opteron's 64-bit mode gain access to twice as many registers, so there's an incentive to compile to 64 bit for a bit of extra performance even if the application doesn't need the 64-bit support itself. (The 970 reveals the same number of registers to both 32- and 64-bit applications, and it sports the generous number of registers common to PowerPCs.)

THG noted in the shootout article linked above that since the Opteron has a memory controller on board, it can only be connected to DDR333 RAM. If you wanted to hook an Opteron up to, say, dual channel 128-bit DDR400 RAM, you'd have to either wait for AMD to release a revised Opteron that supported that configuration or purchase a chip that bypasses the onboard memory controller in favor of its own (THG discusses this). On the other hand, each Opteron gets its own bank of RAM, up to 8GB - but that advantage wouldn't really manifest until you got to 4+ CPU systems.
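
For a rough sense of the trade-off THG is describing, here is a sketch assuming the commonly quoted dual-channel, 8-bytes-per-transfer figures for both platforms (not benchmark data):

    opteron_per_socket = 2 * 333_000_000 * 8 / 1e9   # on-die dual-channel DDR333 controller -> ~5.3 GB/s per CPU
    g5_shared          = 2 * 400_000_000 * 8 / 1e9   # dual-channel DDR400 behind the system controller -> ~6.4 GB/s total
    print("Opteron: %.1f GB/s per socket; G5: %.1f GB/s shared" % (opteron_per_socket, g5_shared))

So the Opteron gives up a little peak bandwidth per socket, but that figure scales with every CPU you add, which is why the advantage only really shows up in bigger multiprocessor boxes.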

Based on that article, I'm going to speculate that the G5 will still be at the front of the pack when it rolls out - if not in SPEC, than in real-world application benchmarking. If there is something faster on the landscape, it won't cost $3K.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #160 of 179
Quote:
Originally posted by shetline
I just found this Opteron vs. Xeon article: Duel of the Titans: Opteron vs. Xeon

It's a mixed bag of results with no clear overall winner. In these tests, the Opteron system falls behind the Xeon in the kinds of things Apple wants to emphasize, like audio, video, and 3-D processing.

So, SPEC numbers aside, I get the impression that Apple's bake-off might have been just as, if not more impressive against an Opteron-based system.

This article also showed some interesting points: in some tasks a single 1.8 GHz Opteron is slower than the Athlon 2400. The Opteron is basically an Athlon with a different memory controller and a bigger L2 cache (and of course 64-bit integer support and the SSE2 instruction set). I fear that the Athlon 64, with only 256 KB of L2 cache, will suck.