Meh. Apple used the graphs from the tests they liked, this guy used other graphs from the tests...it's almost as if the author expects truth in advertising.
All we have to do is wait and see if dell says anything or find out for ourselves when the g5 comes out.
Tests will always be skewed a little. It happens all the time.
I find it funny that one of this guy's reasons that Apple is misleading customers is the $2999 price tag, which is 1 dollar less than $3k to trick customers into thinking it's not a $3k machine. Funny indeed.
To make matters even worse for Apple, Dell sells a faster version of the Dimension 8300 than the one that Apple tested. Apple tested a Dimension 8300 with a 3.0 GHz P4, but Dell also sells a 3.2 GHz P4.
Uhm, didn't Intel just launch that chip yesterday?
IBM's conservative estimates for its own compiler-optimized SPEC results for a 1.8 GHz PPC970 were 937 SPECint and 1051 SPECfp.
Scale up those numbers for a 2 GHz part and I think the 970 is competitive, especially in respect to its power dissipation which is still less than your average Xeon, P4, Opteron, etc.
Meh. Apple used the graphs from the tests they liked, this guy used other graphs from the tests...it's almost as if the author expects truth in advertising.
The results from the Apple site are different because Apple is using a compiler called GCC, not the Intel compiler.
Intel uses its own compilers, tweaked especially for SPEC tests. Apple used the same compiler for every system, so Apple is doing fair benchmarks. The official SPEC scores are quite misleading, since every vendor supplies its own benchmarks using whatever compiler it chooses. AND... they do not account for AltiVec.
The main thing that shows how much bullshit is in these benchmarks is that the dual processors are a lot slower than their single-processor siblings. In the real world, this would show that the benchmarking software is not SMP-optimised at all. If it were, would 2 engines be slower than one engine of exactly the same type?
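The complaint above is easy to demonstrate: a benchmark that was never parallelised runs on one core no matter how many the machine has. A minimal Python sketch (illustrative only, not the actual SPEC harness; the function names are mine) of the difference between a single-threaded kernel and one explicitly split up:

```python
# Illustrative sketch only (not the SPEC harness): a benchmark that was
# never written to split its work can only ever occupy one processor.
from concurrent.futures import ThreadPoolExecutor

def kernel(bounds):
    """CPU-bound work: sum of squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def serial_sum(n):
    # How a typical SPEC test runs: one thread, one core, regardless of
    # how many processors the machine has.
    return kernel((0, n))

def parallel_sum(n, workers=2):
    # Only code restructured like this can keep a second CPU busy.
    # (In C, each chunk could run on its own processor; in CPython the
    # GIL limits the speed-up, but the structure is the point here.)
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(kernel, chunks))

assert serial_sum(10_000) == parallel_sum(10_000)
```

Both versions compute the same answer; only the second can occupy a second CPU, which is why an un-threaded benchmark makes a dual look no faster than a single.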
I hate to come in here with my "Facts" and "Logic", but the author of that page does not have a clue.
First, Apple stated that they used GCC 3.3, so as to level the playing field.
Second, and this is important: Using GCC 3.3 does *not* level the playing field.
For one, GCC is highly optimised for the x86 architecture. The author suggests otherwise, which is bullshit. What platform do all those Linux (FSF & GCC) developers use? If you guessed PC, you are correct. Through sheer numbers, probably 90% of GCC developers are a) on PCs and b) scrutinizing the code for x86 optimisation. What percent of said developers have PPC 970 workstations? Approximately 0%.
Open source code on the scale of the (huge) GNU Compiler Collection is all about numbers. The number of x86 developers is vastly greater than the number of PPC developers (or SPARC, or MIPS, or Alpha), thus the x86 code is far, far more mature.
Second, the P4 architecture has been around for YEARS. Thus, all those GCC developers have had all those years to tweak the relevant optimisation code for i686 (or whatever it's called). How long has the 970 portion of GCC 3.3 been around? Months?! Thus, it is inevitable that the code generated for PPC970s is less than perfect, even poor. Don't believe me? The GCC-compiled code for all PPCs is reputed to be quite poor, *especially* when compared to the binaries produced for x86.
Moreover, PPC in general, and 970s in particular, are very much dependent on the compiler for speed. Much more so than x86. Remember Hannibal's articles on Ars about the 970? Optimising for the 970's scheduler makes or breaks the 970's performance. So how well do you think a 6-month-old portion of GCC for a (then) vapourware chip, written by a small handful of IBM developers, is going to fare?
The great thing about GCC, however, like all open-source projects with its momentum, is that with time it will get much better. More eyes see all bugs, or something like that.
I have a G4-based laptop running no less than three operating systems compiled w/ GCC (3.2), and I can tell you GCC produces poorly optimised code for the G4, which has been around for years. And yes, I set my compiler flags.
OK, end of rant.
edit: forgive the fugly formatting, I'm posting from w3m (a text based web browser).
Thanks for this contribution. I will add that the benchmarks with real applications show a much greater advantage for the G5 (twice as fast). There are PDF files available on the Apple site.
In any case, isn't it logical that a 58-million-transistor, 64-bit RISC chip on 0.13-micron SOI, with the fastest bus on the most advanced PC motherboard in the world, is the fastest PC?
I say yes, without any doubt. It was logical that the G4 was behind. It's logical that the G5 rocks, especially in the dual configuration (one 1 GHz bus for each processor, compared to one shared 800 MHz bus for the dual Xeon).
Well, it's a moot point. On a level playing field (i.e. same software) the G5 is faster. But this is not the real world; in the real world, both processors would be integrated into systems with code optimised for that processor.
Real-world tests are what really count, so we will have to see what happens with Photoshop and Cinewave, and various MPEG encoders, etc.
there is no doubt the playing field has been leveled for most users.
It is disappointing that Apple *appears* to have cheated somewhat on the SPEC test. I would expect better from Apple than from, say, Gateway, who compared Quake 3 on a Gateway to Quake 3 running in Classic on an iMac.
But if anyone is surprised by this, I've got news for you. Of course Apple's benchmarks are going to be biased somehow. They want to sell Macs, not provide a fair and honest comparison for the consumer.
But this is not the real world; in the real world, both processors would be integrated into systems with code optimised for that processor.
Actually, no. In the real world, the system that runs on the P4 is designed to run on anything from a Pentium II 133 MHz. The system that runs on the G5 is designed to run on an original iMac (233 MHz G3).
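That compatibility constraint is usually handled with runtime dispatch: ship one baseline code path that runs on the oldest supported CPU, and switch to a tuned path only when the machine at hand supports it. A hypothetical Python sketch of the pattern (the function names and the pretend 4-wide "SIMD" loop are mine, purely for illustration):

```python
# Hypothetical sketch of runtime CPU-feature dispatch: real products ship
# a baseline path that runs everywhere, and only switch to a tuned path
# (SSE2, AltiVec, ...) when the running machine supports it.

def dot_baseline(xs, ys):
    """Portable path: works on any CPU the product supports."""
    return sum(x * y for x, y in zip(xs, ys))

def dot_tuned(xs, ys):
    """Stand-in for a vectorised path; must give the same answer."""
    total = 0
    for i in range(0, len(xs) - 3, 4):  # pretend 4-wide SIMD body
        total += (xs[i] * ys[i] + xs[i + 1] * ys[i + 1]
                  + xs[i + 2] * ys[i + 2] + xs[i + 3] * ys[i + 3])
    for i in range(len(xs) - len(xs) % 4, len(xs)):  # scalar tail
        total += xs[i] * ys[i]
    return total

def dot(xs, ys, has_simd=False):
    # The dispatch itself: one capability check, then the fast path
    # if (and only if) the hardware has it.
    return dot_tuned(xs, ys) if has_simd else dot_baseline(xs, ys)
```

The two paths must agree exactly; the baseline is never removed, because the shipped binary still has to boot on the oldest machine the vendor supports.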
Apple's "cheating" by replacing the malloc library is strange, but probably not significant. The SPECint tests have long been ridiculed for concentrating on raw number-crunching at the expense of system-related services like memory allocation and paging.
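Why swapping the malloc library can move a score at all: allocation-heavy code spends much of its time in the allocator rather than in the "real" computation. A rough Python analogy (not SPEC code; the function names are mine): both functions below build the same string, but the first reallocates over and over while the second allocates the result once.

```python
# Analogy (in Python, not C) for why the memory allocator matters:
# identical output, very different allocation behaviour.

def concat_naive(parts):
    out = ""
    for p in parts:       # each += may copy and reallocate `out`
        out += p
    return out

def concat_pooled(parts):
    return "".join(parts)  # one pass, one final allocation

words = [str(i) for i in range(1000)]
assert concat_naive(words) == concat_pooled(words)
```

A workload dominated by the first pattern measures the allocator as much as the CPU, which is exactly why substituting a faster malloc can raise a "CPU" benchmark score.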
Intel, on the other hand, probably used a special compiler that produces code that will not run on a P3. No vendor in his right mind would do that. Even the latest games are compiled so that they at least work on earlier pentia.
The Mathematica test is pretty convincing. They always try to optimize and they are a good example of real-world performance.
At the end of the day, I'm not interested in Steve's pissing contest with Intel. It's enough for me that we have a computer that performs well enough. The G5 delivers on that requirement.
The author of the haxial.com report is hypocritical: he blames Apple for pushing their own machine's tests, but then compares the G5 with benchmarks of AMD and Intel processors that were done by AMD and Intel, respectively.
Comments
And that the high Dell scores he mentions are the result of using a tweaked Intel compiler - one designed for high benchmark results, hm?
Quite simply, the real-world speed is going to be the selling point of these Macs.
Get some ads on TV showing Dad getting home in time for supper or Mom getting to bed before midnight, and the real message will sink in:
Macs let you get on with life.
Cheers
Let's riot!
the most important thing is how much faster they are than what we own now.
i think it's obvious that most of us won't be buying pc's, so comparing them is like subaru comparing a WRX to a bmw M3.
not too relevant! the numbers never tell the whole story!
i don't care about the stats as long as it screams on OSX!
you can't have that on any pc, no matter the speed!
btw: so what. isn't it more important that it runs OS X and, let's say, Photoshop screamingly fast?
[very off-topic mode on]
do you know what "dell" means in dutch? (well, okay, it's written differently but pronounced the same: del)
it means: slut.
so much for dell's credibility
[very off-topic mode off]
Originally posted by 1337_5L4Xx0R
BIG, HUGE RANT
Thanks. Your comments make me feel a little better. I had forgotten about Intel-compiled SPEC tests.
he says he's a mac user but had nothing good to say about mac users or the new powermacs.
this is good!
for a long time apple could not legitimately challenge the wintel boxes.
now it can, and the wintel dummies will come for us, watch and see.
yesterday was a glorious day in mac folklore, but i believe it's only the beginning.
long live apple!!!!!!
Originally posted by 1337_5L4Xx0R
I hate to come in here with my "Facts" and "Logic", but the author of that page does not have a clue.
First, Apple stated that they used GCC 3.3, so as to level the playing field.
Second, and this is important: Using GCC 3.3 does *not* level the playing field.
For one, GCC is highly optimised for the x86 architecture. The author suggests otherwise, which is bullshit. What platform do all those
linux (FSF & GCC) developers use? If you guessed PC, you are correct. Through sheer numbers, there's probably 90% of GCC developers a) on PCs and
b) scrutinizing the code for x86 optimisation. What percent of said developers have PPC 970 workstations? Approximately 0%
Open source code of a magnitude of the (huge) Gnu Compiler Collection is all about numbers. The number of x86 developers is vastly greater than
the number of PPC developers (or Sparc, or MIPS, or Alpha), thus the x86 code is far, far more mature.
Second the P4 architecture has been around for YEARS. Thus, all those GCC developers have had all those years to tweak the relevant optimisation
code
for i686 (or whatever it's called). How long has the 970 portion of GCC 3.3 been around? months?! Thus, it is inevitable that the code
generated for PPC970s is less than perfect, even poor. Don't believe me? The GCC compiled code for all PPCs is reputed to be quite poor,
*especially* when compared to the binaries produced for x86.
Moreover, PPC in general, and 970s in particular are very much dependant on the compiler for speed. Much more so than for x86. Remember
Hannibal's articles on Ars about the 970? Optimising for the 970s scheduler makes or breaks the 970s performance. So how well do you think a 6
month old portion of GCC for a (then) vapourware chip, written by a small handful of IBM developers is going to fare?
The great thing about GCC, however, like all open-source projects with it's momentum, is that with time it will get much better. More eyes see
all bugs, or something like that.
I have a g4-based laptop running no less than three operating systems compiled w/ GCC (3.2), and I can tell you GCC produces poorly optimised
code for the G4, which has been around for years. And yes, I set my compiler flags.
OK, end of rant.
edit: forgive the fugly formatting, I'm posting from w3m (a text based web browser).
I'll Drink to that