The G5 sucks


Comments

  • Reply 21 of 57
    It's Official. Benchmarks suck.



    I've yet to read one story about someone feeling their new G5 Duallie is slow.



    I'm sooooo tired of these useless benchmarks, which only raise more questions than they answer.



    Yawn... gimme a G5 any day. That and Panther will be a very nice system to have.
  • Reply 22 of 57
    Mathematica has not yet been optimized for the G5 (or rather, it has not yet been released in optimized form), and the autovectorization capability of IBM's compiler is still limited (see the example at the end of this post).



    The rest of those numbers are even more meaningless as you can't tell what they're doing. Note they did not check the Opteron on the other benchmarks either.
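
    To illustrate the autovectorization point above, here is a minimal sketch (not from the article or the c't benchmarks) of the kind of loop an autovectorizing compiler can turn into AltiVec/VMX code, provided it can prove the pointers don't alias and the data is laid out suitably. The function name and the use of C99 restrict are illustrative choices, and the exact options differ between IBM XL C and GCC, so treat compiler flags as something to look up rather than gospel.

    #include <stddef.h>

    /* y[i] += a * x[i] -- a classic autovectorization candidate. With
     * restrict-qualified pointers and a simple counted loop, a vectorizing
     * compiler is free to emit 4-wide single-precision vector operations. */
    void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }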
  • Reply 23 of 57
    Edit:



    Never mind, posting in this type of thread is ridiculous. See you in 6 months.
  • Reply 24 of 57
    resres Posts: 711 member
    The G5 sucks? You've got to be kidding. The G5 is a great chip, and the dual G5 has a better price/performance ratio than any Apple computer since the 90s. All of the real-world tests have shown that it stacks up well against more expensive dual 3.06GHz Xeon systems.



    Now, the 1.6 and 1.8GHz G5s are not holding up as well against single-processor P4 systems, and if you want to complain about them I might agree with you (at least on price/performance).
  • Reply 25 of 57
    I'm still wondering about those SPEC scores, though. Why can't the PowerPC series ever score as well as its big brother (POWER4) on SPEC? A 1.3GHz POWER4 chip scores 1281 on SPECfp, and now a 2.0GHz PPC970 only gets 1001?? Something doesn't seem right.
  • Reply 26 of 57
    Quote:

    Originally posted by twinturbo

    I'm still wondering about those SPEC scores, though. Why can't the PowerPC series ever score as well as its big brother (POWER4) on SPEC? A 1.3GHz POWER4 chip scores 1281 on SPECfp, and now a 2.0GHz PPC970 only gets 1001?? Something doesn't seem right.



    The POWER4 has an enormous L3 cache (32 MB?) which will affect the SPEC scores substantially.



    The SPEC benchmark source code must be identical on every platform it is run on. I have seen many cases, however, in which minor changes to the way an algorithm is implemented can dramatically change the relative performance of x86 and PowerPC. One way of writing the code forces the C/C++ compiler to keep data in memory, and the other way allows it to keep the data in registers. Since x86 has few registers but its instructions can operate directly on memory, the in-memory (array) version suits it better; since the PowerPC has many registers and no memory-to-memory operations, it is better not to use an array. Unfortunately, the C/C++ language doesn't give the compiler many options in how it can optimize the code, because of potential side effects.

    I remember one case many years ago: written one way, the Pentium was double the speed; written the other way, the 604e was triple the speed; and there was no single version of the code that had both machines running at maximum performance. Since most people code for x86, we tend to be stuck with x86-favoured implementations unless somebody does the work for the PPC (and for Sun / Alpha / MIPS, which are also RISC processors and behave similarly). With SPEC, nobody is allowed to do those optimizations.
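
    A minimal sketch of the kind of implementation difference described above; this is illustrative C, not the actual SPEC or benchmark code, and the function names are made up. Version A keeps its accumulators in a small array, which a compiler (especially one of that era) will tend to leave in memory; version B keeps them in named scalars, each of which can live in a register for the whole loop on a register-rich machine like the PowerPC.

    #include <stddef.h>

    /* Version A: accumulators in an array. Indexed storage invites the
     * compiler to keep them in memory, which is tolerable on x86 where
     * instructions can take memory operands directly. */
    float sum4_array(const float *x, size_t n)
    {
        float acc[4] = {0.0f, 0.0f, 0.0f, 0.0f};
        for (size_t i = 0; i + 4 <= n; i += 4)
            for (int j = 0; j < 4; j++)
                acc[j] += x[i + j];
        return acc[0] + acc[1] + acc[2] + acc[3];   /* tail elements ignored for brevity */
    }

    /* Version B: the same work with four named scalars, each of which can
     * stay in its own register across the loop -- the shape that favours a
     * load/store, register-rich design like the PowerPC. */
    float sum4_scalar(const float *x, size_t n)
    {
        float a0 = 0.0f, a1 = 0.0f, a2 = 0.0f, a3 = 0.0f;
        for (size_t i = 0; i + 4 <= n; i += 4) {
            a0 += x[i];
            a1 += x[i + 1];
            a2 += x[i + 2];
            a3 += x[i + 3];
        }
        return a0 + a1 + a2 + a3;                   /* tail elements ignored for brevity */
    }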
  • Reply 27 of 57
    Quote:

    Originally posted by Amorph

    It would also be interesting to see how it runs under the more efficient Panther, compiled with a final version of the IBM compiler (I don't know if GCC can be told about the hardware square root function, for example, and that's a big speedup when it's used - but I don't know how much it would be used in 3D rendering?).



    I'm curious about this myself. Panther should make a rather large difference. Not that we are *ick measuring again....
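
    On the hardware square root question in the quote above: a hedged sketch, from memory rather than from the article. The PPC970 implements the optional floating-point square root instruction, and GCC can be allowed to use it with the right target flags; the flags in the comment below are an assumption to verify against your compiler's documentation.

    #include <math.h>

    /* Euclidean length of a 3-vector -- square roots show up constantly in
     * 3D and rendering code, so an inlined hardware sqrt can matter. */
    double vec3_length(double x, double y, double z)
    {
        return sqrt(x * x + y * y + z * z);
    }

    /* Assumed build line (check your GCC version's docs):
     *   gcc -O3 -mpowerpc-gpopt -fno-math-errno -c vec3.c
     * -mpowerpc-gpopt permits the optional floating-point square root
     * instruction on CPUs that have it; -fno-math-errno lets GCC replace
     * the libm sqrt() call with that instruction. */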
  • Reply 28 of 57
    kidred Posts: 2,402 member
    Quote:

    Originally posted by 123

    The G5 Sucks.



    Yeah, you're right. PC Mag's comparison today, PhotoshopBench, and all of us Mac users blinded by the RDF. You're basing your ignorant opinion on something you read online, without knowing the RAM in each machine, on an OS tweaked just to run, and even though it's running against a machine 1GHz faster per chip, the 'G5 sucks'?



    Thanks for the wonderful insight.
  • Reply 29 of 57
    123 Posts: 278 member
    Quote:

    Originally posted by KidRed

    Yeah, you're right. PC Mag's comparison today, PhotoshopBench, and all of us Mac users blinded by the RDF. You're basing your ignorant opinion on something you read online, without knowing the RAM in each machine, on an OS tweaked just to run, and even though it's running against a machine 1GHz faster per chip, the 'G5 sucks'?



    Thanks for the wonderful insight.




    Why should I care about per-clock performance? And PC Mag? Sorry... this is c't.



    (But I agree it's too early to judge, and if it's true that they forgot to turn on MP support for Mathematica... well, then I might as well change the title to 'c't sucks'. Until then, it's still a very highly respected magazine, which PC Mag clearly is not. Anyway, we'll see tomorrow.)
  • Reply 30 of 57
    Quote:

    Originally posted by 123

    Sorry... this is c't.



    Never heard of it. Highly respected by whom?



    -- Mark
  • Reply 31 of 57
    Quote:

    Originally posted by mark_wilkins

    Never heard of it. Highly respected by whom?



    -- Mark




    Anyone who speaks German and is somewhat interested in computers, I'd say. I can't say I know any programmer/coder/freak over here who does not have a c't subscription.

    The best comparison would probably be to Byte magazine, about 10 years ago.
  • Reply 32 of 57
    Benchmarks are all fake and evil... they make no sense... go to your nearest Apple Store and try one out... I was blown away by the single 1.6GHz... I'm serious...
  • Reply 33 of 57
    One last gut punch to the argument:



    They are using C4D and Lightwave in these tests... Do the render tests in Electric Image (which in its PC version will bury C4D and Lightwave as well) and you'll be singing a different tune about 3D on the Mac. It's heavily optimized for AltiVec and the PPC, and it uses two processors very effectively.



    The speed of EI is not hype.
  • Reply 34 of 57
    Quote:

    Originally posted by Programmer

    The POWER4 has an enormous L3 cache (32 MB?) which will affect the SPEC scores substantially.







    The POWER4 is also dual-core, compared to the G5's single core.

    When you compare 1281 for a 1.3GHz POWER4 (dual-core) with a 1001 SPECfp score for a 2.0GHz G5 (single-core), you conclude that on SPEC, at equal MHz, a POWER4 is about twice as fast as a G5. That is a logical result; the L3 cache does not seem to matter much here.
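
    For what it's worth, the per-clock arithmetic behind that claim, using the scores quoted in this thread; this only normalizes for clock speed and says nothing about cache size or core count:

    \[
      \frac{1281 / 1.3\,\mathrm{GHz}}{1001 / 2.0\,\mathrm{GHz}}
      \approx \frac{985}{500}
      \approx 2.0
    \]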
  • Reply 35 of 57
    Quote:

    Originally posted by mark_wilkins

    Never heard of it. Highly respected by whom?



    -- Mark




    Sorry, but if you haven't heard of them, that is your mistake, not c't's! c't is one of the best computer publications in the world. Their computer knowledge is far beyond the average PC mag, especially when it comes to CPU architecture. I haven't come across a magazine (in a language that I can read) that's better. Anyone who doubts the quality of c't is just fooling himself.



    End of Line
  • Reply 36 of 57
    IBM themselves, though, said that the 1.8GHz PPC970 would get 937 and 1051 on SPECint and SPECfp. So the 2.0GHz part should score somewhat higher. Do you think they forgot to turn off bus slewing, or didn't use Darwin?
  • Reply 37 of 57
    Quote:

    Originally posted by User Tron

    Anyone who doubts the quality of c't is just fooling himself.



    Since like many people here, I've never heard of the publication and I don't speak the language in which it's written, I'm unfortunately unable to evaluate either their overall quality or their specific evaluation of the G5.



    While I'm happy to accept that the publication is written by a competent bunch of people, and it would be wrong to say that I "doubt their quality," I don't think that trying to support an argument solely by citing an authority that few people on the board have heard of carries much weight.



    Better to discuss the technical details than to say "so-and-so says it's true!" particularly when "so-and-so" are an unknown quantity to many.



    That said, failing to run a Mathematica slave kernel on a dual processor machine, which I strongly suspect they did, does not speak for the strength of the rest of their benchmarks. Besides which, it's very difficult to evaluate their results overall without access to a detailed description of the tests they conducted. Did they describe this stuff in the original article? Maybe, but there's no information on that here.



    -- Mark
  • Reply 38 of 57
    Quote:

    Originally posted by mark_wilkins

    That said, failing to run a Mathematica slave kernel on a dual processor machine, which I strongly suspect they did, does not speak for the strength of the rest of their benchmarks. Besides which, it's very difficult to evaluate their results overall without access to a detailed description of the tests they conducted. Did they describe this stuff in the original article? Maybe, but there's no information on that here.



    -- Mark




    No offence, but again, like many here, you're trying to find a flaw in their benchmarking in order to invalidate the results. Why can't you accept that the G5 is not the genius chip some have hoped for? So far NO benchmark has shown that the G5 is far superior. Are all benchmarkers incompetent idiots? Instead of taking the results for what they are, people are desperately looking for reasons why the G5 is far better than reality shows. Please step out of the SDF for one moment and accept that the G5 is not crushing anything. It's a welcome evolution, nothing more but nothing less either. The long anticipation of the G5 has driven the hype to such heights that some have a real problem coming back down to the ground of reality. Many here try to deny the facts: it can't be true because it's not allowed to be true....



    End of Line
  • Reply 39 of 57
    powerdoc Posts: 8,123 member
    Quote:

    Originally posted by User Tron

    No offence, but again, like many here, you're trying to find a flaw in their benchmarking in order to invalidate the results. Why can't you accept that the G5 is not the genius chip some have hoped for? So far NO benchmark has shown that the G5 is far superior. Are all benchmarkers incompetent idiots? Instead of taking the results for what they are, people are desperately looking for reasons why the G5 is far better than reality shows. Please step out of the SDF for one moment and accept that the G5 is not crushing anything. It's a welcome evolution, nothing more but nothing less either. The long anticipation of the G5 has driven the hype to such heights that some have a real problem coming back down to the ground of reality. Many here try to deny the facts: it can't be true because it's not allowed to be true....



    End of Line




    If my dual G5 is as fast as a dual Xeon, I will be very happy. I will have the advantages of both worlds: the playability of OS X and the speed of a PC.
  • Reply 40 of 57
    Quote:

    Originally posted by Powerdoc

    If my dual G5 is as fast as a dual Xeon, I will be very happy. I will have the advantages of both worlds: the playability of OS X and the speed of a PC.



    Spoken wisely, Powerdoc!



    While I'm one of the True Believers in c't (I probably started part of this thing by hand-copying the non-SPEC c't numbers into a macnn forum... mea culpa), I'd never say they are infallible. They're an extremely bright bunch, and their reporting and testing are simply on a totally different level from what you get with Mac- or PC-World et al.



    Anyway, even if they had tested a dual-CPU Athlon 64 box crushing everything in its way, I'd rather have a duallie G5... in a PowerBook, of course.