The G5 sucks


Comments

  • Reply 41 of 57
    Quote:

    Originally posted by Powerdoc

    If my dual G5 is as fast as a dual Xeon, I will be very happy. I will have the advantage of both worlds: the playability of OS X and the speed of a PC.



    1.) I'm really glad for you!

    2.) My faster PC collects a lot of dust, as I'm the owner of a Cube and an iBook. You don't have to convince me of the advantages of OS X.

    3.) Still, I'm not trying to deny that PCs are faster in raw power.



    End of Line
  • Reply 42 of 57
    123 · member · Posts: 278
    Quote:

    Originally posted by mark_wilkins



    That said, failing to run a Mathematica slave kernel on a dual processor machine, which I strongly suspect they did, does not speak for the strength of the rest of their benchmarks.





    I haven't used Mathematica on a DP system so far. If you start a slave kernel, will Mathematica automatically use the second processor or do I have to issue special "parallel" commands?



    If it's used automatically, are some algorithms "optimized" for it and others not? How much is optimized, and what's the speed-up?
  • Reply 43 of 57
    Quote:

    Originally posted by User Tron

    No offence, but again, like many here, you're trying to find a flaw in their benchmarking to invalidate the results.



    All testing or experimentation must be evaluated with a critical eye. When it holds up, that's what makes a good test. Welcome to the principles of sound science and engineering.



    I have no personal interest in the results going one way or the other, but a pile of numbers presented with no context, no description of methodology, and the hint of serious flaws in approach (such as obviously failing to enable MP for Mathematica) don't inspire particular confidence.



    -- Mark
  • Reply 44 of 57
    123 · member · Posts: 278
    Quote:

    Originally posted by 123

    I haven't used Mathematica on a DP system so far. If you start a slave kernel, will Mathematica automatically use the second processor or do I have to issue special "parallel" commands?



    If it's used automatically, are some algorithms "optimized" for it and others not? How much is optimized, and what's the speed-up?




    Having read a bit about the "Parallel Computing Toolkit", it seems that you have to do everything yourself. In other words, there will be zero speed-up if you just launch the slave kernel without using parallel commands.
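
    To illustrate the general point, here's a sketch in plain C with pthreads (not Mathematica code; all the names are mine): a second processor, like a second kernel, only helps if the work is explicitly divided.

        /* A second thread only helps if you explicitly split the work;
           an idle worker is the analogue of a launched but unused
           slave kernel. Hypothetical sketch, compile with -lpthread. */
        #include <pthread.h>
        #include <stdio.h>

        #define N 1000000
        static double data[N];

        typedef struct { int lo, hi; double sum; } chunk_t;

        static void *sum_chunk(void *arg) {
            chunk_t *c = arg;
            c->sum = 0.0;
            for (int i = c->lo; i < c->hi; i++)
                c->sum += data[i];
            return NULL;
        }

        int main(void) {
            for (int i = 0; i < N; i++) data[i] = 1.0;

            /* Serial: the second CPU sits idle the whole time. */
            chunk_t whole = { 0, N, 0.0 };
            sum_chunk(&whole);

            /* Parallel: the work is explicitly divided between two
               threads, the analogue of issuing "parallel" commands. */
            chunk_t lo = { 0, N / 2, 0.0 }, hi = { N / 2, N, 0.0 };
            pthread_t t;
            pthread_create(&t, NULL, sum_chunk, &hi);
            sum_chunk(&lo);
            pthread_join(t, NULL);

            printf("serial %.0f, parallel %.0f\n", whole.sum, lo.sum + hi.sum);
            return 0;
        }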
  • Reply 45 of 57
    Quote:

    Originally posted by mark_wilkins

    All testing or experimentation must be evaluated with a critical eye. When it holds up, that's what makes a good test. Welcome to the principles of sound science and engineering.





    Oh really! I thought everything that's written is the pure and only truth. Thank you for giving me such enlightenment.



    Quote:



    I have no personal interest in the results going one way or the other, but a pile of numbers presented with no context, no description of methodology, and the hint of serious flaws in approach (such as obviously failing to enable MP for Mathematica) don't inspire particular confidence.



    -- Mark




    So why are you trying so hard to invalidate their results? Again, we're not talking about some kids who don't know what they're doing. Simple question: why can't you accept the results?



    End of Line
  • Reply 46 of 57
    Quote:

    Originally posted by 123

    Having read a bit about the "Parallel Computing Toolkit", it seems that you have to do everything yourself. In other words, there will be zero speed-up if you just launch the slave kernel without using parallel commands.



    I've looked into it further, and I believe you're right about this, in which case I've misspoken.



    However, this still does not address the issue that it's impossible to tell what these benchmarks consisted of. Perhaps the original poster could describe the methodology, including software used and the content of the tests?



    -- Mark
  • Reply 47 of 57
    Quote:

    Originally posted by User Tron

    Oh really! I thought everything that's written is the pure and only truth. Thank you for giving me such enlightenment.



    It sounded like you needed an explanation about why I was pointing out problems with what I saw in the original post. If it was obvious to you why I was doing it (as I infer from your sarcasm) then why did you criticize me for it?



    Quote:

    So why are you trying so hard to invalidate their results? Again, we're not talking about some kids who don't know what they're doing. Simple question: why can't you accept the results?



    With no description of the testing procedure, it's impossible to evaluate a pile of numbers. Why should I accept the results without that information?



    I suspect that information was in the original article and wasn't deemed important enough to post by whoever started this thread.



    So far, the argument in favor of these results seems to rest on the reputation of the original testers, who only seem to be familiar to our German friends on the board. Not too compelling.



    -- Mark
  • Reply 48 of 57
    Here are my specific concerns about these particular benchmarks. Many benchmarks raise these concerns, but some do a better job of reducing their impact:



    Two processors built from similar basic semiconductor technology can have different general strengths based on their configuration, which would result in each leaving the other behind in different speed tests. For example, the Athlon processors had two floating-point execution units while the G4 had one. Variations like this can be an issue not just across different applications but also in one application, as one feature performs well on one platform and a different feature performs well on another.



    Also, performance on modern processors can be significantly affected by the optimization of specific paths through code. A chunk of code might keep one processor busy while another suffers pipeline stalls and runs at half the effective speed. A brief rewrite can in some instances reverse this relationship.
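
    A hypothetical C illustration (mine, not anything from the c't suite): both functions below compute the same sum, but the first chains every addition behind the one before it, while the second exposes two independent chains that a chip with two floating-point units, like the Athlon, can overlap. A rewrite this small can flip which processor looks faster.

        /* Same work, different code path: a loop-carried dependency
           vs. two independent accumulator chains. On a CPU with two
           FP units the second version can run nearly twice as fast;
           on a single-FPU design the gap largely disappears. */
        double sum_dependent(const double *x, int n) {
            double s = 0.0;
            for (int i = 0; i < n; i++)
                s += x[i];              /* each add waits on the previous one */
            return s;
        }

        double sum_two_chains(const double *x, int n) {
            double s0 = 0.0, s1 = 0.0;
            for (int i = 0; i + 1 < n; i += 2) {
                s0 += x[i];             /* chain 0 */
                s1 += x[i + 1];         /* chain 1: independent, can overlap */
            }
            if (n & 1) s0 += x[n - 1];  /* odd-length tail */
            return s0 + s1;
        }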



    This particular suite of tests posted doesn't describe, for example, what software was used for MP3 compression or video compression. Since ON A SINGLE MACHINE this kind of code can vary greatly in speed depending on algorithm or optimization, in the absence of any specifics it's impossible to guess what led to the numbers above.



    One approach to all of the above is to take commercially available software and run tests that try to mirror what people would practically do with it. Something like PSBench tries to do this. Unfortunately, the applicability of such tests is often disputed, because any given user exercises different parts of the software disproportionately.



    For instance, one person may use Maya primarily for modeling and rendering and thus be concerned with performance in those areas while an animator might be more concerned with deformer performance. I can show you well-documented tests that demonstrate great disparity between platforms in different parts of the same software, so the "suite of real-world tests" usually requires a good deal of evaluation by the end user to figure out what it all means.



    The bottom line is that, assuming the publishers of this magazine are as clever as the regular readers say they are, the original article probably was full of details of the test methodology, but since we don't have that here there's no way to figure out what the numbers mean. Judging from the thread's title, it's just meant to get people riled up.



    I'm not riled up, I'm just baffled that anyone would rush to the defense of a bunch of test results free of context, apparently only because they were published by some well-regarded regional magazine. After all, intelligent people can make mistakes, or overlook things, or rush to press with shaky results. I'm not saying that happened -- I'm just saying we have no way of knowing.



    -- Mark
  • Reply 49 of 57
    Quote:

    Originally posted by mark_wilkins

    It sounded like you needed an explanation about why I was pointing out problems with what I saw in the original post. If it was obvious to you why I was doing it (as I infer from your sarcasm) then why did you criticize me for it?





    It's common sense, so there was no need to point it out. If you lecture people on common sense, you're assuming they have none, which is an insult. (Although I didn't take it personally.)



    Quote:



    With no description of the testing procedure, it's impossible to evaluate a pile of numbers. Why should I accept the results without that information?



    I suspect that information was in the original article and wasn't deemed important enough to post by whoever started this thread.



    So far, the argument in favor of these results seems to rest on the reputation of the original testers, who only seem to be familiar to our German friends on the board. Not too compelling.



    -- Mark




    So it's simply your lack of knowledge about c't and their tests which makes you doubt them? That doesn't make them wrong in any way. It's ridiculous that you argue with me over an article you can't read! ...



    Read the article, then we'll talk.



    End of Line
  • Reply 50 of 57
    Let me post this again:



    Why aren't the c't scores even close to what IBM announced for the 1.8 GHz PPC970 at last year's Microprocessor Forum? At that point the projected scores were 937 (SPECint) and 1050 (SPECfp). Now c't runs a SPEC test and gets some dismal results for the 2.0 GHz processor, which should be at least somewhat faster than a single 1.8 GHz machine. Did they forget to turn off bus slewing? Were the tests conducted under Darwin, or was Quartz running? What gives? Even IBM's own projections weren't reached, and those were made a year ago for a slower processor.
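
    For anyone wondering what those SPEC numbers actually measure, here's my own sketch of the published SPEC CPU2000 scoring rule (the times below are invented placeholders, not c't's or IBM's data): each benchmark scores 100 times the reference machine's time divided by the measured time, and the suite score is the geometric mean of those ratios.

        /* SPEC CPU2000 scoring: per-benchmark ratio = 100 * ref_time
           / measured_time; the overall score is the geometric mean
           of the ratios. All times are invented placeholders. */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double ref[] = { 1400.0, 1800.0, 1100.0 };  /* reference seconds */
            double run[] = {  150.0,  190.0,  120.0 };  /* measured seconds */
            int n = sizeof ref / sizeof ref[0];

            double log_sum = 0.0;
            for (int i = 0; i < n; i++)
                log_sum += log(100.0 * ref[i] / run[i]);

            printf("score = %.0f\n", exp(log_sum / n)); /* geometric mean */
            return 0;  /* compile with -lm */
        }

    Because the score is a geometric mean, one badly compiled benchmark drags the whole suite down, which is exactly why the choice of compiler and flags matters so much.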
  • Reply 51 of 57
    amorph · member · Posts: 7,112
    mark_wilkins does have a point that until we know their testing methodology (and I know c't well enough to believe that they were thorough), the numbers are, in fact, essentially meaningless. We can't know what they describe until we know what they describe, so to speak.



    For instance, Mathematica was one of the applications used to demonstrate the dual G5's clear superiority at WWDC! So why did it come out way ahead on the test Wolfram concocted for the keynote, and behind in c't's testing? The answer is that they tested different setups of the same application on different data with different testing parameters. Until we know what all those were, we know nothing.



    All that said, I don't believe there's any argument that a single, isolated 970 is a bit behind a P4 or a Xeon on scalar code. Even Apple has said that. The G5's power comes from dual processors, 64-bit memory addressing, AltiVec, and massive system bandwidth (including huge amounts of RAM). Any test that doesn't leverage those advantages, for whatever reason, will show the G5... well, about where c't did, making allowances for GCC and OS X 10.2.7.
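
    To make the AltiVec point concrete, here's a minimal sketch of the kind of inner loop where vectorization pays off (my own code, assuming GCC's -maltivec extensions, 16-byte-aligned arrays, and a length that's a multiple of four; it is not from the article or the tests): the vector version performs four single-precision multiply-adds per instruction where the scalar loop performs one.

        /* Scalar vs. AltiVec multiply-add (y = a*x + y). Assumes GCC
           with -maltivec, 16-byte-aligned arrays, n a multiple of 4. */
        #include <altivec.h>

        void saxpy_scalar(float a, const float *x, float *y, int n) {
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];      /* one multiply-add at a time */
        }

        void saxpy_altivec(float a, const float *x, float *y, int n) {
            float a4[4] __attribute__((aligned(16))) = { a, a, a, a };
            vector float va = vec_ld(0, a4); /* a in all four lanes */
            for (int i = 0; i < n; i += 4) {
                vector float vx = vec_ld(0, &x[i]);
                vector float vy = vec_ld(0, &y[i]);
                vy = vec_madd(va, vx, vy);   /* four multiply-adds at once */
                vec_st(vy, 0, &y[i]);
            }
        }

    A benchmark that never hits a loop like the second one simply never sees that side of the machine.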



    I understand that c't has a reputation, but argument by reputation is a logical fallacy. Once we know the methodology in detail, we can make sense of the numbers. That's true of all statistics and metrics across the board.
  • Reply 52 of 57
    123 · member · Posts: 278
    Quote:

    Originally posted by twinturbo

    Let me post this again:



    Why aren't the c't scores even close to what IBM announced for the 1.8 GHz PPC970 at last year's Microprocessor Forum? At that point the projected scores were 937 (SPECint) and 1050 (SPECfp). Now c't runs a SPEC test and gets some dismal results for the 2.0 GHz processor, which should be at least somewhat faster than a single 1.8 GHz machine. Did they forget to turn off bus slewing? Were the tests conducted under Darwin, or was Quartz running? What gives? Even IBM's own projections weren't reached, and those were made a year ago for a slower processor.






    I haven't read the article yet, but I have read the one about the 1.6 GHz G5 from 2 weeks ago.



    According to them:

    - No measurable performance gains without Quartz (unlike when they did G4 tests on 10.1).



    - Bus slewing: Not mentioned.
  • Reply 53 of 57
    123 · member · Posts: 278
    Quote:

    Originally posted by User Tron


    So it's simply your lack of knowledge about c't and their tests which makes you doubt them? That doesn't make them wrong in any way. It's ridiculous that you argue with me over an article you can't read! ...



    Read the article, then we'll talk.



    End of Line




    The thing is that we really don't know what the numbers mean. We don't even know c't's conclusions: whether they're cautious about their own numbers, what software was optimized, or why they didn't use this or that switch in the IBM compilers. It really doesn't matter whether c't knew what they were doing; we can't judge by the numbers alone, because without additional information they're meaningless, no matter how good a magazine c't is.



    (BTW: As long as I can't buy the magazine, I can't read the article)
  • Reply 54 of 57
    Quote:

    Originally posted by philby


    While I'm one of the True Believers in c't (I probably started part of this thing by hand-copying the non-SPEC c't numbers into a macnn forum... mea culpa).




    OK, so you started this mess; does that mean you have the mag in front of you? If so, could you enlighten us with some of the details everyone around here has been crying for? I went online and couldn't find any information about how and what the tests were (I read German at about a grade school+ level).



    Thanks!
  • Reply 55 of 57
    Quote:

    Originally posted by User Tron

    It's ridiculous that you argue with me over an article you can't read! ...



    No more ridiculous than your pointing to an article I can't read and saying I should accept the results on faith!



    -- Mark
  • Reply 56 of 57
    Quote:

    Originally posted by Carson O'Genic

    OK, so you started this mess; does that mean you have the mag in front of you? If so, could you enlighten us with some of the details everyone around here has been crying for? I went online and couldn't find any information about how and what the tests were (I read German at about a grade school+ level).



    Thanks!




    I don't have it here right now (it's at the office).



    I thought about taking pictures of the mag later and sending them (not "posting them") to interested parties, as I don't have a scanner and am definitely too lazy to retype the entire 8- or 10-page thing (plus the 4 or 6 technical pages about the three main 64-bit CPUs on the market and the P4, and the pages about 64-bit computing in general). Also, we're opening the second bottle of wine now, so I won't be able to anyway.
  • Reply 57 of 57
    The thing is, benchmark results on the G5 are only going to improve. Intel has pretty much squeezed every last ounce out of the P4, and all it can do now is raise the clock speed.



    It will get faster. Probably much faster.