Apple's Benchmarks misleading?

Comments

  • Reply 121 of 178
    jccbin Posts: 476 member
    I AM GOING TO BUY A G5 AS SOON AS THEY START SHIPPING! TOP END. DOG'S B*LL*CKS!



    A-Rupert-Grinting-Men.



    Die, PC Weenies, Die.
  • Reply 122 of 178
    lemon bon bon Posts: 2,383 member
    Quote:

    Die, PC Weenies, Die.







    (Hey, gotta get back into the PowerTower Pro 225 MHz Pentium-smashing spirit.... After years of Intel's MHz crap...I'm just having fun getting the last four years off my chest...)



    Now where was I?



    Oh yes..., 'Die, PC Weenies, Die!'



    Lemon Bon Bon

  • Reply 123 of 178
    lemon bon bon Posts: 2,383 member
    Heh...I hope to make Apple's benchmarks even more 'misleading' when I stick 4 GB of RAM and an ATI 9800 Pro in my dual G5...



    (erughK...cr-eek...opens can of Pentium whooop-ass...)



    Lemon Bon Bon
  • Reply 124 of 178
    klinux Posts: 453 member
    OK, now that LBB has finally had a chance to get something off his chest...





    Anyways....



    PowerLogix issued an informative press release regarding the G5: http://www.powerlogix.com/press/rele...03/030625.html



    This Technical Note from Apple also lists some technical details of the G5 vs. the G4, e.g. there is 0 (zero!) MB of L3 cache on the G5. I don't like that. http://developer.apple.com/technotes/tn/tn2087.html



    I was pretty sure that I was going to get a G5, but I may now wait for Rev B or pick up a G4 on the cheap.



    And as I have always said, no real comparison can be made re G5 until it ships...
  • Reply 125 of 178
    jll Posts: 2,713 member
    Quote:

    Originally posted by klinux

    This Technical Note from Apple also lists some technical details of the G5 vs. the G4, e.g. there is 0 (zero!) MB of L3 cache on the G5.



    And?
  • Reply 126 of 178
    chu_bakka Posts: 1,793 member
    Steve Jobs mentioned in his keynote that there is no L3 cache on the G5. You should watch it... he basically says the processor bus is so fast that it doesn't need one... or something to that effect.
  • Reply 127 of 178
    klinux Posts: 453 member
    How much do you want to bet Rev B or later will have an L3 cache in there? The rest of the industry (Intel, AMD, IBM, etc.) is going toward more L3 cache, not less.
  • Reply 128 of 178
    jll Posts: 2,713 member
    Quote:

    Originally posted by klinux

    How much do you want to bet Rev B or later will have an L3 cache in there? The rest of the industry (Intel, AMD, IBM, etc.) is going toward more L3 cache, not less.



  • Reply 129 of 178
    klinux Posts: 453 member
  • Reply 130 of 178
    brussell Posts: 9,812 member
    From what I can tell, Joswiak's response thoroughly debunked 90% of the haxial site and its kin.



    The only remaining issue from what I can tell is whether they should have used the gcc compiler all around. It seems to me that there are two possibilities:



    1. Use the same compiler on both machines in order to hold the compiler constant, so that only the hardware is tested (this is what Apple did).

    2. Use the best-optimizing compiler available for each machine in order to obtain the peak performance of each chip (this appears to be what the PC people want). A rough sketch of the two approaches follows below.
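
    To make the difference concrete, here is a minimal sketch, assuming a toy floating-point kernel as a stand-in for a SPEC-style test. The compiler invocations in the comments (gcc, Intel's icc, IBM's xlc) are illustrative guesses at typical command lines, not Apple's or anyone else's actual SPEC build settings.

    /* bench.c -- toy floating-point kernel standing in for a SPEC-style test.
     *
     * Approach 1 (hold the compiler constant, as Apple did):
     *     gcc -O3 bench.c -o bench        on BOTH the G5 and the Xeon box
     *
     * Approach 2 (best vendor compiler per platform, as critics want),
     * e.g. something roughly like:
     *     icc -O3 bench.c -o bench        Intel's compiler on the PC
     *     xlc -O3 bench.c -o bench        IBM's compiler on the G5
     */
    #include <stdio.h>
    #include <time.h>

    #define N 1000000

    static double a[N];

    int main(void) {
        double sum = 0.0;
        clock_t start, end;
        int i, pass;

        for (i = 0; i < N; i++)
            a[i] = (double)i * 0.5;

        start = clock();
        for (pass = 0; pass < 100; pass++)      /* repeat so the timer has something to measure */
            for (i = 0; i < N; i++)
                sum += a[i] * a[i];
        end = clock();

        printf("sum = %g, time = %.3f s\n",
               sum, (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }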



    I don't know enough about the technical issues to say which one is best, but #1 certainly has a logic to it. I've read people like Hannibal at Ars say that hardware and compiler are inextricable and it doesn't really make sense to divorce the compiler from the hardware. OK. But then what if Apple did just use Intel's SPEC scores and some similarly proprietary Apple SPEC scores?



    Example: IBM's initial SPEC scores were considerably higher for the 1.8 GHz G5 than Apple's 2 GHz keynote SPEC scores. This seems to suggest that using some other method of getting SPEC scores can substantially influence the scores. I'd be interested in seeing those - let Apple do whatever it can to squeeze out the highest SPEC scores, and then compare those to the Intel/Dell numbers that are out there. At the least you'd have to agree that both companies were able to cheat equally.



    It is also my understanding that those SPEC scores don't take advantage of AltiVec, which I think most people agree is a strong suit for the G4/G5 machines compared to Intel, and has been very important to Apple up to this point.
  • Reply 131 of 178
    fluffy Posts: 361 member
    Quote:

    Originally posted by Lemon Bon Bon





    (Hey, gotta get back into the PowerTower Pro 225 MHz Pentium-smashing spirit.... After years of Intel's MHz crap...I'm just having fun getting the last four years off my chest...)




  • Reply 132 of 178
    jccbin Posts: 476 member
    Ahh, the good old days.



    As for the L3 cache -- Intel etc. are moving to it for the same reason that Apple moved to it years ago: it alleviates the delays caused by system buses that cannot feed the processor and other subsystems fast enough, by caching oft-used data close to the processor.



    Quite simply, if your L3 cache cannot feed the processor faster than the system bus can, the benefit isn't big enough for L3 to be cost-effective.



    A 0.8-1.0 GHz system bus and 400 MHz DDR RAM can apparently feed the chip fast enough, at least in Apple's opinion.
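
    A back-of-the-envelope sketch of that trade-off follows; all bus widths and clock figures below are assumed round numbers for illustration, not official Apple specifications.

    /* bandwidth.c -- rough peak-bandwidth arithmetic behind the
     * "a fast enough bus makes off-chip L3 less worthwhile" argument.
     * All figures are illustrative assumptions, not official specs.
     */
    #include <stdio.h>

    /* peak bytes/sec = (bus width in bytes) * (effective transfers per second) */
    static double peak_gb_per_s(double width_bytes, double transfers_per_s) {
        return width_bytes * transfers_per_s / 1e9;
    }

    int main(void) {
        double g5_fsb = peak_gb_per_s(4.0, 1.0e9);   /* assumed ~1 GHz bus, 32 bits per direction */
        double ddr400 = peak_gb_per_s(16.0, 0.4e9);  /* assumed dual-channel DDR400: 2 x 64-bit at 400 MT/s */
        double g4_bus = peak_gb_per_s(8.0, 0.167e9); /* assumed G4-class 167 MHz, 64-bit bus, for contrast */

        printf("G5 frontside bus (per direction): ~%.1f GB/s\n", g5_fsb);
        printf("Dual-channel DDR400 memory:       ~%.1f GB/s\n", ddr400);
        printf("167 MHz G4-class system bus:      ~%.1f GB/s\n", g4_bus);

        /* The point being made above: behind a ~1.3 GB/s G4-class bus an
         * off-chip L3 cache pays for itself, but once main memory can be
         * streamed at several GB/s the win shrinks considerably. */
        return 0;
    }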
  • Reply 133 of 178
    Quote:

    Originally posted by Fluffy





    Yeah sluggo, I still have that poster somewhere. That was one of the best MacWorlds ever.
  • Reply 134 of 178
    telomar Posts: 1,804 member
    The other reason for L3 cache is reduced latency, which is why you see it so much in servers. I'm sure at some point Apple will probably re-add L3 cache, but I wouldn't count on it being in Revision B or even in the near future.
  • Reply 135 of 178
    hmurchison Posts: 12,423 member
    From www.accelerateyourmac.com



    Quote:

    "Hey everyone, I've been at Apple's developer conference and had a chance to install and try out After Effects on a new G5.

    I ran the Night Flight file that has come to be the standard for AE benchmarking. Since I didn't want to sit there and watch it render for hours, I ran just the first 10 interlaced frames from the project's pre-set render queue...

    http://www.aefreemart.com/tutorials/...ghtflight.html

    Here are my results for this test on the three computers I have available to me:



    1 x 1.0 GHz G4 PowerBook 17" - ~30 minutes (3 min/frame)

    2 x 2.66 GHz Pentium Xeon from Boxx - 11 min, 39 sec (1.2 min/frame)

    2 x 2.0 GHz PowerMac G5 - 6 min, 1 sec (0.6 min/frame)



    I ran the Xeon test on a couple different identical machines to make sure mine wasn't just running slowly, but got identical results.

    Of course my Mac bias is well-documented, but I'm sure many people here can vouch for me as an honest person. If the results had gone the other way, I'd just keep my mouth shut and let someone else break the bad news.

    Other observations about this test that may ultimately work in the Mac's favor:



    1) The machine was not running 64-bit Panther, but only a tweaked version of 32-bit Jaguar. Likewise, AE is obviously not yet compiled to take advantage of the G5 chip in any way. Both of these situations will automatically be rectified in the future.



    2) Night Flight is very CPU-intensive, but not very disk I/O intensive. I think the 1 GHz system bus and other details on the G5 will provide greater gains for typical projects that rely more heavily on I/O."

    ==============================================



    All pretty interesting!!!!

    DAVID S.

    pixelcraft studios "



    DAMN! Twice as fast as a dual Xeon with one of the worst-coded apps for dual Macs. It's been interesting to watch the response of the PC fans. I sense a sort of insecurity that's cropped up. The absolute advantage in speed was theirs for the past couple of years at least. That seems to have evaporated. We as Mac users don't really require the fastest hardware...we just need to stay close. We already know we have the superior OS. Be afraid...be VERY afraid.
  • Reply 136 of 178
    jbl Posts: 555 member
    Quote:

    Originally posted by BRussell

    Example: IBM's initial SPEC scores were considerably higher for the 1.8 GHz G5 than Apple's 2 GHz keynote SPEC scores. This seems to suggest that using some other method of getting SPEC scores can substantially influence the scores.



    Does anybody know the story behind the original IBM SPEC scores? It could be that they were just coded better. However, it could also be that the final version was slower, or that OS X is slower than whatever was used in the original test. Until these numbers came out, it had been assumed around here that the IBM numbers were conservative, but maybe they were not conservative enough.
  • Reply 137 of 178
    hmurchison Posts: 12,423 member
    Quote:

    Originally posted by JBL

    Does anybody know the story behind the original IBM SPEC scores? It could be that they were just coded better. However, it could also be that the final version was slower, or that OS X is slower than whatever was used in the original test. Until these numbers came out, it had been assumed around here that the IBM numbers were conservative, but maybe they were not conservative enough.



    I believe they were compiled using VisualAge, definitely not GCC. You know... I'm pretty much fed up with SPEC scores. The premise behind SPEC is noble, but the execution is poor. Bring on the real-world tests!
  • Reply 138 of 178
    amorph Posts: 7,112 member
    Quote:

    Originally posted by JBL

    Does anybody know the story behind the original IBM SPEC scores? It could be that they were just coded better. However, it could also be that the final version was slower, or that OS X is slower than whatever was used in the original test. Until these numbers came out, it had been assumed around here that the IBM numbers were conservative, but maybe they were not conservative enough.



    The original IBM SPEC scores were estimates, not actual results. Any actual benchmarking would probably have used IBM's extremely nice VisualAge compiler, but I haven't seen any concrete results from that (not that I've looked, really, since the whole question seems academic to me).



    On another topic, the above-linked PowerLogix white paper has at least one glaring error:



    Quote:

    By the way, anyone remember when the PowerMac G4 733 with the 7450 CPU first began shipping? Apple touted it as the best thing since...well....the PowerMac G4 with PPC 7400...but the previous 533 model was actually a better performer than the 733! This was primarily due to the pipeline architecture of the 7450 compared to the 7400. Apple needed to get the clock speed up for marketing purposes (then and now.) Will history repeat itself? How close will the previous Dual 1.42 PowerMac G4 come to the new 1.6GHz or 1.8GHz G5s, for example?



    This is a terrible example, because it ignores the fact that the 7450 is a significantly different chip from the 7400. Yes, it was less efficient than the 7400 when running code compiled for the 7400. Once applications started targeting the 7450, the 733 was faster - in particular, 7450 AltiVec performance shot through the roof (and then hit the MaxBus bandwidth limitation) once people started adapting their code to it, because it's a much better AV unit than the 7400's. But Mot's CPUs are very sensitive to instruction ordering, so if you don't make an effort to write for a particular strain of the G4 you're not going to get impressive results from it.
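
    For anyone who never touched AltiVec, here is a rough sketch of what 'adapting the code' to the 7450 meant in practice: the same multiply-add loop written as plain scalar C and with AltiVec intrinsics. The compile command in the comment is an assumed, era-typical gcc invocation shown for illustration, not how any shipping application was actually built.

    /* vecdemo.c -- scalar vs. AltiVec versions of the same multiply-add loop.
     * Assumed, era-typical build command (illustrative only):
     *     gcc -O3 -mcpu=7450 -maltivec vecdemo.c -o vecdemo
     */
    #include <stdio.h>
    #ifdef __ALTIVEC__
    #include <altivec.h>
    #endif

    #define N 1024  /* multiple of 4, so the vector loop needs no scalar tail */

    /* Plain C: runs anywhere, but leaves the G4/G5 vector unit idle. */
    static void scale_add_scalar(float *dst, float *a, float *b, float s) {
        int i;
        for (i = 0; i < N; i++)
            dst[i] = a[i] * s + b[i];
    }

    #ifdef __ALTIVEC__
    /* AltiVec: four floats per vec_madd; buffers must be 16-byte aligned. */
    static void scale_add_vector(float *dst, float *a, float *b, float s) {
        float splat[4] __attribute__((aligned(16))) = { s, s, s, s };
        vector float vs = vec_ld(0, splat);
        int i;
        for (i = 0; i < N; i += 4) {
            vector float va = vec_ld(0, &a[i]);
            vector float vb = vec_ld(0, &b[i]);
            vec_st(vec_madd(va, vs, vb), 0, &dst[i]);
        }
    }
    #endif

    int main(void) {
        static float a[N] __attribute__((aligned(16)));
        static float b[N] __attribute__((aligned(16)));
        static float out[N] __attribute__((aligned(16)));
        int i;
        for (i = 0; i < N; i++) { a[i] = (float)i; b[i] = 1.0f; }

        scale_add_scalar(out, a, b, 2.0f);
    #ifdef __ALTIVEC__
        scale_add_vector(out, a, b, 2.0f);  /* same result, vector unit doing the work */
    #endif
        printf("out[10] = %g\n", out[10]);  /* 10 * 2 + 1 = 21 */
        return 0;
    }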
  • Reply 139 of 178
    lemon bon bon Posts: 2,383 member
    Again the dual G5 kicks the crap out of a dual Xeon on an unoptimised Adobe app... Bring on Panther and Adobe optimisation...



    Saaay. Where's that smart-ass Digital Video site that was largin' it with their 'Dell twice as fast as PowerMac G4' smugness? They didn't mind dishing it...can they take it?



    Bet they ain't so smug now...maybe they can try their Adobe digital video benchmark again on a dual 2 GHz G5.



    Wipe the smile off their 'face like slapped arse'.



    Lemon Bon Bon



  • Reply 140 of 178
    brussell Posts: 9,812 member
    Quote:

    Originally posted by Lemon Bon Bon

    Saaay. Where's that smart-ass Digital Video site that was largin' it with their 'Dell twice as fast as PowerMac G4' smugness? They didn't mind dishing it...can they take it?



    Oh he's still dishing it.

    Quote:

    The G5 is not the fastest chip for personal computers, workstations, desktops, ducks or whatever. No matter how you cheat it, according to AMD its Opteron has the G5 beat by a country mile. DMN has obtained SPEC benchmark data from AMD, a company that I believe has considerably more credibility than Apple, showing its numbers on the exact same tests run by Apple. Take a good look at the table below. Well, well. Who's the fastest now?

    [graph showing Opteron 50% faster than G5.]


