NO DISINFO after all!!! - G5 vs. Core Duo

Posted in Current Mac Hardware, edited January 2014
It's definitely faster than the G5. The press shot from the hip when they called Jobs' presentation "disinfo". Take a look at this:



http://forums.macrumors.com/showthread.php?t=176161

MacSpeedZone posts their benchmarks for the Dual Core Intel iMac and claims that Macworld's initial tests were misleading:



Quote:

This is where the Macworld "First Lab Tests" article falls a little flat ... obscuring the processor capacity vs processor usage problem inherent with multiprocessor machines (or multi-core ... same difference). Using Macworld's logic we could argue, given the data above, the Quad G5 Power Mac is only 14% faster when running some of Apple's own applications. We think that this is misleading, as we pointed out.



They post a comparison chart, taking into account percentage of processor usage as a guide.
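
To make the adjustment MacSpeedZone is describing concrete, here is a made-up worked example (these numbers are illustrative, not theirs): suppose a task takes 100 s on the Core Duo iMac while the app keeps the machine only 50% busy, and 110 s on the G5 iMac at roughly 100% usage. On raw wall-clock time the Core Duo looks barely 10% faster, but it did that work with an entire core left idle. Normalized for the capacity actually used, its throughput is (1/100)/0.5 = 0.02 tasks per unit of full capacity versus (1/110)/1.0 ≈ 0.009 for the G5, about 2.2x, because it could run a second, equal task alongside the first in the same 100 seconds.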





Sincerely



Zab the Fab

Comments

  • Reply 1 of 17
    hmurchison Posts: 12,425 member
    Apple's only been shipping DP systems for 5+ years and reviewers STILL haven't grokked this fact.



    With Dual Procs you don't always up the speed limit but you do widen the lanes of the highway.



    It would have been easy for Macworld to run proper tests that actually delivered a bit of value to their readers. Instead they set out to smear Steve Jobs.



    I wouldn't wipe my crack with their rag; that's how I feel about Macworld magazine.
  • Reply 2 of 17
    I've stopped reading any tech review...be it software or hardware. Most people don't know what they're talking about. The only reviews I can rely on are Anandtech and Tom's Hardware reviews...but I'm wary of these too.



    For the past few years, I just observe and collect user comments as well as try to get my hands on the hardware or software in question before making any decision. I also use good ol' logic to sort what sounds fishy and what doesn't.



    A bunch of user comments is much more useful than a single review. And a hands-on experience wins it all.
  • Reply 3 of 17
    Quote:

    Originally posted by kim kap sol

    I've stopped reading any tech review...be it software or hardware. Most people don't know what they're talking about. The only reviews I can rely on are Anandtech and Tom's Hardware reviews...but I'm wary of these too.



    For the past few years, I just observe and collect user comments as well as try to get my hands on the hardware or software in question before making any decision. I also use good ol' logic to sort what sounds fishy and what doesn't.



    A bunch of user comments is much more useful than a single review. And a hands-on experience wins it all.




    I couldn't agree more. Have you noticed a lot of poor reporting on Tom's recently as well? The actual hardware testing and reviews seem OK, but the "editorial content" just keeps going downhill.



    Anyway, back on topic...I think it's important for everyone to realize that the _potential_ speed of the duo (and, more importantly, of the systems as a whole) is huge, and that as more software becomes universal we'll have a better understanding of real-world performance. Also keep in mind that initial universal binaries are not likely to be terribly well optimised, and as we start seeing .2 and .3 releases of universal binaries we should see similar speed increases again.
  • Reply 4 of 17
    As someone who owns (3) dual Power Macs, a quad Power Mac, a new Intel iMac, and a PowerBook G4, and who has played with the new MacBook Pro:



    I can tell you that the Intel iMac feels faster than even my quad Power Mac. I have yet to see the beachball even ONCE on my Intel Mac. Things like browsing 500 files or right-clicking on my Applications folder in the Dock (200+ apps) snap up immediately, without the small delay that is so frustrating on my PowerPC computers.



    Everything absolutely flies on the iMac. I can hardly wait for Photoshop and other applications to be native. For day-to-day operations I use my iMac now; I feel kinda goofy letting my quad G5 sit in the corner room all day...



    I do use it for Aperture though.
  • Reply 5 of 17
    pb Posts: 4,255 member
    Quote:

    Originally posted by kim kap sol

    I've stopped reading any tech review...be it software or hardware. Most people don't know what they're talking about. The only reviews I can rely on are Anandtech and Tom's Hardware reviews...but I'm wary of these too.





    Fortunately, the MacSpeedZone article put things into perspective, at least in this one case. The Macworld test was just ridiculous.



    EDIT: actually not so much the test in itself, as the comments and the conclusions drawn.
  • Reply 6 of 17
    pb Posts: 4,255 member
    Quote:

    Originally posted by concentricity



    Anyway, back on topic...I think it's important for everyone to realize that the _potential_ speed of the duo (and, more importantly, of the systems as a whole) is huge, and that as more software becomes universal we'll have a better understanding of real-world performance.





    So true. The benefits of dual-processor systems have been discussed at such length and in such detail that my head hurts when I see comparisons between the two iMacs, the G5 and the Intel one, that act as if there were only one CPU in the latter.



    Quote:

    Also keep in mind that initial universal binaries are not likely to be terribly well optimised, and as we start seeing .2 and .3 releases of universal binaries we should see similar speed increases again.



    This is also true. As with any transition of this kind, you should wait one or two years to see reasonably optimised software for the new systems.
  • Reply 7 of 17
    a_greer Posts: 4,594 member
    Quote:

    Originally posted by kim kap sol

    I've stopped reading any tech review...be it software or hardware. Most people don't know what they're talking about. The only reviews I can rely on are Anandtech and Tom's Hardware reviews...but I'm wary of these too.





    True, and soooooo sad:



    I like good, honest, unbiased reviews. I don't have the money to "test" tons of configs, new parts, etc., so up until a year or so ago I took Macworld, PCMag, and so on at their word. Boy, was I wrong...



    Looks like if I want benchmarks, I will take a copy of 3DS MAX to CompUSA with me
  • Reply 8 of 17
    lundy Posts: 4,466 member
    Their problem is always that they use commercial apps for these "benchmarks". Those apps are so complex that the results tell you nothing. Each year at WWDC, the Apple Performance Group takes a shareware or commercial app and tunes it up using Shark and Xcode. They routinely get orders-of-magnitude speed increases in these apps through simple profiling.



    I think the only way to even come close to testing anything is to do a specific test of one aspect of the system using a fully multithreaded app. RC5-72 was a good comparison: fully multithreaded and AltiVec-aware, and it showed the G4s and G5s destroying any x86 chip.



    If these guys can't find a multithreaded app, they need to write one. It only takes a few hours. OS X has all the multiprocessing APIs that anyone would need.
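
    As a purely illustrative sketch of the kind of quick multithreaded test described above (this is not from the thread, and the file name, iteration count, and default thread count are arbitrary), a few dozen lines of C with POSIX threads will saturate however many cores you ask for and report the wall-clock time. Two threads on a dual-core machine should finish in roughly the same wall time as one thread doing half the total work, while on a single core they take about twice as long:

        /* saturate.c -- hypothetical multicore smoke test (illustrative, not from the thread).
           Build: gcc -O2 -o saturate saturate.c -lpthread
           Run:   ./saturate 2        (number of worker threads)                              */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define WORK_PER_THREAD 200000000UL    /* arbitrary amount of busy work per thread */

        /* Each worker grinds through the same fixed amount of integer busy work. */
        static void *worker(void *arg)
        {
            volatile unsigned long acc = 0;    /* volatile keeps the loop from being optimized away */
            unsigned long i;
            for (i = 0; i < WORK_PER_THREAD; i++)
                acc += i ^ (acc >> 3);
            return NULL;
        }

        int main(int argc, char **argv)
        {
            int nthreads = (argc > 1) ? atoi(argv[1]) : 2;
            pthread_t *tids = malloc(sizeof(pthread_t) * nthreads);
            time_t start = time(NULL);
            int i;

            for (i = 0; i < nthreads; i++)
                pthread_create(&tids[i], NULL, worker, NULL);
            for (i = 0; i < nthreads; i++)
                pthread_join(tids[i], NULL);

            printf("%d thread(s) finished in about %ld s of wall time\n",
                   nthreads, (long)(time(NULL) - start));
            free(tids);
            return 0;
        }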
  • Reply 9 of 17
    My brother and I both fired up BOINC on our quad G5s, processing data for the SZTAKI distributed-computing project. As of this post, our two Macs rule the Top Computers list, with recent average credits of 550+ vs. 280-285 for the next-best AMD/Intel computers... The AMD Opteron is a dual-processor machine, and the Intel Xeon is a quad as well.



    Digg this post and get the word out - G5s are far from obsolete!



    http://digg.com/apple/PowerMac_Quad-...ted_Computing_





    As an aside, team Canada was in 20th place when we joined, and now is in 4th... :-) Go Canada!



    http://szdg.lpds.sztaki.hu/szdg/team...php?teamid=183
  • Reply 10 of 17
    lundy Posts: 4,466 member
    I wouldn't count on the next updates of any shareware or commercial apps being "snappier" or faster, except in the case of long operations like rendering and displaying large numbers of photos, etc.



    Developers want to add features, so they can market the features.



    While they may wish to speed things up, multithread, or make an app multiprocessor-savvy, it is not something they can usually market as a feature, so it doesn't get done.



    There is so much bloat and inefficiency in modern commercial software that it is ridiculous. Paying an engineer to optimize something costs a lot more than just jacking up the hardware speed, especially if optimizing can't be sold as a new feature.



    The IBM 360/67 ran dozens of virtual 360/65s and batch jobs simultaneously, all in 512KB (KILOBYTES) of memory. No room for bloated runtime overhead in that kind of code.
  • Reply 11 of 17
    vinea Posts: 5,585 member
    Quote:

    Originally posted by lundy



    There is so much bloat and inefficiency in modern commercial software that it is ridiculous. Paying an engineer to optimize something costs a lot more than just jacking up the hardware speed, especially if optimizing can't be sold as a new feature.



    The IBM 360/67 ran dozens of virtual 360/65s and batch jobs simultaneously, all in 512KB (KILOBYTES) of memory. No room for bloated runtime overhead in that kind of code.




    Yes, and no room for solving problems in a human-friendly way. While they ran them simultaneously, the typical process was to submit your card deck and expect results the next day. A misplaced card often meant a blown day, never mind real syntax errors (yes, I've seen card sorters make mistakes).



    Which is why some folks thought the "Clean Room" software development methodology was a good idea versus just compiling the damn thing and seeing if any syntax errors pop up.



    Give me "ridiculous" modern bloat and inefficiency. The old way was necessary because of the limitations of the day...not some virtue. We went for abstractions to solve larger, more complex problems than can be solved via batch jobs.



    Why don't you go run OS/360 on a Linux machine and build yourself some really tight code? Because it isn't overly useful except as an exercise in nostalgia.



    Vinea



    Used TSO, OS/MVT and OS/360. COBOL and FORTRAN.
  • Reply 12 of 17
    placebo Posts: 5,767 member
    I've found that, overall, computer magazines are increasingly useless. Especially when Macworld manages to print and mail their issues on the same day new products come out, it always feels like they're a month behind.
  • Reply 13 of 17
    lundy Posts: 4,466 member
    Quote:

    Originally posted by vinea

    Yes, and no room for solving problems in a human-friendly way. While they ran them simultaneously, the typical process was to submit your card deck and expect results the next day. A misplaced card often meant a blown day, never mind real syntax errors (yes, I've seen card sorters make mistakes).



    Who said anything about batch jobs? I specifically said virtual 360/65s. From IBM Selectric terminals. Interactive.

    Quote:

    Which is why some folks thought the "Clean Room" software development methodology was a good idea versus just compiling the damn thing and seeing if any syntax errors pop up.



    Absence of syntax errors tells you nothing about the correctness of the code.

    Quote:

    Give me "ridiculous" modern bloat and inefficiency. The old way was necessary because of the limitations of the day...not some virtue. We went for abstractions to solve larger, more complex problems than can be solved via batch jobs.



    Again, I am not talking about batch jobs.



    And the bloat is not just in the complexity of the application - the bloat is due to astronomical levels of subroutine calling, message passing, making the integer 5 into an object and then passing it a message asking what its value is, and so forth.



    The original QuickDraw on the Mac 128K was 24K of code. Imagine if we had that kind of efficiency on today's hardware.



    Launch MS Excel for Mac and try to move the insertion point in the formula bar. It takes about a half second to move to the next character. It's calling about 30 subroutines to do that, whereas the same thing on Excel for Mac 512K would be maybe 2 assembler instructions.



    Yes, the hardware is dual 2 GHz instead of 8 MHz, but the bazillions of calls of repetitive code the hardware has to wade through just to put a character on the screen nullify a large part of the gain we got from the hardware increases (see the sketch at the end of this post for a toy illustration of that call overhead).



    I lament it, but I know it's not going to change. As I said, programmer time is way more expensive than hardware.



    That's why I don't expect any effort to be made to remove bloat in Leopard. It isn't cost-effective.
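
    As a rough, made-up illustration of the call overhead described above (nothing here is actual Excel or QuickDraw code), the C sketch below copies the same buffer twice: once in a tight loop and once routed through a per-character function call behind a volatile pointer so the compiler cannot flatten it. The work is identical; the indirect version just pays a call for every byte, and on most machines it is measurably slower:

        /* call_overhead.c -- hypothetical demo of per-call overhead (illustrative only).
           Build: gcc -O2 -o call_overhead call_overhead.c                                 */
        #include <stdio.h>
        #include <string.h>
        #include <time.h>

        #define N (64 * 1024 * 1024)           /* 64 MB of characters to push around */
        static char src[N], dst[N];

        /* One function call per character, reached through a volatile pointer so it can't be inlined. */
        static void put_char(char *d, const char *s, size_t i) { d[i] = s[i]; }
        static void (*volatile put_fn)(char *, const char *, size_t) = put_char;

        int main(void)
        {
            size_t i;
            clock_t t0, t1;
            memset(src, 'x', N);

            t0 = clock();
            for (i = 0; i < N; i++)            /* tight loop: copy directly */
                dst[i] = src[i];
            t1 = clock();
            printf("direct loop:   %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

            t0 = clock();
            for (i = 0; i < N; i++)            /* same work, but one call per byte */
                put_fn(dst, src, i);
            t1 = clock();
            printf("call per byte: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

            return dst[0] == 'x' ? 0 : 1;      /* use dst so the copies aren't optimized out */
        }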
  • Reply 14 of 17
    aplnub Posts: 2,605 member
    What a weird thread. I can't see any posts even though it says there are 15 in this thing. What gives?
  • Reply 15 of 17
    lundy Posts: 4,466 member
    It's like the HAL 9000 computer in "201 Minutes of A Space Idiocy" - the forum is slowly losing its mind as the index becomes more and more corrupted.



    I suggest copying and pasting all important posts into TextEdit until we get the re-indexing done.
  • Reply 16 of 17
    aplnub Posts: 2,605 member
    Quote:

    Originally posted by lundy

    I suggest copying and pasting all important posts into TextEdit until we get the re-indexing done.



    Good thing we have Automator...






