Somehow, it's like '99 all over again :-/

Posted in Current Mac Hardware; edited January 2014
The new supercomputer is released; Apple shows it side-by-side with some tanks; it's restricted from export by government regulations; it's going to kick Intel's butt, and seriously so, because Moto can do wonders with a short pipeline. To top it off, it has a Velocity Engine, powering any media application like a Space Shuttle on steroids. IBM is relegated to second place because they don't believe in the Velocity Engine miracle.



Only, the development is somewhat worrying: the clock speed gets scaled back, and real-world tests show the G4 is only fast in a few very limited benchmarks. Not a problem: the system and apps need time to be optimized for the G4, and soon everything will be fine...



*snip*



Four years later, the short pipeline has broken the G4's neck speed-wise; Intel and AMD are locked in a fight to the death, boosting the clock speeds of their chips. AltiVec has been adopted by some, but not all, multimedia apps, and Apple's machines have taken a severe spanking from el-cheapo towers at half the price.



But then, just in the nick of time, the G5 comes to the rescue. A longer pipeline but a more efficient memory bus means it does not need insane clock speeds to kill competing architectures, and this time we have 64 bits too. Fastest personal computer ever. Moto is relegated to second place...



Only the first benchmark tests are somewhat worrying. The 1.6 GHz G5 performs roughly on par with a hypothetical 1.6 GHz G4 and gets slapped by the latest P4 in many tests. Of course, this will improve with new compilers, and once the system and apps are optimized for the G5, all will be fine...



Or will it?

Comments

  • Reply 1 of 10
    costique Posts: 1,084 member
    Quote:

    Originally posted by Smircle

    Or will it?



    Yes, it will. Hold on, bro.
  • Reply 2 of 10
    Quote:

    Originally posted by Smircle

    ... Of course, this will improve with new compilers, and once the system and apps are optimized for the G5, all will be fine...



    Or will it?




    Actually, from what I've heard, in the past, IBM has optimized their compilers to such a point that performance was improved in the 50-100% range... IBM's good...



    So yeah. It will.



    Just found this, showing benchmarks between GCC and IBM's own XLC. Not sure what it all means, but the last two pages sure look pretty.
  • Reply 3 of 10
    Why is everyone so concerned with speed?



    I use mine for stability. Not being a technical guru I like to know that when I turn on my computer it will work, and will keep on working.



    Yes, I have used PCs. I think I use about a 1.5 GHz machine at work, and I think my 667 MHz is just as fast, especially once I have 6 apps running. What really is the difference between 25 seconds and 22? (Note the massive bias I have - I could not care less about the machine at work - it is my PB that I am faithful to!)



    How much time has IT been spending patching the flaws in their system? How much have we?



    How many times has their computer gone down? How many times has ours?



    The seconds we lose performing a task are easily made up in the time saved by having a more secure system. I am just thankful that I am not an IT person running around trying to patch systems every day.



    At the end of the day my machine performs the tasks that I need in a timely fashion, and if broadband would only become cheaper in this country, then I would have no complaints!



    When getting back into the computing game I had the option of buying another brand but the thought never entered my mind. My machine may have cost me more than it should have but to this day I still say it was worth every penny (and it was a few!)
  • Reply 4 of 10
    smircle Posts: 1,035 member
    Quote:

    Originally posted by Phroggy

    Actually, from what I've heard, in the past, IBM has optimized their compilers to such a point that performance was improved in the 50-100% range... IBM's good...



    Right, only:

    - That PDF compares GCC and XLC on the POWER4. No comparisons are given for the 970. The advantages of XLC are not so clear when you compare integer instead of floating point (second slide from the end).

    - XLC is intellectual property owned by IBM. They might or might not wish to give it away and have GNU incorporate it into GCC (hint: "might not" sounds more likely, since they sell XLC licenses).

    - Apple might or might not use XLC. But since Darwin is open source, this opens quite a can of worms (independent contributors will not be able to compile Darwin as easily as they used to). Most likely solution: use XLC only for the Mach kernel, not for userland (Cocoa, Carbon, QuickTime, Quartz).

    - XLC can compile C, Fortran, and C++. Darwin and Carbon would be fine; Cocoa need not apply (it's written in Objective-C). Most likely solution again: use XLC only for the Mach kernel.



    Wrap-up: if Apple uses XLC at all, it will (most likely) be only for the kernel. Even if IBM gives XLC away for inclusion in GCC, by the time it is in a shipping product more than a year will have passed.
  • Reply 5 of 10
    amorph Posts: 7,112 member
    Quote:

    Originally posted by Smircle

    Right, only:

    - That PDF compares GCC and XLC on the POWER4. No comparisons are given for the 970. The advantages of XLC are not so clear when you compare integer instead of floating point (second slide from the end).




    The 970 is architecturally very similar to a POWER4 core.



    Quote:

    - XLC is intellectual property owned by IBM. They might or might not wish to give it away and have GNU incorporate it into GCC (hint: "might not" sounds more likely, since they sell XLC licenses).



    GCC is intellectual property owned by its various contributors. At any rate, given that the overwhelming majority of Macintosh software is compiled with a proprietary, commercial compiler (MetroWerks' CodeWarrior) and the world hasn't collapsed, this is a non-issue.



    Quote:

    - Apple might or might not use XLC. But since Darwin is open source, this opens quite a can of worms (independent contributors will not be able to compile Darwin as easily as they used to). Most likely solution: use XLC only for the Mach kernel, not for userland (Cocoa, Carbon, QuickTime, Quartz).



    C and (to a lesser extent) C++ exist independent of compilers, so unless the programmer gets sloppy, the same code can be compiled in GCC one day and XLC the next. Furthermore, IBM engineers have worked to make XLC interoperate with GCC (remember, they're also looking at their own Linux initiative!) as far as linking goes. This, again, should not be an issue: Whether Apple compiles in GCC or XLC should be transparent.



    This actually shows an advantage of open source - some kid somewhere (and I mean that in the best way - "some kid somewhere" has built many things we currently take for granted) can write something useful to Apple, and test it and build it with GCC, then open source the code. Apple can take the code, build it with XLC for speed, and then ship the binary with OS X. They're not stuck with whatever compiler and settings the kid decided to use.



    Quote:

    - XLC can compile C, Fortran, and C++. Darwin and Carbon would be fine; Cocoa need not apply (it's written in Objective-C). Most likely solution again: use XLC only for the Mach kernel.



    Cocoa is not written entirely in Objective-C. CoreFoundation, the common base of functionality available to Carbon, Cocoa, Java and other languages, is written in straight C. It could be compiled with XLC, speeding up all Mac applications that leaned on the system APIs (whether or not they did so via the Cocoa frameworks).



    Also, any application that uses C and Objective-C, or C++ and Objective-C, can be built with both compilers - XLC for the C and C++, GCC for the Objective-C (and for wherever it produces better code) - and linked together, because IBM has made sure that their object files (what the compiler outputs, basically) are compatible with GCC's.
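    As a sketch of what such a mixed-compiler build could look like (file names and flags are hypothetical; this assumes both compilers are installed and emit GCC-compatible object files, as described above):

    ```makefile
    # Hypothetical mixed build: XLC for plain C, GCC for Objective-C.
    CC_XLC = xlc        # IBM's optimizing C compiler
    CC_GCC = gcc        # GNU compiler; handles Objective-C

    OBJS = fastmath.o gui.o

    all: myapp

    # Performance-critical C compiled with XLC at high optimization
    fastmath.o: fastmath.c
    	$(CC_XLC) -O3 -c fastmath.c -o fastmath.o

    # Objective-C front end compiled with GCC
    gui.o: gui.m
    	$(CC_GCC) -O2 -c gui.m -o gui.o

    # Link the object files together; they share a compatible object format
    myapp: $(OBJS)
    	$(CC_GCC) $(OBJS) -framework Cocoa -o myapp
    ```

    The point is just that the compiler choice is made per file, not per application, so the linker sees ordinary object files either way.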



    Quote:

    Wrap-up: if Apple uses XLC at all, it will (most likely) be only for the kernel. Even if IBM gives XLC away for inclusion in GCC, by the time it is in a shipping product more than a year will have passed.



    Apple can use XLC for anything that isn't written in Objective-C, which at this point is a sizeable chunk of OS X. Apple can choose to use it on a per-file basis within an application, not a per-application basis, because of object file compatibility, and since most applications consist of hundreds or thousands of files of code this gives them tremendously fine granularity as far as optimization goes.



    IBM will not incorporate XLC into GCC, and they don't have to, any more than MetroWerks has to incorporate CodeWarrior into GCC (and Apple is using CodeWarrior for some projects - notably Finder). It can be proprietary and exist comfortably as a tool that Apple and third-party applications developers can use, because the languages used to write the code exist independent of the compilers, and the object files are compatible (at least between XLC and GCC).
  • Reply 6 of 10
    I thought the 1.6 GHz G5 model was benched at about the level of a 2.4-2.6 GHz PC. I'd say that's good...



    and the 1.8 GHz was at around 2.6-2.8 GHz.

    Can anyone else confirm?



    Oh yeah, and check this out ^^



    http://www.macaddict.com/news/news_007.html





    G5 + 2GB of RAM = PURE WINTEL OWNAGE!!!
  • Reply 7 of 10
    smircle Posts: 1,035 member
    Quote:

    Originally posted by Wireless



    Oh yeah, and check this out ^^



    http://www.macaddict.com/news/news_007.html





    G5 + 2GB of RAM = PURE WINTEL OWNAGE!!!




    I don't see the dual 2 GHz owning anything here. In all the tests where the 2 GHz dual has 512MB, it performs about as fast, or as slow, as the G4s would if you scaled their scores to 2 GHz.



    Only in the last test, equipped with 2GB of RAM, does the 2 GHz dual really rock, and that is in comparison to machines castrated with 512MB (of which 256MB or more go to OS X, some MB to Photoshop itself, and 2 x 115MB = 230MB to the Photoshop file and its undo backup copy). So we are comparing one machine with plenty of RAM to RAM-starved ones that cannot even fit the whole data in physical RAM and have to swap.
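    A rough back-of-the-envelope sketch of that memory budget (my own arithmetic; the 256MB OS X figure and the 115MB file size are taken from the numbers above):

    ```python
    # Rough memory budget for the 512 MB test machines (figures from the post above)
    total_ram = 512   # MB installed
    os_x = 256        # MB claimed by OS X, per the post
    ps_file = 115     # MB Photoshop file
    undo_copy = 115   # MB undo/backup version of the same file

    # What's left before Photoshop's own footprint is even counted
    headroom = total_ram - os_x - ps_file - undo_copy
    print(headroom)  # prints 26
    ```

    26MB of headroom, minus Photoshop itself, means those machines were swapping before the test even started.
    
    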

    This test just shows that MacAddict are complete dorks. And I'd like to see a dual xenium in comparison (2GB RAM please).
  • Reply 8 of 10
    Hate to link to the battlefront, but:



    Photoshop



    The tables are there. It's really unfortunate that such a small Photoshop file is being used for these tests. Give each system 2GB-4GB of RAM and a 400MB Photoshop file, and the effects of the G5's higher bandwidth would begin to show (as evidenced by Apple's Photoshop numbers on a 600MB file). I don't think it's as much a case of the G5 being over-hyped vis-à-vis the G4 as it is the tests being relics of an older time. I think it likely that real media work with realistically sized files will bear out the superiority of the G5 architecture.
  • Reply 9 of 10
    Quote:

    Originally posted by Smircle

    And I'd like to see a dual xenium in comparison (2GB RAM please).



    Do you mean a Xeon?



    -- Mark
  • Reply 10 of 10
    Motorola G4

    Motorola does not need the G4 to be a fast desktop CPU for its own (embedded) applications.



    IBM 970 AKA "G5"

    IBM intends to sell Linux servers where the 970 will race head-to-head with Xeon CPUs also running Linux. IBM has a very strong interest in wringing as much performance as possible out of the 970, hardware-wise, as well as in having very good compilers; otherwise the Xeons will kill them.



    If next summer we still have top-of-the-line G5s that are only marginally faster than a dual 1.42 GHz G4, then it is '99 all over again.



    I do hope that we are way past the performance of the dual 1.42 GHz G4 much sooner than that: say, 10.3 and some application tweaks before Christmas.