G5 1.6 arrived at my office today


Comments

  • Reply 161 of 283
multimedia Posts: 1,056 member
    Quote:

    Originally posted by Ti Fighter

Is the Photoshop update/plugin out yet?



Yes, it has been since Tuesday evening: the Adobe Photoshop 7.0.1 G5 Processor Plug-in update for Mac OS X.
  • Reply 162 of 283
jll Posts: 2,713 member
    Quote:

    Originally posted by twinturbo

(and no, I'm not mad, I just know that in my circle I have a ONE SHOT chance at proving that the new G5 is FAST to my PC-toting comrades; if the benchmarks suck for the next two months, that's not going to help anything, and when optimized software finally does come along and creams the x86 crowd, no one will be around to listen, except us Mac-heads of course, while the IT department goes out and buys even more PCs)



  • Reply 163 of 283
amorph Posts: 7,112 member
    Quote:

    Originally posted by twinturbo

    I wonder how long that compiler's going to take to arrive . . .







    . . . still waiting . . .




You're waiting for Panther, essentially. The compiler will be done when it's done; it's not the sort of application you want rushed.



    Quote:

(and no, I'm not mad, I just know that in my circle I have a ONE SHOT chance at proving that the new G5 is FAST to my PC-toting comrades



Right, so this is about dick measuring. So it probably won't help to remind them of the oh-so-stellar performance of the Pentium 4 when it first appeared, right? Because the whole exercise is irrational to begin with.



    If it matters so much to you to have your machine outdo theirs right out of the box, I'd suggest putting off your order until Panther ships, and all the applications you use have been patched or recompiled.



    If you want to get work done, order the G5 now, use it at the speed it currently demonstrates (which, especially at the high end, will be a significant improvement over any previous PowerMac) and be happy as it gets faster over time.
  • Reply 164 of 283
    Quote:

    Originally posted by Amorph

    Right, so this is about dick measuring. So it probably won't help to remind them of the oh-so-stellar performance of the Pentium 4 when it first appeared, right? Because the whole exercize is irrational to begin with.



I think the issue here is that we were all watching WWDC as Steve Jobs kicked the rear end of a dual 3.0GHz Xeon Dell, and now that we get the machines, they simply don't match up in performance. Where did THAT performance advantage go? That's all I want. I just want it to perform as advertised.

I've got a brochure right now next to me that says in big, bold letters on the front "G5". Inside it tells me that I can encode MPEG-2 twice as fast, run Photoshop twice as fast, and run 3.3 times as many reverb plugins as the fastest PC competition. That sounds GREAT. Problem is WE get the machines, and they don't even compare to middle-of-the-range P4s or two-year-old Athlon systems. So forget optimized or unoptimized, I simply want what they're advertising. If it performs the way they say, then everything's great. If it doesn't (and it doesn't seem to at the moment; I know this is a 1.6GHz machine, but even if you generously extrapolate the scores it still doesn't measure up), then I think something's wrong. Because I'm led to believe from all the marketing, whether it be hype or not, that this thing is FASTER, not SLOWER than modern PCs. It's not even the same speed. It's slower. And that to me is the major disappointment.

Yes, it's the fastest Mac. Whoopee; Macs have been getting trounced for the last two years, so to me that means nothing. I don't care about the G4, because this thing was not marketed as a G4 killer, it was marketed as a P4 killer. So something in this whole thing has got to give. If the owner of Luxology says that his demonstration wasn't a lie, and I'm sure that Wolfram wouldn't lie about the whole thing, nor Ed Catmull (what's he got to gain?) and PIXAR, nor the chief engineer at Adobe, or any of these people who've been testing, then why the heck can't these systems now compete against what's out in the marketplace that they were supposedly kicking the butt of just a couple weeks ago?
  • Reply 165 of 283
jll Posts: 2,713 member
    Quote:

    Originally posted by twinturbo

I think the issue here is that we were all watching WWDC as Steve Jobs kicked the rear end of a dual 3.0GHz Xeon Dell, and now that we get the machines, they simply don't match up in performance. Where did THAT performance advantage go?



    It's still there. Feel free to buy Photoshop for your pissing contest.





    Quote:

    Originally posted by twinturbo

That's all I want. I just want it to perform as advertised. I've got a brochure right now next to me that says in big, bold letters on the front "G5". Inside it tells me that I can encode MPEG-2 twice as fast, run Photoshop twice as fast, and run 3.3 times as many reverb plugins as the fastest PC competition. That sounds GREAT. Problem is WE get the machines, and they don't even compare to middle-of-the-range P4s or two-year-old Athlon systems. So forget optimized or unoptimized, I simply want what they're advertising.



    In an UNOPTIMIZED BENCHMARK app. Photoshop IS updated, Logic IS updated, and others will follow very soon. The Dual 2GHz isn't even shipping yet, and you're complaining - complaining about results in a benchmark app.



    If you can't wait a few weeks because of a freaking pissing contest, then you're insane.





    Quote:

    Originally posted by twinturbo

So something in this whole thing has got to give. If the owner of Luxology says that his demonstration wasn't a lie, and I'm sure that Wolfram wouldn't lie about the whole thing, nor Ed Catmull (what's he got to gain?) and PIXAR, nor the chief engineer at Adobe, or any of these people who've been testing, then why the heck can't these systems now compete against what's out in the marketplace that they were supposedly kicking the butt of just a couple weeks ago?



How many times do you have to be told that Photoshop IS released in an optimized version? Mathematica is released in an optimized version too. RenderMan isn't released for Mac at all; it was for internal use and will be released in a few weeks for others to buy.



    But because the result in CineBench sucked, the other apps are automatically slow too?
  • Reply 166 of 283
jll Posts: 2,713 member
    Quote:

    Originally posted by Multimedia

Great minds think alike. We posted at exactly the same minute from two different continents.



And the number of optimized apps is actually three: Mathematica 5.0 as well.
  • Reply 167 of 283
Also, keep this in mind: everything that's advertised about the G5 being the "world's most powerful desktop computer" is based on the dual 2GHz system. A single 1.6GHz system is obviously NOT the world's most powerful computer. Quit whining. That's what you get for buying the low-end computer: low-end performance.



    --Wes
  • Reply 168 of 283
kupan787 Posts: 586 member
    From a few posts over at the macnn boards:



Code: (this refers to xBench)







    The author of xBench says the low scores are due to a known issue with the app itself. It's really wonderful to get all worked up over nothing.



Code: (this refers to Cinebench)







So basically, current Cinebench 2003 benchmarks on the G5 are pretty much useless. The developers say they have yet to optimize for it. (They didn't get an advance G5 prototype to test with.) See the messages below:



    ---



I really wonder what versions of C4D and Cinebench we are talking about.



Here are some first-hand clarifications about the optimizations done in Cinebench 2003 and C4D V8...



For the Mac OS the apps are compiled using CodeWarrior 8.3 with all speed optimizations switched on, and scheduling is set for the G4 7450...



For Windows the apps are compiled with the latest Intel compilers and optimization is done for the P4...



The source code (C++) for both platforms is exactly the same, except for a very small OS compatibility layer and some Apple-specific GL optimizations (AppleVertexArrays and AppleFence); those have been implemented at Apple's request...



    AltiVec, MMX, ISSE or ISSE2 is not used at all. There are two main reasons for that:



1) The time-consuming calculations (intersections and color) have to be done in 64-bit; there is no way to do those at production quality with only 32 bits...



2) All those vector units are optimized for linear data access, always one value directly after the other; guess why they are called vector units. Raytracing is by nature exactly the opposite of this: data is scattered all over memory, and the ray is simply not predictable. If it were, the entire raytracing problem would be solved...



Simple test: take an iBook G3 900 MHz and a PowerBook G4 867 MHz and run some renderings on both machines using C4D, LW, Maya or EI. Why is the G4 not really faster? Because no one is able to use the current Velocity Engine for production-quality rendering...



SIMD (Single Instruction, Multiple Data) is great for streaming data (music, post effects, movies); that is what it was made for, but using it as a general FPU does not work...
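
Richard's scattered-access point is easier to see in code. Below is a minimal sketch with hypothetical types and function names, not Maxon's actual engine: the first loop streams through memory linearly, which a vector unit digests well; the second is the data-dependent pointer chase a raytracer spends its time in.

Code:

// Hypothetical illustration, not Maxon code. The streaming loop walks
// memory linearly, so a SIMD unit (AltiVec/SSE2) can process several
// values per instruction.
#include <cstddef>

void scale_samples(float* out, const float* in, float gain, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)   // contiguous, predictable access
        out[i] = in[i] * gain;            // ideal for vector units
}

// A raytracer instead walks a spatial tree: the address of the next node
// depends on the value just loaded, so accesses are scattered and cannot
// be lined up into vectors ahead of time.
struct Node {
    float split;       // position of the splitting plane
    Node* child[2];    // subtrees; both null at a leaf
};

const Node* descend(const Node* node, float rayPos) {
    while (node->child[0]) {              // data-dependent pointer chase
        int side = rayPos > node->split;  // branch choice depends on the load
        node = node->child[side];         // next address scattered in memory
    }
    return node;                          // leaf containing the ray position
}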



Since version 8 the apps are optimized for Hyperthreading, especially the GI (just try V7 against V8 on an HT-enabled machine)...



Maxon is working directly with Apple, Intel and AMD to get the best render speed out of the hardware. Depending on how early we get our hands on new hardware, we do what we can to optimize for it. Most times we have been the first app utilizing new hardware because of that...



Regarding the G5, it does not make much sense at all to discuss the current results; right now there are a few prototype machines running with new GFX cards (probably beta drivers) and a patched OS. What can one expect from a configuration like that...





    Cheers, Richard (one of the Maxon developers)



    ---



    Nathan,



An example: there is an algorithm that needs to access memory and of course does some calculations. Now, if you always use 64-bit numbers you have to move twice as much memory as with 32-bit numbers. And there are some calculations to do for which 32-bit simply does not give enough precision...



    Can you see the picture?
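
A back-of-the-envelope illustration of that picture (my numbers, assuming the usual 4-byte float and 8-byte double):

Code:

// Sketch of the bandwidth cost Richard describes: the precision the
// intersection code needs doubles the bytes the memory system must move.
#include <cstdio>

int main() {
    const unsigned long n = 1000000;  // one million values
    printf("32-bit floats : %lu bytes\n", (unsigned long)(n * sizeof(float)));   // ~4 MB
    printf("64-bit doubles: %lu bytes\n", (unsigned long)(n * sizeof(double)));  // ~8 MB
    // Twice the data per value means roughly half as many values per
    // second through the same bus, before any calculation happens.
    return 0;
}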



As you might understand, I cannot show you the details of our render engine.



For more than 8 years we have been working on, tweaking and optimizing those routines; they have been developed and tested on the 68000, 68020, 68030, 68040 and even the 68060, the 80386, 80486, P1, P2, P3, P4, AMD Athlon, PPC 601, PPC 604, G3 and G4, and as soon as I get my hands on the G5 they will be optimized for it as well...



    Cheers, Richard
  • Reply 169 of 283
amorph Posts: 7,112 member
    Quote:

    Originally posted by twinturbo

I've got a brochure right now next to me that says in big, bold letters on the front "G5". Inside it tells me that I can encode MPEG-2 twice as fast, run Photoshop twice as fast, and run 3.3 times as many reverb plugins as the fastest PC competition. That sounds GREAT. Problem is WE get the machines, and they don't even compare to middle-of-the-range P4s or two-year-old Athlon systems. So forget optimized or unoptimized, I simply want what they're advertising.



You got what they're advertising: every single one of those claims has been demonstrated and verified. Once the other applications have been optimized, like those identified in the brochure, you'll get that performance all around.



This is not difficult to understand. It's no different from the transition from the 7400 to the 7450, or from the Pentium 3 to the Pentium 4, and it's nothing to get nervous or uptight about. The G5 is an amazing machine that, like all machines, runs better when software specifically targets it.
  • Reply 170 of 283
lmd Posts: 5 member
    kupan: great post.



Even though there will certainly be improvement in Cinebench performance after optimisation, I don't think we should expect miracles. Well, we'll wait and see. But right now the 1.6 G5 results aren't anything spectacular, to say the least.



    Cinebench 2003



Single CPU Render test: 158.2 sec

OpenGL Hardware Lighting test:
1) 13.16 sec; 7.8 fps
2) 4.18 sec; 21.5 fps
1,466,559 polygons/sec

OpenGL Software Lighting test:
1) 19.96 sec; 5.1 fps
2) 9.30 sec; 9.7 fps
659,952 polygons/sec

Cinema 4D Shading test:
1) 50.19 sec; 2.0 fps
2) 24.62 sec; 3.7 fps
249,291 polygons/sec





    You can compare current results with a chart that has compiled results of G4, AMD and intel systems.



    http://www.imashination.com/bench.html



Hard to accept for a Mac faithful like me, but as it stands I'll probably work a lot faster and more efficiently in Cinema 4D (which I use a lot) on a PC system. I hope I'm proven wrong in the coming days.
  • Reply 171 of 283
jll Posts: 2,713 member
    Quote:

    Originally posted by lmd

    kupan: great post.



Even though there will certainly be improvement in Cinebench performance after optimisation, I don't think we should expect miracles. Well, we'll wait and see. But right now the 1.6 G5 results aren't anything spectacular, to say the least.



    Cinebench 2003



    Single CPU Render test:



[...]



    You can compare current results with a chart that has compiled results of G4, AMD and intel systems.



    http://www.imashination.com/bench.html



Hard to accept for a Mac faithful like me, but as it stands I'll probably work a lot faster and more efficiently in Cinema 4D (which I use a lot) on a PC system. I hope I'm proven wrong in the coming days.




Funny that you compliment kupan on his post but didn't see the part where he mentions that Cinebench is useless on a G5.
  • Reply 172 of 283
kai Posts: 8 member
    Quote:

So if a single 1.6GHz is the same speed as an Athlon 1.4, HOW THE HECK will a dual 2GHz beat the dual 3.0GHz Xeon by 2x???



    Easy answer - it won't! Not in Cinema/Cinebench, not in this Version!



    Let me elaborate about Maxon and Cinema for a minute:

From reading a lot of interviews with Maxon and posts by their programmers in various fora, this is what I gather Maxon's approach to be:

They take great pride in their very efficient, very platform-independent code. For those of you who might have noticed: they even have their own UI toolkit for menus, dialogues etc., kinda like Qt! They like to brag about how it's 95% totally platform-independent (the 5% is presumably API stuff like file I/O, network, OpenGL etc.). Why they only make C4D for two platforms and don't port to Linux like Alias/Maya and Avid/Softimage, or why they weren't Be's poster child back when Be was desperate for pro apps for BeOS, I don't know; it's something that has puzzled me for ages! ;-) If you have an answer, let me know!

So that means, in turn, that NO CPU ARCHITECTURE will be optimized for; they only optimize their C code itself (and very well, I must say; C4D still has a rocking renderer after all!). They don't favor PCs, it's just that Intel gave them great compilers! Which is also why they were so quick to support Intel's latest gadget, "Hyperthreading"!

This is also the real reason why they refused to support Altivec; forget the "Altivec is not double precision so we can't use it" BS. Newtek showed with the Lightwave 7.0b upgrade what can be done with Altivec: they gained 34-87% over 7.0! Their SSE2 optimizations only yielded 0-36%! (Besides, it was also faster in OS X than OS 9, ranging from 2-17% depending on the test! ;-)

Point is: Apple (or Metrowerks) did NOT have great compilers up until now! So C4D pretty much sucks on the G4 (and G5). Intel, on the other hand, has a pretty damn impressive compiler in ICC: auto-vectorizing, auto-multithreading, generating really efficient code that also performs pretty well on the Athlon (by chance, I guess! ;-) Intel just couldn't avoid it!)



Why do you think Apple is rolling out Xcode just now? Why the sudden strong interest in a really good, efficient optimizing GCC compiler? That's just for the G5! The great thing about it: it will just be a recompile and some CodeWarrior adjustments away! ;-) No manual optimizing necessary (though you can still do that with the profiling tool Shark from the CHUD tools!)



And that's why there'll be a new version (or patch) of C4D and/or Cinebench when Apple has the compiler done, and all will be joy and happiness, except for the PC camp! ;-)



Still: looking at other apps that weren't heavily Altivec-optimized and are not G5-aware, it seriously escapes me how C4D manages to perform THAT badly! 165 CB for a G5 1.6GHz and 96 CB for a G4 1GHz? Damn, that's more or less just LINEAR SCALING! And we have TWICE the FPUs and THREE times the bus speed now! Okay, so there's no more L3 cache, but since a 1GHz iMac without L3 cache still scores 84 CB, the L3 cache doesn't make THAT much of a difference, especially since we have twice the L2 cache now!
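
Spelling out the scaling arithmetic behind that claim, using the scores quoted above:

\[ 96\ \mathrm{CB} \times \frac{1.6\ \mathrm{GHz}}{1.0\ \mathrm{GHz}} = 153.6\ \mathrm{CB}, \qquad \frac{165\ \mathrm{CB}}{153.6\ \mathrm{CB}} \approx 1.07 \]

So the measured 165 CB is only about 7% above what clock speed alone predicts; the second FPU and the much faster bus barely show up.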



    Quote:

    I want to know that it's fast. That it'll run my Photoshop, Final Cut, etc. the fastest.



Calm down, Bronco! For these programs it definitely will, right from the start; rest assured! ;-)



    Quote:

    Not that I have to wait for this optimization and that optimization, etc.



Oh boy, a new CPU architecture! We might need new compiles to fully exploit it; help, help, run away! We'd better stick with the G4 then, right?

Lucky for you, you don't seem to have followed the abysmal performance the Pentium 4 had at launch! The P7 was also a whole new architecture, but it performed so badly that it was in fact SLOWER in *every* test (but a few Intel-sponsored ones!) than the P3s and Athlons that had two-thirds its clock speed!



    Quote:

    OK, the software is not optimized and you just will have to wait for optimised software that makes use of the 64 bit capabilities and so on



You don't even need 64-bit! A simple recompile with Apple's GCC will do! Proper instruction scheduling and avoiding certain Altivec commands will do just fine! Definite plus: good G5 code also seems to be good G4 code, at least that's what the author of Gaston told me! ;-)



    Quote:

    That kind of performance edge requires very aggressive use of AltiVec. I doubt Cinebench uses AltiVec much, if at all.



No, it doesn't. With the G4 it did, but with TWO double-precision FPUs, a hardware square root and a fat 1GHz bus, plus independent buses for each CPU, Altivec isn't even a necessity anymore! ;-)

By the way: I don't know how far Apple and IBM are with auto-vectorizing (they're definitely working on it!), but we will see a GCC that automatically rewrites scalar code to vector code (whenever possible) using Altivec in the not-too-distant future!
  • Reply 173 of 283
lmd Posts: 5 member
Cinebench is based on Cinema 4D; it was actually made to tell Cinema 4D users how different systems would handle Cinema 4D. So I don't think the word "useless" is totally right here, because Cinema 4D will behave exactly the way Cinebench behaves on a G5 now (i.e. working, but not in an optimised way). So the current results of Cinebench tell you how Cinema 4D runs on a G5 before optimisation, and that might take some time. So these results are useful until optimisation arrives. Correct me if I'm wrong.
  • Reply 174 of 283
kupan787 Posts: 586 member
    Quote:

    Originally posted by lmd

    kupan: great post.



    I can't really take credit. The two posts were by different people over at macnn. The xBench one was by Big Mac, while the other was by Eug Wanker.



I just posted them here, as it seemed few understood what was going on.



And as JLL mentioned, it seems like you missed the point of the post:



    Quote:

    So basically, current Cinebench 2003 benchmarks on the G5 are pretty much useless



  • Reply 175 of 283
leonis Posts: 3,427 member
Another link regarding G5 vs Cinema...





    http://www.postforum.com/forums/read...=87441&t=87424
  • Reply 176 of 283
kai Posts: 8 member
    Thanks Kupan for confirming my speculations about Maxon! ;-)



However, some of the stuff he says is just not true; Lightwave IS definitely faster on a G4 than on a G3!



See this LW benchmark of a G3-600:



Textures : 16.8s
ZBufSort : 28.7s
DoF : 35.5s
Raytrace : 492.7s
RadRefThings : 239.5s
Variations : 437.0s
TracerNoRad : 1630.5s
TracerRad : 2701.3s



A G4-500 (100MHz less! There never was a G4-600, unfortunately, and that G3 is the only LW 7.5 result in there) is already able to beat it in Raytrace, RadRefThings and TracerRad:



Textures : 20s
ZBufSort : 34s
DoF : 41s
Raytrace : 445s
RadRefThings : 183s
Variations : 555s
TracerNoRad : 1662s
TracerRad : 2637s
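
Normalizing those Raytrace times for clock speed (my arithmetic, assuming linear clock scaling) makes the per-clock gap explicit:

\[ 492.7\ \mathrm{s} \times \frac{600\ \mathrm{MHz}}{500\ \mathrm{MHz}} = 591.2\ \mathrm{s}, \qquad \frac{591.2\ \mathrm{s}}{445\ \mathrm{s}} \approx 1.33 \]

Clock for clock, the G4 gets through the Raytrace scene roughly a third faster than the G3.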



    Quote:

    AltiVec, MMX, ISSE or ISSE2 is not used at all.



That's funny, because a few lines above he says they're compiling with ICC with all P4 optimizations turned on! ;-) And that means that, due to the P4's very weak FPU, SSE2 code is AUTOMATICALLY generated! Without it, the P4 just wouldn't be able to perform, which we all saw with our own eyes when it was introduced and the compilers weren't there yet!

    And C4D will run on Altivec just like it runs on SSE2 once IBM and Apple finish this compiler! ;-)



    Quote:

SIMD (Single Instruction, Multiple Data) is great for streaming data (music, post effects, movies); that is what it was made for, but using it as a general FPU does not work...



People use it for Quake 3 or for calculating fractals, as with Altivec Fractal Carbon or Gaston, and NASA even runs their Jet3D on it! RC5 uses it to calculate keys. Apple uses it for BLAST gene sequencing and has a BLAS implementation of linear algebra on its homepage! Plus, an Altivec prefetch always comes in handy (except when you're on a G5! ;-)
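
For anyone who hasn't seen it, this is roughly what hand-vectorized Altivec code looks like: a minimal sketch of a scaled add over aligned float arrays (my example, not code from any of the apps named above).

Code:

// Minimal Altivec sketch: out[i] = a[i] * gain + b[i], four floats per
// instruction. Assumes 16-byte-aligned arrays and n divisible by 4;
// real code needs a scalar loop for the remainder.
#include <altivec.h>   // AltiVec intrinsics (compile with -maltivec on GCC)

void scaled_add(float* out, const float* a, const float* b,
                float gain, int n) {
    vector float vgain = (vector float){gain, gain, gain, gain};  // GCC literal syntax
    for (int i = 0; i < n; i += 4) {
        vector float va = vec_ld(0, a + i);         // load 4 floats from a
        vector float vb = vec_ld(0, b + i);         // load 4 floats from b
        vector float vr = vec_madd(va, vgain, vb);  // fused multiply-add
        vec_st(vr, 0, out + i);                     // store 4 results
    }
}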



There are lots of creative ways to use Altivec, and the ones quoted by Richard are just a few! I guess Maxon's in denial mode again, as always when confronted with the topic...



Besides: Intel IS using their SSE2 unit as a general FPU! The P4's x87 unit is so damn weak they basically have NO OTHER CHOICE! Plus, it was their stated goal for the P4 right from the start to use the SSE2 unit as a replacement for the FPU!

That is a FACT, and you can read it everywhere (unless you work for Maxon! ;-)
  • Reply 177 of 283
geekmeet Posts: 107 member
    Quote:

    Originally posted by Kai

    Easy answer - it won't! Not in Cinema/Cinebench, not in this Version!



[...]




    GREAT POST!

I guess people fell asleep during the keynote.

I understood the significance of Apple showing off their new developer tools; I guess others didn't.

Still, until Apple rolls out a FULL 64-BIT OS we are going to hear this criticism.

I feel bad for Apple because they are damned if they do, damned if they don't.

Remember how slow 10.0 was?

No one is complaining about speed with 10.2.

But this criticism about benchmarks proves one thing to me: the G4 was no slouch like a lot of you said it was. Remember?
  • Reply 178 of 283
Even with heavy G5 optimizations, it looks like the G5 will still be far behind the x86 camp, according to a Maxon employee who posted here.



The G5 is simply outmatched and outgunned by current P4s. It will get worse as the Athlon 64 comes out next month and Intel's Prescott (P5) later this year.



    Quote:

    OK, some news directly from the MAXON development lab:



Of course, all the following numbers are not final; no promise at all!!!



This is based on the information we have right now; there is still a lot of work to do and we still have to wait for a new compiler...



With the current CineBench a single G5 1.8GHz scores about 188; the optimized version will maybe score about 238...



A hypothetical single G5 2.0GHz could score about 210 on the old CB; optimized, it could be about 265...



A dual G5 2.0 could maybe score about 480 with the optimized version of CB...



    Depending on the new compilers and our findings (thanks a lot to Apple for being extremely helpful and cooperative) we might even crack the 500 score for the dual G5 2GHz...



Again, no promise, and of course no release date.





    Cheers, Richard



    Compare this to a single 3.2GHz P4 that scores about 370.
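
Taking those explicitly non-final numbers at face value, the expected gains are easy to work out:

\[ \frac{238}{188} \approx 1.27, \qquad \frac{265}{210} \approx 1.26, \qquad \frac{480}{265} \approx 1.81 \]

That is, roughly 26-27% from the new compiler and near-linear dual-CPU scaling; even then, the projected optimized single 2.0GHz score (265) would still trail the quoted 3.2GHz P4 (about 370).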
  • Reply 179 of 283
Far behind with poorly optimized code, yes. As someone on that thread mentioned, recompiling for the G4e in some cases provided 50% speed improvements over the G4-optimized equivalent... the G5 and G4e are very different architectures, so it's hard to tell what optimization will do.



    -- Mark
  • Reply 180 of 283
fluffy Posts: 361 member
    Quote:

    Originally posted by Existence

    The G5 is just outmatched and outgunned by current P4s. It will get worse as Athlon64 comes out next month and Intel Prescott(P5) later this year.



This is an irresponsible and unfounded blanket statement. Cinebench has always lagged behind general application and other content-creation performance on the Mac, and there is no reason to believe that will ever change. Making your statement implies greater knowledge of G5 performance than you currently possess. If, in a month, after extensive tests have been run on a wide variety of applications, the G5 is not performing up to par, then your statement will be valid. But as it stands, especially since we have evidence of other applications outperforming the P4 to a significant degree, there is no reason to accept your conclusion as anything other than mindless trolling.