So if a single 1.6GHz is the same speed as an Athlon 1.4, HOW THE HECK will a dual 2GHz beat the dual 3.0GHz Xeon by 2x???
Easy answer: it won't! Not in Cinema/Cinebench, not in this version!
Let me elaborate on Maxon and Cinema for a minute:
From reading a lot of interviews with Maxon and posts by their programmers in various forums, this is what I gathered about Maxon's approach:
They take great pride in their very efficient, very platform-independent code. For those of you who might have noticed: they even have their own UI toolkit for menus, dialogs etc., kind of like Qt! They like to brag about how it's 95% totally platform-independent (the remaining 5% presumably being API stuff like file I/O, networking, OpenGL etc.)! Why they only make C4D for two platforms and don't port to Linux like Alias/Maya and Avid/Softimage, or why they weren't Be's poster child back when Be was desperate for pro apps for BeOS, I don't know; it's something that has puzzled me for ages! ;-) If you have an answer, let me know!
So that means, in turn, that NO CPU architecture gets special optimization; they only optimize their C code itself (and very well, I must say, C4D still has a rocking renderer after all!). They don't favor PCs, it's just that Intel gave them great compilers! Which is also why they were so quick to support Intel's latest gadget, "Hyper-Threading"!
This is also the real reason why they refused to support AltiVec; forget the "AltiVec is not double precision so we can't use it" BS. NewTek showed with the LightWave 7.0b upgrade what can be done with AltiVec: they gained 34-87% over 7.0, while their SSE2 optimizations only yielded 0-36%! (Besides: it was also faster in OS X than OS 9, by 2-17% depending on the test! ;-)
Point is: Apple (or Metrowerks) did NOT have great compilers up until now! So C4D pretty much sucks on the G4 (and G5). Intel, on the other hand, has a pretty damn impressive compiler in ICC: auto-vectorizing, auto-multithreading, generating really efficient code that also performs pretty well on the Athlon (by chance, I guess! ;-) Intel just couldn't avoid it!)
Why do you think Apple is rolling out Xcode just now? Why the sudden strong interest in a really good, efficient optimizing GCC? That's all about the G5! The great thing about it: it will be just a recompile and some CodeWarrior adjustments away! ;-) No manual optimizing necessary (though you can still do that with the profiling tool Shark from the CHUD tools!)
And that's why there'll be a new version (or patch) of C4D and/or Cinebench once Apple has the compiler done, and all will be joy and happiness, except for the PC camp! ;-)
Still: looking at other apps that weren't heavily AltiVec-optimized and are not G5-aware, it seriously escapes me how C4D manages to perform THAT badly! 165 CB for a 1.6GHz G5 and 96 CB for a 1GHz G4? Damn, that's more or less just LINEAR SCALING! And we have TWICE the FPUs and THREE times the bus speed now! Okay, so there's no more L3 cache, but since a 1GHz iMac without L3 cache also scores 84 CB, the L3 cache doesn't make THAT much of a difference, especially since we have twice the L2 cache now!
I want to know that it's fast. That it'll run my Photoshop, Final Cut, etc. the fastest.
Calm down, Bronco! For those programs it definitely will, right from the start, rest assured! ;-)
Not that I have to wait for this optimization and that optimization, etc.
Oh boy, a new CPU architecture! We might need recompiles to fully exploit it, help, help, run away! We'd better stick with the G4 then, right?
Lucky you, you don't seem to have followed the abysmal performance the Pentium 4 had at launch! The P7 was also a whole new architecture, but it performed so badly that it was in fact SLOWER in *every* test (but a few Intel-sponsored ones!) than the P3s and Athlons running at two-thirds of its clock speed!
OK, the software is not optimized, and you'll just have to wait for optimized software that makes use of the 64-bit capabilities and so on
You don't even need 64-bit! A simple recompile with Apple's GCC will do: proper instruction scheduling and avoiding certain AltiVec instructions will do just fine! Definite plus: good G5 code also seems to be good G4 code, at least that's what the author of Gaston told me! ;-)
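For illustration, that "simple recompile" mostly boils down to compiler flags. A hypothetical Makefile fragment; the exact flag spellings for Apple's GCC are my assumption from memory, so check the compiler docs before copying:

```
# Hypothetical flags for a G5-targeted rebuild (assumed, not verified):
#   -mcpu/-mtune=970    schedule instructions for the PPC970 pipeline
#   -mpowerpc-gpopt     allow the optional-instruction group, incl. fsqrt
CFLAGS = -O3 -mcpu=970 -mtune=970 -mpowerpc-gpopt
```

If `-mpowerpc-gpopt` works the way I remember, it's the switch that lets GCC emit the chip's hardware square-root instruction instead of a library call.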
That kind of performance edge requires very aggressive use of AltiVec. I doubt Cinebench uses AltiVec much, if at all.
No, it doesn't. With the G4 that mattered, but with TWO double-precision FPUs, a hardware square root, a fat 1GHz bus, plus independent buses for each CPU, AltiVec isn't even a necessity anymore! ;-)
By the way: I don't know how far Apple and IBM are with auto-vectorization (they're definitely working on it!), but we will see a GCC that automatically rewrites scalar code into vector code (whenever possible) using AltiVec in the not too distant future!