Who's faster
Just ran a quick "real life" speed test on a Mac Pro. Same scene rendered in Modo (which supposedly has the same code in the Windows and Mac-Intel versions). The Windows version finished in 13:13.9, the OSX version in 14:24.8. Same hardware, same application. OSX is about 10% slower.
Of course, that might be Modo's fault, but still... I wonder if anyone conducted similar tests?
Comments
Of course, that might be Modo's fault
QFT.
Still, how do you run Windows and OS-X on the same hardware? Do you use Boot Camp (buggy) or did you use Parallels (definitely slower Windows performance)? At the end of the day, a 10% performance hit isn't too bad for a more secure system.
Just ran a quick "real life" speed test on a Mac Pro. Same scene rendered in Modo (which supposedly has the same code in the Windows and Mac-Intel versions). The Windows version finished in 13:13.9, the OSX version in 14:24.8. Same hardware, same application. OSX is about 10% slower.
Of course, that might be Modo's fault, but still... I wonder if anyone conducted similar tests?
Windows games seem to run better than the Mac equivalents but it's mostly due to software. Once the Intel platform is more settled and developers feel less need to support PPC, more time can be spent optimizing for the Intel chips on the Mac. Did you run the benchmark multiple times and average to get the 10% difference?
which supposedly have the same code in Windows and Mac-Intel versions
Not possible.
Windows code will call into Windows API, and Mac code will call into the OS X API.
What they probably did is take the Windows code and kludge it up to call into some intermediate custom API that supposedly translates into the Mac API. This approach is always wasteful and doesn't take advantage of all the optimization Apple has put into their API.
Photoshop is a great example of this, and MS Office is another.
Just ran a quick "real life" speed test on a Mac Pro. Same scene rendered in Modo (which supposedly has the same code in the Windows and Mac-Intel versions). The Windows version finished in 13:13.9, the OSX version in 14:24.8. Same hardware, same application. OSX is about 10% slower.
Of course, that might be Modo's fault, but still... I wonder if anyone conducted similar tests?
What other programs were running on the hardware at the time? How many times did you perform the same test? For things like speed tests, you need to run them about 40 times for the result to be statistically meaningful.
About how many times... there's no reason to run it 40 times, I think. But I can run it again a couple of times. You know, actually all tests suck. There's always something to argue about: API, background processes, time of day, and so on, it's endless. What matters to me is that my usual everyday task runs slower on OSX, that's it. If it's 10 minutes against 11, that's fine. But if my render takes 10 hours against 11, I will probably run it in XP.
No programs in either case. XP is a fresh, clean install, so there are no additional background processes. OSX is not so fresh, but again, no running applications except Modo, and I don't use any kernel extensions or hacks or whatever they're called.
About how many times... there's no reason to run it 40 times, I think. But I can run it again a couple of times. You know, actually all tests suck. There's always something to argue about: API, background processes, time of day, and so on, it's endless. What matters to me is that my usual everyday task runs slower on OSX, that's it. If it's 10 minutes against 11, that's fine. But if my render takes 10 hours against 11, I will probably run it in XP.
40 is the typical number needed to get a 3-5% margin of error (I forget the actual percentage). Most benchmarks run about this many times.
If you run the test multiple times and get exactly the same result each time, then 40 is overkill. Typically with real-world tests, though, results vary wildly for no apparent reason. That's why you have to repeat the process and average the results. Most serious benchmarkers have scripts to automate this.
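Such an automation script can be very small. Here's a minimal sketch in plain shell; `CMD` is a placeholder (the `sleep 1` default is only so the script runs as-is) and you would substitute your actual render command. It averages whole seconds, which is coarse, but plenty for spotting a 10% gap on a 13-minute render:

```shell
#!/bin/sh
# Run a command N times and print the average wall-clock time.
# CMD is a placeholder: substitute your real benchmark command.
CMD=${CMD:-"sleep 1"}
N=${N:-5}

total=0
i=1
while [ "$i" -le "$N" ]; do
    start=$(date +%s)          # wall-clock timestamp before the run
    sh -c "$CMD"
    end=$(date +%s)            # and after
    total=$((total + end - start))
    i=$((i + 1))
done

echo "average: $((total / N)) seconds over $N runs"
```

For finer resolution you could wrap the command in `time`, or use `date +%s%N` where the system's `date` supports nanoseconds; whole-second averaging is fine for multi-minute renders.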
Does OSX have greater OS overhead? Maybe, but that same lack of a "lock" on the CPU is what makes OSX so usable while multitasking. AFAICT, XP doesn't multitask any more than OS 9 did.
For this reason, I'm gonna get me a Mac Pro, Quad core to render on. Even with all 4 cores rendering 1.5 threads each, the system will still be instantaneously responsive.
Of course, I need a Universal Binary Maya first...
Having said all that, I hear you brother. Maybe having a dedicated windows machine for rendering is the best bang/buck for you?
Maya and Lightwave are PPC-only.
Softimage and Max are Windows-only.
I could write a very long and very boring rant about how Apple should have bought Maya when Alias could have been purchased with the small change down the back of Apple's sofa. But I will spare you the rant.
Modo and Cinema 4D are the only choices you have, and the Luxology guys are pretty pro-Mac; I can't see them failing to optimize the renderer.
A 10% difference sounds believable. It would be interesting to find out what engineering issues result in that difference, and what could be done about it.
C.
I run Modo mostly in OSX. And there's another thing that I don't like: you can't force antialiasing in OSX. In XP I can open the nVidia tools and force AA for all applications, and in Modo all the wireframes look nice and smooth. In OSX you don't have that option. Hopefully Leopard will have better GL.
OS X is probably slower. Deal with it.
Placebo-- does your name come from the healing words you offer?
BTW, is there any way to set/change priorities of processes in OSX?
OS X has a built-in way, but this GUI for it is nicer, so to speak.
http://www.macupdate.com/info.php/id/9579
BTW, is there any way to set/change priorities of processes in OSX?
Yes, the command "renice" from the terminal. You can read its manual page with "man renice".
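A quick sketch of what that looks like; the background `sleep 30` below is just a stand-in for a long-running process such as a render. Nice values run from -20 (highest priority) to 20 (lowest), and only root can lower a process's nice value:

```shell
#!/bin/sh
# Start a stand-in for a long-running render job in the background.
sleep 30 &
pid=$!

# Drop its priority so the rest of the system stays responsive.
renice 10 -p "$pid"

# Confirm the new nice value with ps.
newnice=$(ps -o nice= -p "$pid" | tr -d ' ')
echo "process $pid now has nice value $newnice"

kill "$pid"
```

To start a job at low priority in the first place, launch it with `nice -n 10 your_command` instead of adjusting it after the fact.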
I'm just saying that, in both "feel" and in benchmarks, OS X is usually slower. Blaming it on "optimization" is the classic apologism we went through with the G4 and the G5.
Disagree.
I'd say that OSX generally delivers a more responsive user interface, even on very slow machines under a high load. I only just upgraded from a 1GHz G4 because it was so rarely sluggish. You can have a ton of apps open and flick between them like crazy.
Windows machines running very processor intensive tasks seem to give every resource to that task. Which of course mean that they complete the task faster. But responsiveness to the UI goes out of the window. Especially when chunks of the OS get paged out of memory. Most Windows machines I have used seem to take 30 to 40 seconds after the desktop appears to become responsive.
Which of course means that each is better for different jobs. For single tasking heavy apps, Windows would give you better performance. But for multi-tasking productivity apps - I'd much prefer OS X. This stuff rarely gets measured in any objective way.
The big exception is network time-outs. If OSX performs a network action that takes a long time to complete (for instance, when the network resource is unavailable), then the responsiveness of everything collapses and we see a world of beachballs.
C.