The reason I plugged Dell is that Apple openly treats Dell as their arch nemesis, or rival, in prebuilt machines, IMHO.
Personally, in the workstation marketplace I was a fan (and owner) of Alienware for their price and performance vs. BOXX, but then - you know who - (Dell) noticed what a great thing they had going and decided to scoop them up. I was a little pissed off about that one, but I think Apple can now become a real force in the workstation marketplace for all things, with what they "can possibly" offer that no one else can.
You're a smart guy, onlooker... I never understood why you were so intrigued by Alienware when you could build your own machines. Either way, I was never a fan of Alienware... but now that Dell has acquired them, they're dust. On top of that, what Dell has done with their technology is a disgrace. Have you seen the new Dellienware computers? *throws up*
Rendering the image is what it's all about, isn't it? Obviously the modeling and animation parts of the application aren't what you would be developing it for.
You would be incorporating it into the render engine. Wouldn't you be using the lib to influence how, and what percentage of, the CPU and GPU were extracting your data through the image codec? Maybe dual Nvidia GPUs under an SLI bridge can render an image much faster than a CPU. Doesn't the lib already determine what percentages can, or need to, be offloaded in other similar situations?
The functions and objects that are available through the Core Graphics library don't talk directly to the GPU. In other words, there is a ton of code that Apple wrote to make Core Graphics work. You create an object in place of an image. That object has the under-the-hood ability to talk to the GPU... it's transparent to the dev, really. But that object also adds some functions to manipulate it. In other words, there isn't a clean way to say "use xx percent of the GPU vs. the CPU" when creating this image. It basically does all of that for you.
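To put it in rough code terms: here's a minimal Swift sketch using Core Image (the GPU-accelerated sibling of Core Graphics) of what "it does all of that for you" means. The file path and filter are just placeholders, not anything from a real render pipeline; the only knob you get is a coarse software-renderer switch, not a percentage split.

```swift
import Foundation
import CoreImage

// Hypothetical input image; the path is a placeholder.
let url = URL(fileURLWithPath: "/tmp/render.png")

if let image = CIImage(contentsOf: url),
   let blur = CIFilter(name: "CIGaussianBlur") {
    blur.setValue(image, forKey: kCIInputImageKey)
    blur.setValue(4.0, forKey: kCIInputRadiusKey)

    // Default context: the framework decides where the work runs
    // (GPU acceleration when available) -- you never specify a split.
    let context = CIContext()

    // The closest thing to manual control is an all-or-nothing switch
    // forcing the CPU path. There is no "60% GPU / 40% CPU" option.
    let cpuOnly = CIContext(options: [.useSoftwareRenderer: true])

    if let output = blur.outputImage {
        _ = context.createCGImage(output, from: output.extent)
        _ = cpuOnly.createCGImage(output, from: output.extent)
    }
}
```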
I suppose during the hardcore renderings you could offload the creation of the image to the GPU... but honestly, I don't see that saving a lot of time. At least in Cinema 4D (the only one I'm truly familiar with rendering in), it renders line by line. With dual processors, one starts at the top and the other halfway down, and each goes... line by line. Usually those renderings take some hardcore CPU time per line. So you offload a line to the GPU... it might take as much, or close to as much, time as just having the CPU handle it. Then the final image is made... then what? Is it being manipulated afterwards? If so, then yeah, Core Graphics can help... but unless it's being further manipulated once the rendering is done, I fail to see how it can help. Maybe I'm missing something?
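For what it's worth, the "dual processor starts at the top and halfway down" split looks roughly like this as a toy Swift sketch. renderScanline is a hypothetical stand-in for whatever per-row work the engine actually does; the point is just that the frame gets carved into contiguous bands of rows, one per core.

```swift
import Foundation
import Dispatch

let imageHeight = 1080
let cpuCount = ProcessInfo.processInfo.activeProcessorCount

// Stand-in for the real per-scanline work (ray casting, shading, etc.).
func renderScanline(_ y: Int) {
    // ...heavy per-row computation would go here...
}

// Split the frame into one contiguous band of rows per core,
// e.g. two CPUs -> top half and bottom half, rendered in parallel.
DispatchQueue.concurrentPerform(iterations: cpuCount) { chunk in
    let rowsPerChunk = (imageHeight + cpuCount - 1) / cpuCount
    let start = chunk * rowsPerChunk
    let end = min(start + rowsPerChunk, imageHeight)
    for y in start..<end {
        renderScanline(y)
    }
}
```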
Either way, have you used Cinema on a PC with SLI? If so, how much did it help during modeling? Do you think Alias and Maxon would have to implement code for SLI in the Mac version if Macs went SLI?