If the GPUs are doing the processing for filters, etc., how does that help with rendering and creating a movie file, where the effects need to be part of the video stream in the data file?
Apple hasn't provided technical data on how this works yet because the only app that currently uses Core Video is Motion and that's not shipping yet.
My guess is that, like RT effects, you're working on a proxy inside the card and applying effects to it in real time in a nondestructive way. Once you decide to save the file and its edits, the app then processes the changes on the larger dataset.
This is total conjecture from me, based on no empirical evidence whatsoever. It just seems like this technology is similar to other apps that use proxies for fast nondestructive editing.
I await the possibility of a pdf explaining this in terms that are far over my head.
As far as I can tell, they simply substitute the GPU for the CPU for this sort of processing. Traditionally it's like this:
Code:
Hard disk -> RAM -> CPU -> RAM -> Hard disk
CoreImage/CoreVideo have them like this:
Code:
Hard disk -> RAM -> CPU -> GPU [ -> CPU ] -> RAM -> Hard disk
and the CPU is used only to prepare (compile) the sequence of actions to be performed by the GPU.
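To make that split concrete, here is a minimal sketch using the modern Swift Core Image API (which postdates this thread but illustrates the same idea); the filter names, parameter values, and file path are purely illustrative. The CPU only describes the filter chain; the pixel work is compiled into a GPU program and executed when the context renders.
Code:
import CoreImage

// Illustrative only: build a small filter chain on the CPU.
// No pixel processing happens here -- a CIImage is just a recipe.
let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/frame.png"))!

let blur = CIFilter(name: "CIGaussianBlur")!
blur.setValue(input, forKey: kCIInputImageKey)
blur.setValue(4.0, forKey: kCIInputRadiusKey)

let sepia = CIFilter(name: "CISepiaTone")!
sepia.setValue(blur.outputImage!, forKey: kCIInputImageKey)
sepia.setValue(0.8, forKey: kCIInputIntensityKey)

// Only at render time does Core Image compile the chain and run it
// (on the GPU when one is available), then hand pixels back.
let context = CIContext()   // GPU-backed by default
let output = sepia.outputImage!
let rendered = context.createCGImage(output, from: output.extent)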
The thing that keeps me wondering is whether ImageUnits will run in emulated mode on older machines. For example, say someone builds a Photoshop-killer app around CoreImage with ImageUnit plug-ins. Will that app run at all on a G4 with an old GPU, with all the processing done on the CPU (since the old GPU doesn't support these functions)? Or will it just not run? If I'm not terribly mistaken, there is no algorithm on earth that can be implemented on a GPU but not on a CPU. CPUs are very general-purpose, and though they are too slow for certain tasks at which GPUs excel, they should nonetheless be able to run the same CoreImage libraries. What do you think?
I seem to remember it being implied that the effects could scale to the available hardware.
Hopefully, it will be possible to specify which features are dropped and which are picked up by the CPU when the GPU isn't able to do them. This would allow the developer and/or user to choose between speed and fidelity.
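For what it's worth, the Core Image API does expose an explicit CPU-only path, so a sketch of letting the app (or user) pick between the two might look like the following. The useSoftwareRenderer context option is from the public API; the helper function and its preferGPU parameter are hypothetical, and the check for whether the GPU is actually capable is app-specific and not shown.
Code:
import CoreImage

// Hypothetical helper: choose a renderer based on whether capable
// GPU hardware is available / desired.
func makeContext(preferGPU: Bool) -> CIContext {
    if preferGPU {
        // Default context: Core Image targets the GPU when it can.
        return CIContext()
    } else {
        // Explicitly fall back to the CPU-based software renderer.
        return CIContext(options: [.useSoftwareRenderer: true])
    }
}

let context = makeContext(preferGPU: false)   // force the CPU path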