Comments
I wouldn't be so sure. The point of the Core frameworks is to provide an abstraction so that developers don't have to worry about the specifics of the hardware implementation. If you update a framework to work on another piece of hardware, every program that uses that framework should be able to take advantage of it without being changed. Maybe it doesn't work that well in practice, but that's the basic idea.
I would say: CUDA is for General Purpose Computing on the GPU.
It does nothing for tasks at which graphics chips already excel, like shaders.
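For context, "General Purpose Computing on the GPU" here means writing ordinary data-parallel C code that the GPU executes directly, with no graphics API, textures, or render targets involved. Here's a minimal sketch of what that looks like in CUDA; it's a made-up element-wise vector add, not an example from this thread:

```
#include <cstdio>
#include <cuda_runtime.h>

// One thread computes one output element: plain C code, no shaders involved.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and copies
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```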
Obviously, it depends on the task, but some tasks will get a 2x performance improvement using CUDA compared to shaders (on top of the 3x improvement going from G7x to G8x): some tasks more, some tasks less.
http://forums.nvidia.com/index.php?s...4&#entry161304
Plus, it's easier to write for CUDA than for shaders, and more tasks will benefit, so it's win-win.
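To make the "easier to write" point concrete, here's a sketch (my own made-up example, not from the linked NVIDIA forum post) of a block-wise sum reduction. In CUDA you get shared memory and barrier synchronisation; with shaders the same job means packing data into textures and ping-ponging between render targets over several passes:

```
// Hypothetical block-wise sum reduction; assumes blockDim.x is a power of two.
__global__ void blockSum(const float* in, float* blockSums, int n)
{
    extern __shared__ float s[];            // per-block scratch, sized at launch
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    s[tid] = (i < n) ? in[i] : 0.0f;        // load one element per thread
    __syncthreads();

    // Tree reduction within the block; fragment shaders have no equivalent of
    // shared memory or __syncthreads(), so this takes multiple passes there.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            s[tid] += s[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        blockSums[blockIdx.x] = s[0];       // one partial sum per block
}
```

A launch would look something like `blockSum<<<numBlocks, 256, 256 * sizeof(float)>>>(d_in, d_partial, n);`, followed by a second pass (or a host-side loop) to combine the per-block partial sums.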
That's great, but you've entirely missed my point.
We're talking about shaders. If what you're doing is a shader, writing for CUDA is stupid: you already have the shader.
Yes, other tasks do get performance benefits by using CUDA rather than shaders. But shaders don't need that overhead, and will remain shaders.