Comments
Originally posted by Placebo
It's properly designed 2D apps that can benefit hugely from these cards. Have you ever played around in Core Image Funhouse with a 9800 or better graphics card? Compared to performing the same blurs and distortions in Photoshop or Fireworks, it's ridiculously fast. Literally as fast as you can drag the blur slider.
Sadly, Adobe don't use Core Image, so your point is moot for most graphic design users. I suspect Adobe still won't use Core Image in CS3 either.
I'm also not sure whether Core Image is accurate enough for actual print or high-res design work, but I've not dug about in the developer docs to see if it supports the colourspaces and sizes used in traditional design.
As far as I've been aware, it's a neat trick for previews and effects on screen, or even for just flinging pixels about when you really aren't that bothered about accuracy. How you mix that creative freedom with the often constrained structures of a design grid and Pantone swatches, I don't know.
...As far as I've been aware, [Core Image is] a neat trick for previews and effects on screen, or even for just flinging pixels about when you really aren't that bothered about accuracy. How you mix that creative freedom with the often constrained structures of a design grid and Pantone swatches, I don't know...
I think that, in terms of its contribution to computer graphics in general, Core Image is a worthwhile piece of R&D out of Cupertino.
I'm not into 3D at the moment, but 3D graphics faces a similar situation: how much of the work done on the GPU is actually used in the final render passes? Possibly not much, but as 3D cards approach photorealism, the "hand-off" of 3D renders to the CPU will become more streamlined.
Maybe one day we'll see a similar thing for 2D graphics. Suffice it to say that if Core Image-type abilities become more accurate, moving towards a fully non-destructive, GPU-accelerated, node-based workflow in a Photoshop-like application would be a massive step for graphic artists.
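Core Image already gives a taste of that node-based, non-destructive idea: filters are just descriptions chained onto an immutable CIImage, and nothing gets rasterised until a context is asked for pixels. A minimal sketch, using today's Swift API (the file path, filter names and values here are only illustrative, not anything from this thread):

    import CoreImage
    import Foundation

    // Nothing below touches the source pixels; each step only describes work.
    let source = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/input.png"))!

    let graded = source
        .applyingFilter("CIColorControls",
                        parameters: [kCIInputSaturationKey: 1.2, kCIInputContrastKey: 1.1])
        .applyingFilter("CIGaussianBlur",
                        parameters: [kCIInputRadiusKey: 8.0])

    // Only here does the GPU (or a CPU fallback) actually run the kernels.
    let context = CIContext()
    let output = context.createCGImage(graded, from: source.extent)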
Originally posted by JeffDM
I'm not trying to prove anything; if I'm wrong, I'd like to know why. All I have is an audio interview with a Final Cut plug-in developer who said that the output is sometimes slightly unpredictable and not as good from a professional-quality standpoint. I don't even know how I could dig that up. I think it makes sense given how long it takes to render video when the preview is done in real time.
This isn't about the CPU rendering the final output...I wanna know who told you the 'output' from the GPU is 'unpredictable'.
Originally posted by kim kap sol
This isn't about the CPU rendering the final output...I wanna know who told you the 'output' from the GPU is 'unpredictable'.
The GPU doesn't use high precision.
Originally posted by Chucker
The GPU doesn't use high precision.
It doesn't? What makes you say that?
Could you imagine if Aperture were WYSINWYG (what you see is not what you get)?
Originally posted by kim kap sol
Could you imagine if Aperture were WYSINWYG?
The quality is more than good enough for screen display.
Originally posted by Chucker
The quality is more than good enough for screen display.
What's high precision to you?
Almost all modern GPUs are 32-bit. Is that not good enough for print? It's definitely good enough for video.
Actually, I think I'm totally off base with Core Image and its suitability for use in Photoshop.
http://developer.apple.com/macosx/coreimage.html makes its intended purpose and accuracy pretty clear. How Core Image renders the image on screen, however, might be an issue if the GPU isn't as accurate as doing it longhand with the CPU, but that wouldn't stop Adobe, say, from using CI for doing the hard work; then it's just up to Apple to get its CI code right.
I wonder, though, if Adobe would junk all its filter and effect code, not to mention its layers code, on the Mac to use CI when there's nothing like that on Windows (at least not before Vista). It would make Photoshop an absolute screamer of a product on the Mac.
Originally posted by aegisdesign
Actually, I think I'm totally off base with Core Image and its suitability for use in Photoshop.
http://developer.apple.com/macosx/coreimage.html makes its intended purpose and accuracy pretty clear. How Core Image renders the image on screen, however, might be an issue if the GPU isn't as accurate as doing it longhand with the CPU ...
...but it is as accurate. I dunno if that was what you were implying.
Quote:
Core Image changes the game. Developers can now easily create real-time capable image processing solutions that automatically take full advantage of the latest hardware without worrying about future architectural changes. Even better, Core Image can perform its processing using 32-bit floating point math. This means that you can work with high bit-depth images and perform multiple image processing steps with no loss of accuracy.
Quote:
Because Core Image uses 32-bit floating point numbers instead of fixed scalars, it can handle over 10^25 colors. Each pixel is specified by a set of four floating point numbers, one each for the red, green, blue, and alpha components of a pixel. This color space is far greater than the human eye can perceive. This level of precision in specifying color components allows image fidelity to be preserved even through a large number of processing steps without truncation.
Quote:
As you have seen, Core Image changes the game of image processing. It gives application developers the ability to create applications that can fully utilize the performance and capabilities of modern graphics hardware. It allows for manipulation of deep bit images with incredible accuracy and color fidelity. And finally, Image Units defines a new way to share image processing capabilities between applications and paves the road for a marketplace of plug-ins that can be used by any image processing application on the system that supports Core Image.
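For what it's worth, that 32-bit-float path is something you can poke at directly: ask the context for a float working format and read the result back as floats instead of 8-bit samples. A rough sketch, again in today's Swift API (the input path and blur radius are made up, and an integral image extent is assumed):

    import CoreImage
    import Foundation

    // Keep intermediates in 128-bit RGBAf (32-bit float per channel).
    let context = CIContext(options: [.workingFormat: CIFormat.RGBAf])

    let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/input.png"))!
    let blurred = input.applyingFilter("CIGaussianBlur",
                                       parameters: [kCIInputRadiusKey: 4.0])

    // Read the pixels back as floats so nothing is truncated to 8 bits on the way out.
    let bounds = input.extent
    var pixels = [Float](repeating: 0, count: Int(bounds.width) * Int(bounds.height) * 4)
    pixels.withUnsafeMutableBytes { buffer in
        context.render(blurred,
                       toBitmap: buffer.baseAddress!,
                       rowBytes: Int(bounds.width) * 4 * MemoryLayout<Float>.size,
                       bounds: bounds,
                       format: .RGBAf,
                       colorSpace: nil)
    }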
Originally posted by aegisdesign
Sadly, Adobe don't use Core Image, so your point is moot for most graphic design users. I suspect Adobe still won't use Core Image in CS3 either.
I'm also not sure whether Core Image is accurate enough for actual print or high-res design work, but I've not dug about in the developer docs to see if it supports the colourspaces and sizes used in traditional design.
As far as I've been aware, it's a neat trick for previews and effects on screen, or even for just flinging pixels about when you really aren't that bothered about accuracy. How you mix that creative freedom with the often constrained structures of a design grid and Pantone swatches, I don't know.
Sadly, Adobe didn't use AltiVec until they announced it. It could happen. I think it could be beneficial to their users, which is why Adobe used AltiVec.
Originally posted by kim kap sol
...but it is as accurate. I dunno if that was what you were implying.
I was stating that Core Image is accurate enough. The GPU isn't necessarily accurate enough.
At the moment, the responsibility for accuracy rests entirely with Adobe, and presumably they like it that way, as it's their reputation on the line. If they used CI, the responsibility would lie with Apple and the card manufacturer.
Originally posted by onlooker
Sadly, Adobe didn't use AltiVec until they announced it. It could happen. I think it could be beneficial to their users, which is why Adobe used AltiVec.
It could absolutely ROCK if Adobe used it, as it would totally SCREAM compared to CS2 or the Windows version, at least pre-Vista; Vista, IIRC, has something similar to Core Image. It's not as simple a change as dropping in a new rendering engine, though, like they did with AltiVec and the G5. They'd have to code up Image Units for each of their layer/filter/effect tools. Most designers also rely on an army of add-on effects like Alien Skin too.
Maybe that's why it's taking so long for a Universal Binary. Maybe Adobe just decided that missing out on CI or whatever they have on Windows Vista would be stupid.
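For a sense of what "coding up Image Units" for every tool would involve: each effect boils down to a small per-pixel kernel plus the packaging around it. A toy sketch using the (since-deprecated) Core Image Kernel Language route via CIColorKernel; the shadow-lift effect and its names are invented purely for illustration, not anything Adobe ships:

    import CoreImage

    // Hypothetical per-pixel effect: lift the shadows by a constant amount.
    // The kernel string is Core Image Kernel Language, compiled for the GPU.
    let kernelSource = """
    kernel vec4 liftShadows(__sample s, float amount) {
        return vec4(s.rgb + vec3(amount) * (vec3(1.0) - s.rgb), s.a);
    }
    """
    let liftKernel = CIColorKernel(source: kernelSource)!

    // Wrap it like any other filter step; argument order must match the kernel.
    func liftingShadows(_ image: CIImage, by amount: CGFloat) -> CIImage? {
        return liftKernel.apply(extent: image.extent, arguments: [image, amount])
    }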
Originally posted by aegisdesign
I was stating that Core Image is accurate enough. The GPU isn't necessarily accurate enough.
OK... that's a given. Limitations come from the hardware. I haven't seen anything other than a 32-bit-precision GPU in three years, though.
Originally posted by aegisdesign
Maybe that's why it's taking so long for a Universal Binary. Maybe Adobe just decided that missing out on CI or whatever they have on Windows Vista would be stupid.
"Taking long"? It's not taking long. It's a perfectly normal Adobe CS release cycle. It just so happened that, because they're money-seeking pricks, they refuse to release the current(!) version as Universal Binary, unlike, oh, pretty much anyone else.
Originally posted by Chucker
"Taking long"? It's not taking long. It's a perfectly normal Adobe CS release cycle. It just so happened that, because they're money-seeking pricks, they refuse to release the current(!) version as Universal Binary, unlike, oh, pretty much anyone else.
I don't think it's even been ported to Mach-O, and I think it's taking so long because they didn't move it to Mach-O when Apple advised it over a year (almost two) ago.
Originally posted by onlooker
I don't think it's even been ported to Mach-O, and I think it's taking so long because they didn't move it to Mach-O when Apple advised it over a year (almost two) ago.
Well, they do use bundles, which is usually a good indication of Mach-O (I'm not sure CFM in a bundle is even possible). However, IIRC, they still use CodeWarrior, despite it having been clear for years now that CodeWarrior on Mac OS isn't going to see much of a revival. They should have listened to Apple about moving to Xcode. They decided not to. They keep claiming Xcode isn't mature enough, but you can easily tell how much bullshit that is by the fact that Apple, Omni and many other software companies have been able to ship insanely complex software quite well. I have no respect for it.
I was stating that Core Image is accurate enough. The GPU isn't necessarily accurate enough.
We seem to be confused on this point. How is Core Image implemented? How is the GPU "not" being accurate? Looks like we need to explore this point more.
Evidence of iMovie and Final Cut still requiring CPU renders may mean Core Image filters are coded to be fast but less accurate.
Does this mean that Core Image could still produce accurate-enough, on-the-fly, GPU-driven renders if coded to do so?
Just because some Core Image filters are coded to be not-so-accurate doesn't mean Core Image lacks the potential to be accurate, I think.
Originally posted by sunilraman
We seem to be confused on this point. How is Core Image implemented? How is the GPU "not" being accurate? Looks like we need to explore this point more.
The basic point is that 3D apps usually use the GPU for fast preview rendering, using OpenGL acceleration, but then use the CPU for precise final renders.
Core Image uses OpenGL (relying on ARB_fragment_program, for instance). Thus, it seems reasonable to assume that Core Image suffers the same precision problem.
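One crude way to test that assumption: Core Image can be forced onto its software (CPU) renderer, so you can run the same filter chain through a GPU-backed context and a CPU context and diff the two results. A rough sketch, again with today's Swift API and a made-up input path:

    import CoreImage
    import Foundation

    let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/input.png"))!
    let filtered = input.applyingFilter("CIGaussianBlur",
                                        parameters: [kCIInputRadiusKey: 6.0])

    // Same chain, two contexts: one left on the GPU, one forced onto the
    // CPU software renderer.
    let gpuContext = CIContext(options: [.useSoftwareRenderer: false])
    let cpuContext = CIContext(options: [.useSoftwareRenderer: true])

    // Render both and compare the bitmaps offline; any difference is the
    // GPU-versus-CPU precision gap being argued about here.
    let gpuResult = gpuContext.createCGImage(filtered, from: input.extent)
    let cpuResult = cpuContext.createCGImage(filtered, from: input.extent)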
Originally posted by kim kap sol
OK... that's a given. Limitations come from the hardware. I haven't seen anything other than a 32-bit-precision GPU in three years, though.
Perhaps we're at the tipping point, then, where GPU maths is good enough to rely on. Apple seem to think so.