Originally posted by TenoBell
I'm not talking about the ability to manipulate the image. I'm talking about taking a basic RAW picture, rendering it through different RAW converters from different manufacturers, and objectively evaluating the results.
Each RAW converter has to place the image inside a color space container with some kind of limited color gamut. The only way to have a fully objective evaluation of each rendering is to have a starting point of neutrality with respect to color and contrast.
There needs to be some point for neutral skin tone reproduction, neutral color, and neutral gray scale. The only way to do this would be for all RAW converters to work under the same color space.
From what I recently learned, these RAW converters have three different color space gamuts they could work under: Adobe RGB, sRGB, and ProPhoto RGB. The fact that they may not use the same color space, to me, completely devalues any real evaluation between the converters. The histograms of all the renders will be completely different.
My next question would be: if all of the converters did use the same color space, such as Adobe RGB, would the luminance and chrominance values mean the same thing between all of them?
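To make the histogram point concrete, here's a minimal sketch in Python. It only compares the two spaces' transfer curves (constants from the published sRGB and Adobe RGB (1998) specs) and ignores the primaries/matrix step, so it's an illustration, not a full color conversion. The same linear shadow value lands on different 8-bit numbers in each space, which is exactly why the histograms won't line up:

```python
# Why the "same" pixel value differs between working spaces:
# each space applies its own transfer curve before the 8-bit encode.

def encode_srgb(linear):
    """sRGB transfer curve (IEC 61966-2-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def encode_adobe_rgb(linear):
    """Adobe RGB (1998) transfer curve: pure gamma of 563/256 (~2.2)."""
    return linear ** (256 / 563)

linear_value = 0.01  # a deep-shadow value in linear light
srgb_8bit = round(encode_srgb(linear_value) * 255)
adobe_8bit = round(encode_adobe_rgb(linear_value) * 255)
print(srgb_8bit, adobe_8bit)  # 25 vs 31 -- different histogram bins
```

Same scene data, different numbers, so comparing histograms across converters that use different working spaces tells you about the encoding, not the rendering.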
On top of that, each converter applies sharpening and contrast to hide noise and digital artifacts in the picture. From what I can see, this is primarily what most of the evaluation is based upon: which converter most effectively hides defects.
Under this type of evaluation Aperture 1.0 did the worst job at that. But it was probably the one showing the most honest truth about the way the picture really looked.
I tried to reply to your earlier post before, but after I had written a fair amount, it crashed, so I gave up for the time being.
Here's the point about the colorspaces. I'm sure you understand the problem with a limited colorspace. Each device has its own limitations. This is true for film of all types as well, of course.
What colorspaces do is allow matching between various output devices and the original material.
If the colorspace is wrong, the result can be oversaturated colors, truncated colors, banding, or all three. At other times it can result in excessively muted colors. It also produces incorrect contrast, incorrect white and black points, etc.
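Here's a toy illustration of what "truncated colors" means numerically. The values are made up for the example (not from any real converter): three distinct saturated reds that fit in a wide gamut land above 1.0 when mapped into a smaller space, and a naive conversion just clamps them:

```python
# Gamut truncation: out-of-range channel values get clamped,
# collapsing several distinct colors into one flat value.

def clamp(channel):
    return max(0.0, min(1.0, channel))

# Hypothetical red-channel values after converting wide-gamut colors
# into a smaller space; anything above 1.0 is out of gamut.
wide_gamut_reds = [1.02, 1.10, 1.25]  # three distinct colors
clamped = [clamp(r) for r in wide_gamut_reds]
print(clamped)  # [1.0, 1.0, 1.0] -- the differences are gone
```

Once neighboring colors collapse to the same value like this, smooth gradients turn into flat patches, which is where the banding comes from.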
Adobe RGB has been a high-end standard for a while. MS and HP created the sRGB standard to match the cheap 14" color monitors that were being used at the time on most PCs.
As most people use PCs, sRGB became a standard for many files meant to be reproduced mainly on them, such as web output.
Unfortunately, many companies began to use it inappropriately, simply because it was easy to do.
When you resolve a RAW image file (or any other image file), you have to know what the output will be. For web use, or for any use that is expected to have a small colorspace, such as a laser printer, sRGB is fine. But for print, or inkjet, or dye-sub, or the Fuji Pictrography, or any other high-quality output, you would use Adobe RGB, though there are some other colorspaces that are sometimes used. Then there is CMYK, of course, or Hexachrome.
Work on digital files or scans is just as precise as work with film. In fact, I've always thought that film was distinctly less precise. I can often see changes between cameras in a scene, or scene to scene. A great deal of film work also depends on a trained eye. The vagaries of film and processing ensure that no two film prints will ever be the same.
Kodak specs the film at ±20CC (CMY) color, and ±1/3 stop, run to run. Even with filtering in the printers, that can't be reduced to below about ±5CC and ±1/5 stop. Processing adds to that uncertainty.
We monitored our machines very closely, running control strips every two hours, but, some inconsistency always crept in.
This is a VERY complex area, far too complex to receive a fair hearing here.
All I can say is that I also thought that Aperture (ver 1) was giving a less processed file. But when I took my own pictures and processed them in all three converters, Apple's were the least close to what I had shot, here at home, in controlled conditions.
When I brought the darker areas of the pics done in the other converters up in brightness to match Aperture's results, they also gained noise, but not the artifacting. Apple is processing to bring out details in shadows that other converters are not. In doing that, it is bringing the noise level up as well. But it also brought artifacts into the file.
This is not unusual for processing. An example is Digital ICE, used mostly in scanners. It does remove scratches and dust, but it also changes parts of the file. What Aperture 1 was doing, it seemed to me, was giving the file an adjustment similar to PS's Highlight/Shadow control, but without the finesse.
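As a toy model of why that kind of shadow lift drags noise up with the signal (my own sketch, not Aperture's actual algorithm), a blunt gain on dark pixels multiplies their noise right along with their brightness:

```python
import random

random.seed(1)

def noisy_shadow_pixels(signal=0.05, noise_sigma=0.01, n=1000):
    """Simulated dark pixels: a small signal plus Gaussian sensor noise."""
    return [signal + random.gauss(0.0, noise_sigma) for _ in range(n)]

def lift_shadows(pixels, gain=3.0):
    """Naive shadow lift: multiply by a gain and clip to [0, 1]."""
    return [min(1.0, max(0.0, p * gain)) for p in pixels]

def spread(values):
    """Crude noise estimate: standard deviation of the pixel values."""
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

pixels = noisy_shadow_pixels()
lifted = lift_shadows(pixels)
print(spread(pixels), spread(lifted))  # the spread grows ~3x with the gain
```

A smarter tool tapers the gain and smooths as it goes, which is the "finesse" part; a flat multiply like this makes shadow detail and shadow noise louder in equal measure.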