Apple's Intel Aperture 1.1 Update pushed back


Comments

  • Reply 101 of 111
    sunilraman Posts: 8,133 member
    How does motion film color software work?
  • Reply 102 of 111
    kim kap sol Posts: 2,987 member
    Quote:

    Originally posted by sunilraman

    Nice. Calling things "gay" is so last century, dude. WTF



    Correction...'ghey'...it's this century's way of calling things.
  • Reply 103 of 111
    tedndi Posts: 1,921 member
    Quote:

    Originally posted by sunilraman

    Heh. Not a big Sinatra fan myself. Too old skool. You're showing your age, Mel. You should be listening to stuff on this website: www.di.fm (the top several channels, not the channels at the bottom)



    DUF DUF DUF DUF DUF DUF DUF w00t !!!!!!!!!!!!!



    That's a pretty sweet site! Thanks!!!!!



    You should still know who Frank Sinatra is though.



    Otherwise you are just ghey!
  • Reply 104 of 111
    tenobell Posts: 7,014 member
    Quote:

    If the RAW files as interpreted by the software have enough resolution and minimal noise and a wide enough latitude of colors and textures, and the software is able to provide good tweaking and organising



    I'm not talking about the ability to manipulate the image. I'm talking about taking a basic RAW picture, rendering it through different RAW converters from different manufacturers, and objectively evaluating the results.



    Each RAW converter has to place the image inside a color space container with some type of limited color gamut. The only way to have a fully objective evaluation of each rendering is to have a starting point of neutrality with respect to color and contrast.



    There needs to be some reference point for neutral skin tone reproduction, neutral color, and neutral gray scale. The only way to do this would be for all RAW converters to work under the same color space.



    From what I recently learned, these RAW converters have three different color spaces they can work under: Adobe RGB, sRGB, and ProPhoto RGB. The fact that they may not use the same color space, to me, completely devalues any real evaluation between the converters. The histograms of all of the renders will be completely different (see the sketch at the end of this post).



    My next question would be: if all of the converters did use the same color space, such as Adobe RGB, would the luminance and chrominance values mean the same thing across all of them?



    On top of that, each converter applies sharpening and contrast to hide noise and digital artifacts in the picture. From what I can see, this is primarily what most of the evaluation is based upon: which converter most effectively hides defects.



    Under this type of evaluation, Aperture 1.0 did the worst job. But it was probably the one showing the most honest truth about the way the picture really looked.
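
    Here's the sketch referenced above on the histogram point. It's generic color math, not any particular converter's pipeline; the matrices are the standard D65 ones and the test color is made up:

    ```python
    # The same scene color, encoded into two different working spaces, lands
    # on different 8-bit pixel values, which is why histograms from
    # converters using different color spaces aren't directly comparable.
    import numpy as np

    XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                            [-0.9689,  1.8758,  0.0415],
                            [ 0.0557, -0.2040,  1.0570]])
    XYZ_TO_ADOBE = np.array([[ 2.04159, -0.56501, -0.34473],
                             [-0.96924,  1.87597,  0.04156],
                             [ 0.01344, -0.11836,  1.01517]])

    def srgb_gamma(c):   # sRGB's piecewise transfer curve
        c = np.clip(c, 0.0, 1.0)
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

    def adobe_gamma(c):  # Adobe RGB (1998) uses a plain ~2.2 gamma
        return np.clip(c, 0.0, 1.0) ** (1 / 2.19921875)

    xyz = np.array([0.30, 0.25, 0.15])  # one arbitrary mid-tone scene color
    print(np.round(255 * srgb_gamma(XYZ_TO_SRGB @ xyz)))    # one triplet...
    print(np.round(255 * adobe_gamma(XYZ_TO_ADOBE @ xyz)))  # ...and a different one
    ```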
  • Reply 105 of 111
    tenobell Posts: 7,014 member
    Quote:

    How does motion film color software work?



    Film contains dyes layered over a transparent plastic base. Its color reproduction is limited by real physical rules.



    Motion picture film is evaluated on its ability to record accurate skin tone, color reproduction, and gray scale. This process has its own standard numbering system, called printer lights, that lets the DP know at what exposure and film density a particular film stock records proper color.



    To gain an understanding of a particular film stock's ability to do this, you shoot what's called a lab density test. You have a model, a black/white chart, and a color chart. You shoot this scene underexposed, properly exposed, and overexposed, then develop the test and print it for projection.



    After development, the DP is given a lab report with printer lights on it. There are three sets of numbers, one each for R, G, and B, ranging from 1 to 60; 35 to 45 is in the middle and considered normal exposure. Within normal exposure you should see accurate skin tone, color, and gray scale reproduction.
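
    A rough sketch of that arithmetic, for anyone curious. The 1-60 scale and the 35-45 normal window are from the description above; the figure of 0.025 logE per printer point (about 12 points per stop) is a commonly cited lab calibration that varies from lab to lab, so treat it as an assumption:

    ```python
    # Convert a printer-light offset from "normal" into stops, assuming
    # ~0.025 logE per point. This is a rule-of-thumb calibration, not a
    # universal standard.
    LOG_E_PER_POINT = 0.025
    POINTS_PER_STOP = 0.301 / LOG_E_PER_POINT  # one stop = 0.301 logE (log10 of 2)

    def stops_from_normal(light, normal=40):
        """Offset in stops of a printer light from a nominal normal of 40."""
        return (light - normal) / POINTS_PER_STOP

    for light in (25, 35, 40, 45, 55):
        print(light, round(stops_from_normal(light), 2))
    # a light of 25 comes out roughly 1.25 stops under normal
    ```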



    Quote:

    It IS also interesting that gone are the days of choosing film stock and choosing ISO and all that -- with digital it's all down to the way a particular digital SLR's CCD/CMOS handles various shots and light conditions -- that's your "film stock and ISO settings"



    This has been done with film for years. It's called pushing or pulling film to change its ISO. To increase the film's ISO, the lab leaves the film in the developing chemicals a bit longer; to decrease it, the lab pulls the film out of the developer sooner than normal.



    The DP can also shift printer lights above 45 or below 35, which will increase or decrease print density; that has the same effect as changing its ISO.





    Once film is scanned into digital files, it moves from photochemical color space to digital video color space. Digital color space is not limited by physical rules, but by the amount of data that can be transported, stored, and rendered. This limitation directly affects video recording formats as well as video presentation formats.



    Standard definition has a color standard called ITU 601 color, the HD color standard is ITU 709 color, and 2K and 4K data have their own standard color gamuts.
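
    One concrete, easily checked difference between those two standards is the luma weighting; a minimal sketch:

    ```python
    # The same RGB triplet produces a different luma (Y') under the SD
    # (ITU-R BT.601) and HD (ITU-R BT.709) coefficient sets.
    def luma_601(r, g, b):
        return 0.299 * r + 0.587 * g + 0.114 * b

    def luma_709(r, g, b):
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    r, g, b = 0.2, 0.9, 0.4      # a saturated green, channels normalized 0-1
    print(luma_601(r, g, b))     # ~0.634
    print(luma_709(r, g, b))     # ~0.715
    ```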



    Currently there are no exact numerical values for digital RGB color grading the way there are for photochemical film manipulation. That system is being ratified right now and should be in place soon.
  • Reply 106 of 111
    melgross Posts: 33,600 member
    Quote:

    Originally posted by TenoBell

    I'm not talking about the ability to manipulate the image. I'm talking about taking a basic RAW picture, rendering it through different RAW converters from different manufacturers, and objectively evaluating the results.



    Each RAW converter has to place the image inside a color space container with some type of limited color gamut. The only way to have a fully objective evaluation of each rendering is to have a starting point of neutrality with respect to color and contrast.



    There needs to be some reference point for neutral skin tone reproduction, neutral color, and neutral gray scale. The only way to do this would be for all RAW converters to work under the same color space.



    From what I recently learned, these RAW converters have three different color spaces they can work under: Adobe RGB, sRGB, and ProPhoto RGB. The fact that they may not use the same color space, to me, completely devalues any real evaluation between the converters. The histograms of all of the renders will be completely different.



    My next question would be: if all of the converters did use the same color space, such as Adobe RGB, would the luminance and chrominance values mean the same thing across all of them?



    On top of that, each converter applies sharpening and contrast to hide noise and digital artifacts in the picture. From what I can see, this is primarily what most of the evaluation is based upon: which converter most effectively hides defects.



    Under this type of evaluation, Aperture 1.0 did the worst job. But it was probably the one showing the most honest truth about the way the picture really looked.




    I tried to reply to your earlier post before, but after I'd written a fair amount it crashed, so I gave up at that point.



    Here's the point about the colorspaces. I'm sure that you understand the problem with a limited colorspace. Each device has its own limitations. This is true for all types of film as well, of course.



    What colorspaces do is allow matching between various output devices and the original material.



    If the colorspace is wrong, then the result can be oversaturated colors, truncated colors, banding, or all three. At other times it can result in excessively muted colors. It also results in incorrect contrast, incorrect white and black points, etc.
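
    To make that concrete, here's a hedged toy example of one such failure: values encoded for Adobe RGB but decoded as if they were sRGB. The matrices are the standard D65 ones and the test value is made up; this is not any real application's code:

    ```python
    # Decoding Adobe RGB numbers with sRGB primaries shifts the displayed
    # color away from the intended one (muted in this direction; mistagging
    # in the other direction oversaturates instead).
    import numpy as np

    ADOBE_TO_XYZ = np.array([[0.57673, 0.18555, 0.18819],
                             [0.29738, 0.62735, 0.07527],
                             [0.02703, 0.07069, 0.99111]])
    SRGB_TO_XYZ  = np.array([[0.41239, 0.35758, 0.18048],
                             [0.21264, 0.71517, 0.07219],
                             [0.01933, 0.11919, 0.95053]])

    rgb_linear = np.array([0.2, 0.6, 0.3])   # a green, in linear light
    print(ADOBE_TO_XYZ @ rgb_linear)         # the XYZ color the author meant
    print(SRGB_TO_XYZ  @ rgb_linear)         # the XYZ color a mistagged viewer shows
    ```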



    Adobe RGB has been a high-end standard for a while. MS and HP invented the sRGB standard to match the cheap 14" color monitors that were being used on most PCs at the time.



    As most people use PCs, sRGB became a standard for many files intended mainly for reproduction on them, such as web output.



    Unfortunately, many companies began to use it inappropriately, simply because it was easy to do.



    When you resolve a RAW image file (or any other image file), you have to know what the output will be. For web use, or for any use expected to have a small colorspace, such as a laser printer, sRGB is fine. But for print, or inkjet, or dye-sub, or Fuji Pictography, or any other high-quality output, you would use Adobe RGB, though there are some other colorspaces that are sometimes used. Then there is CMYK, of course, or Hexachrome.



    Work on digital files or scans is just as precise as work with film. In fact, I've always thought that film was distinctly less precise. I can often see changes between cameras in a scene, or scene to scene. A great deal of film work also depends on a trained eye. The vagaries of film and processing ensure that no two film prints will ever be the same.



    Kodak specs the film at ±20CC (CMY) color and ±1/3 stop, run to run. Even with filtering in the printers, that can't be reduced below about ±5CC and ±1/5 stop. Processing adds to that uncertainty.



    We monitored our machines very closely, running control strips every two hours, but, some inconsistency always crept in.



    This is a VERY complex area, far too complex to receive a fair hearing here.



    All I can say is that I also thought that Aperture (ver 1) was giving a less processed file. But when I took my own pictures, and processed them in all three converters, Apple's were the least close to what I had shot, here at home, in controlled conditions.



    When I brought the darker areas of the pics done in the other converters up in brightness to match Aperture's results, they also gained noise, but not the artifacting. Apple is processing to bring out details in shadows that other converters are not. In doing that, it is bringing the noise level up as well. But it also brought artifacts into the file.



    This is not unusual for processing. An example is Digital ICE, used mostly in scanners. It does remove scratches and dust, but it also changes parts of the file. What Aperture 1 was doing, it seemed to me, was giving the file an adjustment similar to Photoshop's Shadow/Highlight control, but without the finesse.
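
    The shadow-lift trade-off described above is easy to model with a toy example. The flat gain here stands in for a Shadow/Highlight-style adjustment; it is not Aperture's actual algorithm:

    ```python
    # Lifting dark tones raises the noise floor along with the detail:
    # the noise standard deviation scales with the gain applied.
    import numpy as np

    rng = np.random.default_rng(0)
    shadows = 0.05 + 0.01 * rng.standard_normal(100_000)  # dark signal plus sensor noise

    lifted = 4.0 * shadows    # brighten the shadows to match another render
    print(shadows.std())      # ~0.01, the noise before the lift
    print(lifted.std())       # ~0.04, four times the noise after it
    ```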
  • Reply 107 of 111
    sunilraman Posts: 8,133 member
    Originally posted by TednDi

    You should still know who Frank Sinatra is though... Otherwise you are just ghey!




    I know who Frank Sinatra is, just not that particular reference by Melgross. Ah, you're all ghey !!
  • Reply 108 of 111
    sunilraman Posts: 8,133 member
    Originally posted by TenoBell

    There needs to be some point for neutral skin tone reproduction, neutral color, and neutral gray scale. The only way to do this would be for all RAW converters to work under the same color space...My next question would be if all of the converters did use the same color space such as Adobe RGB, would the luminance and chrominance values mean the same thing between all of them.




    Even if all the digital SLRs moved to Adobe RGB RAW, I think the way each camera "interprets" Adobe RGB would just be different. I don't think it would have the precision of film or print that you and Melgross talk about. I just don't see camera manufacturers having that level of standardisation yet.
  • Reply 109 of 111
    sunilraman Posts: 8,133 member
    Originally posted by melgross

    ...What colorspaces do is to allow matching between various output devices, and the original material...Adobe RGB has been a high end standard for a while. MS and hp invented the sRGB standard to match the cheap 14" color monitors that were being used at the time on most PC's...As most people use PC's that became a standard for many files that were to be used for reproduction mainly on them, such as web output... Unfortunately, many companies began to use it inappropriately, simply because it was easy to do...






    That's why in web design color calibration is almost meaningless. Unlike film and photography in which a defined physical output is produced, web design depends on how the end-user's machine is calibrated and even how good their monitor is. Within the web design studio one could standardise on Adobe RGB for all work and calibrate all the screens so that work (PSDs or JPGs, etc) passed around within the studio would always have some level of consistency.



    I used to want to pull my hair out when we designed something nice, then walked over to the coding or HR department and saw all our work completely mangled by old, shitty, or uncalibrated monitors, or just monitors with way different contrast and brightness settings.



    Heh. In the old days, when you did a web page at 1024x768 in 16-bit color, you'd want to cry when you saw it on some end-user's machine running 800x600 at 256 colours.



    In the past few years I've been totally slack with colorspaces and conversions and calibrations. I go for things "by eye" and then just make a quick check on my co-workers' machines, and learn to forgive things if the orange isn't quite the orange I thought I was working with. If it looks "good enough" on their screens, then so be it. I've only done ad agency work for brief periods, but even then all internal review was off the creatives' screens and the in-house HDTV. A few years back, when I was doing websites for conferences, it's interesting, now that I reflect on it, that clients like Oracle never came back and said "the corporate colours are all messed up". Not even once, strangely. IIRC I'd use the Pantone colours of their corporate identity and then adjust by eye. Sometimes their corporate colors came with CMYK and RGB values, but again, the RGB didn't always work out to what you wanted to show the client.
  • Reply 110 of 111
    melgross Posts: 33,600 member
    Quote:

    Originally posted by sunilraman

    Originally posted by TenoBell

    There needs to be some point for neutral skin tone reproduction, neutral color, and neutral gray scale. The only way to do this would be for all RAW converters to work under the same color space...My next question would be if all of the converters did use the same color space such as Adobe RGB, would the luminance and chrominance values mean the same thing between all of them.




    Even if all the digital SLRs moved to Adobe RGB RAW, I think the way each camera "interprets" Adobe RGB would just be different. I don't think it would have the precision of film or print that you and Melgross talk about. I just don't see camera manufacturers having that level of standardisation yet.




    You can't interpret a colorspace. That's the entire point of defining it in the first place.



    But, you always have to keep in mind that ALL photography (motion picture, TV, still) is an art as well as a science. This will be true as long as the photographer or filmmaker has a personal preference as to what (s)he wants to get across.



    There are known numbers that we can deal with for grayscale, skin tone, sky color, etc., and we do use them. But not all skin tone is exactly the same, even among limited population groups. Still, it's easier to recognise ruddy complexions, different Asian complexions, and those of Afro-Americans than you might think. It's the process of judging the relative differences between different parts of the face, the whites of the eyes, teeth, etc., that enables a skilled operator to adjust, or tweak, this after the generalized numbers are applied.



    Using white points and black points, most colors snap to where they should be, or very close to where they should be. Some additional gray point checking can adjust this further. Knowing the lighting will help to fill in some of the last questions.
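
    A minimal sketch of that white-point/black-point snap: a plain linear levels mapping with made-up patch values; real tools layer curves and per-channel adjustments on top of this:

    ```python
    # Map a measured black to 0.0 and a measured white to 1.0; in-between
    # tones land close to where they belong, and an optional gray-point
    # gamma tweak adjusts them further.
    import numpy as np

    def apply_levels(img, black, white, gray_gamma=1.0):
        out = (img - black) / (white - black)
        return np.clip(out, 0.0, 1.0) ** gray_gamma

    patches = np.array([0.08, 0.30, 0.55, 0.91])  # samples, darkest to lightest
    print(apply_levels(patches, black=0.08, white=0.91))
    ```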



    It's only after that decision that the scientific methods of standardization come into play. Then, it's a matter of making sure that the vision is maintained across all manners of reproduction.



    That's why the eye of the person working with the files is so important. I knew the preferences of all my clients that I personally worked with. We worked together. After we made sure that their vision was complete, I then made certain that it stayed that way.
  • Reply 111 of 111
    melgross Posts: 33,600 member
    Quote:

    Originally posted by sunilraman

    Originally posted by melgross

    ...What colorspaces do is to allow matching between various output devices, and the original material...Adobe RGB has been a high end standard for a while. MS and hp invented the sRGB standard to match the cheap 14" color monitors that were being used at the time on most PC's...As most people use PC's that became a standard for many files that were to be used for reproduction mainly on them, such as web output... Unfortunately, many companies began to use it inappropriately, simply because it was easy to do...






    That's why in web design color calibration is almost meaningless. Unlike film and photography in which a defined physical output is produced, web design depends on how the end-user's machine is calibrated and even how good their monitor is. Within the web design studio one could standardise on Adobe RGB for all work and calibrate all the screens so that work (PSDs or JPGs, etc) passed around within the studio would always have some level of consistency.



    I used to want to pull my hair out when we designed something nice, then walked over to the coding or HR department and saw all our work completely mangled by old, shitty, or uncalibrated monitors, or just monitors with way different contrast and brightness settings.



    Heh. In the old days, when you did a web page at 1024x768 in 16-bit color, you'd want to cry when you saw it on some end-user's machine running 800x600 at 256 colours.



    In the past few years I've been totally slack with colorspaces and conversions and calibrations. I go for things "by eye" and then just make a quick check on my co-workers' machines, and learn to forgive things if the orange isn't quite the orange I thought I was working with. If it looks "good enough" on their screens, then so be it. I've only done ad agency work for brief periods, but even then all internal review was off the creatives' screens and the in-house HDTV. A few years back, when I was doing websites for conferences, it's interesting, now that I reflect on it, that clients like Oracle never came back and said "the corporate colours are all messed up". Not even once, strangely. IIRC I'd use the Pantone colours of their corporate identity and then adjust by eye. Sometimes their corporate colors came with CMYK and RGB values, but again, the RGB didn't always work out to what you wanted to show the client.




    The entire concept of sRGB, as I pointed out, was to compensate for a lack of calibration on the part of cheap home monitors, which perhaps 95% of the population uses.



    Very few monitors are ever taken off the 9,300K white point balance their manufacturer has set them to. Knowing which phosphors are being used then completes an accurate enough picture (sic) for most web design.



    Being that the eye/brain combination translates relative color for us, it's more than good enough for that.



    Of course, if you are trying to decide on a sweater, or a paint shade, this isn't the ideal method. But, even there, most people can come close enough, in their mind, as to what it will look like, as long as they are told, in advance, that the color isn't accurate.



    That's why there are "web-safe" colors. I've found that those colors do reproduce pretty closely between most cheap monitors, though the use of LCD panels has changed that somewhat.
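
    For reference, that palette is small enough to generate in a couple of lines. Each channel takes one of six values, giving 216 combinations chosen to survive 256-color displays reasonably intact:

    ```python
    # Build the classic 216-color "web-safe" palette: every channel is one
    # of 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF.
    from itertools import product

    STEPS = (0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF)
    web_safe = [f"#{r:02X}{g:02X}{b:02X}" for r, g, b in product(STEPS, repeat=3)]
    print(len(web_safe))    # 216
    print(web_safe[:4])     # ['#000000', '#000033', '#000066', '#000099']
    ```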