You are right. An actual review would be good and fair.
However, you must admit you also need to be a bit more reasonable in your expectations of a camera based on the 2004-era CCD sensor technology in the D70. How about we compare photos from my D600's full-frame sensor to your RX100's 1" sensor, both downsampled to 16MP at ISO 6400, to see which one is smoother? You game?
I am not asking for a pissing contest here. I just want you to be a bit more reasonable and compare sensor technology from at least similar eras. OK?
I think you missed my point.
Which was that downsampling can provide significant improvements if you have enough pixels to start with, and don't need more pixels than what you end up with.
If I just crop 6MP out of an RX100 image, the per-pixel IQ will be below that of the D70s' 6MP image, regardless of the D70s' age. Yes, the D70s is old tech, but it does have a DX-size sensor with only 6MP on it, making the individual pixels quite large. A modern DX sensor with 6MP would do much better, but modern DX sensors have much higher resolution with much smaller individual pixels, so you don't really get as much advantage in per-pixel IQ as you might expect... but I digress. You don't have to be an expert to see it - the RX100 at 100% magnification will give less sharpness and more noise, even in daylight photos, especially in darker colours. I live by the ocean, and water photos from the RX100 show more pronounced noise at ISO 125 than the D70s at its base ISO 200, with the same exposure time.
But downsampling RX100 20MP images to the same resolution as the D70s gives a result that is more pleasing to my eyes, even at 100% zoom. Less noise, smoother (not softer), with great detail retention.
Which is what Nokia is trying to achieve with the Lumia 1020 camera tech. And I think they are, compared with other phone cameras. What is the point of bringing the D600 into this dialogue? Does any current smartphone have a 20MP+ FX sensor?
Re the rest of your post, no, I didn't have a chance to check all the available comparison photos. I'll do that and reply in a separate post.
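The trade described above (spend pixels, buy per-pixel cleanliness) is easy to simulate. Below is a minimal sketch in Python, assuming a synthetic scene with purely Gaussian per-pixel noise and an arbitrary 3x3 block average; real sensor noise and real resampling filters are more complicated, so treat the numbers as illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scene": a smooth horizontal gradient with a 12-bit value range.
h, w = 600, 600
scene = np.tile(np.linspace(0.0, 4095.0, w), (h, 1))

# Add per-pixel sensor noise (std of 50 counts -- an invented figure).
noisy = scene + rng.normal(0.0, 50.0, size=(h, w))

def block_mean_downsample(img, k):
    """Downsample by averaging non-overlapping k x k pixel blocks."""
    hh, ww = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    return img[:hh, :ww].reshape(hh // k, k, ww // k, k).mean(axis=(1, 3))

down       = block_mean_downsample(noisy, 3)   # e.g. 20MP -> ~2.2MP
down_clean = block_mean_downsample(scene, 3)   # noise-free reference

print("per-pixel noise std, native:      %.1f" % (noisy - scene).std())        # ~50
print("per-pixel noise std, downsampled: %.1f" % (down - down_clean).std())    # ~50/3
```

With uncorrelated noise, a k x k block average cuts per-pixel noise by roughly a factor of k, which is the "less noise, smoother (not softer)" effect described above.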
Quote:
Originally Posted by DESuserIGN
While most of what you say is right on, it's deceptive to say that the TIFF file is not compressed, since a huge amount of the RAW image data gets interpolated and tossed out during the post-processing of the TIFF. Ironically, an uncompressed TIFF file is usually larger than a RAW file, since it has uncompressed RGB values (no alpha unless you add it later) for every pixel, while the RAW file has only one value for each pixel (sensor cell).
TIFF: each pixel has 8 bits for each of R, G, and B, or 24 bits of information per pixel.
RAW: each sensor cell has only a single R, G, or B value of 12 or 16 bits, so 12 or 16 bits per sensor cell (and there are about as many sensor cells as there are pixels in the TIFF file).
Because of this, an uncompressed RAW file is 1/2 to 2/3 the size of an uncompressed TIFF, even though it contains at least twice as much useful image information.
I'm not sure deceptive is the right word. It's more like the TIFF is the negative AFTER it has been developed, and the RAW is before it has been developed. I would not call it compressed... more like it "lost" undeveloped information because it has gone through the development process. But I agree with the thought behind your comment. It is post-processed, and by definition it lost information once it was decided how it should be developed and processed.
It's not worth getting into TIFF bit-depth hair-splitting on this thread. People can "bing" supported TIFF bit formats if they like. ;-) Same for RAW bit formats. Most RAW files I use are in a lossless compressed format.
Quote:
Originally Posted by muppetry
That doesn't sound quite right if the images are of similar pixel dimensions. The main reason that TIFF is larger than RAW is that RAW generally uses lossless compression, whereas basic TIFF is uncompressed. Am I missing what you meant?
Yeah, it's surprising. People forget that the sensors in a camera do not sense RGB directly, only monochrome light levels. Each sensor has an R, G, or B filter over it. Because of this, the sensors are arrayed in a Bayer pattern of R, G, and B like:
R G R G R G R G
G B G B G B G B
(It's the G cell that gets double representation, for the good reason that the eye is most sensitive to green.)
So the RAW file has a 12 or 16 bit number for each cell. In-camera post-processing into a TIFF interpolates (guesses) the RGB value for every cell (effectively, this is when the pixel, or picture element, is created). With a RAW file the interpolation is done by you later, on your computer. This is why you can change color temperature, etc., while preserving more image information.
So the TIFF has 24 bits of processed data for each pixel, while the RAW has only 12 or 16. The RAW, however, can theoretically be post-processed to have 36 to 48 bits of RGB data for each pixel (if nothing is thrown out). But that's overstating the useful information content.
RAW processing is all about making decisions on what to keep and what to throw out.
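To make the interpolation step concrete, here is a minimal demosaic sketch in Python on a toy RGGB mosaic. It fills each missing color with a crude 3x3 neighborhood average of the measured cells; real converters use far smarter algorithms, so this only illustrates where a pixel's two interpolated values come from.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 8x8 Bayer mosaic (RGGB layout): one 12-bit value per sensor cell.
raw = rng.integers(0, 4096, size=(8, 8)).astype(float)

# Which color each cell actually measured, for the RGGB layout:
#   R G R G ...
#   G B G B ...
r_mask = np.zeros((8, 8), dtype=bool); r_mask[0::2, 0::2] = True
b_mask = np.zeros((8, 8), dtype=bool); b_mask[1::2, 1::2] = True
g_mask = ~(r_mask | b_mask)        # G gets half the cells, twice R's or B's share

def fill_channel(raw, mask):
    """Keep measured values; guess the rest from nearby measured cells."""
    out = np.where(mask, raw, np.nan)
    padded = np.pad(out, 1, constant_values=np.nan)
    for y, x in zip(*np.where(~mask)):
        # Crude interpolation: mean of the measured cells in the 3x3 neighborhood.
        out[y, x] = np.nanmean(padded[y:y + 3, x:x + 3])
    return out

rgb = np.dstack([fill_channel(raw, m) for m in (r_mask, g_mask, b_mask)])
print(raw.size, "measured values ->", rgb.size, "output values (2/3 interpolated)")
```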
Quote:
Originally Posted by snova
I'm not sure deceptive is the right word. It's more like the TIFF is the negative AFTER it has been developed, and the RAW is before it has been developed. I would not call it compressed... more like it "lost" undeveloped information because it has gone through the development process. But I agree with the thought behind your comment. It is post-processed, and by definition it lost information once it was decided how it should be developed and processed.
It's not worth getting into TIFF bit-depth hair-splitting on this thread. People can "bing" supported TIFF bit formats if they like. ;-) Same for RAW bit formats. Most RAW files I use are in a lossless compressed format.
Yeah, I didn't mean to imply you were being deceptive, just that calling it uncompressed makes it sound like the TIFF is untouched.
Your development analogy is a good one. TIFF files can't be redeveloped. It's like sending it to Walgreens to be developed. They do it how they want.
The nice thing with RAW is that you can develop the negative however you want — and more importantly, you can redevelop it however you want.
Quote:
Originally Posted by DESuserIGN
Quote:
Originally Posted by muppetry
That doesn't sound quite right if the images are of similar pixel dimensions. The main reason that TIFF is larger than RAW is that RAW generally uses lossless compression, whereas basic TIFF is uncompressed. Am I missing what you meant?
Yeah, it's surprising. People forget that the sensors in a camera do not sense RGB directly, only monochrome light levels. Each sensor has an R, G, or B filter over it. Because of this, the sensors are arrayed in a Bayer pattern of R, G, and B like:
R G R G R G R G
G B G B G B G B
(It's the G cell that gets double representation, for the good reason that the eye is most sensitive to green.)
So the RAW file has a 12 or 16 bit number for each cell. The TIFF interpolates the RGB value for each cell (effectively, this is when the pixel, or picture element, is created). With a RAW file the interpolation is done by you later, on your computer. This is why you can change color temperature, etc., while preserving more image information.
Agreed, but that means that the RAW file contains one 12 or 16 bit number for each of 4 times as many sensor locations as there are pixels in the resulting TIFF file, whereas each pixel contains three 16-bit (if not lowered to 8-bit) numbers representing the RGB values. That's actually 33% more data in the RAW file, if all four sensors are fully recorded. Hence my point that it is the lossless compression of the data in the RAW file that allows it to be smaller.
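The disagreement here comes down to whether sensels map 4:1 or 1:1 onto output pixels, so a back-of-the-envelope script settles the sizes. The 16MP resolution and 14-bit RAW depth below are assumptions for illustration:

```python
MP = 16e6                   # hypothetical 16MP output image
RAW_BITS_PER_CELL = 14      # typical RAW bit depth (12-16)
TIFF_BITS_PER_PIXEL = 24    # 8 bits each for R, G, and B

tiff_mb     = MP * TIFF_BITS_PER_PIXEL / 8 / 1e6     # 48 MB, uncompressed
raw_1to1_mb = MP * RAW_BITS_PER_CELL / 8 / 1e6       # 28 MB: one sensel per pixel
raw_4to1_mb = 4 * MP * RAW_BITS_PER_CELL / 8 / 1e6   # 112 MB: four sensels per pixel

print(f"uncompressed 8-bit TIFF:      {tiff_mb:.0f} MB")
print(f"RAW, sensels 1:1 with pixels: {raw_1to1_mb:.0f} MB (smaller, even uncompressed)")
print(f"RAW, sensels 4:1 with pixels: {raw_4to1_mb:.0f} MB (would dwarf the TIFF)")
```

Under the actual 1:1 mapping, the uncompressed RAW is already about 58% of the TIFF, squarely in the "1/2 to 2/3" range quoted earlier, before any compression enters the picture.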
Quote:
Originally Posted by nikon133
I think you missed my point.
Which was that downsampling can provide significant improvements if you have enough pixels to start with, and don't need more pixels than what you end up with.
If I just crop 6MP out of an RX100 image, the per-pixel IQ will be below that of the D70s' 6MP image, regardless of the D70s' age. Yes, the D70s is old tech, but it does have a DX-size sensor with only 6MP on it, making the individual pixels quite large. A modern DX sensor with 6MP would do much better, but modern DX sensors have much higher resolution with much smaller individual pixels, so you don't really get as much advantage in per-pixel IQ as you might expect... but I digress. You don't have to be an expert to see it - the RX100 at 100% magnification will give less sharpness and more noise, even in daylight photos, especially in darker colours. I live by the ocean, and water photos from the RX100 show more pronounced noise at ISO 125 than the D70s at its base ISO 200, with the same exposure time.
But downsampling RX100 20MP images to the same resolution as the D70s gives a result that is more pleasing to my eyes, even at 100% zoom. Less noise, smoother (not softer), with great detail retention.
Which is what Nokia is trying to achieve with the Lumia 1020 camera tech. And I think they are, compared with other phone cameras. What is the point of bringing the D600 into this dialogue? Does any current smartphone have a 20MP+ FX sensor?
Re the rest of your post, no, I didn't have a chance to check all the available comparison photos. I'll do that and reply in a separate post.
You said that more MP on a smaller sensor, when downsampled, will give a better photo than a larger sensor with fewer MP. You used CCD sensor technology from 9 years ago to prove your point against a modern CMOS sensor. I said this was not a fair comparison and suggested using the even bigger sensor in the D600, of the same era as the RX100, and downsampling both to 6MP to compare. Sorry, I can't get my hands on a large-sensor, low-MP camera of a similar era to the RX100.
CCD was never good at keeping noise in check unless you had a 3CCD arrangement like in the Panasonic designs. The D70 shots were only really good up to ISO 400, which is a joke by the standards of modern CMOS sensors of similar size, like in the D7000. This is one of the reasons all the camera companies abandoned CCD.
So to make this more fair, perhaps we should wait till dpreview does a full review using their standard images, then compare an image from the Fuji X20 (a 12MP camera with a 2/3" sensor) with the Nokia 41MP 2/3" sensor, downsampling the 41MP file to 12MP to see which one produces a better image. Do you think it will be close? ;-)
Unless Nokia knows something Sony does not, I don't think 41MP on a 2/3" sensor yields a reasonable pixel pitch for quality images, considering the limits to which industry-leading Sony is willing to push its other sensors today. I think all it will be "oversampling" is a lot of noise, and then it will spend CPU cycles filtering that out, which in the end will likely degrade the final image. I would rather start with fewer pixels that produce a noise-free capture than take an image full of noise (due to an aggressively small pixel pitch) and try to carefully filter out the noise without messing up the result.
Quote:
Originally Posted by muppetry
Agreed, but that means that the RAW file contains one 12 or 16 bit number for each of 4 times as many sensor locations as there are pixels in the resulting TIFF file, whereas each pixel contains three 16-bit (if not lowered to 8-bit) numbers representing the RGB values. That's actually 33% more data in the RAW file, if all four sensors are fully recorded. Hence my point that it is the lossless compression of the data in the RAW file that allows it to be smaller.
One would think so, but each sensor maps 1:1 with a pixel in the finished image (there are a few extra sensors at the edges). In post-processing (in camera, or in RAW conversion) each pixel's values are constructed from the measured value of its corresponding sensor, with the missing color values interpolated from nearby sensors.
Quote:
Originally Posted by DESuserIGN
One would think so, but each sensor maps 1:1 with a pixel in the finished image (there are a few extra sensors at the edges). In post-processing (in camera, or in RAW conversion) each pixel's values are constructed from the measured value of its corresponding sensor, with the missing color values interpolated from nearby sensors.
I suppose this method has to do with information theory.
Many image-processing methods have been explored, but this one apparently provides the best result in terms of the maximum amount of high-quality information produced from the RAW sensor values.
Quote:
Originally Posted by DESuserIGN
Quote:
Originally Posted by muppetry
Agreed, but that means that the RAW file contains one 12 or 16 bit number for each of 4 times as many sensor locations as there are pixels in the resulting TIFF file, whereas each pixel contains three 16-bit (if not lowered to 8-bit) numbers representing the RGB values. That's actually 33% more data in the RAW file, if all four sensors are fully recorded. Hence my point that it is the lossless compression of the data in the RAW file that allows it to be smaller.
One would think so, but each sensor maps 1:1 with a pixel in the finished image (there are a few extra sensors at the edges). In post-processing (in camera, or in RAW conversion) each pixel's values are constructed from the measured value of its corresponding sensor, with the missing color values interpolated from nearby sensors.
Right - I'd forgotten that they always interpolate for the missing colors. So the TIFF image does contain extra information - two interpolated colors plus the one measured color at each pixel location (processed), and is also uncompressed by default.
Quote:
Originally Posted by muppetry
Right - I'd forgotten that they always interpolate for the missing colors. So the TIFF image does contain extra information - two interpolated colors plus the one measured color at each pixel location (processed), and is also uncompressed by default.
Yes, "extra information," in that the TIFF has a larger file size.
But it contains much less real and usable image information than the smaller RAW file.
The TIFF contains processed image information. A nice abbreviation of the image (the Reader's Digest condensed version of the book).
The RAW file contains all the information. A full recording of the image as shot. (It's the full original novel, as written by the author.)
Quote:
Originally Posted by DESuserIGN
Quote:
Originally Posted by muppetry
Right - I'd forgotten that they always interpolate for the missing colors. So the TIFF image does contain extra information - two interpolated colors plus the one measured color at each pixel location (processed), and is also uncompressed by default.
Yes, "extra information," in that the TIFF has a larger file size.
But it contains much less real and usable image information than the smaller RAW file.
Yes - that's what I meant - it contains unnecessary interpolated information and has also been processed. In theory though, if one knew the processing algorithms used to generate the TIFF from the RAW then one should be able to reconstruct the RAW from the TIFF, not that it would make any sense to want to do that.
Quote:
Originally Posted by muppetry
In theory though, if one knew the processing algorithms used to generate the TIFF from the RAW then one should be able to reconstruct the RAW from the TIFF, not that it would make any sense to want to do that.
Nope, it's not even theoretically reversible. The finest information, essentially the least significant figures, is tossed, and that information is lost forever and cannot be reconstructed.
It's a lossy transformation, just like a JPEG but not such a severe pruning.
[You can't reconstruct the original "Tale of Two Cities" from the Reader's Digest Condensed Version of the book. (Probably anyone under 40 has no idea what a Reader's Digest Condensed Version of a book is!)]
Again, no - that is not what I am saying.
I'm saying that the RX100's native pixel-level IQ trails behind the D70s' native pixel-level IQ, but when I downsample an RX100 image, I get an IQ boost at the per-pixel level while sacrificing pixel count. Since the RX100 has so many more pixels, I can sacrifice lots of them and still have a reasonably sized image for scenarios more demanding than a Facebook photo upload.
The presence of the D70s here is completely anecdotal - I'm mentioning it only because I have it, so I can compare them. The point is that I am "improving" RX100 IQ (or, rather, masking its shortcomings) by downsampling images.
If I happened to have a D7000, my conclusion would be that at native resolution the RX100 has much poorer per-pixel IQ, but that when I downsample RX100 images, I get per-pixel IQ that compares better with the D7000, while reducing the pixel count significantly below the D7000's.
If I had a D600 (awesome camera, maybe one day...) maybe I would say that at native resolution the RX100 has horrendously, must-see-it-to-believe-it poorer per-pixel IQ, but that when I downsample RX100 images, I get per-pixel IQ that narrows the gap to the D600's, while reducing the pixel count significantly below the D600's.
In all 3 scenarios, the point is not what I compare the RX100 with. The point is that I get improved IQ by sacrificing pixels, and that I have enough pixels to sacrifice.
Based on that, I think I have a good basis to expect that by downsampling Lumia 1020 images to 8MP, I will get better-looking images than from any other current smartphone on the market, while keeping the same pixel count. I don't expect that will make the Lumia 1020 an alternative to a DSLR, or even to an advanced pocket camera like the RX100... but I do expect it will make it a better alternative to the average P&S camera, and a better camera than any other smartphone at present.
Quote:
Originally Posted by DESuserIGN
Quote:
Originally Posted by muppetry
In theory though, if one knew the processing algorithms used to generate the TIFF from the RAW then one should be able to reconstruct the RAW from the TIFF, not that it would make any sense to want to do that.
Nope, it's not even theoretically reversible. The finest information, essentially the least significant figures, is tossed, and that information is lost forever and cannot be reconstructed.
It's a lossy transformation, just like a JPEG but not such a severe pruning.
[You can't reconstruct the original "Tale of Two Cities" from the Reader's Digest Condensed Version of the book. (Probably anyone under 40 has no idea what a Reader's Digest Condensed Version of a book is!)]
I would agree if that were so, but if the RAW and TIFF both contain 16 bit information then I guess I don't see which least significant bits are being discarded. Lossy only applies to compression algorithms, not to interpolation. Can you elaborate?
Quote:
Originally Posted by snova
To be technical, isolation via DOF control and bokeh are two separate effects. It just happens that the bokeh effect occurs when you have a shallow DOF. The quality and pattern of the bokeh are based on the physical design of the lens, and are not purely a function of DOF. Right?
I'm no expert on bokeh, but I think DOF isolation and bokeh refer to essentially the same thing; it's just that people talk about the esthetic quality of the bokeh, which, as you point out, depends on the specifics of the lens and the diaphragm. Have you ever noticed that during an eclipse, sunlight passing through the tree leaves casts little highlights (whatever you call the opposite of a shadow) in a crescent shape that perfectly mimics the crescent of the eclipsed sun? You would think this wouldn't be very noticeable, but it actually is quite striking and amazingly different from what you normally see. It literally changes the mood in a subtle way.
I suppose an esthetically nice bokeh must be similar. A subtle difference, but aficionados notice it. (I haven't thought of it this way before. I'll have to pay more attention to it.)
Is this what you are referring to?
Quote:
Originally Posted by nikon133
Again, no - that is not what I am saying.
I'm saying that the RX100's native pixel-level IQ trails behind the D70s' native pixel-level IQ, but when I downsample an RX100 image, I get an IQ boost at the per-pixel level while sacrificing pixel count. Since the RX100 has so many more pixels, I can sacrifice lots of them and still have a reasonably sized image for scenarios more demanding than a Facebook photo upload.
The presence of the D70s here is completely anecdotal - I'm mentioning it only because I have it, so I can compare them. The point is that I am "improving" RX100 IQ (or, rather, masking its shortcomings) by downsampling images.
If I happened to have a D7000, my conclusion would be that at native resolution the RX100 has much poorer per-pixel IQ, but that when I downsample RX100 images, I get per-pixel IQ that compares better with the D7000, while reducing the pixel count significantly below the D7000's.
If I had a D600 (awesome camera, maybe one day...) maybe I would say that at native resolution the RX100 has horrendously, must-see-it-to-believe-it poorer per-pixel IQ, but that when I downsample RX100 images, I get per-pixel IQ that narrows the gap to the D600's, while reducing the pixel count significantly below the D600's.
In all 3 scenarios, the point is not what I compare the RX100 with. The point is that I get improved IQ by sacrificing pixels, and that I have enough pixels to sacrifice.
Based on that, I think I have a good basis to expect that by downsampling Lumia 1020 images to 8MP, I will get better-looking images than from any other current smartphone on the market, while keeping the same pixel count. I don't expect that will make the Lumia 1020 an alternative to a DSLR, or even to an advanced pocket camera like the RX100... but I do expect it will make it a better alternative to the average P&S camera, and a better camera than any other smartphone at present.
Yes, I get what you are saying.
I'm not sure you are getting what I am saying, however. Since this is my third attempt, I must be doing a terrible job of expressing myself. Sorry.
So let's make this simple.
Two 2/3" sensors. One has "big" pixels; the other has "little" pixels, but more of them.
Mr. Big Pixel - Fuji X20: 2/3" sensor, 12 million big pixels.
Mr. Little Pixel - Nokia 1020: 2/3" sensor, 41 million small pixels. Downsample this to 12MP.
Do you think the 12MP image (41MP downsampled to 12MP) will be better from the Nokia because it has an opportunity to "oversample", and why? Or do you think the Fuji's native 12MP will have better IQ than the Nokia's 41->12MP image, and why?
Bonus questions:
4 Nokia small pixels roughly make up the same physical area as 1 Fuji BIG pixel. Assume you apply just enough light to the BIG Fuji pixel to yield the correct color value without error. Now take the same amount of light and apply it to the 4 smaller pixels occupying the same surface area. What is the probability that all four pixels will yield the correct color value without error (i.e., without noise)? Is it 100%, or something lower? How do you downsample this to yield the correct color value for the 41->12MP conversion, in which the 4 small pixels are replaced by 1 big pixel? Do you take the average value? Would the average be the same as what the Fuji produced? Or do you vote and discard the ones which don't match - for example, when 3 yield similar values but 1 is way off? What is the probability of making the wrong decision with this voting technique? Is it zero, or something higher? Go back and review your answers to the 12MP native Fuji vs. 41->12MP downsampled Nokia questions above.
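One way to play with the bonus question numerically: give four small sensels a quarter of the light each (and, as an invented assumption, twice the per-sensel noise), then compare a plain average against a median "vote" as the 41->12MP downsampling rule. None of the noise figures below are real measurements of either sensor.

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_VALUE = 1000.0     # counts one big pixel would record (arbitrary)
TRIALS = 100_000

# One big pixel: all the light, modest noise (std 10, invented).
big = TRUE_VALUE + rng.normal(0.0, 10.0, TRIALS)

# Four small sensels: a quarter of the light each, so a worse per-sensel
# signal-to-noise ratio (std 20 each, invented), recombined when downsampling.
small = TRUE_VALUE + rng.normal(0.0, 20.0, (TRIALS, 4))

mean_est   = small.mean(axis=1)        # plain 4:1 average
median_est = np.median(small, axis=1)  # a crude "vote" that resists one outlier

for name, est in (("big pixel", big), ("mean of 4", mean_est), ("median of 4", median_est)):
    rms = np.sqrt(np.mean((est - TRUE_VALUE) ** 2))
    print(f"{name:12s} rms error: {rms:5.2f}")
```

In this toy model the 4:1 average lands right back at the big pixel's noise level, and the median does slightly worse for Gaussian noise; in real sensors, per-sensel read noise and fill-factor losses are what tilt the balance toward bigger pixels.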
Quote:
Originally Posted by muppetry
I would agree if that were so, but if the RAW and TIFF both contain 16 bit information then I guess I don't see which least significant bits are being discarded. Lossy only applies to compression algorithms, not to interpolation. Can you elaborate?
Oh sure. If you go to a 16-bit TIFF there is no qualitative difference.
I was assuming you meant processing it into an 8 bit TIFF. (In downsampling info would be lost.)
But once you have made all of your decisions on settings for exposure, color temp, setting the curves, retouching etc., there's no real reason to remain at 16 bits for final output. Those extra bits are really only important if you still plan to make adjustments to the image.
Do digital labs accept anything other than 8-bit JPEGs for printing?
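The irreversibility is easy to demonstrate once the output drops to 8 bits: distinct 16-bit inputs collapse onto one 8-bit value, and no algorithm can tell them apart afterwards. A minimal sketch (the sample values are arbitrary):

```python
import numpy as np

# Three distinct 16-bit values that differ only in their low-order bits...
originals = np.array([51100, 51150, 51199], dtype=np.uint16)

# ...all collapse to the same 8-bit value (keep only the high byte):
eight_bit = (originals >> 8).astype(np.uint8)
print(eight_bit)                          # [199 199 199]

# Scaling back up cannot restore the discarded least significant bits.
restored = eight_bit.astype(np.uint16) << 8
print(restored)                           # [50944 50944 50944]
```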
Quote:
Originally Posted by DESuserIGN
Quote:
Originally Posted by muppetry
I would agree if that were so, but if the RAW and TIFF both contain 16 bit information then I guess I don't see which least significant bits are being discarded. Lossy only applies to compression algorithms, not to interpolation. Can you elaborate?
Oh sure. If you go to a 16-bit TIFF there is no difference.
I was assuming you meant processing it into an 8 bit TIFF. (In downsampling info would be lost.)
But once you have made all of your decisions on settings for exposure, color temp, setting the curves, retouching etc., there's no real reason to remain at 16 bits for final output. Those extra bits are really only important if you still plan to make adjustments to the image.
Agreed.
Quote:
Originally Posted by snova
Do you think the image will be better from the Nokia because it has an opportunity to "oversample", and why? Or do you think the Fuji will have better IQ, and why?
My vote is for big pixels (sensors).
Less noise and vignetting.
I would still take a 2.7MP Nikon D1 (OK, a 12MP D2X ;-) ) over many newer, higher-MP DSLRs (but that wasn't the question, I suppose).
Both strategies have their advantages, though. Either strategy may be superior for a particular hardware combination. Comparison of actual results is probably the only way to decide. Philosophically, though, I tend to favor a strategy that pursues results with quality rather than brute force (lots of pixels). That is the time-tested and orthodox approach, though, and I appreciate iconoclastic approaches as well.
Quote:
Originally Posted by DESuserIGN
Either strategy may be superior for a particular hardware combination.
The assumption is that we are using sensor technology from similar eras, not 9-10 years apart. Specifically, sensor technology as it stands today. Same sensor size: 12MP applied over 2/3" vs. 41MP applied over 2/3", as it stands today.
Here is a graphical overview of how large a 2/3" sensor is compared to APS-C/DX and Full Frame:
http://en.wikipedia.org/wiki/Image_sensor_format
Pixel pitch comparison below for the iPhones, the Nokia vs. a Fuji compact camera of the same sensor size, and high-end DSLRs.
iPhone 5 (1/3.2") 8MP: its pixel pitch would yield 64MP Full Frame, 46MP DX/APS-C.
iPhone 4S (1/4") 5MP: its pixel pitch would yield 54MP Full Frame, 36MP DX/APS-C.
Nokia 1020 (2/3") 41MP: its pixel pitch would yield 161MP Full Frame, 107MP DX/APS-C.
Fuji X20 (2/3") 12MP: its pixel pitch would yield 47MP Full Frame, 32MP DX/APS-C.
The Sony sensor in the high-end Nikon D800 yields 36MP Full Frame; 24MP DX/APS-C in the D7100.
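For reference, the pixel pitches behind these comparisons can be recomputed from nominal sensor dimensions. The dimensions below are the commonly quoted nominal ones (and the thread's 2/3" figure for the 1020), so treat the results as approximate:

```python
import math

# Nominal sensor dimensions in mm (commonly quoted values) and pixel counts.
CAMERAS = {
    "iPhone 5   (1/3.2in,  8MP)": (4.54, 3.42, 8e6),
    "Nokia 1020 (2/3in,   41MP)": (8.80, 6.60, 41e6),
    "Fuji X20   (2/3in,   12MP)": (8.80, 6.60, 12e6),
    "Nikon D800 (FF,      36MP)": (36.0, 24.0, 36e6),
}

for name, (w_mm, h_mm, mp) in CAMERAS.items():
    # Pixels along the sensor width, assuming the sensor's native aspect ratio.
    px_wide = math.sqrt(mp * w_mm / h_mm)
    pitch_um = w_mm / px_wide * 1000.0
    print(f"{name}: pixel pitch ~{pitch_um:.1f} um")
```

That puts the contrast in plain numbers: roughly 1.2 um for the 1020 versus 2.2 um for the X20 and about 4.9 um for the D800.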