Originally Posted by melgross
I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.
For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.
They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).
Yeah, it's not easy to measure the same signal in two different ways. If they used a trichroic prism and a separate sensor per color component, like some professional video cameras do, they could get rid of the Bayer filter. That reduces thickness and improves color quality: no mosaic patterns and better light sensitivity. But because the color and brightness processes share the same sensors, they're limited in how they can filter the incoming light to get the best output for both. This really depends on being able to send the light sideways; if you can do that, sensor thickness is no longer a problem.
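To make the Bayer trade-off concrete, here's a toy sketch: with a mosaic filter each photosite records only one color channel, so the other two have to be interpolated, whereas a trichroic three-sensor design records all three at every site. This is just an illustration of the sampling loss, not any real camera's pipeline.

```python
import numpy as np

def bayer_sample(rgb):
    """Sample a full RGB image through an RGGB Bayer mosaic, returning a
    single-channel raw frame: one color value per photosite."""
    h, w, _ = rgb.shape
    raw = np.empty((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites

    return raw

rgb = np.random.rand(4, 4, 3)   # 48 color samples in the scene
raw = bayer_sample(rgb)
print(raw.shape)                # (4, 4): only 16 samples survive the mosaic
```

Two thirds of the color information is discarded at capture and has to be reconstructed by demosaicing, which is exactly where the mosaic patterns and lost sensitivity come from.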
The patent details are here:
Their motivation for the two sensors is making the device thinner: the method reduces the camera length, so they can get away with a smaller sensor.
The Bayer filter (or similar) would be omitted from the luma sensor so that every photodetector measures brightness directly, with no detail lost passing through the filter. That gives increased clarity on luma, and they can interpolate the chroma sensor's color values up to the luma sensor's higher resolution. What I said earlier about saving space by keeping chroma in 8bpc isn't correct; the chroma sensor still needs to record as wide a range of color values as possible. It should be possible to avoid multiple exposures, though, because the color and luma paths would be filtered separately. The chroma sensor needs some way to even out the light intensity so its detectors can accurately measure color values without regard to how intensity is measured, since that's the other sensor's job.
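The interpolate-chroma-to-match-luma step might look something like this. The resolutions and the nearest-neighbour upsampling are my assumptions for the sketch, not anything from the patent:

```python
import numpy as np

def combine_luma_chroma(luma, chroma):
    """Upsample a quarter-resolution chroma image (H/2 x W/2 x 2, e.g. Cb/Cr)
    to the full-resolution luma grid (H x W) by nearest-neighbour repetition,
    then stack luma and chroma into one H x W x 3 Y/Cb/Cr image."""
    up = np.repeat(np.repeat(chroma, 2, axis=0), 2, axis=1)
    return np.dstack([luma, up])

luma = np.random.rand(4, 4)        # full-resolution brightness, no mosaic
chroma = np.random.rand(2, 2, 2)   # lower-resolution color components
img = combine_luma_chroma(luma, chroma)
print(img.shape)                   # (4, 4, 3)
```

A real pipeline would use a smarter interpolation (bilinear or edge-aware), but the point stands: full detail comes from the unfiltered luma sensor, and chroma only needs to be stretched to match it, much like 4:2:0 chroma subsampling in video.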
The ARRI Alexa sensor does this with a dual-gain setup:
This gives them HDR output:
"Dual Gain Architecture simultaneously provides two separate read-out paths from each pixel with different amplification. The first path contains the regular, highly amplified signal. The second path contains a signal with lower amplification to capture the information that is clipped in the first path. Both paths feed into the cameras A/D converters, delivering a 14 bit image for each path. These images are then combined into a single 16 bit high dynamic range image. This method enhances low light performance and prevents the highlights from being clipped, thereby significantly extending the dynamic range of the image."
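The merge described in that quote can be sketched roughly like this. The gain ratio and the blend rule (fall back to the scaled low-gain path wherever the high-gain path clips) are simplifications I chose; ARRI's actual combination is more sophisticated:

```python
import numpy as np

GAIN_RATIO = 4          # assumed amplification ratio between the two paths
CLIP_14BIT = 2**14 - 1  # 16383, saturation level of each 14-bit A/D path

def merge_dual_gain(high_gain, low_gain):
    """Combine two readouts of the same pixels into one HDR value:
    keep the cleaner high-gain sample where it's valid, and substitute
    the low-gain sample (rescaled by the gain ratio) where it clipped."""
    high_gain = np.asarray(high_gain, dtype=np.float64)
    low_gain = np.asarray(low_gain, dtype=np.float64)
    clipped = high_gain >= CLIP_14BIT
    return np.where(clipped, low_gain * GAIN_RATIO, high_gain)

# A bright pixel saturates the high-gain path but not the low-gain one:
print(merge_dual_gain([16383], [8000]))   # [32000.]
# A normally exposed pixel keeps the low-noise high-gain reading:
print(merge_dual_gain([1200], [300]))     # [1200.]
```

Because the merged values can exceed the 14-bit ceiling, the result needs a wider container, which is why the combined output lands in 16 bits.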
Kodak had a modification to the Bayer filter that adds transparent (panchromatic) cells, but there's still the issue of crosstalk to deal with:
If a phone could direct the light sideways without using a mirror, that opens up a lot of possibilities for multiple lenses and optical zoom, and sensor thickness is no longer a problem. Ideally the lens would sit on the top or side of the phone. It should fit that way, since the lens diameter and sensor width are smaller than the phone's width, but it would probably be too weird to shoot with. The Lytro is sort of like this, and that grip would be more secure to hold, but the Lytro still has its display facing you.
Perhaps they could construct the luma sensor to act like a mirror, so light hits the luma sensor directly, has its intensity measured, and is then reflected onto a chroma sensor. As soon as the luma sensor knows how bright each pixel is, it would adjust the per-pixel gain of the chroma sensor to most accurately measure the color values across the whole image. That might mean too much signal loss for the chroma sensor, but then maybe the luma sensor doesn't just reflect the light; it could amplify it.
Whatever the best way is, they're the ones getting paid for it, so let them figure it out.