Apple exploring split cameras for thinner iPhones, location-based security

Comments

  • Reply 41 of 65
    melgross Posts: 33,600 member
    tzeshan wrote: »
    You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

    You can't, really. While you can obtain a minor amount of noise reduction that way if the software has characterized the noise components of the sensor, that's complex, and takes processing time with specialized hardware. This is done on some DSLRs and some other cameras for long exposures. But what they do is take a blank picture first, evaluate the noise at that moment, and then take the actual picture and subtract out that noise, which varies with temperature, the number of pictures that have rapidly been taken, and other factors.
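    (As a side note, the dark-frame trick is simple enough to sketch. Below is a minimal Python illustration of the subtraction step; the array names and noise numbers are invented for the example and don't reflect any particular camera's pipeline.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 12-bit sensor: every pixel carries a fixed-pattern bias that
# a "blank" exposure (shutter closed) can characterize.
fixed_pattern = rng.normal(20.0, 5.0, size=(480, 640))
scene = rng.uniform(0.0, 3000.0, size=(480, 640))        # idealized scene signal

dark_frame = fixed_pattern + rng.normal(0.0, 2.0, size=scene.shape)  # blank picture
exposure = scene + fixed_pattern + rng.normal(0.0, 2.0, size=scene.shape)

# Dark-frame subtraction: remove the characterized noise from the real exposure.
corrected = np.clip(exposure - dark_frame, 0, 4095)

print(float(np.abs(exposure - scene).mean()))    # error before correction (~20)
print(float(np.abs(corrected - scene).mean()))   # error after correction (~2)
```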

    Otherwise, what does taking two pictures at the same exposure get you? It can get you blur from rapidly moving objects. How does it get you a thinner camera?
  • Reply 42 of 65
    melgross Posts: 33,600 member
    tzeshan wrote: »

    Use a thinner camera of course.  The trick of using two cameras is it effectively doubles the lens, that is the ISO.

    What? You are awfully glib about this. Making a thinner camera is the problem, not the solution. And using two cameras doesn't effectively double the ISO.
  • Reply 43 of 65
    melgross Posts: 33,600 member
    Marvin wrote: »
    The other benefit would be faster HDR because they don't need multiple exposures. Once you have the chroma and luma separate, the luma can be adjusted in post-production. This saves space too as the chroma can be stored in 8bpc and the luma in raw sensor data. It could also allow for HDR video as it only needs one frame. Luma, left, Chroma middle, combined right:


    The chroma sensor just needs to be sensitive enough to determine the correct color data. I wonder if it can do that kind of sampling by not relying solely on incoming light but by projecting a signal out like infrared light in a flash and then measuring the effect of that signal on the scene. Naturally the sky is bright enough anyway and too far for any outgoing light to affect but dark shadows could receive an outgoing signal that shows up what the colors are. The luma sensor would measure the incoming light as it has to try and recreate the light variation being seen.

    That's a pretty good explanation.
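    For anyone who wants to see the mechanics of the luma/chroma split described above, here is a toy Python sketch. It uses a simple Y'CbCr separation rather than true Lab, and random data standing in for a captured frame, so treat it as an illustration of the idea rather than Apple's method.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Rec. 601 luma/chroma split (a stand-in for the Lab split discussed above)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Recombine the adjusted luma with the untouched chroma planes."""
    r = y + 1.403 * cr
    b = y + 1.773 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(4, 4, 3))     # stand-in for a captured frame

y, cb, cr = rgb_to_ycbcr(img)
y_adjusted = np.clip(y * 1.5, 0.0, 1.0)         # "exposure" pushed in post, chroma untouched
out = ycbcr_to_rgb(y_adjusted, cb, cr)
```

    Because only the luma plane is touched, the color relationships are preserved while the exposure is pushed, which is the "adjust the luma in post-production" point being made above.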
  • Reply 44 of 65
    melgross Posts: 33,600 member
    Marvin wrote: »
    Two sensors don't require two lenses, though; that would add more complexity and cost than needed, as they'd have to ensure each pair of lenses was within a certain tolerance of the other and always focused the same way. It would be best to just reflect the light onto each sensor from a single lens setup.

    I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

    For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.

    They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).
  • Reply 45 of 65
    tzeshan wrote: »

    I am talking about the example you gave about the race car.  Since the car moved one car length, the picture taken will still not be clear.  Your use of two cameras does not solve this problem.

    Taking two pictures at different times vs two pictures at the same time is similar to the difference between a rolling shutter vs a global shutter. There's more motion distortion in a rolling shutter because it doesn't expose every pixel in the frame at the same time.
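    A toy simulation of that difference, with an invented scene (a bright bar that moves one column per row-readout interval), shows the characteristic skew of a rolling shutter:

```python
import numpy as np

def render_scene(t, height=8, width=16):
    """Synthetic scene: a vertical bright bar whose column position moves with time t."""
    frame = np.zeros((height, width))
    frame[:, int(t) % width] = 1.0
    return frame

height, width = 8, 16

# Global shutter: every row is sampled from the same instant.
global_frame = render_scene(t=0, height=height, width=width)

# Rolling shutter: each row is read out slightly later, so it sees the
# object at a later position -- the bar comes out slanted.
rolling_frame = np.zeros((height, width))
for row in range(height):
    rolling_frame[row] = render_scene(t=row, height=height, width=width)[row]

print(np.argmax(global_frame, axis=1))   # same column for every row
print(np.argmax(rolling_frame, axis=1))  # column drifts row by row (skew)
```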
  • Reply 46 of 65
    mpantone Posts: 2,155 member
    Quote:
    Originally Posted by melgross View Post



    I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

    Not quite. If you use one lens to project onto two different imaging planes, you need a mirror.

     

    The prism only comes into play if you need to project the image in its correct orientation. The direct image is backwards and upside down; that's how it's displayed on the ground glass focusing screen on view cameras, and thus the film plane. That's not an issue with either photochemical film or a digital imaging sensor. The image orientation is corrected in the post-capture processing.

     

    The introduction of a reflex mirror will flip the image vertically, but not laterally. That's what you see on a focusing screen on a waist-level viewfinder on some old medium-format cameras, as well as the classic camera obscura with a reflex mirror.

     

    The roof pentaprism is necessary to flip the image laterally for a WYSIWYG display. This is important for SLR cameras.

     

    The prism is convenient for humans who prefer to view the scene in the same orientation as the naked eye. It's not necessary for image capturing purposes, it's just a helpful tool for an optical viewfinder.

     

    A digital imaging sensor flips the image vertically and laterally before it is displayed on the screen. If there is an optical viewfinder on a point-and-shoot camera, it is not using the primary optics.

  • Reply 47 of 65
    Quote:

    Originally Posted by mpantone View Post

     

    No, it is not.

     

    The problem with using two cameras/lens groupings at the same time is parallax. The same event is being captured at the same moment, but the perspectives are slightly different.

     

    When you use the same camera at two different times, the perspectives are the same, the events are different.

     

    Whether or not parallax is a major issue depends on the spacing between the lenses, and the distance to the subject.

     

    If two people set up identical SLRs on tripods next to each other and take pictures of a mountain fifty miles away, parallax is not an issue. If they point their SLRs at a bumblebee pollinating a flower fifteen centimeters away, then perspective makes the two images vastly different.

     

    Let's say Joe has a camera focused on a tree and set to take two pictures at 15:14:07.01 and 15:14:07.03. Let's say a lightning bolt strikes the tree at 15:14:07.03. Joe has taken two different pictures.

     

    Parallax is an issue that is amplified at shorter working distances which is why the single-lens reflex system grew in popularity over the old twin-lens reflex systems and viewfinder cameras.

     

    Note that parallax isn't inherently a "bad" thing, but it often is undesirable in photography.

     

    Parallax is a key component of human vision (and many other animals), notably in the determination of depth perception. 


    Do we not have the technology to counteract parallax now? Seems like we should.
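    Compensation exists, but how much there is to compensate for depends heavily on subject distance, which is the point being made above. A back-of-the-envelope sketch using the usual thin-lens approximation (all numbers below are placeholders, not any real phone's specs):

```python
# Image-plane disparity between two lenses separated by a small baseline,
# for a few subject distances.  Values are illustrative only.
focal_length_mm = 4.0      # typical small-camera focal length
baseline_mm = 10.0         # spacing between the two lens axes
pixel_pitch_mm = 0.0012    # 1.2 micron pixels

for distance_m in (0.15, 1.0, 10.0, 80_000.0):   # bumblebee ... distant mountain
    disparity_mm = baseline_mm * focal_length_mm / (distance_m * 1000.0)
    print(f"{distance_m:>9.2f} m -> {disparity_mm / pixel_pitch_mm:8.2f} px of parallax")
```

    At bumblebee distances the two views differ by hundreds of pixels; at mountain distances the difference is far below one pixel, which is why no correction is needed in one case and a lot of scene-dependent work is needed in the other.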

  • Reply 48 of 65
    mpantone Posts: 2,155 member

    Well, the photography industry decided to switch to a single lens to avoid parallax. They did this in the Fifties and Sixties. When digital imaging sensors appeared on the market at consumer prices, photography companies gained more opportunities (e.g., cameras in cellphones).

     

    Parallax is a condition that happens with two or more lenses in different locations, which generates different perspectives. Parallax can be described by a set of mathematical formulas, as it is governed by the laws of physics.

     

    There are technologies to compensate for parallax in dual lens cameras, but they are not effective in all situations, and all of them end up adding more complexity and cost.

     

    Again, reading a photography primer or consulting a reputable source on optical physics (and some photography websites) covers the topic way better than it could be addressed here at AI.

  • Reply 49 of 65
    melgross Posts: 33,600 member
    mpantone wrote: »
    Not quite. If you use one lens to project onto two different imaging planes, you need a mirror.

    The prism only comes into play if you need to project the image in its correct orientation. The direct image is backwards and upside down; that's how it's displayed on the ground glass focusing screen on view cameras, and thus the film plane. That's not an issue with either photochemical film or a digital imaging sensor. The image orientation is corrected in the post-capture processing.

    The introduction of a reflex mirror will flip the image vertically, but not laterally. That's what you see on a focusing screen on a waist-level viewfinder on some old medium-format cameras, as well as the classic camera obscura with a reflex mirror.

    The roof pentaprism is necessary to flip the image laterally for a WYSIWYG display. This is important for SLR cameras.

    The prism is convenient for humans who prefer to view the scene in the same orientation as the naked eye. It's not necessary for image capturing purposes, it's just a helpful tool for an optical viewfinder.

    A digital imaging sensor flips the image vertically and laterally before it is displayed on the screen. If there is an optical viewfinder on a point-and-shoot camera, it is not using the primary optics.

    All of the cameras I've seen over the years that do this do it with a prism. Reflex mirrors have too much inefficiency. A prism allows much more light to be used. Here a half stop is critical. And I didn't say a pentaprism. I'm quite aware of the difference between an erecting prism and one that is not.
  • Reply 50 of 65
    melgross Posts: 33,600 member
    Do we not have the technology to counteract parallax now? Seems like we should.

    He seems to be working very hard to show that he understands the subject. But he doesn't seem to understand what Apple is trying to do. There is no reason to believe, in the first place, that two lenses would be used. Apple has said nothing about two cameras, which two lenses would imply.

    All we know now is that they are seemingly using LAB for the image mode, and will use thinner sensors enabled by this sensor split. Otherwise, nothing.
  • Reply 51 of 65
    mpantone Posts: 2,155 member

    Well, I understand a prism can be used to direct an image in multiple directions from a single lens.

     

    I still don't see any practical applications of this in a consumer camera and nothing in a smartphone either, but then again, I don't follow the camera industry on a daily basis. I have one old digital camera, and two film cameras, so I admit I'm not the most informed gadget person around.

  • Reply 52 of 65
    melgross wrote: »
    He seems to be working very hard to show that he understands the subject. But he doesn't seem to understand what Apple is trying to do. There is no reason to believe, in the first place, that two lenses would be used. Apple has said nothing about two cameras, which two lenses would imply.

    All we know now is that they are seemingly using LAB for the image mode, and will use thinner sensors enabled by this sensor split. Otherwise, nothing.

    Yes; splitting the sensors is a different thing and I can see how that would be more straightforward, in that it avoids any problems with parallax.
  • Reply 53 of 65
    wizard69 Posts: 13,377 member
    wings wrote: »
    This is great. We're well on our way for the phone to be as thin as a sheet of paper. And once we get there the only problems left will be (1) how to pick it up, and (2) how to avoid paper cuts.

    Actually, that isn't all that funny; too thin and the phones become difficult to handle. I don't see cell phones getting a lot thinner unless they are embedded into something else (or someone).
  • Reply 54 of 65
    mdriftmeyer Posts: 7,503 member
    Interesting trend in Apple's expansion of patents.

    In 2014, to date [via Latestpatents.com]

    Google Patents Granted for 2014, to date: 480

    Apple Patents Granted for 2014, to date: 420


    Note: Apple is going to have a larger patent year than Google

    Patents filed, to date, for 2014:

    Google: 278

    Apple: 479

    Apple's patent filings have only been expanding and accelerating.

    They will easily pass 2,000 patents granted in 2014 making it their largest single year of granted patents.

    Everyone knows they already get some of the most coveted patents in consumer products and manufacturing. It's just getting better.
  • Reply 55 of 65
    solipsismx Posts: 19,566 member
    Interesting trend in Apple's expansion of patents.

    In 2014, to date [via Latestpatents.com]

    Google Patents Granted for 2014, to date: 480

    Apple Patents Granted for 2014, to date: 420


    Note: Apple is going to have a larger patent year than Google

    Patents filed, to date, for 2014:

    Google: 278

    Apple: 479

    Apple's patent filings have only been expanding and accelerating.

    They will easily pass 2,000 patents granted in 2014 making it their largest single year of granted patents.

    Everyone knows they already get some of the most coveted patents in consumer products and manufacturing. It's just getting better.

    Is there a way to see the number of filed patents for any given year easily? I'm asking because if it's not too difficult I'd like to compile them and then check that against major Apple releases to see if we can potentially predict when they might be releasing their next new product category.
  • Reply 56 of 65
    bigpics Posts: 1,397 member
    Quote:
    Originally Posted by mpantone View Post

     

    No, it is not.

     

    The problem with using two cameras/lens groupings at the same time is parallax. The same event is being captured at the same moment, but the perspectives are slightly different.

     

    When you use the same camera at two different times, the perspectives are the same, the events are different.

     

    Whether or not parallax is a major issue depends on the spacing between the lenses, and the distance to the subject.

     

    If two people set up identical SLRs on tripods next to each other and take pictures of a mountain fifty miles away, parallax is not an issue. If they point their SLRs at a bumblebee pollinating a flower fifteen centimeters away, then perspective makes the two images vastly different.

     

    Let's say Joe has a camera focused on a tree and set to take two pictures at 15:14:07.01 and 15:14:07.03. Let's say a lightning bolt strikes the tree at 15:14:07.03. Joe has taken two different pictures.

     

    Parallax is an issue that is amplified at shorter working distances which is why the single-lens reflex system grew in popularity over the old twin-lens reflex systems and viewfinder cameras.

     

    Note that parallax isn't inherently a "bad" thing, but it often is undesirable in photography.

     

    Parallax is a key component of human vision (and many other animals), notably in the determination of depth perception. 


     

    Yaay. Actual photography and optics knowledge.  All too rare in these discussions on AI.  What are you, old?  :-D

     

    Quote:


    Originally Posted by mpantone View Post

     

    Ah, at this point we introduce the notion of exposure time and field of view. This is best covered by a photography textbook, but we'll try to explain it briefly here.

     

    Yes, the car is moving at 83.33 m/s. Let's say it's a nice sunny day at the track and my camera's shutter is set at 1/4000th second. During that brief moment, the car will have travelled about 2 centimeters. Is that too long to get a sharp shot? Well, how far am I from the subject? Is the car ten feet away or on the other side of the track? At this point, the sharpness is determined by the movement of the image relative to the field of view, not the actual speed.

     

    Let's introduce another concept, the notion of tracking. Let's say the cars are far away, but I'm using a tripod head that swivels smoothly, whether it be a ballhead or pan-and-tilt head really doesn't matter for this example. If I attempt to keep the car in the center of the frame, the motion relative to the field of view is far less.

     

    At this point, I suggest you read a basic primer on photography then go take your camera out and shoot some action scenes. It could be cars, baseball pitchers, flying seagulls, bumblebees, etc., it doesn't really matter.


     

    Shhhh.  Your tone's getting a little pedantic here even though you're still speaking truth. And truth into the wind. :-D



    Remember:  Most people whose grounding in photography began in the phonecam era - including (amazingly?) many of the reviewers for even big sites - still don't even know the OPTICAL difference between optical and digital zoom, and here you're asking them to grasp how angular velocity interacts with arc distance traversed as a function of subject distance.



    Ooooooo.  Math + optics is hard.
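    For what it's worth, the arithmetic being waved at above is short. A rough sketch of the image-plane smear calculation, using a small-angle approximation and placeholder numbers:

```python
# How far a moving subject smears across the sensor during one exposure,
# for a fixed shutter speed but different subject distances.
# All numbers are placeholders chosen only to show the relationship.
speed_mps = 83.33            # ~300 km/h race car
shutter_s = 1.0 / 4000.0
focal_length_mm = 50.0
pixel_pitch_mm = 0.005

travel_m = speed_mps * shutter_s   # ~2 cm of real-world motion during the exposure

for distance_m in (3.0, 30.0, 300.0):
    # Small-angle approximation: image displacement = focal length * (motion / distance).
    smear_mm = focal_length_mm * travel_m / distance_m
    print(f"at {distance_m:5.0f} m the image smears {smear_mm / pixel_pitch_mm:6.1f} px")
```

    Panning with the subject (the tracking point in the quote) subtracts most of that relative motion, which is why tracked shots can stay sharp at otherwise hopeless combinations of speed and distance.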

     

    Quote:
    Originally Posted by melgross View Post



    I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.



    For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.



    They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).

     

    I've read articles arguing this is impractical in terms of optical losses and other arcane stuff I don't really remember.  But let's pooh-pooh the pooh-poohers and hope you are right, Msr. Gaberator.



    Despite all the clever digital innovations - new ways of manipulating visual data that've never existed before - to me, knowing the power of focal length, the holy grail of phone cam design will be achieving optical zoom.

  • Reply 57 of 65
    Marvin Posts: 15,443 moderator
    melgross wrote: »
    I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

    For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.

    They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).

    Yeah, it's not easy to measure the same signal in two different ways. If they used a trichroic prism and a separate sensor per color component, like some professional video cameras, they could get rid of the Bayer filter. That reduces thickness as well as gives increased color quality, no mosaic patterns, and better light sensitivity, but because the color and brightness processes would use the same sensors, they're limited in how they can filter the incoming light to get the best output for both. This really would depend on being able to send the light sideways, and if you can do that, the sensor thickness wouldn't be a problem any more.

    The patent details are here:

    http://www.google.com/patents/US20120162465

    Their motivation with the two sensors would be making the device thinner. This method helps reduce the camera length so they could get away with a smaller sensor.

    The bayer filter (or similar) would be excluded for the luma sensor so that all photodetectors measure brightness and they aren't losing detail passing it through the filter. This gives them increased clarity on luma and they can interpolate color values on the chroma sensor to match the increased resolution of the luma sensor. What I said earlier about saving space with the chroma in 8bpc isn't correct, the chroma sensor still needs to record as wide a range of color values as possible. It should be possible to avoid multiple exposures though because the color process and luma would be filtered separately. The chroma sensor needs to have a way to even the light intensity to allow the detectors to accurately measure the color values with no regard to how those detectors actually measure the intensity as that's the job of the other sensor.

    The Arri Alexa sensor does this with a dual gain setup:

    http://www.arri.com/camera/digital_cameras/technology/arri_imaging_technology/alexas_sensor/

    This gives them HDR output:

    "Dual Gain Architecture simultaneously provides two separate read-out paths from each pixel with different amplification. The first path contains the regular, highly amplified signal. The second path contains a signal with lower amplification to capture the information that is clipped in the first path. Both paths feed into the camera?s A/D converters, delivering a 14 bit image for each path. These images are then combined into a single 16 bit high dynamic range image. This method enhances low light performance and prevents the highlights from being clipped, thereby significantly extending the dynamic range of the image."

    Kodak had a modification to the bayer filter adding transparent cells but there's still the issue of crosstalk to deal with:

    http://hothardware.com/News/Kodak-s-New-Panchromatic-Pixel-Blaster-/

    If a phone could direct the light sideways without using a mirror, it opens up a lot of possibilities for multiple lenses and optical zoom and sensor thickness is no longer a problem. I suppose ideally the lens would be on the top or side of the phone. It should fit that way as the lens diameter and sensor widths are lower than the phone width but it would probably be too weird to shoot with. Lytro is sort of like this and it would be more secure to hold but the Lytro still has the display facing you.

    Perhaps they can construct a luma sensor to act like a mirror so that light hits the luma sensor directly, measures the intensity and then reflects it onto a chroma sensor. As soon as the luma sensor knows how bright each pixel is, it would adjust the per pixel gain of the chroma sensor to most accurately measure the color values across the whole image. Perhaps that would be too much signal loss for the chroma sensor but then maybe the luma sensor doesn't just reflect the light; it can amplify it.

    Whatever the best way is, they're the ones getting paid for it so let them figure it out. ;)
  • Reply 58 of 65
    Marvin wrote: »
    melgross wrote: »
    I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

    For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.

    They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).

    Yeah, it's not easy to measure the same signal in two different ways. If they used a trichroic prism and a separate sensor per color component, like some professional video cameras, they could get rid of the Bayer filter. That reduces thickness as well as gives increased color quality, no mosaic patterns, and better light sensitivity, but because the color and brightness processes would use the same sensors, they're limited in how they can filter the incoming light to get the best output for both. This really would depend on being able to send the light sideways, and if you can do that, the sensor thickness wouldn't be a problem any more.

    The patent details are here:

    http://www.google.com/patents/US20120162465

    Their motivation with the two sensors would be making the device thinner. This method helps reduce the camera length so they could get away with a smaller sensor.

    The bayer filter (or similar) would be excluded for the luma sensor so that all photodetectors measure brightness and they aren't losing detail passing it through the filter. This gives them increased clarity on luma and they can interpolate color values on the chroma sensor to match the increased resolution of the luma sensor. What I said earlier about saving space with the chroma in 8bpc isn't correct, the chroma sensor still needs to record as wide a range of color values as possible. It should be possible to avoid multiple exposures though because the color process and luma would be filtered separately. The chroma sensor needs to have a way to even the light intensity to allow the detectors to accurately measure the color values with no regard to how those detectors actually measure the intensity as that's the job of the other sensor.

    The Arri Alexa sensor does this with a dual gain setup:

    http://www.arri.com/camera/digital_cameras/technology/arri_imaging_technology/alexas_sensor/

    This gives them HDR output:

    "Dual Gain Architecture simultaneously provides two separate read-out paths from each pixel with different amplification. The first path contains the regular, highly amplified signal. The second path contains a signal with lower amplification to capture the information that is clipped in the first path. Both paths feed into the camera?s A/D converters, delivering a 14 bit image for each path. These images are then combined into a single 16 bit high dynamic range image. This method enhances low light performance and prevents the highlights from being clipped, thereby significantly extending the dynamic range of the image."

    Kodak had a modification to the bayer filter adding transparent cells but there's still the issue of crosstalk to deal with:

    http://hothardware.com/News/Kodak-s-New-Panchromatic-Pixel-Blaster-/

    If a phone could direct the light sideways without using a mirror, it opens up a lot of possibilities for multiple lenses and optical zoom and sensor thickness is no longer a problem. I suppose ideally the lens would be on the top or side of the phone. It should fit that way as the lens diameter and sensor widths are lower than the phone width but it would probably be too weird to shoot with. Lytro is sort of like this and it would be more secure to hold but the Lytro still has the display facing you.

    Perhaps they can construct a luma sensor to act like a mirror so that light hits the luma sensor directly, measures the intensity and then reflects it onto a chroma sensor. As soon as the luma sensor knows how bright each pixel is, it would adjust the per pixel gain of the chroma sensor to most accurately measure the color values across the whole image. Perhaps that would be too much signal loss for the chroma sensor but then maybe the luma sensor doesn't just reflect the light; it can amplify it.

    Whatever the best way is, they're the ones getting paid for it so let them figure it out. ;)

    As a result of the discussion on another thread:

    ...

    From my reading, I've also learned that sapphire would also be useful in camera lenses, because it conducts all the UV-visible light spectrum and can be used to focus ...

    This could mean improved iPhone camera optics (zoom, autofocus, etc.)

    I'll see if I can find the references to optics of the sapphire.

    TC's cornering the market on sapphire manufacturing could yield a double-whammy -- silicon on sapphire (SoS) ICs for the iWatch and fantastic iPhone camera improvements:


    [VIDEO]
  • Reply 59 of 65
    Here's one reference to sapphire being used in camera lenses and focusing:

    Quote:

    Sapphire Lenses

    Series MPCX and MPCV optical grade sapphire lenses are manufactured from optical grade grown sapphire. Monocrystalline sapphire is slightly birefringent. The lenses are available in positive and negative configurations. Typical applications include:

    IR laser Beamsteering optics
    Imaging optics
    Chemical & erosion resistant front surface optics.
    Focusing Optics
    This series of sapphire lenses is currently in limited production. Please consult the factory for availability. Other diameters and focal lengths may be special ordered. We can also supply parts with anti-reflection and other thin film coatings.


    http://www.melleroptics.com/shopping/shopdisplayproducts.asp?id=19
  • Reply 60 of 65
    hmm Posts: 3,405 member
    Quote:
    Originally Posted by Marvin View Post





    The other benefit would be faster HDR because they don't need multiple exposures. Once you have the chroma and luma separate, the luma can be adjusted in post-production. This saves space too as the chroma can be stored in 8bpc and the luma in raw sensor data. It could also allow for HDR video as it only needs one frame. Luma, left, Chroma middle, combined right:







    The chroma sensor just needs to be sensitive enough to determine the correct color data. I wonder if it can do that kind of sampling by not relying solely on incoming light but by projecting a signal out like infrared light in a flash and then measuring the effect of that signal on the scene. Naturally the sky is bright enough anyway and too far for any outgoing light to affect but dark shadows could receive an outgoing signal that shows up what the colors are. The luma sensor would measure the incoming light as it has to try and recreate the light variation being seen.

     

     

    Are you aware of such a sensor technology? The current ones are in fact filtered in arrays. There are gaps between pixels that are reserved for additional electronics. It's also important to note that dynamic range is based upon a best-case scenario. At higher ISO ratings the processing is completely different due to a worse signal-to-noise ratio, which digitizes as noise. They clear parts of it up using various algorithms, most of which do some variation of binning pixels four to one (essentially rasterizing to a lower resolution), then resampling back up to 4x the area and applying a weighted contribution of that result onto the normally rasterized image. Some of them get significantly more complicated. If you're interested I know I have a few downloaded research papers on the topic somewhere on one of my drives, due to weird levels of obsessiveness with the subject (and having attempted to write a raw processor).
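    That bin-down/resample/blend step is easy to sketch in rough form; the following is a simplification with nearest-neighbour resampling and a synthetic gradient, not any vendor's actual algorithm:

```python
import numpy as np

def bin_blend_denoise(img, weight=0.5):
    """Average 2x2 blocks, scale the result back up, and blend it with the original."""
    h, w = img.shape
    binned = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))    # 2x2 binning
    upsampled = np.repeat(np.repeat(binned, 2, axis=0), 2, axis=1)  # nearest-neighbour resample
    return (1.0 - weight) * img + weight * upsampled

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))       # smooth test gradient
noisy = clean + rng.normal(0.0, 0.1, size=clean.shape)    # high-ISO style noise

denoised = bin_blend_denoise(noisy, weight=0.6)
print(float(np.abs(noisy - clean).mean()))      # noise before
print(float(np.abs(denoised - clean).mean()))   # noise after (lower, at some cost in detail)
```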

     

    Quote:

    Originally Posted by Marvin View Post

     



    Two sensors don't require two lenses, though; that would add more complexity and cost than needed, as they'd have to ensure each pair of lenses was within a certain tolerance of the other and always focused the same way. It would be best to just reflect the light onto each sensor from a single lens setup.

     

    How would you align two sensors on one lens? The gaps are there for other electronics. Foveon sensors relied on layering. I'm also unsure what you would gain from the chroma sensor idea. It's interesting and all, but you might still end up beyond 8 bits as captured. Otherwise you might still end up with an overly dithered output due to color banding. It's an extremely rough approximation (due to non-linear transformations), but if you open an image editing program and adjust contrast or color by color mode blending only, you'll see what I mean.
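    The banding point is easy to demonstrate with a contrived example (a synthetic gradient rather than real sensor data):

```python
import numpy as np

# A smooth gradient quantized to 8 bits, then contrast-stretched the way an
# aggressive edit would: the number of distinct levels collapses, which is
# what shows up visually as banding.
gradient = np.linspace(0.45, 0.55, 1000)                  # narrow tonal range
eight_bit = np.round(gradient * 255.0) / 255.0            # stored as 8 bpc

stretched = np.clip((eight_bit - 0.45) / 0.10, 0.0, 1.0)  # contrast pushed hard in post

print(len(np.unique(gradient)), "levels before quantization")
print(len(np.unique(stretched)), "levels left after the 8-bit round trip")
```

    The narrow tonal range survives quantization with only a couple dozen distinct levels, so any later contrast push just spreads those few levels apart, which is exactly what reads as banding.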
