Apple concept would improve iPhone camera quality with use of mirrors

Comments

  • Reply 41 of 77
    Quote:

    Originally Posted by Fuzzypaws View Post

    Let them make the phone thicker. It's not like the 5 or 5s were too thick; you could go back to that thickness and fill the extra space inside the phone with more battery. "Make the phone as thin as possible" is a weird and impractical fetish.

    Another random thought would be to simply double the diameter of the lens for the phone camera. Is there some /reason/ why phone cameras have to have these standardized tiny lenses? Make a bigger sensor behind the bigger lens with more light to work with.

  • Reply 42 of 77
    mcdark wrote: »
    But entirely within the Apple ethos; style over substance.

    Can you explain what you mean?
  • Reply 43 of 77
    MacPro Posts: 19,727 member
    philboogie wrote: »
    As are we; we discussed this tech a few years back on this site, with a couple of people. I'll never find the thread, but I do remember it.

    Damn, and we didn't patent it? :\

    Seriously though, I was looking at the range of other areas all three were involved in.
  • Reply 44 of 77
    mpantone Posts: 2,040 member
    Quote:

    Originally Posted by Fuzzypaws View Post

    Another random thought would be to simply double the diameter of the lens for the phone camera. Is there some /reason/ why phone cameras have to have these standardized tiny lenses? Make a bigger sensor behind the bigger lens with more light to work with.




    But then the working distance between the lens element and the imaging plane would have to increase.
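
    A rough way to see why, using textbook relations rather than anything in the patent: the f-number is N = f/D (focal length over aperture diameter), and for a fixed angle of view \theta the focal length f scales with the sensor diagonal d, since

        \tan(\theta / 2) = d / (2f) \;\Rightarrow\; f \propto d

    Double the lens diameter D and the sensor together at the same f-number and field of view, and f doubles as well, which roughly doubles the lens-to-sensor distance the camera module has to accommodate.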

  • Reply 45 of 77
    MacPro Posts: 19,727 member
    mpantone wrote: »

    But then the working distance between the lens element and the imaging plane would have to increase.

    I wonder if using Fresnel lens technology might help? Or do Apple already use this?
  • Reply 46 of 77
    mpantone Posts: 2,040 member
    Quote:
    Originally Posted by digitalclips View Post



    I wonder if using Fresnel lens technology might help? Or do Apple already use this?



    There are no Fresnel lenses in the current iPhone optics.

    I'm not sure the image quality from a Fresnel lens is good enough. There may also be challenges in manufacturing Fresnel lenses at the sizes needed for smartphone camera optics, at least affordably and at high enough quality.

    My guess is that if any Fresnel lens were remotely viable for use in a smartphone, the component has already been tested in Cupertino and did not make the cut. I'd also guess that camera module manufacturers have tried Fresnel lenses in various prototype designs.

  • Reply 47 of 77
    Quote:

    Originally Posted by McDark View Post

    Yeah, haven't some compact cameras had this kind of setup for years? Sony cameras pioneered it IIRC, and I had a Samsung one that had much the same. It was crap and didn't last long, but it was still a good idea in principle: a zoom lens without a protruding telescopic lens. If they can make it work in a phone, great. But if it comes down to this or shaving another 0.00001 mm off the thickness, I'm sure we know which way Apple will go.

    The basic periscope arrangement is interesting and unusual, but it isn't really new. Patents are not well understood and are rarely reported accurately; the claims are the important part. What does seem to be the "invention" here is the use of a moveable mirror to accomplish the image stabilization. Every other system I have seen moves either lens elements or the sensor. This would seem to work well with the periscope arrangement, as mirrors are normally not part of the image-capture optics.
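
    A toy sketch of that idea in Python, purely illustrative and not anything from the patent: a mirror tilt of theta deflects the reflected line of sight by 2*theta, so to first order a measured camera rotation is cancelled by tilting the fold mirror half as far the other way (the sign and axis depend on the actual geometry).

# Toy illustration of mirror-based image stabilization -- not Apple's algorithm.
# A mirror tilt of theta deflects the reflected beam by 2*theta, so cancelling a
# camera rotation of phi needs a mirror tilt of roughly -phi/2 (small angles).

def stabilize_step(mirror_tilt_deg, gyro_rate_dps, dt_s):
    """Return the updated fold-mirror tilt after one control step.

    mirror_tilt_deg: current mirror tilt in degrees
    gyro_rate_dps:   measured camera angular rate about the relevant axis, deg/s
    dt_s:            time since the last update, seconds
    """
    camera_rotation_deg = gyro_rate_dps * dt_s          # rotation accumulated this step
    return mirror_tilt_deg - camera_rotation_deg / 2.0  # mirror moves half the beam angle

# Example: a 1.2 deg/s shake sampled at 1 kHz nudges the mirror by 0.0006 degrees per step.
tilt = 0.0
tilt = stabilize_step(tilt, gyro_rate_dps=1.2, dt_s=0.001)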

  • Reply 48 of 77
    ahmlco Posts: 432 member
    Quote:

    Originally Posted by rtdunham View Post



    As digital cameras got smaller and smaller I remember seeing one that incorporated a zoom lens using a mirror so that the zooming occurred not front to rear but side to side, internally. An extremely slim camera. Maybe someone here can remember what that model was and provide more details. I think we'd find them very interesting. I long for a zoom in an iPhone and have often thought back to that camera and wondered if Apple could do something like that.



    Nikon Coolpix S1

  • Reply 49 of 77
    wizard69 Posts: 13,377 member
    zoetmb wrote: »
    The iPhone is so light and easy to hold that I really don't think that vibration or user shake is much of an issue.  Remember that image stabilization only works to rectify user movement, not subject movement.    
    The heavier the better. I used to carry around an RZ doing outdoor photography; an amazing camera in its day! It was certainly heavy, but that allowed pulling off some sharp handheld photos.
    It's a far different issue in heavy DSLRs using large and heavy zoom lenses.   It always boggles my mind that I can take very stable videos on my iPhone, but when I hand hold video on my Nikon D800, it's usually all "shaky-cam" if I'm making long shots, like shooting someone performing a song.   
    There are many factors involved; lens focal length, sensor resolution and so on all affect how a camera records shake. Plus you need to hold it right.
    What would improve quality more than IS is the ability to shoot at faster shutter speeds because then you could freeze action.   That would generally require more sensitive sensors and/or faster lenses as well as the ability to control ISO and shutter speed, if not f-stop as well.  
    Maybe one day we will get those features in a cell phone camera. Interestingly, folded optics like this would certainly afford Apple the ability to increase the focal length and maybe even increase light gathering. Of course, all of those beam splitters will result in a reduction of light transmission.

    This looks to be revolutionary for cell phone camera technology.
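
    For the faster-shutter point quoted above, the standard exposure bookkeeping (generic numbers, nothing iPhone-specific) is

        \text{exposure} \propto t \cdot \text{ISO} / N^2

    so halving the shutter time t at the same exposure needs either double the ISO (one stop more sensor sensitivity) or an aperture one full stop faster, e.g. going from f/2.2 to roughly f/1.6.
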
  • Reply 50 of 77
    wizard69 Posts: 13,377 member
    supersheep wrote: »
    Looks like an amazing idea. However, I don't see it built into an iPhone soon. Apple would have to either make the iPhone thicker to make room for the 90°-flipped sensor, or make the sensor smaller.
    This could be a problem but I can see Apple making the next iPhone a wedge. It is a form factor that has worked well over the years for other products. It might actually allow Apple to improve low light sensitivity.

    By adding a zoom lens you lose light. Basic principle, true for every zoom lens versus a fixed one. I see a whole chain of problems in this patent. On the other hand though, it's Apple.
    You also lose light at the beam splitters. You may gain some of it back through larger optical elements. In the end, though, optical zoom can greatly enhance photographic quality if your only other option is to crop existing pixels: zooming in optically is like cropping in camera while keeping the full resolution of the sensor (see the quick arithmetic after this post).
    They have some pretty smart people working there and have a way of figuring out pretty clever solutions to problems of this kind.

    Like any other engineering task, there are trade-offs. Done right, optical zoom would be a big win.
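
    Quick arithmetic on that cropping point, using an 8 MP sensor as a made-up example: a 3x digital crop keeps only the central ninth of the frame,

        8\ \text{MP} / 3^2 \approx 0.9\ \text{MP}

    while a true 3x optical zoom still delivers the full 8 MP. The retained pixel count falls off as 1/\text{zoom}^2 for a digital crop, which is why even a modest optical zoom beats cropping.
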
  • Reply 51 of 77

    Sure, those splitters might lose light. But as far as I see, they use three image sensors. One for each color. That might help quite a bit. 3CCD was a thing in video cameras a decade ago, and as far as I remember, it was quite good. By reducing the sensors to one color, each light-sensing pixel would be bigger (doesn't have to share the space with two other colors). This might compensate for the lost light.

     

    I would be thrilled if they could pull it off. A greater focal length (something around eqv. 50mm-90mm) would make a huge difference for taking beautiful portrait pictures.

  • Reply 52 of 77

    Or they could just add another 45° mirror so that the sensor can again lie on the flat surface of the phone's body. But I just saw that there is a colour-splitting prism at the bottom that looks like it projects light onto multiple sensors. I'm no good at diagrams, but it looks like that.

  • Reply 53 of 77

    The illustration looks very similar to the camera module in Asus' ZenFone Zoom: http://androidheadlines.com/2015/02/asus-zenfone-zoom-uses-hoya-cube-lens-assembly.html

  • Reply 54 of 77
    wizard69 Posts: 13,377 member
    supersheep wrote: »
    Sure, those splitters might lose light. But as far as I see, they use three image sensors. One for each color.
    Maybe! It is hard to tell. However they would need a sensor for focusing, shake compensation and exposure measurement. Often that is built into the main sensor.
    That might help quite a bit. 3CCD was a thing in video cameras a decade ago, and as far as I remember, it was quite good. By reducing the sensors to one color, each light-sensing pixel would be bigger (doesn't have to share the space with two other colors). This might compensate for the lost light.
    I suppose that is possible. My feeling right now is that the folded light path might be more efficient, so it might make up for losses. Looking back on some of the other recent rumors, they might actually have two sensors: one for luminance and one for color.
    I would be thrilled if they could pull it off. A greater focal length (something around eqv. 50mm-90mm) would make a huge difference for taking beautiful portrait pictures.
    Most certainly! I'm hoping that they can hit 3X giving us somewhere around 35-100mm equivalent. That range covers a lot of useful photography.
  • Reply 55 of 77
    wizard69 wrote: »
    I'm hoping that they can hit 3X giving us somewhere around 35-100mm equivalent. That range covers a lot of useful photography.

    Preferably retaining the f/2.2 at the long end as well.
  • Reply 56 of 77
    muppetry Posts: 3,331 member
    Quote:

    Originally Posted by wizard69 View Post

     
    Quote:

    Originally Posted by supersheep View Post



    Sure, those splitters might lose light. But as far as I see, they use three image sensors. One for each color.


    Maybe! It is hard to tell. However they would need a sensor for focusing, shake compensation and exposure measurement. Often that is built into the main sensor.

    Quote:

    That might help quite a bit. 3CCD was a thing in video cameras a decade ago, and as far as I remember, it was quite good. By reducing the sensors to one color, each light-sensing pixel would be bigger (doesn't have to share the space with two other colors). This might compensate for the lost light.


    I suppose that is possible. My feeling right now is that the folded light path might be more efficient, so it might make up for losses. Looking back on some of the other recent rumors, they might actually have two sensors: one for luminance and one for color.

    Quote:

    I would be thrilled if they could pull it off. A greater focal length (something around eqv. 50mm-90mm) would make a huge difference for taking beautiful portrait pictures.


    Most certainly! I'm hoping that they can hit 3X giving us somewhere around 35-100mm equivalent. That range covers a lot of useful photography.



    The patent certainly indicates separate sensors for each color, and so the beam splitter is not losing light - simply distributing it between the sensors. And since light intensity is not wasted on sensors that filter it out (e.g. red light falling on a blue sensor element) as happens in a conventional single-sensor system, this design is theoretically more light efficient.
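
    As a rough budget, pretending the scene delivers equal parts red, green and blue and ignoring real filter curves and splitter losses: a Bayer pixel sits behind a filter that passes only its own colour band, so it records roughly

        \tfrac{1}{3} I

    of the incident intensity I, while each monochrome sensor in the split-path design receives essentially all of its own band, i.e. about 3x the light per pixel. That matches the "about three times greater" figure in the patent language quoted in the next post.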

  • Reply 57 of 77
    Marvin Posts: 15,322 moderator
    melgross wrote: »
    Dang! This is what I've been saying here for years. The only difference is that I'd like to see the sensor flat against the back of the camera, with one more mirror allowing that additional angle. That would allow a bigger sensor.

    The prism shown in the diagram would help with the sensor. They describe this in another patent:

    http://patents.justia.com/patent/20130063629

    Splitting the light with dichroic or trichroic prisms gets rid of the Bayer filter ( http://en.wikipedia.org/wiki/Bayer_filter ):

    "Three image sensors are also provided, where each sensor is positioned to receive a respective one of the color components that emerge from the respective face of the cube. The image sensors may be clear pixel array sensors that have no color filter array or color separation capabilities, making them relatively inexpensive yet more accurate (due to no color interpolation or demosaicing required). In such a color splitting architecture, the amount of light incident on each pixel is about three times greater than in a conventional Bayer-pattern color filter array (CFA) sensor. Also, the color splitting cube may essentially avoid the color-crosstalk that is typical of traditional Bayer-pattern CFA sensors."

    The photodetector sensors are monochromatic so they normally have to filter light through color filters, which absorbs a lot of the light. Splitting the light instead would let them get as much as 3x the light. Much better low light sensitivity, fewer color artifacts. They mention allowing infrared to be captured this way too so they can have night vision (other phones have this feature). This can help with depth or just edge detection for better accuracy in low light.

    I'd like to see HDR video too. Other manufacturers alternate sensor gain at consecutive frames to do this. Given that the iPhone can do 240fps, maybe they can just do this too and manage at least 60fps HDR. If they switch it fast it would help minimize recombination artifacts. Another option would be that with the extra photodetectors, half can be set at a low gain, half at high gain and that way they can do HDR at 240fps and no alternating required.

    The setup would be:
    - have the 90 degree mirror passing light down the length of the phone, this lets them get rid of the camera bulge and put optical image stabilization in the smaller iPhone 6
    - have a series of movable lenses, allowing optical zoom
    - have a series of prisms that split the light colors onto at least 3 sensors allowing 3x more light than single sensor plus bayer filter
    - have the 3 sensors setup with dual gain photodetectors to allow HDR imaging in a single pass
    - optionally have a 4th IR sensor for added image processing

    Allowing shallow depth of field, optical zoom and HDR video would be decent improvements. I wonder if they'll leave it for the iPhone 7 or make this for the 6S. Fixing the bands and the camera bump plus improving the camera quality and having faster internals would push quite a few people to upgrade again.

    The mirror setup could also allow the front-facing camera to use the higher quality capture and zoom. It would need two mirrors for the front-facing one and it means you could only use one camera at a time but I'm not sure if any apps use both at once or would ever need to. It means high quality pictures for the selfie generation. Oooh night vision selfies. Celebs can tweet their 'makeup-free' asleep face and undercover sex tapes.
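
    A rough sketch of the dual-gain recombination idea above, in Python; the names, gain ratio and blending rule are all made up for illustration, not taken from the patent or any Apple pipeline.

# Illustrative only: merge interleaved low-gain and high-gain photosite readouts into
# one HDR frame. Assumes half the photosites were read at low gain (protects highlights)
# and half at high gain (lifts shadows), already interpolated to two full-size frames.
import numpy as np

def merge_dual_gain(low_gain, high_gain, gain_ratio=4.0, clip=0.95):
    """Blend two simultaneous exposures of the same scene.

    low_gain, high_gain: float arrays scaled to [0, 1]
    gain_ratio:          how much more sensitive the high-gain photosites are
    clip:                level above which high-gain pixels are treated as blown out
    """
    # Bring the high-gain frame onto the low-gain scale.
    high_scaled = high_gain / gain_ratio
    # Favour high-gain data where it is not clipped (cleaner shadows), low-gain elsewhere.
    weight = np.clip((clip - high_gain) / clip, 0.0, 1.0)
    return weight * high_scaled + (1.0 - weight) * low_gain

# Example with random data standing in for the two readouts of one frame.
lo = np.random.rand(8, 8)
hi = np.clip(lo * 4.0 + np.random.normal(0.0, 0.01, (8, 8)), 0.0, 1.0)
hdr = merge_dual_gain(lo, hi)
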
  • Reply 58 of 77
    iqatedo Posts: 1,823 member

    So much here to ponder:

     

    Quote:

    Originally Posted by Marvin View Post





    The prism shown in the diagram would help with the sensor. They describe this in another patent:



    http://patents.justia.com/patent/20130063629



    Splitting the light with dichroic or trichroic prisms gets rid of the Bayer filter ( http://en.wikipedia.org/wiki/Bayer_filter ):



    "Three image sensors are also provided, where each sensor is positioned to receive a respective one of the color components that emerge from the respective face of the cube. The image sensors may be clear pixel array sensors that have no color filter array or color separation capabilities, making them relatively inexpensive yet more accurate (due to no color interpolation or demosaicing required). In such a color splitting architecture, the amount of light incident on each pixel is about three times greater than in a conventional Bayer-pattern color filter array (CFA) sensor. Also, the color splitting cube may essentially avoid the color-crosstalk that is typical of traditional Bayer-pattern CFA sensors."



    The photodetector sensors are monochromatic so they normally have to filter light through color filters, which absorbs a lot of the light. Splitting the light instead would let them get as much as 3x the light. Much better low light sensitivity, fewer color artifacts. They mention allowing infrared to be captured this way too so they can have night vision (other phones have this feature). This can help with depth or just edge detection for better accuracy in low light.



    I'd like to see HDR video too. Other manufacturers alternate sensor gain at consecutive frames to do this. Given that the iPhone can do 240fps, maybe they can just do this too and manage at least 60fps HDR. If they switch it fast it would help minimize recombination artifacts. Another option would be that with the extra photodetectors, half can be set at a low gain, half at high gain and that way they can do HDR at 240fps and no alternating required.



    The setup would be:

    - have the 90 degree mirror passing light down the length of the phone, this lets them get rid of the camera bulge and put optical image stabilization in the smaller iPhone 6

    - have a series of movable lenses, allowing optical zoom

    - have a series of prisms that split the light colors onto at least 3 sensors allowing 3x more light than single sensor plus bayer filter

    - have the 3 sensors setup with dual gain photodetectors to allow HDR imaging in a single pass

    - optionally have a 4th IR sensor for added image processing



    Allowing shallow depth of field, optical zoom and HDR video would be decent improvements. I wonder if they'll leave it for the iPhone 7 or make this for the 6S. Fixing the bands and the camera bump plus improving the camera quality and having faster internals would push quite a few people to upgrade again.



    The mirror setup could also allow the front-facing camera to use the higher quality capture and zoom. It would need two mirrors for the front-facing one and it means you could only use one camera at a time but I'm not sure if any apps use both at once or would ever need to. It means high quality pictures for the selfie generation. Oooh night vision selfies. Celebs can tweet their 'makeup-free' asleep face and undercover sex tapes.

     

    As for your final point, MEMS mirrors are increasingly popular and might be a match for this application (swapping out optical paths). I'm still bullish on spectrometry in the long run for biological analyses.

  • Reply 59 of 77
    wizard69 Posts: 13,377 member
    philboogie wrote: »
    Preferably retaining the f/2.2 at the long end as well.

    That would be nice, but I'd be shocked if they could produce a constant-aperture zoom in such a small space. The tendency in most zooms is for the f-ratio to increase toward the long end, so I fear the loss of a couple of stops.
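
    To put a number on that fear, with made-up figures: if the entrance pupil diameter D stays fixed while the focal length stretches 3x, the f-number N = f/D triples too, and the loss in stops is

        2 \log_2(N_{\text{long}} / N_{\text{short}}) = 2 \log_2 3 \approx 3.2\ \text{stops}

    so f/2.2 at the wide end would become roughly f/6.6 at the long end unless the optics grow to compensate.
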
  • Reply 60 of 77
    wizard69 Posts: 13,377 member
    muppetry wrote: »

    The patent certainly indicates separate sensors for each color, and so the beam splitter is not losing light - simply distributing it between the sensors.
    Look again: only one is labeled as an image sensor. That doesn't mean the other sensors aren't contributing to the image, just that it isn't clear what they are all doing. I wouldn't be surprised at all to find at least one dedicated to stabilization.
    And since light intensity is not wasted on sensors that filter it out (e.g. red light falling on a blue sensor element) as happens in a conventional single-sensor system, this design is theoretically more light efficient.
    I suspect the design is one of those win-some, lose-some designs. A larger optical path may lead to more light transmitted, but with losses elsewhere.