Apple exploring split cameras for thinner iPhones, location-based security


Comments

  • Reply 61 of 65
    Marvin Posts: 15,326, moderator
    hmm wrote: »
    How you would you align two sensors on one lens?

    See the last post. Some cameras already use multiple sensors with single lenses. If they had to go with two lenses, they could sit so close together as to be like a figure of eight, so I don't suppose it would be all that big of a deal. It might affect really close-up shots.
    hmm wrote: »
    I'm also unsure what you would gain from the chroma sensor idea.

    Some advantages are described in the patent, but luma data is where most of the detail in an image is, so instead of diluting it through a color filter you let a monochrome sensor capture as much detail and sharpness as possible.

    There could be a better way to detect color than photodetectors with color filters in front. Tackling the two sets of data independently gives more freedom to deal with incoming light to get the best quality for both.
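
    As a rough illustration of what that split might buy, here's a minimal sketch (NumPy, with made-up function and array names) of merging a full-resolution luma capture from a monochrome sensor with lower-resolution chroma from a separate color sensor. A real pipeline would be far more sophisticated; this is just the idea:

```python
import numpy as np

def merge_luma_chroma(luma, cb_half, cr_half):
    """Hypothetical merge of a full-resolution monochrome (luma) capture
    with half-resolution chroma planes from a separate color sensor.

    luma             : (H, W) float array in [0, 1]
    cb_half, cr_half : (H/2, W/2) float arrays centered on 0
    """
    # Upsample the chroma planes to the luma resolution with simple
    # nearest-neighbour repetition (a real pipeline would interpolate).
    cb = np.repeat(np.repeat(cb_half, 2, axis=0), 2, axis=1)
    cr = np.repeat(np.repeat(cr_half, 2, axis=0), 2, axis=1)

    # Standard BT.601-style YCbCr -> RGB conversion; the fine detail
    # comes entirely from the unfiltered luma sensor.
    r = luma + 1.402 * cr
    g = luma - 0.344136 * cb - 0.714136 * cr
    b = luma + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```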

    They could of course just forget their obsession with thin and put in a giant sensor with a telescopic lens attached but that would be too easy.

    Some smartphones already do HDR video. They run at a high frame rate, alternate the exposure between even and odd frames, and then combine the pairs; from the description below, the recombination happens per frame before the footage reaches the encoder:

    http://www.anandtech.com/show/6747/htc-one-review/8


    [VIDEO]


    "In the normal 1080p video it’s totally saturated, in the HDR video we can see clouds and sky, and the street appears different as well. There are however some recombination artifacts visible during fast panning left and right. There’s also some odd hysteresis at the beginning as you begin shooting video where the brightness of the whole scene changes and adapts for a second. This ostensibly prevents the whole scene from quickly changing exposures, but at the beginning it’s a little weird to see that fade-in effect which comes with all HDR video. HTC enables video HDR by running its video capture at twice the framerate with an interlaced high and low exposure that then get computationally recombined into a single frame which gets sent off to the encoder."

    You can see the sky burns out in the normal one and you get that horrible exposure shift that makes you know it's from a phone or cheap SLR camera. The HDR one has a more film-like quality to it, likely because professional cameras shoot in HDR and don't have the exposure shifts. Ultimately, the aim would be to get the output as close to the expensive cameras as possible using whatever technique gives the best results.
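
    As a toy sketch of the kind of recombination AnandTech describes, something along these lines would fuse alternating bright/dark frames into HDR frames. The weighting scheme and exposure ratio here are invented for illustration and are not HTC's actual algorithm:

```python
import numpy as np

def fuse_alternating_exposures(frames, exposure_ratio=4.0):
    """Toy recombination of interlaced high/low exposures into HDR frames.

    frames: list of (H, W) float arrays in [0, 1]; even indices are the
            long (bright) exposure, odd indices the short (dark) one.
    exposure_ratio: assumed ratio between the two exposure times.
    """
    hdr_frames = []
    for long_exp, short_exp in zip(frames[0::2], frames[1::2]):
        # Bring the short exposure onto the same radiometric scale.
        short_scaled = short_exp * exposure_ratio
        # Trust the long exposure except where it approaches clipping,
        # then blend in the scaled short exposure to recover highlights.
        weight = np.clip((long_exp - 0.8) / 0.2, 0.0, 1.0)
        hdr_frames.append((1.0 - weight) * long_exp + weight * short_scaled)
    return hdr_frames
```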

  • Reply 62 of 65
    wizard69 Posts: 13,377, member
    You guys have an interesting discussion going on here as to how Apple could possibly implement this in a cell phone. The common approach of beam splitters, prisms, and the like would be an extremely tight fit in a cell phone. The issue of parallax is a real problem, but the question then becomes how close you have to be before that problem is no longer correctable with a little electronic processing. Mind you, Apple could potentially have the lenses only a few millimeters apart.
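
    For a back-of-the-envelope feel for how big that parallax shift actually is, here's a quick calculation with made-up numbers for baseline, focal length and pixel pitch. It suggests the offset is only a couple of pixels at normal distances but grows quickly for close subjects, which is presumably where the electronic correction earns its keep:

```python
def parallax_pixels(baseline_mm, focal_mm, subject_mm, pixel_um):
    """Approximate parallax shift in pixels between two side-by-side lenses.

    baseline_mm : lens separation (a few millimeters in this scenario)
    focal_mm    : lens focal length
    subject_mm  : distance to the subject
    pixel_um    : sensor pixel pitch in micrometers
    """
    disparity_mm = baseline_mm * focal_mm / subject_mm
    return disparity_mm / (pixel_um / 1000.0)

# Illustrative numbers only: 4 mm baseline, 4 mm focal length, 1.5 um pixels.
print(parallax_pixels(4.0, 4.0, 500.0, 1.5))   # subject at 0.5 m -> ~21 px
print(parallax_pixels(4.0, 4.0, 5000.0, 1.5))  # subject at 5 m   -> ~2 px
```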

    Whatever Apple does here, I suspect that electronic processing will play a big role in the success or failure of the technology. How that will happen depends heavily on the final optical and sensor arrangement. Frankly I'm not even convinced they will have anything ready for iPhone 6. I'm still of the mind that if I need a real camera I will reach for one; if Apple produces something in a cell phone that outdoes a dedicated camera, they could change my mind.

    In any event it will be interesting if Apple does deliver a vastly improved camera with the next iPhone.
  • Reply 63 of 65
    hmm Posts: 3,405, member
    Quote:
    Originally Posted by Marvin





    There could be a better way to detect color than photodetectors with color filters in front. Tackling the two sets of data independently gives more freedom to deal with incoming light to get the best quality for both.

     

    They would need to use a different type of sensor for the color information than for luminance. Assuming sensitivity is close enough across wavelengths and IR can be suitably filtered from the recorded spectrum, they could seemingly use unfiltered CMOS or CCD chips. The typical RGBG Bayer array is in fact made of filtered sensors, as the photosites can't otherwise differentiate between colors. I'm mostly skeptical of the idea that capturing chroma at a much lower depth will look almost as good in terms of perceived color.

    Right now with HDR it's based on a normalized 0 to 1 range where superbrights are computed based on where they no longer clip. I've noticed the trend toward this in smartphones too, and you'll start to see more interesting color correction applications pop up as a result if these manufacturers provide any access to uncompressed footage for editing purposes. I have wanted an uncompressed output from them for some time, or the ability to customize the way the video is processed as it goes. If they want to keep it fairly automated, it will be some attempt to map everything within range prior to compression. With several exposures that should be possible. I'll look for info on that HTC later, but unfortunately I don't think Apple would be so forthcoming. As of right now you can't get anything really raw from any of the iPhones.
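
    To make the filtered-versus-unfiltered point concrete, here's a toy simulation (NumPy, illustrative only) of what a Bayer-filtered sensor actually records compared with an unfiltered one. Each Bayer photosite keeps only one channel and throws the rest away, which is exactly the dilution a dedicated monochrome sensor would avoid:

```python
import numpy as np

def simulate_bayer_mosaic(rgb):
    """Toy simulation of what a Bayer (RGBG) filtered sensor records.

    rgb: (H, W, 3) float array. Each photosite keeps only the channel its
    color filter passes; the other two must later be interpolated back
    (demosaiced), which costs resolution and sharpness.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

def simulate_mono_sensor(rgb):
    """An unfiltered sensor sees roughly the full luminance at every site."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
```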



     

    Quote:




    They could of course just forget their obsession with thin and put in a giant sensor with a telescopic lens attached but that would be too easy.

    Some smartphones already do HDR video. They run at a high frame rate, alternate the exposure between even and odd frames, and then combine the pairs; from the description below, the recombination happens per frame before the footage reaches the encoder:

    http://www.anandtech.com/show/6747/htc-one-review/8

     

    "In the normal 1080p video it’s totally saturated, in the HDR video we can see clouds and sky, and the street appears different as well. There are however some recombination artifacts visible during fast panning left and right. There’s also some odd hysteresis at the beginning as you begin shooting video where the brightness of the whole scene changes and adapts for a second. This ostensibly prevents the whole scene from quickly changing exposures, but at the beginning it’s a little weird to see that fade-in effect which comes with all HDR video. HTC enables video HDR by running its video capture at twice the framerate with an interlaced high and low exposure that then get computationally recombined into a single frame which gets sent off to the encoder."

    You can see the sky burns out in the normal one and you get that horrible exposure shift that makes you know it's from a phone or cheap SLR camera. The HDR one has a more film-like quality to it, likely because professional cameras shoot in HDR and don't have the exposure shifts. Ultimately, the aim would be to get the output as close to the expensive cameras as possible using whatever technique gives the best results.



    In the first video I noted the sky. They would have needed to compress the range regardless to get that. It's just that if plenty of information is captured, they may be able to use a feathered range to make adjustments. If you're interested in seeing something like that in practice, Nuke and DaVinci Resolve have light versions; Resolve Lite is especially good. The point was that you can take a certain range, regardless of what the software calls it, and make a fairly discrete adjustment to that range because you can isolate it well. Otherwise you could have information for a sky which is still remapped out of range in the target gamut.

    I've never seen any evidence that professional cameras specifically shoot in HDR in the sense of multiple exposures. Their dynamic range has increased over the years; at this point some sensors for still capture claim as high as 14 stops. What you actually get varies depending on conditions, and video grading software does a better job of making use of that range. Having values beyond the typical clamp range isn't new, but more professional cameras have functional linear workflows that preserve the original data as far as possible by rasterizing previews and maintaining the original framebuffer values rather than gamma-corrected versions. I'm tired today so I may have messed up my explanation somewhere.
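
    Roughly what I mean by a feathered range adjustment, as a sketch on linear values (NumPy, invented parameter names, nothing like what Resolve actually does internally): values above a knee get pulled down, with a soft falloff below it so the correction blends instead of posterizing.

```python
import numpy as np

def feathered_highlight_pull(linear_img, knee=1.0, feather=0.5, gain=0.5):
    """Toy 'feathered range' adjustment on linear, scene-referred values.

    Values whose luminance sits above `knee` (superbrights in a normalized
    0-1 scheme) are scaled down by `gain`; the effect ramps in smoothly over
    `feather` below the knee, so only the selected range is touched.
    """
    luma = linear_img @ np.array([0.2126, 0.7152, 0.0722])
    # Mask is 0 below (knee - feather), 1 above knee, smooth in between.
    mask = np.clip((luma - (knee - feather)) / feather, 0.0, 1.0)
    scale = 1.0 - mask * (1.0 - gain)
    return linear_img * scale[..., None]
```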

     

     

    The second one is my favorite, because it used a song from Ferris Bueller's Day Off. The subtitles are great.

     

    Quote from the video:

    Quote:


     

    The only color correction I did was to even out the color temperature discrepancies between the two cameras, and those adjustments were done on the Alexa footage since I value my sanity.



     

    The Alexa footage was a bit more subtle and had less noise. I've seen the same differences across still and video assets.
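
    For what it's worth, the crudest version of the kind of channel matching mentioned in the quote would look something like this (illustrative only, not what was actually done on that footage): scale each channel of one camera's image so its averages line up with the other's.

```python
import numpy as np

def match_color_balance(source, reference):
    """Scale each channel of `source` so its mean matches `reference`.

    A crude stand-in for evening out color temperature differences
    between footage from two different cameras.
    """
    src_means = source.reshape(-1, 3).mean(axis=0)
    ref_means = reference.reshape(-1, 3).mean(axis=0)
    gains = ref_means / np.maximum(src_means, 1e-6)
    return np.clip(source * gains, 0.0, 1.0)
```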

  • Reply 64 of 65
    philboogie Posts: 7,675, member
    wizard69 wrote: »
    I'm still of the mind that if I need a real camera I will reach for one; if Apple produces something in a cell phone that outdoes a dedicated camera, they could change my mind.

    Dedicated as in DSLR? In which case I would love a full manual option in iOS.
  • Reply 65 of 65
    clemynx Posts: 1,552, member
    Quote:

    Originally Posted by mpantone

     

    There is no such thing according to the iOS end user documentation.

     

    There is a sleep mode in iOS, which turns off the display. The device still receives messages/push notifications, takes calls, plays music, etc.

     

    The only other setting is "Off."




    I know, that's why I didn't understand his comment. I thought it was some kind of hidden routine in iOS. I still don't understand what he meant.
