Apple exploring split cameras for thinner iPhones, location-based security

Comments

  • Reply 21 of 65
    solipsismx Posts: 19,566 member
    tzeshan wrote: »

    I did not read the patent. The question is how using two cameras can make the HW thinner, as the patent claims.

    I believe it's because the components don't need to be stacked, but can be placed laterally, allowing the device to be thinner even though the components may take up more volume in their new arrangement.
  • Reply 22 of 65
    MacPro Posts: 19,727 member
    I'm seeing an iPhone you can shave with in the near future ... :smokey:
  • Reply 23 of 65
    tzeshan Posts: 2,351 member
    Quote:

    Originally Posted by SolipsismX View Post

    I believe it's because the components don't need to be stacked, but can be placed laterally, allowing the device to be thinner even though the components may take up more volume in their new arrangement.

    Placing laterally will make HW thinner?

  • Reply 24 of 65
    Marvin wrote: »
    solipsismx wrote: »
    tzeshan wrote: »
    You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

    How will that allow camera HW to be thinner?

    The other benefit would be faster HDR because they don't need multiple exposures. Once you have the chroma and luma separate, the luma can be adjusted in post-production. This saves space too, as the chroma can be stored in 8bpc and the luma in raw sensor data. It could also allow for HDR video as it only needs one frame. Luma left, chroma middle, combined right:

    [IMAGE]

    The chroma sensor just needs to be sensitive enough to determine the correct color data. I wonder if it can do that kind of sampling by not relying solely on incoming light but by projecting a signal out, like infrared light in a flash, and then measuring the effect of that signal on the scene. Naturally the sky is bright enough anyway and too far for any outgoing light to affect, but dark shadows could receive an outgoing signal that reveals what the colors are. The luma sensor would measure the incoming light, as it has to try and recreate the light variation being seen.

    The capability of "projecting a signal out like infrared light in a flash and then measuring the effect of that signal" that you suggest -- I wonder if that could also be used to measure distance which could be used for autofocus and other non-photography capabilities: measuring; distance calculation; gesture recognition, etc.

    After reading somewhere about a smart phone app that has features to assist the blind and people with poor eyesight -- I did some experimenting and found that the rear-facing camera on the iPhone 5S (and even the iPad 4) is superior to using any magnifying glass that I have ... and it has a light!
  • Reply 25 of 65
    Marvin Posts: 15,322 moderator
    The capability of "projecting a signal out like infrared light in a flash and then measuring the effect of that signal" that you suggest -- I wonder if that could also be used to measure distance which could be used for autofocus and other non-photography capabilities: measuring; distance calculation; gesture recognition, etc.

    That's what devices like the Kinect use:


    [VIDEO]


    This would be a different sampling method as it projects pinpoints of evenly spaced infrared light. For chroma, it would be a flash that fills the scene and then the sensor would just figure out the colors either on its own or by also using a natural light sample in the calculation.

    Having depth capability would be extremely useful though for object detection and focusing.
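    To put a rough number on that, here is a minimal sketch of the classic stereo relation (depth = focal length x baseline / disparity). The focal length, baseline, and pixel pitch are illustrative guesses for a phone-sized module, not figures from the patent.

[code]
# Illustrative only: depth from the disparity between two laterally offset sensors.
# Z = f * B / d, where f is the focal length, B is the baseline between the two
# cameras, and d is the disparity (pixel shift converted to metres).
# The default values are assumptions for a phone-sized module, not Apple's.

def depth_from_disparity(disparity_px, focal_length_mm=4.12, baseline_mm=10.0,
                         pixel_pitch_um=1.5):
    """Return subject distance in metres for a given disparity in pixels."""
    f = focal_length_mm / 1000.0               # metres
    b = baseline_mm / 1000.0                   # metres
    d = disparity_px * pixel_pitch_um * 1e-6   # shift on the sensor, in metres
    if d == 0:
        return float("inf")                    # no measurable shift: effectively infinity
    return f * b / d

if __name__ == "__main__":
    for px in (1, 5, 20, 100):
        print(f"{px:3d} px disparity -> {depth_from_disparity(px):.2f} m")
[/code]

    Even a single pixel of disparity with a 10 mm baseline already corresponds to a subject tens of metres away, so the usable depth range of a system like this would be limited to fairly near subjects.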
  • Reply 26 of 65
    mpantone Posts: 2,040 member
    Quote:
    Originally Posted by Dick Applebaum View Post



    The capability of "projecting a signal out like infrared light in a flash and then measuring the effect of that signal" that you suggest -- I wonder if that could also be used to measure distance which could be used for autofocus and other non-photography capabilities: measuring; distance calculation; gesture recognition, etc.

    I'm not convinced that active autofocus (infrared or ultrasound beam projection) is practical for cellphone camera modules. You need a fairly powerful beam to extend beyond a few meters, but at that distance, the cellphone camera module lens is probably set for infinity focus anyhow due to the wide-angle focal length of these cellphone lenses. (The 4.12mm lens in the iPhone 5S is an equivalent 29.7mm lens for 35mm photography.)
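    To see why infinity focus takes over so quickly, here is a rough sketch using the commonly cited iPhone 5S figures (4.12 mm, f/2.2) and a conventional circle-of-confusion rule of thumb; the sensor diagonal is an approximation, not a confirmed spec.

[code]
# Rough sketch: 35mm-equivalent focal length and hyperfocal distance for a
# wide-angle phone lens. Assumes the commonly cited iPhone 5S figures
# (4.12 mm, f/2.2) and an approximate 6 mm sensor diagonal; the circle of
# confusion (diagonal / 1500) is a conventional rule of thumb, not Apple data.

FULL_FRAME_DIAGONAL_MM = 43.27   # 36 x 24 mm frame
SENSOR_DIAGONAL_MM = 6.0         # roughly 1/3-inch class sensor (assumption)
FOCAL_LENGTH_MM = 4.12
APERTURE = 2.2

crop_factor = FULL_FRAME_DIAGONAL_MM / SENSOR_DIAGONAL_MM
equivalent_focal_mm = FOCAL_LENGTH_MM * crop_factor

coc_mm = SENSOR_DIAGONAL_MM / 1500.0
hyperfocal_m = (FOCAL_LENGTH_MM ** 2 / (APERTURE * coc_mm) + FOCAL_LENGTH_MM) / 1000.0

print(f"35mm-equivalent focal length: {equivalent_focal_mm:.1f} mm")
print(f"Hyperfocal distance: {hyperfocal_m:.1f} m")
# Focused at the hyperfocal distance (about 2 m here), everything from roughly
# half that distance to infinity is acceptably sharp, so an AF beam that only
# reaches a few metres adds little for distant subjects.
[/code]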

     

    Some SLRs (Canon EOS, for example) have autofocus assist beams built into the body as well as into external flash units like the Speedlite. The first cameras to employ infrared active AF systems date from the late Seventies, so it's pretty old technology. Active ultrasound AF systems were also around at this time, most notably the Polaroid SX-70 and Sonar OneStep models.

     

    More recent smartphone camera modules have decent low light performance and thus focus adeptly using passive autofocus methods like phase detection.

  • Reply 27 of 65
    evilution wrote: »
    Apple, you must hire this guy! He can make your iPhones thinner just by taking 2 photos really quickly 1 after the other.

    :no:  

    As long as the subject is not moving.
  • Reply 28 of 65
    Three *cameras* would be even better -- one for each color. Having a CCD/CMOS sensor for each color would enhance quality.
  • Reply 29 of 65
    mpantone Posts: 2,040 member
    The biggest challenge would be the difficulty in registering/aligning images from each imager, and the inevitable specter of manufacturing variances between lenses.
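    For what it's worth, registering two nearly aligned imagers is a well-studied problem; below is a minimal sketch of one standard technique (phase correlation). It is only an illustration of the general approach, not anything from the patent, and it assumes NumPy and two grayscale frames of the same scene.

[code]
# Minimal sketch of one standard registration technique (phase correlation),
# not the method in the patent. Estimates the integer-pixel shift between two
# grayscale frames of the same scene using NumPy only.
import numpy as np

def estimate_shift(img_a, img_b):
    """Return the (dy, dx) shift that maps img_b onto img_a."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12      # keep phase, discard magnitude
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the halfway point correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((256, 256))
    misaligned = np.roll(scene, shift=(3, -7), axis=(0, 1))  # simulate sensor offset
    print(estimate_shift(misaligned, scene))  # expect (3, -7)
[/code]

    Lens-to-lens manufacturing variance would add scale and rotation differences on top of this simple translation, which is where the real calibration burden comes in.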


     


    Also, there's the issue of cost and battery power needed for all the camera modules/sensors. I'm not convinced that the optical quality and lens size of cellphone camera modules would really benefit much from multiple sensors and optical groupings in terms of resolving power and final image quality.


     


    The system would also introduce parallax error, although the degree to which this might impact image quality is unknown.


     


    Of all of these, I would think increased cost/complexity with marginal image quality improvement might end up being the deal breaker.
  • Reply 30 of 65
    tzeshan Posts: 2,351 member
    Quote:

    Originally Posted by Suddenly Newton View Post

    As long as the subject is not moving.



    Two cameras cannot stop the object from moving either.

  • Reply 31 of 65
    mpantone Posts: 2,040 member

    No, but both cameras will capture the scene at the same moment.

     

    Let's say you are shooting an auto race where the cars are traveling 300 km/h. That's 83.33 m/s. Let's say you can knock off two shots in a tenth of a second: the car has still traveled 8.33 meters between shots. For both NASCAR and F-1, the cars are 5m long, so the distance covered in 0.1 second is greater than a full car length.
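    For anyone who wants to reproduce the figures:

[code]
# Quick check of the arithmetic above: how far a 300 km/h car travels between
# two exposures taken 0.1 s apart. The 5 m car length is the round figure
# used in the post.
speed_ms = 300.0 / 3.6          # 83.33 m/s
frame_gap_s = 0.1
car_length_m = 5.0

displacement_m = speed_ms * frame_gap_s
print(f"{displacement_m:.2f} m between frames "
      f"({displacement_m / car_length_m:.1f} car lengths)")
# -> 8.33 m, about 1.7 car lengths
[/code]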

     

    That's a problem when shooting consecutive frames.

     

    Even when shooting a nature scene like ocean waves, consecutive shots can end up causing issues for HDR, etc.

  • Reply 32 of 65
    evilution Posts: 1,399 member
    Quote:

    Originally Posted by tzeshan View Post

     



    Two cameras cannot stop the object from moving either.


    2 cameras can take a photo at the same time and merge them.

  • Reply 33 of 65
    tzeshan Posts: 2,351 member
    Quote:

    Originally Posted by Evilution View Post

     

    2 cameras can take a photo at the same time and merge them.




    You can merge two pictures taken with one camera too.

  • Reply 34 of 65
    tzeshan Posts: 2,351 member
    Quote:

    Originally Posted by mpantone View Post

     

    No, but both cameras will capture the scene at the same moment.

     

    Let's say you are shooting an auto race where the cars are traveling 300 km/h. That's 83.33 m/s. Let's say you can knock off two shots in a tenth of a second: the car has still traveled 8.33 meters between shots. For both NASCAR and F-1, the cars are 5m long, so the distance covered in 0.1 second is greater than a full car length.

     

    That's a problem when shooting consecutive frames.

     

    Even when shooting a nature scene like ocean waves, consecutive shots can end up causing issues for HDR, etc.




    This is the same problem with using two cameras too.

  • Reply 35 of 65
    mpantone Posts: 2,040 member

    No, it is not.

     

    The problem with using two cameras/lens groupings at the same time is parallax. The same event is being captured at the same moment, but the perspectives are slightly different.

     

    When you use the same camera at two different times, the perspectives are the same, but the events are different.

     

    Whether or not parallax is a major issue depends on the spacing between the lenses and the distance to the subject.

     

    If two people set up identical SLRs on tripods next to each other and take pictures of a mountain fifty miles away, parallax is not an issue. If they point their SLRs at a bumblebee pollinating a flower fifteen centimeters away, then perspective makes the two images vastly different.
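    To put rough numbers on that intuition, here is a sketch of the pixel disparity a second lens would see for the same scene. The focal length, pixel pitch, and 10 mm baseline are the same illustrative phone-module assumptions used earlier, not real specifications.

[code]
# Sketch of why parallax matters at 15 cm but not at 50 miles: the pixel
# disparity between two lenses falls off with subject distance. All module
# figures (focal length, pixel pitch, 10 mm baseline) are assumptions.

FOCAL_LENGTH_M = 4.12e-3
PIXEL_PITCH_M = 1.5e-6
BASELINE_M = 0.010

def disparity_px(subject_distance_m):
    """Pixel shift of the subject between the two viewpoints."""
    return FOCAL_LENGTH_M * BASELINE_M / (subject_distance_m * PIXEL_PITCH_M)

for label, distance_m in [("bumblebee at 15 cm", 0.15),
                          ("portrait at 2 m", 2.0),
                          ("mountain at 50 miles", 50 * 1609.34)]:
    print(f"{label:22s}: {disparity_px(distance_m):10.2f} px")
# -> roughly 183 px, 14 px, and well under 0.01 px respectively
[/code]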

     

    Let's say Joe has a camera focused on a tree and set to take two pictures at 15:14:07.01 and 15:14:07.03. Let's say a lightning bolt strikes the tree at 15:14:07.03. Joe has taken two different pictures.

     

    Parallax is an issue that is amplified at shorter working distances which is why the single-lens reflex system grew in popularity over the old twin-lens reflex systems and viewfinder cameras.

     

    Note that parallax isn't inherently a "bad" thing, but it often is undesirable in photography.

     

    Parallax is a key component of human vision (and many other animals), notably in the determination of depth perception. 

  • Reply 36 of 65
    tzeshan Posts: 2,351 member
    Quote:

    Originally Posted by mpantone View Post

     

    No, it is not.

     

    The problem with using two cameras/lens groupings at the same time is parallax. The same event is being captured at the same moment, but the perspectives are slightly different.

     

    When you use the same camera at two different times, the perspectives are the same, but the events are different.

     

    Whether or not parallax is a major issue depends on the spacing between the lenses and the distance to the subject.

     

    If two people set up identical SLRs on tripods next to each other and take pictures of a mountain fifty miles away, parallax is not an issue. If they point their SLRs at a bumblebee pollinating a flower fifteen centimeters away, then perspective makes the two images vastly different.

     

    Let's say Joe has a camera focused on a tree and set to take two pictures at 15:14:07.01 and 15:14:07.03. Let's say a lightning bolt strikes the tree at 15:14:07.03. Joe has taken two different pictures.

     

    Parallax is an issue that is amplified at shorter working distances which is why the single-lens reflex system grew in popularity over the old twin-lens reflex systems and viewfinder cameras.




    I am talking about the example you gave about the race car. Since the car moved one car length, the picture taken will still not be clear. Your use of two cameras does not solve this problem.

  • Reply 37 of 65
    tzeshan wrote: »

    Two cameras cannot stop the object from moving either.

    Two cameras can take a photo at the same time. Taking two exposures some milliseconds apart can result in blurring or multiple images of fast moving subjects.
  • Reply 38 of 65
    mpantone Posts: 2,040 member

    Ah, at this point we introduce the notion of exposure time and field of view. This is best covered by a photography textbook, but we'll try to explain it briefly here.

     

    Yes, the car is moving at 83.33 m/s. Let's say it's a nice sunny day at the track and my camera's shutter is set at 1/4000th second. During that brief moment, the car will have traveled about 2 centimeters. Is that too long to get a sharp shot? Well, how far am I from the subject? Is the car ten feet away or on the other side of the track? At this point, the sharpness is determined by the movement of the image relative to the field of view, not the actual speed.

     

    Let's introduce another concept, the notion of tracking. Let's say the cars are far away, but I'm using a tripod head that swivels smoothly, whether it be a ballhead or pan-and-tilt head really doesn't matter for this example. If I attempt to keep the car in the center of the frame, the motion relative to the field of view is far less.
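    Here is a sketch of that "movement relative to the field of view" point with some made-up but plausible numbers (a 300 mm lens on a 35 mm-format body and an untracked subject); none of these figures come from the post.

[code]
# Sketch of motion blur relative to the framed scene for an untracked subject.
# Assumes a 300 mm lens on a 35 mm-format body and a 6000-pixel-wide image;
# all figures are illustrative, not from the post.

def blur_pixels(speed_ms, exposure_s, distance_m,
                focal_length_mm=300.0, sensor_width_mm=36.0, image_width_px=6000):
    """Approximate blur of an untracked subject, in image pixels."""
    scene_width_m = distance_m * sensor_width_mm / focal_length_mm  # width framed at that distance
    blur_m = speed_ms * exposure_s                                  # distance covered during the exposure
    return blur_m / scene_width_m * image_width_px

car_speed_ms = 300.0 / 3.6      # 83.33 m/s
for distance_m in (20, 100, 300):
    px = blur_pixels(car_speed_ms, 1 / 4000, distance_m)
    print(f"car at {distance_m:3d} m: ~{px:.0f} px of blur at 1/4000 s")
# Panning with the car reduces the motion relative to the frame, so the
# effective blur can be far smaller than these untracked figures.
[/code]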

     

    At this point, I suggest you read a basic primer on photography then go take your camera out and shoot some action scenes. It could be cars, baseball pitchers, flying seagulls, bumblebees, etc., it doesn't really matter.

  • Reply 39 of 65
    melgross Posts: 33,510 member
    tzeshan wrote: »
    You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

    No, this is different. I haven't read the patents yet, though I will as soon as I have time.

    But what Apple is doing is breaking the image up into two components, luminance and chroma, which is essentially the Lab format. This will result in lower noise and possibly higher sharpness as well.

    For many years, when correcting images, we would convert them to the Lab format, which is exactly what this looks like. Lab allows great amounts of color and sharpness correction with an absolute minimum of quality loss. The image then gets converted back to PSD, TIFF, or something else.

    I do admit though, I don't understand how this would enable thinner sensors. The patent should be an interesting read.
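    For anyone who hasn't worked in Lab, here is a small sketch of the luminance/chroma split being described: adjust L, leave a/b alone. It assumes scikit-image is installed and uses a placeholder filename; it illustrates the editing workflow, not Apple's sensor design.

[code]
# Small sketch of the Lab-style workflow described above: adjust luminance
# while leaving the colour (a/b) channels untouched. Requires scikit-image;
# "photo.jpg" is a placeholder filename and the gamma value is arbitrary.
import numpy as np
from skimage import color, io

rgb = io.imread("photo.jpg") / 255.0           # RGB image scaled to [0, 1]
lab = color.rgb2lab(rgb)                       # L in [0, 100]; a/b carry the colour

L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
L_adjusted = 100.0 * (L / 100.0) ** 0.7        # lift the shadows; chroma is untouched

lab_adjusted = np.dstack([L_adjusted, a, b])
rgb_out = np.clip(color.lab2rgb(lab_adjusted), 0.0, 1.0)
io.imsave("photo_adjusted.jpg", (rgb_out * 255).astype(np.uint8))
[/code]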
  • Reply 40 of 65
    Marvin Posts: 15,322 moderator
    mpantone wrote: »
    The problem with using two cameras/lens groupings at the same time is parallax.

    Two sensors don't require two lenses, though; that would add more complexity and cost than needed, as they'd have to ensure each pair of lenses was within a certain tolerance of the other and always focused the same way. It would be best to just reflect the light onto each sensor from a single lens setup.