
Apple exploring split cameras for thinner iPhones, location-based security

post #1 of 66
Thread Starter 
Future iPhones could be thinner and more secure thanks to technology outlined in two newly published Apple patent applications covering adaptive security profiles and a new type of split-sensor camera.

[Patent illustration]


The first application, published Thursday by the U.S. Patent and Trademark Office and entitled "Electronic device with two image sensors," details a camera built around two distinct sensors: one would capture luma, or brightness, data, while the other would capture chroma, or color, information. The final image would be created by combining the data captured by each sensor.
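The application doesn't spell out the fusion math, but conceptually the step resembles reassembling a YCbCr image. Here is a minimal sketch in Python/NumPy, assuming (my assumption, not the filing's) that the chroma sensor delivers 8-bit BT.601-style Cb/Cr planes:

```python
# Hypothetical sketch of the fusion step: merge a luma (Y) capture with
# Cb/Cr chroma captures into one RGB frame. The BT.601 8-bit coefficients
# are an assumption; the application does not disclose the actual math.
import numpy as np

def fuse_luma_chroma(y, cb, cr):
    """Combine separate luma and chroma sensor data into an RGB image."""
    y = y.astype(np.float32)
    cb = cb.astype(np.float32) - 128.0   # remove the chroma offset
    cr = cr.astype(np.float32) - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)
```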

By splitting the camera in two, each module could be made thinner than a comparable camera that combines both functions in a single dual-purpose sensor module. This would allow the device housing the camera to be made correspondingly thinner.

Additionally, Apple says the split design would allow for an improved signal-to-noise ratio, resulting in higher image quality. It could also help alleviate color reproduction issues caused by optical filters, and the overall assembly might be less costly.

Apple credits Michael F. Culbert and Chris Ligtenberg for the invention of U.S. Patent Application No. 13/331,543.

[Patent illustration]


The second application, entitled "Electronic devices having adaptive security profiles and methods for selecting the same," depicts a scenario in which a mobile device could automatically enable or disable certain security protocols depending on the device's location. For example, an iPhone could require a simple four-digit passcode while in a user's home but insist on a fingerprint for authentication once it leaves that area.

For more fine-grained security, users would be able to define several different profiles that apply to individual apps and types of data. SMS data could be subject to different access requirements than email data, for instance.

Users could manually configure geofenced areas, and Apple also envisions several scenarios for dynamically determining where the device is expected to be at any given time. It could analyze location information in the user's calendar or social networks, for instance, to see if friends had checked the user in at a particular place.
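To make the profile-selection idea concrete, here is a minimal sketch; the coordinates, radius, profile names, and per-data-type rules are invented for illustration and are not from the filing:

```python
# Illustrative sketch of adaptive security profiles keyed to a geofence.
# All names, thresholds, and coordinates here are hypothetical.
from math import radians, sin, cos, asin, sqrt

HOME = (37.3318, -122.0312)   # example geofence center (lat, lon)
HOME_RADIUS_M = 100.0         # example geofence radius in meters

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(h))

def auth_required(location, data_type="sms"):
    """Pick an authentication requirement from location and data type."""
    if haversine_m(location, HOME) <= HOME_RADIUS_M:
        return "4-digit passcode"        # relaxed profile inside the home geofence
    if data_type in ("sms", "email"):    # stricter, per-data-type rules elsewhere
        return "fingerprint"
    return "passcode + fingerprint"
```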

Apple credits Michael I. Ingrassia, Jr. for the invention of U.S. Patent Application No. 13/100,851.
post #2 of 66
Adaptive security profiles would be a treat.
post #3 of 66
Geo-fence notifications have been a complete bust for me. If the phone is in power saver mode, it fails to alert me every time.

post #4 of 66
Adaptive security is right in line with one feature I want Apple's iWatch to offer my Mac, iPhone and iPad.
post #5 of 66

You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

post #6 of 66
Quote:
Originally Posted by tzeshan View Post

You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

How will that allow camera HW to be thinner?
post #7 of 66
While I applaud the creativity behind these ideas, for them to be useful to me Apple should fix the battery life issue before adding new features that require turning on something as power-hungry as Location Services. Whatever happened to the 3G's one week to ten days of battery power? Now I struggle to get to two days... and only then when all the clever new features are switched off!
post #8 of 66
Quote:
Originally Posted by SpamSandwich View Post

Geo-fence notifications have been a complete bust for me. If the phone is in power saver mode, it fails to alert me every time.

Don't notifications still pop up even in what you call power saver mode? The only way this wouldn't work is if the phone were turned off, but once it turns back on, the new profile should immediately be enforced.

post #9 of 66
Quote:
Originally Posted by tzeshan View Post
 

You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

Each sensor is constructed differently, allowing the modules to be thinner. You would need both photos, which could be taken at the same time (with some parallax error), to construct the final photo.

post #10 of 66
Quote:
Originally Posted by tzeshan View Post
 

You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

Apple, you must hire this guy! He can make your iPhones thinner just by taking 2 photos really quickly 1 after the other.

 


post #11 of 66
This is great. We're well on our way to a phone as thin as a sheet of paper. And once we get there, the only problems left will be (1) how to pick it up, and (2) how to avoid paper cuts.
post #12 of 66
Quote:
Originally Posted by SpamSandwich View Post

Geo-fence notifications have been a complete bust for me. If the phone is in power saver mode, it fails to alert me every time.


What is power saving mode on iOS?

post #13 of 66
Maybe those cameras are not for an iPhone but for a new MacBook Air?
post #14 of 66
Quote:
Originally Posted by SwissMac2 View Post

While I applaud the creativity behind these ideas, for them to be useful to me Apple should fix the battery life issue before adding new features that require turning on something as power-hungry as Location Services. Whatever happened to the 3G's one week to ten days of battery power? Now I struggle to get to two days... and only then when all the clever new features are switched off!

I never got a week of battery life on the 3G.

 

One thing to consider when comparing the battery life of your old phone to your new one is whether you're in the same home, office, and car, with a similar usage pattern. Depending on the strength of the WiFi and cell signals and the type of BT connections you have, battery usage can vary substantially; a weaker signal uses more battery. To be fair, you also need to consider whether you are talking more than before. For example, I had about a half-hour call this morning, and the phone was very warm to the touch and had used 8% of its charge by the end of the call, which I believe was because I had only about one bar of signal strength.

post #15 of 66

Well, the only thing that changed was the phone; all other things remained the same, so I am pretty sure the difference (which was marked) was due to the higher power demands of the iP5 compared to the 3G (not the 3GS, I hasten to add). I can only tell you my experience; yours will no doubt be different. However, if you went straight from a 3G to a 5 with all other things being equal, even if you never got a week of use from the 3G, I would expect the ratio between the battery life of the two phones to be broadly similar to what I experienced: the iP5 (as well as the iP4 and 4S, from what I have been told and read, perhaps due to the OS in use) runs out of battery power sooner than the 3G did.

post #16 of 66
Tired of hearing all the battery life complaints.

Some people want really thin. Really thin. They have it.

Some people want extended battery life. Get a Mophie.

You can add battery life to the existing model.

If you have a long battery life iPhone, you can't make it thinner.
post #17 of 66
Quote:
Originally Posted by ClemyNX View Post
 

What is power saving mode on iOS?

There is no such thing according to the iOS end user documentation.

 

There is a sleep mode in iOS, which turns off the display. The device still receives messages/push notifications, takes calls, plays music, etc.

 

The only other setting is "Off."

post #18 of 66
Quote:
Originally Posted by SolipsismX View Post


How will that allow camera HW to be thinner?


Use a thinner camera of course.  The trick of using two cameras is it effectively doubles the lens, that is the ISO.

post #19 of 66
Quote:
Originally Posted by tzeshan View Post


Use a thinner camera of course.  The trick of using two cameras is it effectively doubles the lens, that is the ISO.

How does taking two photos rapidly allow for the camera HW to be thinner thereby allowing the casing to be thinner per the patent?
post #20 of 66
Quote:
Originally Posted by SolipsismX View Post

Quote:
Originally Posted by tzeshan View Post

You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

How will that allow camera HW to be thinner?

The other benefit would be faster HDR because it doesn't need multiple exposures. Once you have the chroma and luma separate, the luma can be adjusted in post-production. It could also allow for HDR video, as it only needs one frame. Luma left, chroma middle, combined right:
[image: luma / chroma / combined comparison]
The chroma sensor just needs to be sensitive enough to determine the correct color data. I wonder if it could do that kind of sampling by not relying solely on incoming light, but by projecting a signal out, like infrared light in a flash, and then measuring the effect of that signal on the scene. Naturally the sky is bright enough anyway, and too far away for any outgoing light to affect it, but dark shadows could receive an outgoing signal that reveals what the colors are. The luma sensor would measure the incoming light, as it has to try to recreate the light variation being seen.
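To picture what adjusting the luma in post could look like, here's a toy example (mine, not Apple's) that lifts shadows on the luma plane alone while the separately captured chroma is reused unchanged:

```python
# Toy single-frame "HDR"-style adjustment: tone-curve the luma plane only;
# the chroma data from the second sensor is left untouched.
import numpy as np

def lift_shadows(y, gamma=0.6):
    """Gamma < 1 brightens shadows; expects an 8-bit luma plane."""
    yn = y.astype(np.float32) / 255.0
    return np.clip(np.power(yn, gamma) * 255.0, 0, 255).astype(np.uint8)
```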
post #21 of 66
Quote:
Originally Posted by SolipsismX View Post


How does taking two photos rapidly allow for the camera HW to be thinner thereby allowing the casing to be thinner per the patent?


I did not read the patent. The question is why using two cameras can make the HW thinner, as the patent claims.

post #22 of 66
Quote:
Originally Posted by tzeshan View Post


I did not read the patent.  The question is why using two cameras can make HW thinner despite the patent claim.

I believe it's because the components don't need to be stacked, but can be placed laterally thus allowing the device components to be thinner even though they may take up more volume with their new arrangement.
post #23 of 66
I'm seeing an iPhone you can shave with in the near future ...
post #24 of 66
Quote:
Originally Posted by SolipsismX View Post


I believe it's because the components don't need to be stacked, but can be placed laterally thus allowing the device components to be thinner even though they may take up more volume with their new arrangement.

Placing laterally will make HW thinner?

post #25 of 66
Quote:
Originally Posted by Marvin View Post

Quote:
Originally Posted by SolipsismX View Post

Quote:
Originally Posted by tzeshan View Post

You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

How will that allow camera HW to be thinner?

The other benefit would be faster HDR because it doesn't need multiple exposures. Once you have the chroma and luma separate, the luma can be adjusted in post-production. This saves space too, as the chroma can be stored in 8bpc and the luma in raw sensor data. It could also allow for HDR video, as it only needs one frame. Luma left, chroma middle, combined right:
[image: luma / chroma / combined comparison]
The chroma sensor just needs to be sensitive enough to determine the correct color data. I wonder if it could do that kind of sampling by not relying solely on incoming light, but by projecting a signal out, like infrared light in a flash, and then measuring the effect of that signal on the scene. Naturally the sky is bright enough anyway, and too far away for any outgoing light to affect it, but dark shadows could receive an outgoing signal that reveals what the colors are. The luma sensor would measure the incoming light, as it has to try to recreate the light variation being seen.

The capability of "projecting a signal out like infrared light in a flash and then measuring the effect of that signal" that you suggest -- I wonder if that could also be used to measure distance, which could feed autofocus and other non-photography capabilities: measuring, distance calculation, gesture recognition, etc.

After reading somewhere about a smartphone app with features to assist the blind and people with poor eyesight, I did some experimenting and found that the rear-facing camera on the iPhone 5S (and even the iPad 4) is superior to any magnifying glass that I have ... and it has a light!
post #26 of 66
Quote:
Originally Posted by Dick Applebaum View Post

The capability of "projecting a signal out like infrared light in a flash and then measuring the effect of that signal" that you suggest -- I wonder if that could also be used to measure distance, which could feed autofocus and other non-photography capabilities: measuring, distance calculation, gesture recognition, etc.

That's what devices like the Kinect use:
[image: Kinect infrared dot-pattern projection]
This would be a different sampling method as it projects pinpoints of evenly spaced infrared light. For chroma, it would be a flash that fills the scene and then the sensor would just figure out the colors either on its own or by also using a natural light sample in the calculation.

Having depth capability would be extremely useful though for object detection and focusing.
post #27 of 66
Quote:
Originally Posted by Dick Applebaum View Post

The capability of "projecting a signal out like infrared light in a flash and then measuring the effect of that signal" that you suggest -- I wonder if that could also be used to measure distance, which could feed autofocus and other non-photography capabilities: measuring, distance calculation, gesture recognition, etc.

I'm not convinced that active autofocus (infrared or ultrasound beam projection) is practical for cellphone camera modules. You need a fairly powerful beam to extend beyond a few meters, but at that distance the lens is probably set to infinity focus anyhow, given the wide-angle focal lengths of these cellphone lenses. (The 4.12mm lens in the iPhone 5S is equivalent to a 29.7mm lens in 35mm photography.)
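For anyone checking that equivalence: equivalent focal length is the actual focal length times the crop factor, i.e. the 35mm frame diagonal (43.27mm) divided by the sensor diagonal (roughly 6mm for a 1/3"-class sensor; an approximate figure):

```python
# Sanity check of the focal-length equivalence quoted above.
FULL_FRAME_DIAG_MM = 43.27   # diagonal of a 36 x 24 mm frame
SENSOR_DIAG_MM = 6.0         # approx. 1/3"-class iPhone 5S sensor (assumed)

crop = FULL_FRAME_DIAG_MM / SENSOR_DIAG_MM   # ~7.2x crop factor
print(f"{4.12 * crop:.1f} mm equivalent")    # ~29.7 mm
```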

 

Some SLRs (Canon EOS, for example) have autofocus assist beams built into the body as well as into external flash units like the Speedlite. The first cameras to employ infrared active AF systems date from the late Seventies; it's pretty old technology. Active ultrasound AF systems were also around at this time, most notably the Polaroid SX-70 and Sonar OneStep models.

 

More recent smartphone camera modules have decent low light performance and thus focus adeptly using passive autofocus methods like phase detection.


Edited by mpantone - 3/25/14 at 11:34am
post #28 of 66
Quote:
Originally Posted by Evilution View Post

Apple, you must hire this guy! He can make your iPhones thinner just by taking 2 photos really quickly 1 after the other.


As long as the subject is not moving.
post #29 of 66
Three *cameras* would be even better -- one for each color. Having CCDs/CMOSs for each color would enhance quality.
post #30 of 66
The biggest challenge would be the difficulty in registering/aligning images from each imager, and the inevitable specter of manufacturing variances between lenses.
 
Also, there's the issue of cost and battery power needed for all the camera modules/sensors. I'm not convinced that the optical quality and lens size of cellphone camera modules would really benefit much from multiple sensors and optical groupings in terms of resolving power and final image quality.
 
The system would also introduce parallax error, although the degree to which this might impact image quality is unknown.
 
Of all of these, I would think increased cost/complexity with marginal image quality improvement might end up being the deal breaker.

Edited by mpantone - 3/25/14 at 12:50pm
post #31 of 66
Quote:
Originally Posted by Suddenly Newton View Post


As long as the subject is not moving.


Two cameras cannot stop the object moving either.

post #32 of 66

No, but both cameras will capture the scene at the same moment.

 

Let's say you are shooting an auto race where the cars are traveling 300 km/h. That's 83.33 m/s. Let's say you can knock off two shots in a tenth of a second: the car has still traveled 8.33 meters between shots. For both NASCAR and F-1, the cars are 5m long, so the distance covered in 0.1 second is greater than a full car length.
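The arithmetic, for anyone who wants to verify it:

```python
# Distance a 300 km/h car covers between two shots taken 0.1 s apart.
speed = 300 / 3.6      # km/h -> m/s, about 83.33
gap = 0.1              # seconds between consecutive shots
print(speed * gap)     # ~8.33 m, more than one ~5 m car length
```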

 

That's a problem when shooting consecutive frames.

 

Even when shooting a nature scene like ocean waves, consecutive shots can end up causing issues for HDR, etc.


Edited by mpantone - 3/25/14 at 1:28pm
post #33 of 66
Quote:
Originally Posted by tzeshan View Post
 


Two cameras cannot stop the object moving either.

2 cameras can take a photo at the same time and merge.

post #34 of 66
Quote:
Originally Posted by Evilution View Post
 

2 cameras can take a photo at the same time and merge.


You can merge two pictures taken with one camera too.

post #35 of 66
Quote:
Originally Posted by mpantone View Post
 

No, but both cameras will capture the scene at the same moment.

 

Let's say you are shooting an auto race where the cars are traveling 300 km/h. That's 83.33 m/s. Let's say you can knock off two shots in a tenth of a second: the car has still traveled 8.33 meters between shots. For both NASCAR and F-1, the cars are 5m long, so the distance covered in 0.1 second is greater than a full car length.

 

That's a problem when shooting consecutive frames.

 

Even when shooting a nature scene like ocean waves, consecutive shots can end up causing issues for HDR, etc.


This is the same problem with using two cameras too.

post #36 of 66

No, it is not.

 

The problem with using two cameras/lens groupings at the same time is parallax. The same event is being captured at the same moment, but the perspectives are slightly different.

 

When you use the same camera at two different times, the perspectives are the same, but the events are different.

 

Whether or not parallax is a major issue depends on the spacing between the lenses and the distance to the subject.

 

If two people set up identical SLRs on tripods next to each other and take pictures of a mountain fifty miles away, parallax is not an issue. If they point their SLRs at a bumblebee pollinating a flower fifteen centimeters away, then perspective makes the two images vastly different.
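A quick sanity check of that intuition: the angular disagreement between the two viewpoints is roughly atan(baseline/distance). Assuming a 1cm baseline between two phone-sized modules (an illustrative figure):

```python
# Angular parallax between two lenses 1 cm apart at various subject distances.
import math

BASELINE_M = 0.01                            # assumed 1 cm lens spacing
for dist_m in (0.15, 1.0, 10.0, 80000.0):    # bee, portrait, street, mountain
    angle = math.degrees(math.atan2(BASELINE_M, dist_m))
    print(f"{dist_m:>8.2f} m -> {angle:.5f} deg")
```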

 

Let's say Joe has a camera focused on a tree and set to take two pictures at 15:14:07.01 and 15:14:07.03. Let's say a lightning bolt strikes the tree at 15:14:07.03. Joe has taken two different pictures.

 

Parallax is an issue that is amplified at shorter working distances, which is why the single-lens reflex system grew in popularity over the old twin-lens reflex systems and viewfinder cameras.

 

Note that parallax isn't inherently a "bad" thing, but it often is undesirable in photography.

 

Parallax is a key component of human vision (and many other animals), notably in the determination of depth perception. 


Edited by mpantone - 3/25/14 at 3:37pm
post #37 of 66
Quote:
Originally Posted by mpantone View Post
 

No, it is not.

 

The problem with using two cameras/lens groupings at the same time is parallax. The same event is being captured at the same moment, but the perspectives are slightly different.

 

When you use the same camera at two different times, the perspectives are the same, but the events are different.

 

Whether or not parallax is a major issue depends on the spacing between the lenses and the distance to the subject.

 

If two people set up identical SLRs on tripods next to each other and take pictures of a mountain fifty miles away, parallax is not an issue. If they point their SLRs at a bumblebee pollinating a flower fifteen centimeters away, then perspective makes the two images vastly different.

 

Let's say Joe has a camera focused on a tree and set to take two pictures at 15:14:07.01 and 15:14:07.03. Let's say a lightning bolt strikes the tree at 15:14:07.03. Joe has taken two different pictures.

 

Parallax is an issue that is amplified at shorter working distances, which is why the single-lens reflex system grew in popularity over the old twin-lens reflex systems and viewfinder cameras.


I am talking about the example you gave about the race car. Since the car moved one car length, the picture taken will still not be clear. Your use of two cameras does not solve this problem.

post #38 of 66
Quote:
Originally Posted by tzeshan View Post


Two cameras cannot stop the object moving either.

Two cameras can take a photo at the same time. Taking two exposures some milliseconds apart can result in blurring or multiple images of fast moving subjects.
post #39 of 66

Ah, at this point we introduce the notion of exposure time and field of view. This is best covered by a photography textbook, but we'll try to explain it briefly here.

 

Yes, the car is moving at 83.33 m/s. Let's say it's a nice sunny day at the track and my camera's shutter is set at 1/4000th of a second. During that brief moment, the car will have travelled about 2 centimeters. Is that too long to get a sharp shot? Well, how far am I from the subject? Is the car ten feet away or on the other side of the track? At this point, sharpness is determined by the movement of the image relative to the field of view, not the actual speed.
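The numbers, side by side:

```python
# Motion during one exposure vs. motion between consecutive shots.
speed = 300 / 3.6       # ~83.33 m/s
print(speed / 4000)     # ~0.021 m: about 2 cm within a single 1/4000 s exposure
print(speed * 0.1)      # ~8.33 m between two shots taken 0.1 s apart
```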

 

Let's introduce another concept, the notion of tracking. Let's say the cars are far away, but I'm using a tripod head that swivels smoothly, whether it be a ballhead or pan-and-tilt head really doesn't matter for this example. If I attempt to keep the car in the center of the frame, the motion relative to the field of view is far less.

 

At this point, I suggest you read a basic primer on photography then go take your camera out and shoot some action scenes. It could be cars, baseball pitchers, flying seagulls, bumblebees, etc., it doesn't really matter.


Edited by mpantone - 3/25/14 at 4:12pm
post #40 of 66
Quote:
Originally Posted by tzeshan View Post

You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

No, this is different. I haven't read the patents yet, though I will as soon as I have time.

But what Apple is doing is breaking the image up into two components, luminance and chroma, which is essentially the Lab format. This will result in less noise and possibly higher sharpness as well.

For many years, when correcting images, we would move them into Lab, which is exactly what this looks like. Lab allows great amounts of color and sharpness correction with an absolute minimum of quality loss. The image then gets moved back to PSD, TIFF, or something else.
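A sketch of that Lab-style workflow, sharpening only the lightness/luma plane so the color channels stay untouched (illustrative; SciPy's Gaussian blur stands in for the unsharp-mask blur):

```python
# Lab-style correction sketch: unsharp-mask the lightness/luma plane only,
# leaving color data alone to minimize quality loss.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_lightness(y, amount=1.0, radius=2.0):
    """Classic unsharp mask applied to an 8-bit luma/lightness plane."""
    yf = y.astype(np.float32)
    low = gaussian_filter(yf, sigma=radius)   # blurred copy
    return np.clip(yf + amount * (yf - low), 0, 255).astype(np.uint8)
```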

I do admit though, I don't understand how this would enable thinner sensors. The patent should be an interesting read.