
Apple exploring split cameras for thinner iPhones, location-based security - Page 2

post #41 of 66
Quote:
Originally Posted by mpantone View Post

The problem with using two cameras/lens groupings at the same time is parallax.

Two sensors don't require two lenses, though; that would add more complexity and cost than needed, as they'd have to ensure each pair of lenses was within a certain tolerance of the other and always focused the same way. It would be best to just reflect the light onto each sensor from a single lens setup.
post #42 of 66
Quote:
Originally Posted by tzeshan View Post

You don't need two cameras to accomplish this.  You can do this by taking two pictures rapidly one after the other.

You can't, really. While you can obtain a minor amount of noise reduction that way if the software has characterized the noise components of the sensor, that's complex and takes processing time with specialized hardware. This is done on some DSLRs and some other cameras for long exposures. But what they do is take a blank picture first, evaluate the noise at that moment, and then take another picture that corrects for that noise, which varies with temperature, the number of pictures that have been taken in rapid succession, and other factors.

Otherwise, what does taking two pictures at the same exposure get you? It can get you blur from rapidly moving objects. How does it get you a thinner camera?
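
For anyone curious what that blank-frame correction looks like in practice, here's a rough sketch in Python/NumPy. The array names and noise model are made up for illustration; real raw pipelines are more involved.

```python
import numpy as np

def dark_frame_subtract(exposure, dark_frame):
    """Subtract a 'blank' (shutter-closed) frame from a long exposure.

    Both inputs are raw sensor readings of the same shape. The dark frame
    characterizes fixed-pattern and thermal noise at that moment; subtracting
    it removes (only) that component from the real exposure.
    """
    corrected = exposure.astype(np.float64) - dark_frame.astype(np.float64)
    return np.clip(corrected, 0, None)  # the correction can't push values below zero

# Synthetic example with a 12-bit sensor (values 0..4095).
rng = np.random.default_rng(0)
scene = rng.integers(0, 4096, size=(8, 8)).astype(np.float64)
thermal = rng.normal(20, 5, size=(8, 8))          # slowly varying thermal/hot-pixel pattern
long_exposure = scene + thermal
dark = thermal + rng.normal(0, 2, size=(8, 8))    # a second read of (roughly) the same pattern
print(dark_frame_subtract(long_exposure, dark))
```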
post #43 of 66
Quote:
Originally Posted by tzeshan View Post


Use a thinner camera of course.  The trick of using two cameras is it effectively doubles the lens, that is the ISO.

What? You are awfully glib about this. Making a thinner camera is the problem, not the solution. And using two cameras doesn't effectively double the ISO.
post #44 of 66
Quote:
Originally Posted by Marvin View Post

The other benefit would be faster HDR because they don't need multiple exposures. Once you have the chroma and luma separate, the luma can be adjusted in post-production. This saves space too as the chroma can be stored in 8bpc and the luma in raw sensor data. It could also allow for HDR video as it only needs one frame. Luma, left, Chroma middle, combined right:



The chroma sensor just needs to be sensitive enough to determine the correct color data. I wonder if it can do that kind of sampling by not relying solely on incoming light but by projecting a signal out like infrared light in a flash and then measuring the effect of that signal on the scene. Naturally the sky is bright enough anyway and too far for any outgoing light to affect but dark shadows could receive an outgoing signal that shows up what the colors are. The luma sensor would measure the incoming light as it has to try and recreate the light variation being seen.

That's a pretty good explanation.
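
For concreteness, here's a minimal sketch of recombining a full-resolution luma capture with a lower-resolution chroma capture, using ordinary YCbCr-style math in NumPy. It only illustrates the general idea, not what Apple's patent actually specifies; the resolutions and coefficients are assumptions.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsample of an HxWxC array (keeps the sketch dependency-free)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def merge_luma_chroma(luma, chroma_rgb_small):
    """Combine a full-res luma plane with a half-res RGB chroma capture.

    luma:             HxW array, 0..1, from a hypothetical unfiltered (monochrome) sensor
    chroma_rgb_small: (H/2)x(W/2)x3 array, 0..1, from the color-filtered sensor
    Returns an HxWx3 RGB image (BT.601-style coefficients, for illustration only).
    """
    rgb = upsample2x(chroma_rgb_small)
    y_c = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 0.564 * (rgb[..., 2] - y_c)
    cr = 0.713 * (rgb[..., 0] - y_c)
    # Keep the chroma sensor's color, but take the fine detail from the luma sensor.
    r = luma + 1.403 * cr
    g = luma - 0.344 * cb - 0.714 * cr
    b = luma + 1.773 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# Synthetic example: 8x8 luma plane merged with a 4x4 chroma capture.
luma = np.linspace(0, 1, 64).reshape(8, 8)
chroma = np.random.default_rng(1).random((4, 4, 3))
print(merge_luma_chroma(luma, chroma).shape)  # (8, 8, 3)
```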
post #45 of 66
Quote:
Originally Posted by Marvin View Post

Two sensors don't require two lenses, though; that would add more complexity and cost than needed, as they'd have to ensure each pair of lenses was within a certain tolerance of the other and always focused the same way. It would be best to just reflect the light onto each sensor from a single lens setup.

I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.

They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).
post #46 of 66
Quote:
Originally Posted by tzeshan View Post


I am talking about the example you gave about the race car. Since the car moved one car length, the picture taken will still not be clear. Your use of two cameras does not solve this problem.

Taking two pictures at different times vs two pictures at the same time is similar to the difference between a rolling shutter vs a global shutter. There's more motion distortion in a rolling shutter because it doesn't expose every pixel in the frame at the same time.
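
A toy simulation makes the difference obvious: read a moving scene row by row (rolling) versus all at once (global). The scene, speed, and line timing below are invented purely for illustration.

```python
import numpy as np

def capture(scene_at, height=8, width=16, rolling=True, line_time=1.0):
    """Sample a moving scene row by row.

    scene_at(t) returns the full frame at time t. A global shutter reads every
    row at t=0; a rolling shutter reads row r at t = r * line_time, so a subject
    moving during readout gets sheared across the frame.
    """
    frame = np.zeros((height, width))
    for r in range(height):
        t = r * line_time if rolling else 0.0
        frame[r] = scene_at(t)[r]
    return frame

def moving_bar(t, height=8, width=16, speed=1):
    """A vertical bar that moves right at `speed` columns per time unit."""
    frame = np.zeros((height, width))
    frame[:, int(t * speed) % width] = 1
    return frame

print("rolling shutter:\n", capture(moving_bar, rolling=True))   # bar comes out sheared
print("global shutter:\n", capture(moving_bar, rolling=False))   # bar stays vertical
```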

"Apple should pull the plug on the iPhone."

John C. Dvorak, 2007
post #47 of 66
Quote:
Originally Posted by melgross View Post

I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

Not quite. If you use one lens to project onto two different imaging planes, you need a mirror.

 

The prism only comes into play if you need to project the image in its correct orientation. The direct image is backwards and upside down; that's how it's displayed on the ground glass focusing screen on view cameras, and thus the film plane. That's not an issue with either photochemical film or a digital imaging sensor. The image orientation is corrected in the post-capture processing.

 

The introduction of a reflex mirror will flip the image vertically, but not laterally. That's what you see on a focusing screen on a waist-level viewfinder on some old medium-format cameras, as well as the classic camera obscura with a reflex mirror.

 

The roof pentaprism is necessary to flip the image laterally for a WYSIWYG display. This is important for SLR cameras.

 

The prism is convenient for humans who prefer to view the scene in the same orientation as the naked eye. It's not necessary for image capturing purposes, it's just a helpful tool for an optical viewfinder.

 

A digital imaging sensor flips the image vertically and laterally before it is displayed on the screen. If there is an optical viewfinder on a point-and-shoot camera, it is not using the primary optics.
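
That orientation fix really is just a pixel-array flip applied in processing; in NumPy terms, something like:

```python
import numpy as np

raw = np.arange(12).reshape(3, 4)        # stand-in for what lands on the sensor: inverted and reversed
corrected = np.flipud(np.fliplr(raw))    # a 180-degree rotation restores the scene orientation
assert np.array_equal(corrected, np.rot90(raw, 2))
```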

post #48 of 66
Quote:
Originally Posted by mpantone View Post
 

No, it is not.

 

The problem with using two cameras/lens groupings at the same time is parallax. The same event is being captured at the same moment, but the perspectives are slightly different.

 

When you use the same camera at two different times, the perspectives are the same, the events are different.

 

Whether or not parallax is a major issue depends on the spacing between the lenses and the distance to the subject.

 

If two people set up identical SLRs on tripods next to each other and take pictures of a mountain fifty miles away, parallax is not an issue. If they point their SLRs at a bumblebee pollinating a flower fifteen centimeters away, then perspective makes the two images vastly different.

 

Let's say Joe has a camera focused on a tree and set to take two pictures at 15:14:07.01 and 15:14:07.03. Let's say a lightning bolt strikes the tree at 15:14:07.03. Joe has taken two different pictures.

 

Parallax is an issue that is amplified at shorter working distances which is why the single-lens reflex system grew in popularity over the old twin-lens reflex systems and viewfinder cameras.

 

Note that parallax isn't inherently a "bad" thing, but it often is undesirable in photography.

 

Parallax is a key component of human vision (and many other animals), notably in the determination of depth perception. 

Do we not have the technology to counteract parallax now? Seems like we should.

"If the young are not initiated into the village, they will burn it down just to feel its warmth."
- African proverb
post #49 of 66

Well, the photography industry decided to switch to a single lens to avoid parallax. They did this in the Fifties and Sixties. When digital imaging sensors appeared on the market at consumer prices, photography companies gained more opportunities (e.g., cameras in cellphones).

 

Parallax is a condition that arises when two or more lenses in different locations produce different perspectives. It is described by a series of mathematical formulas, as it is governed by the laws of physics.
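
The core relation is simple: for a lens baseline b, focal length f, and subject distance Z, the image shift (disparity) is roughly d = f·b/Z. A quick sketch with phone-scale numbers (all illustrative, not from any patent) shows why the distant mountain doesn't care but the bumblebee does:

```python
def disparity_pixels(baseline_m, focal_mm, distance_m, pixel_pitch_um):
    """Approximate image shift, in pixels, between two lenses spaced `baseline_m` apart."""
    focal_m = focal_mm / 1000.0
    shift_m = focal_m * baseline_m / distance_m      # d = f * b / Z, measured on the sensor
    return shift_m / (pixel_pitch_um * 1e-6)

# 10 mm baseline, 4 mm focal length, 1.2 um pixels (typical phone-camera scale, assumed).
for z in (0.15, 1.0, 10.0, 1000.0):
    print(f"subject at {z:>7.2f} m -> {disparity_pixels(0.01, 4.0, z, 1.2):8.1f} px shift")
```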

 

There are technologies to compensate for parallax in dual lens cameras, but they are not effective in all situations, and all of them end up adding more complexity and cost.

 

Again, reading a photography primer or consulting a reputable source on optical physics (and some photography websites) covers the topic way better than it could be addressed here at AI.

post #50 of 66
Quote:
Originally Posted by mpantone View Post

Not quite. If you use one lens to project onto two different imaging planes, you need a mirror.

The prism only comes into play if you need to project the image in its correct orientation. The direct image is backwards and upside down; that's how it's displayed on the ground glass focusing screen on view cameras, and thus the film plane. That's not an issue with either photochemical film or a digital imaging sensor. The image orientation is corrected in the post-capture processing.

The introduction of a reflex mirror will flip the image vertically, but not laterally. That's what you see on a focusing screen on a waist-level viewfinder on some old medium-format cameras, as well as the classic camera obscura with a reflex mirror.

The roof pentaprism is necessary to flip the image laterally for a WYSIWYG display. This is important for SLR cameras.

The prism is convenient for humans who prefer to view the scene in the same orientation as the naked eye. It's not necessary for image capturing purposes, it's just a helpful tool for an optical viewfinder.

A digital imaging sensor flips the image vertically and laterally before it is displayed on the screen. If there is an optical viewfinder on a point-and-shoot camera, it is not using the primary optics.

All of the cameras I've seen over the years that do this do it with a prism. Reflex mirrors have too much inefficiency. A prism allows much more light to be used. Here a half stop is critical. And I didn't say a pentaprism. I'm quite aware of the difference between an erecting prism and one that is not.
post #51 of 66
Quote:
Originally Posted by Benjamin Frost View Post

Do we not have the technology to counteract parallax now? Seems like we should.

He seems to be working very hard to show that he understands the subject. But he doesn't seem to understand what Apple is trying to do. There is no reason to believe in the first place, that two lenses would be used. Apple has said nothing about two cameras, which two lenses would imply.

All we know now is that they are seemingly using LAB for the image mode, and will use thinner sensors enabled by this sensor split. Otherwise, nothing.
post #52 of 66

Well, I understand a prism can be used to direct an image in multiple directions from a single lens.

 

I still don't see any practical applications of this in a consumer camera, let alone a smartphone, but then again, I don't follow the camera industry on a daily basis. I have one old digital camera and two film cameras, so I admit I'm not the most informed gadget person around.

post #53 of 66
Quote:
Originally Posted by melgross View Post

He seems to be working very hard to show that he understands the subject. But he doesn't seem to understand what Apple is trying to do. There is no reason to believe in the first place, that two lenses would be used. Apple has said nothing about two cameras, which two lenses would imply.

All we know now is that they are seemingly using LAB for the image mode, and will use thinner sensors enabled by this sensor split. Otherwise, nothing.

Yes; splitting the sensors is a different thing and I can see how that would be more straightforward, in that it avoids any problems with parallax.
"If the young are not initiated into the village, they will burn it down just to feel its warmth."
- African proverb
post #54 of 66
Quote:
Originally Posted by Wings View Post

This is great. We're well on our way for the phone to be as thin as a sheet of paper. And once we get there the only problems left will be (1) how to pick it up, and (2) how to avoid paper cuts.

Actually, that isn't all that funny: too thin and the phones become difficult to handle. I don't see cell phones getting a lot thinner unless they are embedded into something else (or someone).
post #55 of 66
Interesting trend in Apple's expansion of patents.

In 2014, to date [via Latestpatents.com]

Google Patents Granted for 2014, to date: 480

Apple Patents Granted for 2014, to date: 420


Note: Apple is going to have a larger patent year than Google

Patents filed, to date, for 2014:

Google: 278

Apple: 479

Apple's patent filings have only been expanding and accelerating.

They will easily pass 2,000 patents granted in 2014 making it their largest single year of granted patents.

Everyone knows they already get some of the most coveted patents in consumer products and manufacturing. It's just getting better.
post #56 of 66
Quote:
Originally Posted by mdriftmeyer View Post

Interesting trend in Apple's expansion of patents.

In 2014, to date [via Latestpatents.com]

Google Patents Granted for 2014, to date: 480

Apple Patents Granted for 2014, to date: 420


Note: Apple is going to have a larger patent year than Google

Patents filed, to date, for 2014:

Google: 278

Apple: 479

Apple's patent filings have only been expanding and accelerating.

They will easily pass 2,000 patents granted in 2014 making it their largest single year of granted patents.

Everyone knows they already get some of the most coveted patents in consumer products and manufacturing. It's just getting better.

Is there a way to easily see the number of filed patents for any given year? I'm asking because, if it's not too difficult, I'd like to compile them and then check that against major Apple releases to see if we can potentially predict when they might be releasing their next new product category.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

post #57 of 66
Quote:
Originally Posted by mpantone View Post
 

No, it is not.

 

The problem with using two cameras/lens groupings at the same time is parallax. The same event is being captured at the same moment, but the perspectives are slightly different.

 

When you use the same camera at two different times, the perspectives are the same, the events are different.

 

Whether or not parallax is a major issue depends on the spacing between the lenses and the distance to the subject.

 

If two people set up identical SLRs on tripods next to each other and take pictures of a mountain fifty miles away, parallax is not an issue. If they point their SLRs at a bumblebee pollinating a flower fifteen centimeters away, then perspective makes the two images vastly different.

 

Let's say Joe has a camera focused on a tree and set to take two pictures at 15:14:07.01 and 15:14:07.03. Let's say a lightning bolt strikes the tree at 15:14:07.03. Joe has taken two different pictures.

 

Parallax is an issue that is amplified at shorter working distances which is why the single-lens reflex system grew in popularity over the old twin-lens reflex systems and viewfinder cameras.

 

Note that parallax isn't inherently a "bad" thing, but it often is undesirable in photography.

 

Parallax is a key component of human vision (and many other animals), notably in the determination of depth perception. 

 

Yaay. Actual photography and optics knowledge.  All too rare in these discussions on AI.  What are you, old?  :-D

 

Quote:

Originally Posted by mpantone View Post
 

Ah, at this point we introduce the notion of exposure time and field of view. This is best covered by a photography textbook, but we'll try to explain it briefly here.

 

Yes, the car is moving at 83.33 m/s. Let's say it's a nice sunny day at the track and my camera's shutter is set at 1/4000th second. During that brief moment, the car will have travelled about 2 centimeters. Is that too long to get a sharp shot? Well, how far am I from the subject? Is the car ten feet away or on the other side of the track? At this point, the sharpness is determined by the movement of the image relative to the field of view, not the actual speed.

 

Let's introduce another concept, the notion of tracking. Let's say the cars are far away, but I'm using a tripod head that swivels smoothly, whether it be a ballhead or pan-and-tilt head really doesn't matter for this example. If I attempt to keep the car in the center of the frame, the motion relative to the field of view is far less.

 

At this point, I suggest you read a basic primer on photography then go take your camera out and shoot some action scenes. It could be cars, baseball pitchers, flying seagulls, bumblebees, etc., it doesn't really matter.

 

Shhhh.  Your tone's getting a little pedantic here even though you're still speaking truth. And truth into the wind. :-D

Remember:  Most people whose grounding in photography began in the phonecam era - including (amazingly?) many of the reviewers for even big sites - still don't even know the OPTICAL difference between optical and digital zoom, and here you're asking them to grasp how angular velocity interacts with arc distance traversed as a function of subject distance.

Ooooooo.  Math + optics is hard.
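
Since it keeps coming up: the blur on the sensor is just the subject's motion during the exposure scaled by the magnification f/Z. A quick sketch with made-up phone-camera numbers, following the 300 km/h example quoted above:

```python
def blur_pixels(speed_mps, shutter_s, distance_m, focal_mm, pixel_pitch_um):
    """Motion blur on the sensor, in pixels, for a subject moving across the frame."""
    subject_motion_m = speed_mps * shutter_s                              # how far the car moves during the exposure
    image_motion_m = subject_motion_m * (focal_mm / 1000.0) / distance_m  # scaled by magnification f/Z
    return image_motion_m / (pixel_pitch_um * 1e-6)

speed = 300 / 3.6   # 300 km/h = 83.33 m/s, as in the example above
for dist in (3.0, 30.0, 300.0):
    px = blur_pixels(speed, 1 / 4000, dist, focal_mm=4.0, pixel_pitch_um=1.2)
    print(f"car {dist:>5.0f} m away -> {px:6.1f} px of blur at 1/4000 s")
```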

 

Quote:
Originally Posted by melgross View Post

I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.

They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).

 

I've read articles arguing this is impractical in terms of optical losses and other arcane stuff I don't really remember.  But let's pooh-pooh the pooh-poohers and hope you are right, Msr. Gaberator.

Despite all the clever digital innovations (new ways of manipulating visual data that have never existed before), to me, knowing the power of focal length, the holy grail of phone-cam design will be achieving optical zoom.

An iPhone, a Leatherman and thou...  ...life is complete.

post #58 of 66
Quote:
Originally Posted by melgross View Post

I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.

They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).

Yeah, it's not easy to measure the same signal in two different ways. If they used a trichroic prism and a separate sensor per color component, like some professional video cameras, they could get rid of the Bayer filter; that reduces thickness as well as giving better color quality, no mosaic patterns, and better light sensitivity. But because the color and brightness processes use the same sensors, they're limited in how they can filter the incoming light to get the best output for both. This really would depend on being able to send the light sideways, and if you can do that, the sensor thickness wouldn't be a problem any more.

The patent details are here:

http://www.google.com/patents/US20120162465

Their motivation with the two sensors would be making the device thinner. This method helps reduce the camera length so they could get away with a smaller sensor.

The Bayer filter (or similar) would be excluded for the luma sensor so that all photodetectors measure brightness and no detail is lost passing through the filter. This gives them increased clarity on luma, and they can interpolate color values on the chroma sensor to match the increased resolution of the luma sensor. What I said earlier about saving space with the chroma in 8bpc isn't correct; the chroma sensor still needs to record as wide a range of color values as possible. It should be possible to avoid multiple exposures, though, because the color process and luma would be filtered separately. The chroma sensor needs a way to even out the light intensity so the detectors can accurately measure the color values, with no regard to how those detectors actually measure the intensity, as that's the job of the other sensor.

The Arri Alexa sensor does this with a dual gain setup:

http://www.arri.com/camera/digital_cameras/technology/arri_imaging_technology/alexas_sensor/

This gives them HDR output:

"Dual Gain Architecture simultaneously provides two separate read-out paths from each pixel with different amplification. The first path contains the regular, highly amplified signal. The second path contains a signal with lower amplification to capture the information that is clipped in the first path. Both paths feed into the cameras A/D converters, delivering a 14 bit image for each path. These images are then combined into a single 16 bit high dynamic range image. This method enhances low light performance and prevents the highlights from being clipped, thereby significantly extending the dynamic range of the image."

Kodak had a modification to the bayer filter adding transparent cells but there's still the issue of crosstalk to deal with:

http://hothardware.com/News/Kodak-s-New-Panchromatic-Pixel-Blaster-/

If a phone could direct the light sideways without using a mirror, it opens up a lot of possibilities for multiple lenses and optical zoom and sensor thickness is no longer a problem. I suppose ideally the lens would be on the top or side of the phone. It should fit that way as the lens diameter and sensor widths are lower than the phone width but it would probably be too weird to shoot with. Lytro is sort of like this and it would be more secure to hold but the Lytro still has the display facing you.

Perhaps they can construct a luma sensor to act like a mirror so that light hits the luma sensor directly, measures the intensity and then reflects it onto a chroma sensor. As soon as the luma sensor knows how bright each pixel is, it would adjust the per pixel gain of the chroma sensor to most accurately measure the color values across the whole image. Perhaps that would be too much signal loss for the chroma sensor but then maybe the luma sensor doesn't just reflect the light; it can amplify it.

Whatever the best way is, they're the ones getting paid for it, so let them figure it out.
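
As a rough illustration of the dual-gain idea quoted above: two readouts of the same exposure at different amplification, with the low-gain path filling in where the high-gain path clips. The gain values and bit depth here are assumptions for the sketch, not Arri's actual numbers.

```python
import numpy as np

def dual_gain_combine(scene_lux, high_gain=16.0, low_gain=1.0, full_scale=2**14 - 1):
    """Simulate two 14-bit readouts of the same exposure and merge them.

    Where the high-gain path clips, substitute the (rescaled) low-gain reading,
    giving a single linear image with extended highlight range.
    """
    high = np.clip(scene_lux * high_gain, 0, full_scale)
    low = np.clip(scene_lux * low_gain, 0, full_scale)
    clipped = high >= full_scale
    merged = np.where(clipped, low * (high_gain / low_gain), high)
    return merged / high_gain  # back to linear, scene-referred values

scene = np.array([1.0, 10.0, 100.0, 1000.0, 5000.0])  # arbitrary linear luminance values
print(dual_gain_combine(scene))                        # recovers the full range, including the 5000
```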
post #59 of 66
Quote:
Originally Posted by Marvin View Post

Quote:
Originally Posted by melgross View Post

I'm trying to figure out how they would do this. It's usually done with a prism, but there's no room for one.

For some time, I've been thinking that the camera could be built in two parts, so to speak. The lens would face forwards, of course, but a mirror could reflect the light 90 degrees. The sensor could sit at that 90 degree angle. It could also be more complex, with two mirrors, allowing the sensor to lie flat against the front or back of the camera, saving more thickness. Alignment would be harder, but it could be worth it, allowing a much bigger sensor.

They could also break the lens down at the nodal point, with a mirror, allowing the lens to also sit at a 90 degree angle for the rear of the lens, allowing a longer, more complex lens (dare I suggest this would make an optical zoom possible, yes, I do!).

Yeah, it's not easy to measure the same signal in two different ways. If they used a trichroic prism and a separate sensor per color component, like some professional video cameras, they could get rid of the Bayer filter; that reduces thickness as well as giving better color quality, no mosaic patterns, and better light sensitivity. But because the color and brightness processes use the same sensors, they're limited in how they can filter the incoming light to get the best output for both. This really would depend on being able to send the light sideways, and if you can do that, the sensor thickness wouldn't be a problem any more.

The patent details are here:

http://www.google.com/patents/US20120162465

Their motivation with the two sensors would be making the device thinner. This method helps reduce the camera length so they could get away with a smaller sensor.

The Bayer filter (or similar) would be excluded for the luma sensor so that all photodetectors measure brightness and no detail is lost passing through the filter. This gives them increased clarity on luma, and they can interpolate color values on the chroma sensor to match the increased resolution of the luma sensor. What I said earlier about saving space with the chroma in 8bpc isn't correct; the chroma sensor still needs to record as wide a range of color values as possible. It should be possible to avoid multiple exposures, though, because the color process and luma would be filtered separately. The chroma sensor needs a way to even out the light intensity so the detectors can accurately measure the color values, with no regard to how those detectors actually measure the intensity, as that's the job of the other sensor.

The Arri Alexa sensor does this with a dual gain setup:

http://www.arri.com/camera/digital_cameras/technology/arri_imaging_technology/alexas_sensor/

This gives them HDR output:

"Dual Gain Architecture simultaneously provides two separate read-out paths from each pixel with different amplification. The first path contains the regular, highly amplified signal. The second path contains a signal with lower amplification to capture the information that is clipped in the first path. Both paths feed into the cameras A/D converters, delivering a 14 bit image for each path. These images are then combined into a single 16 bit high dynamic range image. This method enhances low light performance and prevents the highlights from being clipped, thereby significantly extending the dynamic range of the image."

Kodak had a modification to the bayer filter adding transparent cells but there's still the issue of crosstalk to deal with:

http://hothardware.com/News/Kodak-s-New-Panchromatic-Pixel-Blaster-/

If a phone could direct the light sideways without using a mirror, it opens up a lot of possibilities for multiple lenses and optical zoom and sensor thickness is no longer a problem. I suppose ideally the lens would be on the top or side of the phone. It should fit that way as the lens diameter and sensor widths are lower than the phone width but it would probably be too weird to shoot with. Lytro is sort of like this and it would be more secure to hold but the Lytro still has the display facing you.

Perhaps they can construct a luma sensor to act like a mirror so that light hits the luma sensor directly, measures the intensity and then reflects it onto a chroma sensor. As soon as the luma sensor knows how bright each pixel is, it would adjust the per pixel gain of the chroma sensor to most accurately measure the color values across the whole image. Perhaps that would be too much signal loss for the chroma sensor but then maybe the luma sensor doesn't just reflect the light; it can amplify it.

Whatever the best way is, they're the ones getting paid for it, so let them figure it out.

As a result of the discussion on another thread:
Quote:
Originally Posted by Dick Applebaum View Post


...

From my reading, I've also learned that sapphire would also be useful in camera lenses, because it conducts all the UV-visible light spectrum and can be used to focus ...

This could mean improved iPhone camera optics (zoom, autofocus, etc.)

I'll see if I can find the references to optics of the sapphire.

TC's cornering the market on sapphire manufacturing could yield a double-whammy -- silicon on sapphire (SoS) ICs for the iWatch and fantastic iPhone camera improvements:

"Swift generally gets you to the right way much quicker." - auxio -

"The perfect [birth]day -- A little playtime, a good poop, and a long nap." - Tomato Greeting Cards -
post #60 of 66
Here's one reference to sapphire being used in camera lenses and focusing:
Quote:
Sapphire Lenses

Series MPCX and MPCV optical grade sapphire lenses are manufactured from optical grade grown sapphire. Monocrystalline sapphire is slightly birefringent. The lenses are available in positive and negative configurations. Typical applications include:

IR laser Beamsteering optics
Imaging optics
Chemical & erosion resistant front surface optics.
Focusing Optics
This series of sapphire lenses is currently in limited production. Please consult the factory for availability. Other diameters and focal lengths may be special ordered. We can also supply parts with anti-reflection and other thin film coatings.


http://www.melleroptics.com/shopping/shopdisplayproducts.asp?id=19
"Swift generally gets you to the right way much quicker." - auxio -

"The perfect [birth]day -- A little playtime, a good poop, and a long nap." - Tomato Greeting Cards -
post #61 of 66
Quote:
Originally Posted by Marvin View Post


The other benefit would be faster HDR because they don't need multiple exposures. Once you have the chroma and luma separate, the luma can be adjusted in post-production. This saves space too as the chroma can be stored in 8bpc and the luma in raw sensor data. It could also allow for HDR video as it only needs one frame. Luma, left, Chroma middle, combined right:



The chroma sensor just needs to be sensitive enough to determine the correct color data. I wonder if it can do that kind of sampling by not relying solely on incoming light but by projecting a signal out like infrared light in a flash and then measuring the effect of that signal on the scene. Naturally the sky is bright enough anyway and too far for any outgoing light to affect but dark shadows could receive an outgoing signal that shows up what the colors are. The luma sensor would measure the incoming light as it has to try and recreate the light variation being seen.

 

 

Are you aware of such a sensor technology? The current ones are in fact filtered in arrays, and there are gaps between pixels that are reserved for additional electronics. It's also important to note that quoted dynamic range is based upon a best-case scenario. At higher ISO ratings the processing is completely different due to a worse signal-to-noise ratio, which digitizes as noise. They clean parts of it up using various algorithms, most of which do some variation of binning pixels (say, four into one) to rasterize at a lower resolution, then resample back up to full size and apply a weighted contribution of that result onto the normally rasterized image. Some of them get significantly more complicated. If you're interested, I have a few downloaded research papers on the topic somewhere on one of my drives, owing to a weird level of obsessiveness with the subject (and having attempted to write a raw processor).
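
A crude sketch of that bin-down-and-blend-back idea; the 2x2 factor and blend weight are just placeholders, and real algorithms are much smarter about edges.

```python
import numpy as np

def bin_blend_denoise(img, weight=0.5):
    """Average 2x2 blocks (cheap noise reduction), upsample back, and blend.

    img: HxW float array with even H and W. `weight` is the contribution of the
    binned (low-resolution, low-noise) version to the final image.
    """
    h, w = img.shape
    binned = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))   # 2x2 averaging
    upsampled = binned.repeat(2, axis=0).repeat(2, axis=1)         # back to full size
    return (1 - weight) * img + weight * upsampled

noisy = np.random.default_rng(2).normal(0.5, 0.1, size=(8, 8))     # pure synthetic noise around mid-gray
denoised = bin_blend_denoise(noisy)
print(f"noise std before: {noisy.std():.3f}, after: {denoised.std():.3f}")
```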

 

Quote:
Originally Posted by Marvin View Post
 

Two sensors don't require two lenses, though; that would add more complexity and cost than needed, as they'd have to ensure each pair of lenses was within a certain tolerance of the other and always focused the same way. It would be best to just reflect the light onto each sensor from a single lens setup.

 

How would you align two sensors on one lens? The gaps are there for other electronics. Foveon sensors relied on layering. I'm also unsure what you would gain from the chroma sensor idea. It's interesting and all, but you might still need more than 8 bits as captured; otherwise you might end up with an overly dithered output due to color banding. It's an extremely rough approximation (due to non-linear transformations), but if you open an image editing program and adjust contrast or color via color-mode blending only, you'll see what I mean.

post #62 of 66
Quote:
Originally Posted by hmm View Post

How would you align two sensors on one lens?

See the last post. Some cameras already use multiple sensors with single lenses. If they had to go with two lenses, they could have them so close as to be like a figure of 8, so I don't suppose it would be all that big of a deal. It might affect really close-up shots.
Quote:
Originally Posted by hmm View Post

I'm also unsure what you would gain from the chroma sensor idea.

Some advantages are described in the patent, but luma data is where most of the detail in an image is, so instead of diluting it through the color filter, you just allow a monochrome sensor to capture as much detail and sharpness as possible.

There could be a better way to detect color than photodetectors with color filters in front. Tackling the two sets of data independently gives more freedom to deal with incoming light to get the best quality for both.

They could of course just forget their obsession with thin and put in a giant sensor with a telescopic lens attached but that would be too easy.

Some smartphones already do HDR video but they run at a high FPS and alternate the exposure between even and odd frames and then combine them. I think they combine the exposures after recording is done:

http://www.anandtech.com/show/6747/htc-one-review/8



"In the normal 1080p video it’s totally saturated, in the HDR video we can see clouds and sky, and the street appears different as well. There are however some recombination artifacts visible during fast panning left and right. There’s also some odd hysteresis at the beginning as you begin shooting video where the brightness of the whole scene changes and adapts for a second. This ostensibly prevents the whole scene from quickly changing exposures, but at the beginning it’s a little weird to see that fade-in effect which comes with all HDR video. HTC enables video HDR by running its video capture at twice the framerate with an interlaced high and low exposure that then get computationally recombined into a single frame which gets sent off to the encoder."

You can see the sky burns out in the normal one, and you get that horrible exposure shift that tells you it's from a phone or cheap SLR camera. The HDR one has a more film-like quality to it, likely because professional cameras shoot in HDR and don't have the exposure shifts. Ultimately, the aim would be to get the output as close to the expensive cameras as possible using whatever technique gives the best results:

http://vimeo.com/43741153
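
A toy version of that interlaced-exposure approach: capture at double the frame rate with alternating short/long exposures, then merge each pair into one output frame. The merge weighting below is a naive stand-in, not HTC's actual pipeline, and the exposure ratio is assumed.

```python
import numpy as np

def merge_exposure_pair(short_exp, long_exp, exposure_ratio=4.0, clip=1.0):
    """Merge a short and a long exposure of (nearly) the same moment.

    Both frames are linear, 0..1. The long exposure is trusted in the shadows;
    the short exposure (scaled up by the ratio) takes over where the long one clips.
    """
    w_long = np.clip((clip - long_exp) / 0.1, 0.0, 1.0)   # weight fades to 0 near clipping
    recovered = short_exp * exposure_ratio                 # scene-referred values from the short frame
    return w_long * long_exp + (1.0 - w_long) * recovered

def hdr_video(frames, exposure_ratio=4.0):
    """frames alternate short, long, short, long, ... at 2x the output frame rate."""
    return [merge_exposure_pair(frames[i], frames[i + 1], exposure_ratio)
            for i in range(0, len(frames) - 1, 2)]

rng = np.random.default_rng(3)
stream = [np.clip(rng.random((4, 4)) * (0.25 if i % 2 == 0 else 1.2), 0, 1) for i in range(8)]
print(len(hdr_video(stream)), "HDR frames from", len(stream), "captured frames")
```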
post #63 of 66
You guys have an interesting discussion going on here as to how Apple could possibly implement this in a cell phone. The common approach of beam splitters, prisms and the like would be an extremely tight fit in a cell phone. The issue of parallax is a real problem, but the question then becomes how close you have to be for that problem not to be correctable with a little electronic processing. Mind you, Apple could potentially have the lenses only a few millimeters apart.

Whatever Apple does here, I suspect that electronic processing will play a big role in the success or failure of the technology. How that will happen depends big time on the final optical and sensor arrangement. Frankly, I'm not even convinced they will have anything ready for the iPhone 6. I'm still of the mind that if I need a real camera I will reach for one; if Apple produces something in a cell phone that outdoes a dedicated camera, they could change my mind.

In any event it will be interesting if Apple does deliver a vastly improved camera with the next iPhone.
post #64 of 66
Quote:
Originally Posted by Marvin View Post


There could be a better way to detect color than photodetectors with color filters in front. Tackling the two sets of data independently gives more freedom to deal with incoming light to get the best quality for both.
 

They would need to use a different type of sensor for the color information than for luminance. Assuming sensitivity is close enough across wavelengths and IR can be suitably filtered from the recorded spectrum, they could seemingly use unfiltered CMOS or CCD chips. The typical RGBG Bayer array is in fact made using filtered sensors, as the sensors can't otherwise differentiate between colors. I'm mostly skeptical of the idea that a much lower depth will be almost as good when it comes to perceived chroma. Right now with HDR it's based on a normalized 0 to 1 range where superbrights are computed based on where they no longer clip. I've noticed the trend toward this in smartphones too, and you'll start to see more interesting color correction applications pop up as a result if these companies provide any access to uncompressed footage for editing purposes. I have wanted to see an uncompressed output from these companies for some time, or the ability to customize the way the video is processed as it goes. If they want to keep it fairly automated, it will be some attempt to map everything within range prior to compression. With several exposures that should be possible. I'll look for info on that HTC later, but unfortunately I don't think Apple would be so forthcoming. As of right now you can't get anything really raw from any of the iPhones.
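
One crude way to picture that "map everything within range prior to compression" step: keep scene-referred float values past 1.0, then compress them into display range before encoding. The Reinhard curve here is just a stand-in for whatever mapping a vendor would actually use.

```python
import numpy as np

def tone_map_reinhard(linear):
    """Compress scene-referred values (which may exceed 1.0) into 0..1 for encoding."""
    return linear / (1.0 + linear)

scene = np.array([0.05, 0.5, 1.0, 4.0, 16.0])    # 'superbright' values survive past the old clip point
print(np.round(tone_map_reinhard(scene), 3))      # -> [0.048 0.333 0.5   0.8   0.941]
```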


 

Quote:


They could of course just forget their obsession with thin and put in a giant sensor with a telescopic lens attached but that would be too easy.

Some smartphones already do HDR video but they run at a high FPS and alternate the exposure between even and odd frames and then combine them. I think they combine the exposures after recording is done:

http://www.anandtech.com/show/6747/htc-one-review/8

 

"In the normal 1080p video it’s totally saturated, in the HDR video we can see clouds and sky, and the street appears different as well. There are however some recombination artifacts visible during fast panning left and right. There’s also some odd hysteresis at the beginning as you begin shooting video where the brightness of the whole scene changes and adapts for a second. This ostensibly prevents the whole scene from quickly changing exposures, but at the beginning it’s a little weird to see that fade-in effect which comes with all HDR video. HTC enables video HDR by running its video capture at twice the framerate with an interlaced high and low exposure that then get computationally recombined into a single frame which gets sent off to the encoder."

You can see the sky burns out in the normal one, and you get that horrible exposure shift that tells you it's from a phone or cheap SLR camera. The HDR one has a more film-like quality to it, likely because professional cameras shoot in HDR and don't have the exposure shifts. Ultimately, the aim would be to get the output as close to the expensive cameras as possible using whatever technique gives the best results:

http://vimeo.com/43741153

In the first video I noted the sky. They would have needed to compress the range regardless to get that. It's just that if plenty of information is captured, they may be able to use a feathered range to make adjustments. If you're interested in seeing something like that in practice, Nuke and DaVinci Resolve have light versions; Resolve Lite is especially good. The point was that you can take a certain range, regardless of what the software calls it, and make a fairly discrete adjustment to that range because you can differentiate it well. Otherwise you could have information for a sky which is still remapped out of range in the target gamut. I've never seen any evidence that professional cameras specifically shoot in HDR in the sense of multiple exposures. Their dynamic range has increased over the years; at this point some sensors for still capture claim as high as 14 stops of dynamic range. What you actually get varies depending on conditions, and video grading software does a better job of making use of that range. The idea of having values beyond the typical clamp range isn't new, but more professional cameras have functional linear workflows that preserve the original data as far as possible by rasterizing previews only and maintaining the original framebuffer values rather than gamma-corrected versions. I'm tired today, so I may have messed up my explanation somewhere.
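
The distinction in miniature: do the grade on the linear framebuffer values and only apply the display transfer function at the end. The sRGB curve below is one common choice, used purely for illustration; the grade itself is an arbitrary example.

```python
import numpy as np

def linear_to_srgb(x):
    """Standard sRGB encoding curve (display-referred output)."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

linear = np.array([0.02, 0.18, 0.5, 0.9])      # scene-referred framebuffer values
graded = linear * 1.5                          # an exposure push done in linear light
print(linear_to_srgb(np.clip(graded, 0, 1)))   # gamma applied only for the preview/encode
```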

 

 

The second one is my favorite, because it used a song from Ferris Bueller's Day Off. The subtitles are great.
 

quote from the video

Quote:

 

The only color correction I did was to even out the color temperature discrepancies between the two cameras, and those adjustments were done on the Alexa footage since I value my sanity.

 

The Alexa was a bit more subtle. It had less noise. I've seen these differences across still and video assets.

post #65 of 66
Quote:
Originally Posted by wizard69 View Post

I'm still of the mind that if I need a real camera I will reach for one; if Apple produces something in a cell phone that outdoes a dedicated camera, they could change my mind.

Dedicated as in DSLR? In which case I would love a full manual option in iOS.
How to enter the Apple logo  on iOS:
/Settings/Keyboard/Shortcut and paste in  which you copied from an email draft or a note. Screendump
post #66 of 66
Quote:
Originally Posted by mpantone View Post
 

There is no such thing according to the iOS end user documentation.

 

There is a sleep mode in iOS, which turns off the display. The device still receives messages/push notifications, takes calls, plays music, etc.

 

The only other setting is "Off."


I know, that's why I didn't understand his comment. Thought it was some kind of hidden routine in iOS. I still don't understand his comment.
