Apple invents 3-sensor iPhone camera with light-splitting cube for accurate colors, low-light performance


Comments

  • Reply 21 of 42
    melgross Posts: 32,991 member
    It's interesting that they've been publishing a number of patents with a 90-degree camera. As I've been pushing this idea for years, I hope some form of it will be used. It gives major advantages to cameras in such thin phones. We've pretty much reached the end of what can be done with such thin cameras.
  • Reply 22 of 42
    Quote:

    Originally Posted by foggyhill View Post

    You gain sensor size, but lose light in the optics and mirrors, and it's more fragile. Not sure it can be a good trade-off. Maybe if it is coupled with some other tech they patented. Probably won't ever find its way into a phone.

    If they need to do real-time processing of the three colors in good light, it could be interesting. Maybe it would be for a supplementary camera on the phone?




    The patent states that the sensors receive three times the brightness because no filters are necessary, so the overall light reaching the sensors should be greater than in current systems. It's a very exciting advance.

  • Reply 23 of 42
    melgross Posts: 32,991 member
    The problem with this system is that it would mean less light hits the sensor array, due to the mirror and additional glass in the way. This means noisier images and slower shutter speeds, let alone the fact that there are more mechanical parts to go wrong... I wouldn't hold your breath for this to realistically be an option for a while... Sensor tech would need to improve substantially to make up for the lack of light.

    Not really. The mirrors lose well under 1% of the light. The lenses here will be vastly better than what we get now, and an optical zoom will be possible. While it's true that some light is lost due to the prism, the sensors themselves can have larger sensing sites and less filtering, which more than makes up for it. In fact, it's possible to make a sensor sensitive to just a particular spectrum, rendering filters unnecessary altogether. In addition, the lens can be made faster, as a much larger front element can now be used for more light gathering, and the rest of the elements can be of larger diameter as well.

    This is an interesting design that was used in the past with film and some early digital cameras, as well as analog video cameras. But modern technology could enable a much better result.
  • Reply 24 of 42
    melgross Posts: 32,991 member
    The difference is that there is no zoom lens in the iPhone at present, therefore the cost will be a loss of light compared to the standard focal length used for the current setup. Bottom line: with this system there will be a loss of light-gathering power, and that is my point.

    And your point is incorrect. While it may seem correct to you, modern technology can do wonders with this. New optical coatings reduce reflections to minuscule amounts, resulting in greater contrast and less loss of light through the system.

    With such small sensors, zooms can be made faster. The biggest advantage here is the possibility of using a much bigger front element, enabling much greater light gathering power. This couldn't be done with the very short cameras ordinarily used.

    I don't see your problem here. Do you think that Apple's researchers and designers don't know everything you think you know, plus a whole lot more?
  • Reply 25 of 42
    k2kw Posts: 2,035 member
    I would love for Apple to enter the high-end consumer market for cameras. A full-frame or APS sensor with iOS, optical zoom, great autofocus, and 40-megapixel output.
    Get rid of all the buttons on the typical DSLR and change options on screen or via Siri.
    I would happily pay $2,000-$2,500 for it. Right off the bat it would have superior iOS apps for editing pictures, better integration with iCloud, and better, more reliable communication components.
  • Reply 26 of 42
    dv8or Posts: 26 member
    I am sorry, Mel, but I don't know how you can say I am incorrect. I am a photographer, and there is always a price to pay for more glass in front of the sensor in terms of light-gathering capability - precisely why we don't have f/0.95 zooms.
  • Reply 27 of 42
    melgross Posts: 32,991 member
    Marvin wrote: »
    It's the global shutter that gets rid of the wobble or rolling shutter, not the prisms. CCDs usually have it but CMOS didn't. CCD wasn't suitable for phones or even cheaper cameras due to cost and power draw. Camera manufacturers likely found they could get 3 cheaper CCDs than a single large one and so split the colors onto 3 separate smaller CCDs to get similar performance to a large one. Apple should use global shutter CMOS sensors if they aren't already. FCPX corrects rolling shutter artifacts but it's better if they don't happen in the first place.
    You don't need a Bayer filter though. Right now, light shining on a sensor would get passed through a pattern of colored squares so each RGB component only gets about 1/3 of the sensor area (although it's not always split 3 ways) as well as losing light passing through the filter and the pixels are spread apart for each color. If you take away the filter and put in 3 sensors, you get 3x the light plus some saving from losing light through the filter. There's a test of this here:

    http://www.dpreview.com/forums/thread/2878305

    Quite a large loss in light. The mirror plus lenses will lose some light but these are intended to be highly transmissive materials and not absorbing anything so more efficient than a Bayer filter. I would expect at least a doubling in low light sensitivity over the current setup.

    Another benefit to the lens position shown is they can share the same sensor setup with the front-facing camera so both get high-resolution and with the mirror, they can put OIS in the iPhone 6, get rid of the camera bump and get optical zoom. I think this will be a setup that arrives in the iPhone in future, maybe not the 6S but the 7.
    It allows them to capture infrared light, which is used by the Kinect, etc., for depth tracking, and it works for night vision. An iPhone or iPad could be used as a baby monitor at night, for badger watching, and so on. Depth tracking opens up huge possibilities for apps. Developers can make apps that help you measure your body for clothes, e.g. bust size for a comfortable bra, leg length, waist size for the right jeans, head size for hats; you can make 3D models of objects/people very easily.

    http://appleinsider.com/articles/14/07/11/apples-secret-plans-for-primesense-3d-tech-hinted-at-by-new-itseez3d-ipad-app




    You can take 3D pictures of your food and put it on Instagram and people can look at it in 3D.

    CCDs send the data out all at once, reading the entire sensor in one pass, so to speak, while CMOS sensors read out a number of lines at a time, in sequence. That's why, in video in particular, we get that jelly effect when quickly panning the scene or recording a fast-moving object. Canon is using 8 lanes to read out their latest chips in their still cameras for video.

    Of course, in the "old days," when the first digital still cameras came out, the only sensors readily available were those from consumer analog video cameras, at 640×480, and so these were used. It took some time until CMOS sensors became good enough to use. I remember that the first digital camera to use one might have been a Mamiya, but I'm not sure. The performance was disappointing.

    I'm wondering how they're going to handle the anti-aliasing filtering.
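    The row-by-row CMOS readout described above is easy to model: if rows are sampled in sequence, each later row sees a moving subject at a later time, which is exactly the "jelly" skew. A toy sketch, with all numbers illustrative rather than taken from any real sensor:

```python
# Toy model of rolling-shutter skew: a CMOS sensor reads rows in sequence,
# so each row samples a horizontally moving vertical edge a little later,
# and the edge comes out slanted. A global shutter samples all rows at once.

def edge_positions(rows, row_read_us, speed_px_per_us, global_shutter=False):
    """x position (in pixels) of a moving vertical edge in each captured row."""
    return [0 if global_shutter else speed_px_per_us * r * row_read_us
            for r in range(rows)]

rolling = edge_positions(rows=5, row_read_us=10, speed_px_per_us=2)
glob = edge_positions(rows=5, row_read_us=10, speed_px_per_us=2,
                      global_shutter=True)
print(rolling)  # [0, 20, 40, 60, 80] -- the edge leans across the frame
print(glob)     # [0, 0, 0, 0, 0] -- the edge stays vertical
```

    The skew grows with the total readout time, which is why multi-lane readout (reading several lines in parallel) reduces, but doesn't eliminate, the effect.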
  • Reply 28 of 42

    But how do they want to fit it in a "5 mm" thin iPhone 7? Is it possible to miniaturize such a system so much?

  • Reply 29 of 42
    melgross Posts: 32,991 member
    dv8or wrote: »
    I am sorry, Mel, but I don't know how you can say I am incorrect. I am a photographer, and there is always a price to pay for more glass in front of the sensor in terms of light-gathering capability - precisely why we don't have f/0.95 zooms.

    I've been in the photographic business since 1969, working for McCann Erickson in fashion and TV commercial photography (I used to shoot the Summer Blonde TV commercials during the early and mid-1970s). In addition, I've had four years of physics and several courses in optical design as part of my solid-state physics interest. Later, for 28 years, I ran a commercial photo lab in NYC, where we developed our professional Kodachrome processor in conjunction with Kodak; the only professional Kodachrome line in the world.

    Canon and Nikon used to send me cameras and lenses to test, and I, along with a client and friend of mine, used to beta test Leaf digital backs for them.

    I think I understand a bit of this, as a result.
  • Reply 30 of 42
    melgross Posts: 32,991 member
    frantisek wrote: »
    But how do they want to fit it in "5 mm" thin iPhone 7? Is it possible to miniaturize such system so much?

    Microscope lenses can be made very small, and yet, are some of the most sophisticated lenses being made.

    An important part of lens theory, and something that is present in real lenses is diffraction. An ideal lens is at its best wide open. In reality, most lenses are best closed down a couple of stops. But if a lens doesn't need to be stopped down, it can be perfected at its native aperture. That's why smartphone lenses are better than I would otherwise expect.

    In fact, theory states that the faster the lens, the sharper it can be. I don't want to get into the theory, but with modern computer lens-design programs, lenses are so much better than ever. I can see Apple designing a lens that's faster and sharper if they need to. Newer optical plastics are approaching, and even bettering, some traditional optical glasses. This allows better lenses for smartphones. In fact, most of the major lens makers have been using plastic aspherical elements for some years now. Some of these lenses cost thousands; I know, because I have them.

    While Leica is very forward in announcing apochromatic lenses in their line, and sometimes Zeiss is as well, both Canon and Nikon have apochromatic lenses too, but they don't label them as such. Other makers do as well. Sigma has been making a big push into industry-leading lenses in the past few years.

    I bring this up because these days anyone can design an apo lens, and the technology to actually make them is now available to everyone. While I can't claim to know what Apple is doing, I wouldn't be surprised if they have been making semi-apo lenses for a couple of years. In examining the images from my 6+, I see very little chromatic aberration, a sign of at least a semi-apo, partly corrected lens. This is really pretty exciting to me.

    If the new system we're seeing here actually doesn't block infrared, but allows its transmission and blocking through changeable means, then that would be exciting as well. It could mean that Apple is looking into a super achromat. While a standard multi-element lens is an achromat (a lens that focuses two colors on the same plane, usually red and green), and an apo lens focuses all three visible colors, a super achromat focuses four colors, allowing infrared to focus on that plane as well (or, if the lens allows its passage, ultraviolet).

    This would lead to a very sharp image, and some interesting uses, such as astrophotography. Both Canon and Nikon have versions of their cameras that eliminate the infrared filter, allowing astrophotography.

    The more I think about the possibilities here, the more excited I get about whether Apple is actually going to make this.
  • Reply 31 of 42
    dv8or Posts: 26 member
    Thanks for the résumé, Mel - I won't bore you with mine, but you haven't refuted my point, so I am at a loss as to what you are arguing about. I am getting on a plane right now, so won't be able to respond, sadly.
  • Reply 32 of 42
    melgross Posts: 32,991 member
    dv8or wrote: »
    Thanks for the résumé, Mel - I won't bore you with mine, but you haven't refuted my point, so I am at a loss as to what you are arguing about. I am getting on a plane right now, so won't be able to respond, sadly.

    I'm not sure I understand your post. Where did I respond to you here? I can't find it. In fact, the only other post of yours here, under this screen name, is the one where you stated that you didn't know why I said you were incorrect. That is post 25.

    Are you posting under two screen names, with two accounts? Are you also Marc Rogoff? Because that's who I responded to in that way.

    If so, that's really against the rules.

    Otherwise, please point out the post to which you are referring.

    According to the information I see here, you are still online.
  • Reply 33 of 42
    They shouldn't have been granted the patent. Three-chip cameras have been around for a long time already. The only novel thing is putting one into a smartphone. They would be better off using a Foveon sensor that doesn't need a prism.
  • Reply 34 of 42
    desuserign Posts: 1,316 member
    dv8or wrote: »
    I am sorry, Mel, but I don't know how you can say I am incorrect. I am a photographer, and there is always a price to pay for more glass in front of the sensor in terms of light-gathering capability - precisely why we don't have f/0.95 zooms.

    It's often good to remember that the universe doesn't read résumés.
    • Just an example that contradicts your assertion that ". . . there is always a price to pay for more glass in front of the sensor in terms of light gathering capability . . . "
    We use lenses rather than pinholes on cameras because there is so much to be gained by putting that glass in front of the sensor (including the ability to gather far more light). Perhaps your assertion is too simplistic to apply without considering a wider range of factors?
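    As a rough illustration of the lens-versus-pinhole point: collected light scales with aperture area, i.e. with the square of the aperture diameter. The diameters below are made-up illustrative numbers, not specs from any actual camera:

```python
# Light gathered scales with aperture area (~ diameter squared), which is
# why putting glass (a lens) in front of the sensor gains enormous amounts
# of light over a pinhole, despite each glass surface losing a little.

def light_ratio(lens_diameter_mm, pinhole_diameter_mm):
    """How many times more light the lens aperture admits than the pinhole."""
    return (lens_diameter_mm / pinhole_diameter_mm) ** 2

print(light_ratio(4.0, 0.25))  # -> 256.0: a 4 mm aperture vs a 0.25 mm pinhole
```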
  • Reply 35 of 42
    desuserign Posts: 1,316 member
    thinkknot wrote: »
    They shouldn't have been granted the patent. Three-Chip cameras have been around for a long time already. The only novel thing is putting it into a smartphone. They would be better off using a Foveon sensor that doesn't need a prism.

    I have not read the patent, so I don't know what their claim is. But I doubt the claim is for using 3 chips. To me it looks like the method of splitting is different from other arrangements I've seen.

    My brother-in-law used to have a little point-and-shoot (I think it was made by Fuji, c. 1999) that used a 90° turn like this with a conventional sensor. It was a tiny camera (about 3/4 the size of a deck of cards) and very low-res, but it had great zoom capability. The optics were not too hot otherwise, though (distortion and CA, which could be addressed with post-processing now). Of course, optics etc. have come a long way since then.
  • Reply 36 of 42
    rtdunham Posts: 428 member
    Quote:

    Originally Posted by rob53 View Post

    Give me a true optical zoom of 5-10X (as well as macro)

    YES. Surely zoom will be the next big thing with the iPhone.

  • Reply 37 of 42
    staticx57 Posts: 403 member
    Quote:

    Originally Posted by K2kW View Post

    I would love for Apple to enter the high-end consumer market for cameras. A full-frame or APS sensor with iOS, optical zoom, great autofocus, and 40-megapixel output.

    Get rid of all the buttons on the typical DSLR and change options on screen or via Siri.

    I would happily pay $2,000-$2,500 for it. Right off the bat it would have superior iOS apps for editing pictures, better integration with iCloud, and better, more reliable communication components.

    No, no, no, no. I lament the day they take away the buttons. There are so many advantages to dedicated buttons, such as changing a setting without looking, not having to dig through menus to find something, and the SPEED of changing settings. Can you imagine having to say "Hey Siri... wait a second... change ISO to sixty-four hundred" rather than just holding a button and spinning a wheel?

     

    You really want to use iOS to edit pictures? I wouldn't settle for anything less than a full Mac with a large screen and a powerful editing program such as Lightroom. If you are doing it casually, most of the stuff you mentioned is totally unneeded, and you could just get away with a consumer camera or phone.

  • Reply 38 of 42
    ahmlco Posts: 432 member
    Quote:

    Originally Posted by K2kW View Post

    I would love for Apple to enter the high-end consumer market for cameras. A full-frame or APS sensor with iOS, optical zoom, great autofocus, and 40-megapixel output.

     

    Apple isn't going to enter the market with a dedicated camera because it's a steadily shrinking market. Pros won't use it, and enthusiasts are happier with buttons and knobs. (Actually, sales aren't "shrinking" so much as they are falling off a 15,000-ft cliff.)

     

    If Apple dumps this tech into a phone and gains decent low-light capabilities with any kind of optical zoom, then in 3-4 years practically no one is going to carry or even own a dedicated P&S camera. (Okay, you MIGHT have one with a longer zoom for shooting little Jimmy playing soccer or Sara's piano recital, or vice versa, but that's it.)

  • Reply 39 of 42
    Marvin Posts: 14,509 moderator
    dv8or wrote: »
    I am sorry, Mel, but I don't know how you can say I am incorrect. I am a photographer, and there is always a price to pay for more glass in front of the sensor in terms of light-gathering capability - precisely why we don't have f/0.95 zooms.

    The problem lies with the overall light loss you're assuming. You are reaching the conclusion that introducing lenses and mirrors will bring about a cumulative net loss in light compared to the current setup. However, let's say the mirror introduced a 25% loss and the lenses and prisms a 10% loss of what goes into those (it shouldn't be anywhere near those losses), then you would get roughly 1/3 of the light lost leaving 2/3. However, you got rid of the Bayer filter, which was creating a huge loss because each color is blocking 2/3 of the other colors:

    http://www.dpreview.com/forums/thread/2878305

    On top of this, you get multiple sensors, so more area to capture the light coming in. More sensors don't boost the incoming light intensity, but they help, and being monochromatic, they can be set up as dual-gain so that some of the photodetectors have a higher sensitivity and do a better job of picking up lower light levels; this gives you single-pass HDR as well as HDR video.

    The light loss from the mirror, lenses and prisms shouldn't come close to the light loss from a Bayer filter so the cumulative effect of this setup would be an improvement in light intensity, while still having all the added features.

    This isn't like taking a mirrorless camera with a fixed lens and just adding a mirror and variable lenses to it because with the standard camera, you aren't removing the Bayer filter and increasing the number of sensors - that scenario does lead to a cumulative loss.
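    The light-budget arithmetic above can be sketched as a quick back-of-the-envelope comparison. The 25% mirror loss, 10% optics loss, and 2/3 Bayer color blocking are the post's own illustrative worst-case figures; the 5% dye absorption is an assumed number added for the sketch:

```python
# Back-of-the-envelope light budget: Bayer-filtered single sensor vs. the
# filterless 3-sensor splitter, using the illustrative (worst-case) loss
# figures from the post above. The 5% dye absorption is an assumed number.

def splitter_throughput(mirror_loss=0.25, optics_loss=0.10):
    """Fraction of incoming light reaching the three filterless sensors."""
    return (1 - mirror_loss) * (1 - optics_loss)

def bayer_throughput(color_block=2 / 3, dye_absorption=0.05):
    """Fraction reaching a Bayer sensor: each pixel's filter blocks the
    other two color components (~2/3) plus some absorption in the dye."""
    return (1 - color_block) * (1 - dye_absorption)

split, bayer = splitter_throughput(), bayer_throughput()
print(f"splitter {split:.3f} vs bayer {bayer:.3f} -> {split / bayer:.1f}x")
# Even with pessimistic mirror/optics losses, ~2/3 of the light survives,
# versus roughly 1/3 through a Bayer filter.
```

    Even granting losses far worse than coated front-surface mirrors actually incur, the splitter comes out roughly 2x ahead, which is the cumulative point being made.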
  • Reply 40 of 42
    melgross Posts: 32,991 member
    Marvin wrote: »
    The problem lies with the overall light loss you're assuming. You are reaching the conclusion that introducing lenses and mirrors will bring about a cumulative net loss in light compared to the current setup. However, let's say the mirror introduced a 25% loss and the lenses and prisms a 10% loss of what goes into those (it shouldn't be anywhere near those losses), then you would get roughly 1/3 of the light lost leaving 2/3. However, you got rid of the Bayer filter, which was creating a huge loss because each color is blocking 2/3 of the other colors:

    http://www.dpreview.com/forums/thread/2878305

    On top of this, you get multiple sensors, so more area to capture the light coming in. More sensors don't boost the incoming light intensity, but they help, and being monochromatic, they can be set up as dual-gain so that some of the photodetectors have a higher sensitivity and do a better job of picking up lower light levels; this gives you single-pass HDR as well as HDR video.

    The light loss from the mirror, lenses and prisms shouldn't come close to the light loss from a Bayer filter so the cumulative effect of this setup would be an improvement in light intensity, while still having all the added features.

    This isn't like taking a mirrorless camera with a fixed lens and just adding a mirror and variable lenses to it because with the standard camera, you aren't removing the Bayer filter and increasing the number of sensors - that scenario does lead to a cumulative loss.

    This is correct. In fact, an individual mirror incurs about a 1-3% loss if it's a rear-surface mirror, and losses can be under 1% for the front-surface mirrors used in optical pathways. Apple doesn't seem to be using a prism, so losses will be lower there as well. Lenses can lose up to a third of a stop, though, but that would be true no matter what. We can tell if we know the "T" stop, which is the true transmission number used by movie and TV lenses.
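    The T-stop relationship mentioned here is straightforward: the T-number is the f-number adjusted for the lens's actual transmittance (T = f / √transmittance). A minimal sketch, with the f/2.0 lens and the third-of-a-stop loss used purely as illustrative assumptions:

```python
import math

def t_number(f_number, transmittance):
    """T-number = f-number / sqrt(transmittance): the aperture the lens
    effectively behaves like once real transmission losses are counted."""
    return f_number / math.sqrt(transmittance)

# A lens losing a third of a stop transmits 2**(-1/3) ~= 79% of the light.
third_stop = 2 ** (-1 / 3)
print(round(t_number(2.0, third_stop), 2))  # an f/2.0 lens behaves like ~T2.24
```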