Sony's next-gen camera sensor could allow Apple to make thinner iPhones


Comments

  • Reply 61 of 70
    Quote:
    Originally Posted by PhilBoogie View Post


    Too bad for Slurpy, Galbi and the lot: Lytro software is Mac-only for now; Windows is in development.



    And apparently Steve met with their CEO, so we might yet see this stuff in iDevices.



    http://9to5mac.com/2012/01/23/steve-...field-sensors/
  • Reply 62 of 70
    muppetry Posts: 3,331 member
    Quote:
    Originally Posted by NormM View Post


    There have been cellphone cameras for more than five years that have used wavefront coding for extended depth of field (no need to focus; everything is always in focus using post-processing of the sensor's light field information). In fact, there were rumors that Apple would use one of OmniVision's "TrueFocus" wavefront coding sensors in the original iPhone in 2007, but Apple used a different OmniVision chip instead. I'm not sure why the new light field stuff is getting people excited -- I think you can do similar refocusing tricks with the wavefront coding sensors.



    I think there is one particular difference between the two techniques that may matter in terms of the final image.



    Light field recording preserves all information on subject distances, allowing reconstruction of images with arbitrary focus plane and depth of field. Very expensive on resolution of course.



    Wavefront coding produces a raw image that is approximately equally blurred over a very large depth of field, allowing digital signal processing to produce one final image that is in focus over that large depth of field, but with no control over depth of field or focus plane - i.e. it mimics a very small aperture without a reduction in light gathering.
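
    To put rough numbers on that last point, here is a minimal thin-lens sketch of the aperture/depth-of-field trade-off being avoided. The 4.3 mm focal length, 2 um circle of confusion and 1 m subject distance are assumed, phone-camera-ish example values, not figures from the article.

```python
# Minimal sketch: stopping down a conventional lens extends depth of field
# but costs light -- exactly the cost wavefront coding is meant to avoid.
# All numeric values below are assumed examples.

def dof_limits_mm(focal_mm, f_number, coc_mm, subject_mm):
    """Near and far limits of acceptable focus for a thin lens."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + (subject_mm - focal_mm))
    if subject_mm >= hyperfocal:
        far = float("inf")                 # everything beyond stays acceptably sharp
    else:
        far = hyperfocal * subject_mm / (hyperfocal - (subject_mm - focal_mm))
    return near, far

focal, coc, subject = 4.3, 0.002, 1000.0   # mm
for f_number in (2.4, 8.0):
    near, far = dof_limits_mm(focal, f_number, coc, subject)
    light = (2.4 / f_number) ** 2          # light gathered relative to f/2.4
    print(f"f/{f_number}: sharp from ~{near/1000:.2f} m to ~{far/1000:.2f} m, "
          f"light = {light:.0%} of f/2.4")
```

    In this toy example f/8 keeps roughly 0.5-7 m in focus but gathers only about a tenth of the light of f/2.4; wavefront coding aims for the f/8-like focus range while keeping the f/2.4 light.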
  • Reply 63 of 70
    stelligent Posts: 2,680 member
    Quote:
    Originally Posted by Tallest Skil View Post


    Screw pixels. I want the best sensor and lens on the freaking planet. There's no reason for a 12 MP camera.



    While I agree that increasing pixel count is not the way to improve photography, one cannot simply make this statement on its own. One can say there's no reason to opt for a 12 MP sensor that has the same size, dynamic range, etc. One can say it would be better to opt for a larger sensor without increasing pixel count. But it would be wrong to dismiss 12 MP on its own merit, just as it is wrong now for the digital camera industry to pretend that pixel count is the most important spec to keep increasing.
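
    A back-of-the-envelope illustration of why sensor size, not the megapixel label, is the real variable (the sensor dimensions below are typical approximate values, assumed for this sketch, not anything quoted here):

```python
import math

# The same "12 MP" gives very different per-pixel light-gathering area
# depending on sensor size. Dimensions are approximate, assumed examples.

def per_pixel_area_um2(width_mm, height_mm, megapixels):
    """Approximate sensor area available to each pixel, in square microns."""
    return (width_mm * 1e3) * (height_mm * 1e3) / (megapixels * 1e6)

sensors = {
    'small phone sensor (~4.5 x 3.4 mm)': (4.54, 3.42),
    'APS-C sensor (~23.6 x 15.7 mm)':     (23.6, 15.7),
}

for name, (w, h) in sensors.items():
    area = per_pixel_area_um2(w, h, 12)
    print(f"12 MP on {name}: ~{area:.1f} um^2 per pixel "
          f"(~{math.sqrt(area):.2f} um pitch)")
```

    Roughly a 20-fold difference in per-pixel area for the same 12 MP count, which is the point: the number alone tells you almost nothing.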
  • Reply 64 of 70
    Quote:
    Originally Posted by lightknight View Post


    This is politics, not Apple talk. However, since you started: for 150 years, America has invaded a country every decade. Yellow journalism created wars as far back as the 1880s. The USA has NO CREDIBILITY WHATSOEVER talking about morals.



    Iran, on the other hand, is a 6000-year-old country that gave birth to monotheism with Zoroastrianism, that heavily modified Islam and organised its elite, and, more importantly, that has been shunned by the international community since it rebelled against the exploitation of its oil by Anglo-Saxon companies in the thirties.



    Moreover, Iran wants nukes to defend against another aggressive country that has nukes (see the Vanunu affair to understand), and that has bombed its neighbors repeatedly (just check the fate of the Osirak reactor).



    The claim that Iran is the bad guy may or may not be true. Pretending that the USA has ANY moral right to play tough with Iran is, simply put, playing gunman for a mafia boss. Guess who.



    Some good points. It's indeed hypocritical for nations armed with nuclear weapons to tell another nation it cannot pursue the same *protection*.
  • Reply 65 of 70
    Please... if you choose to make comments regarding America's foreign policy, it would be nice if you were willing to state your country of origin / nationality for context.



    Thanks.
  • Reply 66 of 70
    normm Posts: 653 member
    Quote:
    Originally Posted by muppetry View Post


    I think there is one particular difference between the two techniques that may matter in terms of the final image.



    Light field recording preserves all information on subject distances, allowing reconstruction of images with arbitrary focus plane and depth of field. Very expensive on resolution of course.



    Wavefront coding produces a raw image that is approximately equally blurred over a very large depth of field, allowing digital signal processing to produce one final image that is in focus over that large depth of field, but with no control over depth of field or focus plane - i.e. it mimics a very small aperture without a reduction in light gathering.



    As long as there is enough information in the captured pixels to produce a crude depth map for an all-in-focus image (i.e., roughly how far away the thing captured by each pixel is), then a blur filter applied to parts of the in-focus image can do a pretty good simulation of controlled depth of field. Computational imaging techniques seem to be just getting started, and I doubt that the Lytro idea is efficient enough in its use of available light to be the future.
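
    A toy sketch of that idea (purely illustrative, using a synthetic image and depth map, not any vendor's actual pipeline): blur each depth layer of an all-in-focus image by an amount that grows with its distance from the chosen focus plane.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, depth, focus_depth, blur_per_unit=2.0, n_layers=8):
    """Fake a shallow depth of field from an all-in-focus image plus a depth map
    by blending depth slices blurred in proportion to their defocus distance."""
    out = np.zeros_like(image, dtype=float)
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sigma = blur_per_unit * abs(0.5 * (lo + hi) - focus_depth)
        blurred = gaussian_filter(image, sigma=sigma) if sigma > 0 else image
        mask = (depth >= lo) & (depth <= hi)
        out[mask] = blurred[mask]
    return out

# Tiny synthetic "scene": intensity and depth both increase from left to right.
image = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
depth = np.tile(np.linspace(0.0, 10.0, 256), (256, 1))
refocused_near = synthetic_dof(image, depth, focus_depth=2.0)  # "focus" on the left
refocused_far = synthetic_dof(image, depth, focus_depth=8.0)   # "focus" on the right
```

    Real pipelines have to handle occlusion edges and noise far more carefully, but the basic refocusing effect is just this per-depth blur.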
  • Reply 67 of 70
    muppetry Posts: 3,331 member
    Quote:
    Originally Posted by NormM View Post


    As long as there is enough information in the captured pixels to produce a crude depth map for an all-in-focus image (i.e., roughly how far away the thing captured by each pixel is), then a blur filter applied to parts of the in-focus image can do a pretty good simulation of controlled depth of field. Computational imaging techniques seem to be just getting started, and I doubt that the Lytro idea is efficient enough in its use of available light to be the future.



    That's a fascinating paper - thanks. It's still not apparent to me, though, that wavefront coding alone captures enough distance data even for a crude depth map. I thought the whole point of the aspheric lens in the transform plane was to make the degree of defocus a very insensitive function of object distance, which surely amounts to discarding the distance data.



    To quote from page 2 of the Levin et al. paper you referenced:
    "We note, however, that many approaches such as coded apertures and our new lattice-focal lens involve a depth-dependent PSF φd and require a challenging depth identification stage. On the positive side, such systems output a coarse depth map of the scene in addition to the all-focused image. In contrast, designs like wavefront coding and focus sweep have an important advantage: their blur kernel is invariant to depth."
    Am I missing something fundamental there?



    I think the bigger problem with the Lytro (light field) technique is that all those extra data have to be captured. With the image resolution now defined by the microlens array rather than the sensor, and with multiple pixels required under each microlens to resolve the ray directions, you need a huge sensor pixel count to maintain the image resolution that we are accustomed to. On the other hand, since you can still sum and average the intensities across those sub-lens pixel sets to construct the dynamic range of the image, the loss of light efficiency should be less of a problem.
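
    The resolution cost is easy to see with a naive estimate (the sensor size and per-microlens patch size below are assumed example numbers, not the specs of any shipping camera; real plenoptic designs claw some resolution back with super-resolution processing):

```python
# Naive resolution budget for a plenoptic (light field) camera: the deliverable
# image resolution is roughly the number of microlenses, and each microlens
# needs a patch of sensor pixels under it to resolve ray direction.

def naive_output_megapixels(sensor_megapixels, pixels_per_microlens_side):
    """Lower-bound image resolution if each microlens yields one output pixel."""
    return sensor_megapixels / (pixels_per_microlens_side ** 2)

sensor_mp = 40.0   # assumed sensor pixel count
patch_side = 8     # assumed 8x8 sensor pixels under each microlens
print(f"{sensor_mp:.0f} MP sensor, {patch_side}x{patch_side} pixels per microlens "
      f"-> ~{naive_output_megapixels(sensor_mp, patch_side):.2f} MP usable image")
```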
  • Reply 68 of 70
    bigpics Posts: 1,397 member
    Quote:
    Originally Posted by bigpics


    Like that new 4G chipset I'm betting will be crammed in there by next fall - and even if there's a smaller, more efficient part available then compared to today, it will still in all likelihood be a bigger power drain than the current one.



    Quote:
    Originally Posted by Firefly7475 View Post


    Q2/Q3 this year will see "single chip" LTE @ 28nm.



    That means 4G phones with battery life and size similar to current 3G phones.



    Good news - but with a caveat. Let's cross our fingers as the article also points out:



    Quote:

    As you may have heard however, the move to 28nm at both TSMC and Global Foundries isn't really going all that smoothly. The jump from 4x-nm to 28nm is a very big one, so it's not unexpected to have pretty serious teething problems as the process ramps up. I suspect that an aggressive 28nm roadmap that didn't pan out probably caught a lot of SoC and smartphone vendors in a position where they couldn't ship what they wanted to in 2011.



  • Reply 69 of 70
    However, the iPhone 4 benefits in many ways from its design, even if that adds a bit more thickness. I'm not worried about the phone's materials, but I'd prefer it a little thinner.
  • Reply 70 of 70
    solipsismx Posts: 19,566 member
    Quote:
    Originally Posted by Tallest Skil View Post


    And apparently Steve met with their CEO, so we might yet see this stuff in iDevices.



    http://9to5mac.com/2012/01/23/steve-...field-sensors/



    I wouldn't hold your breath. Looking at the camera and the sample images they offer, both the HW and the file sizes are huge.