Report: Apple's next iPhone to sport 3.2-megapixel camera

Comments

  • Reply 161 of 189
    maedocmaedoc Posts: 11member
    Quote:
    Originally Posted by solipsism View Post


    With this expected increase in megapixels and processing power with the next iPhone, are we to expect a better and bigger lens or will Apple most likely keep the small size lens it currently uses?



    Will video recording be able to use a lower megapixel count than 3.2 in order to allow for higher framerates while maintaining adequate power usage? Something that jailbroken video recording apps can't do.



    Are there any phones that are currently using this CMOS so we can get a real idea of what to expect from the next iPhone's camera?



    See my post #154

    TrueFocus™ camera systems consist of a Wavefront Coded® lens; an OV3632 3.2 Megapixel CMOS image sensor; and an OV630 TrueFocus signal processor. Modules are available in an 8.0 x 8.0 x 6.0 mm footprint, which is smaller than the AF modules currently available. It's that 6.0 mm dimension that is critical for the iPhone.



    If indeed they are going to use the TrueFocus system, the lens can be very small, but encased in the proprietary filter that 'unfocuses' the light. The whole idea of intentionally blurring and then using a signal processor to do all of the focusing seems hard to believe.



    I have found no information on the video capabilities of this sensor package; not much on the OV630. The new iPhone version will likely have an updated signal processor that handles video well; 15fps is 2006 technology so I would hope 30fps @ 720p.



    As far as I can tell, no one uses this product yet. This technology was expected to be on the market in 2007 but has been constantly delayed. OmniVision did show off a sample camera at a trade show but that is it so far.
  • Reply 162 of 189
    maedocmaedoc Posts: 11member
  • Reply 163 of 189
    solipsismsolipsism Posts: 25,726member
    Quote:
    Originally Posted by Máedóc View Post


    See my post #154

    TrueFocus™ camera systems consist of a Wavefront Coded® lens; an OV3632 3.2 Megapixel CMOS image sensor; and an OV630 TrueFocus signal processor. Modules are available in an 8.0 x 8.0 x 6.0 mm footprint, which is smaller than the AF modules currently available. It's that 6.0 mm dimension that is critical for the iPhone.



    If indeed they are going to use the TrueFocus system, the lens can be very small, but encased in the proprietary filter that 'unfocuses' the light. The whole idea of intentionally blurring and then using a signal processor to do all of the focusing seems hard to believe.

    I have found no information on the video capabilities of this sensor package; not much on the OV630. The new iPhone version will likely have an updated signal processor that handles video well; 15fps is 2006 technology so I would hope 30fps @ 720p.



    As far as I can tell, no one uses this product yet. This technology was expected to be on the market in 2007 but has been constantly delayed. OmniVision did show off a sample camera at a trade show but that is it so far.



    Thanks for the detailed info.
  • Reply 164 of 189
    MacProMacPro Posts: 19,728member
    Quote:
    Originally Posted by Máedóc View Post


    see pic



    Work there?
  • Reply 165 of 189
    pxtpxt Posts: 683member
    Quote:
    Originally Posted by melgross View Post


    If he went away for a while, then it worked.



    I just got a post, as a mod, reporting my post!



    I tried to explain that it's a way of dealing with him when he gets too far out.



    The constant personal arguments between various posters and techstud are no pleasure to read for those of us who are interested in the Apple news. Discussion sites are full of that kind of nonsense and AppleInsider is one of the few refuges for some interesting debate. I hope he doesn't continue to dilute the experience.
  • Reply 166 of 189
    tenobelltenobell Posts: 7,014member
    65mm run through the camera horizontally is the shooting format. 70mm is a projection format.



    Quote:
    Originally Posted by gmcalpin View Post


    Also, IMAX isn't shot digitally; it's shot on 70mm film.



  • Reply 167 of 189
    tenobelltenobell Posts: 7,014member
    I would have to respectfully disagree. RED is actually a good source of misinformation, or at best half-truths.



    The 3:1:1 color-sampled HD camera is Sony's first 1080 HD format, HDCAM. It's from 10 years ago. Sony has introduced better HD formats since then. HDCAM is no longer the common format.



    RED does not really record 4K. A full-color 4K frame contains roughly 36 million color samples, not 12 million. RED interpolates those 36 million samples from the 12 million photosites it actually measures, then compresses the image. The resulting image has roughly 2.8K of real visual information.
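The arithmetic behind this can be sketched in a few lines. This is a back-of-envelope illustration using the figures quoted in this thread (12,065,000 photosites, and the common ~70% rule of thumb for effective linear resolution after Bayer demosaicing); these are the posters' premises, not official RED specifications.

```python
# Back-of-envelope numbers behind the "RED is not really 4K" argument.
# Assumes the sensor figure quoted in this thread and a ~70% Bayer
# resolution rule of thumb -- illustrative, not official RED specs.
photosites = 12_065_000                     # Mysterium sensor, per the quoted copy
samples_measured = photosites               # Bayer: one color value per photosite
samples_in_full_rgb_4k = photosites * 3     # R, G and B at every pixel

print(samples_in_full_rgb_4k)               # 36,195,000 -- the "36 million" figure
print(samples_measured / samples_in_full_rgb_4k)  # only 1/3 measured, 2/3 interpolated

effective_horizontal = 4096 * 0.7           # ~70% of linear resolution survives demosaicing
print(round(effective_horizontal))          # → 2867, i.e. roughly "2.8K"
```

The 36-million figure is just three color samples per pixel of a 12-megapixel frame; a single Bayer sensor only measures one of the three.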



    Quote:
    Originally Posted by digitalclips View Post




    The Red info is a good indicator of sensors quality, drool. I quote ..



    "Typical high-end HD camcorders have 2.1M pixel sensors and record with 3:1:1 color sub-sampled video at up to 30fps. RED offers the Mysterium ™ Super 35mm cine sized (24.4×13.7mm) sensor, which provides 4K (up to 30 fps), 3K (up to 60 fps) and 2K (up to 120 fps) capture, and all this with wide dynamic range and color space in 12 bit native RAW. At 4K, that’s more than 5 times the amount of information available every second and a vastly superior recording quality. In addition, you get the same breathtaking Depth of Field and selective focus as found in film cameras using equivalent 35mm P/L mount lenses. Mysterium ™ boasts greater than 66db Dynamic Range thanks to its large 29 sq. micron pixels. And 12,065,000 pixels deliver resolution that can only be called Ultra High Definition."



  • Reply 168 of 189
    tenobelltenobell Posts: 7,014member
    I don't think any of us are arguing that more pixels aren't better. To get real benefit from more pixels requires the entire camera system to improve.



    RED uses 35mm lenses that cost tens of thousands of dollars each. It uses advanced sensor technology, advanced processing, and advanced compression algorithms. All of this is needed to render a better image.



    Quote:
    Originally Posted by digitalclips View Post


    The point of mentioning it is that I always try to argue with the 'We don't need more' brigade.



  • Reply 169 of 189
    maedocmaedoc Posts: 11member
    Quote:
    Originally Posted by PXT View Post


    The constant personal arguments between various posters and techstud are no pleasure to read for those of us who are interested in the Apple news. Discussion sites are full of that kind of nonsense and AppleInsider is one of the few refuges for some interesting debate. I hope he doesn't continue to dilute the experience.



    I agree wholeheartedly. Free speech should not be stifled, but these personal arguments are bringing me down, making it a chore to read through these posts in order to find the valuable arguments. The moderators need to take some initiative and do a better job, or they risk losing good posters.
  • Reply 170 of 189
    jeffdmjeffdm Posts: 12,951member
    Quote:
    Originally Posted by digitalclips View Post


    Can't argue there lol.



    I dream of it most nights.



    The point of mentioning it is that I always try to argue with the 'We don't need more' brigade.



    We would never need more than 1MB RAM and a 100 MB hard drive etc ...



    One day ... not too far away the iPhone (or its descendant) will surpass the RED of today ... and be 3D



    What you're missing is that we're not saying more pixels is never better. We're talking about a device with a given set of constraints in terms of size and cost, not the entire field of cameras in general; no one is going to carry a RED ONE in their pocket any time soon. The problem is that pixel count is just one factor in the equation, and in optics, like many other things, you need to address the weakest link. When you need something as small and light as a phone, you have to make major trade-offs. As it is, my old 3MP PowerShot takes nicer pictures than the 10MP phone images I've seen lately. Beyond a certain point, diminishing returns kick in and, at best, you're just making bigger files without getting much in return.



    I'm sure we'll get 3D in phones eventually, but a lot of what RED gets you depends very much on sheer physical size. Unless you're going to lower your standards by a shocking amount, you aren't going to get a cinematic look out of any lens that's 1mm in diameter and a sensor that's the size of a pin. Optical physics simply won't allow for that.
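One way to put a number on the physical limit described above is the diffraction (Airy disk) spot size, roughly 2.44 × wavelength × f-number. The wavelength, f-number, and pixel pitch below are generic illustrative values, not any particular phone's specifications.

```python
import math

# Diffraction-limited spot size (Airy disk diameter) for a small camera.
# Illustrative assumptions: green light, f/2.8 aperture, 1.75 um pixels.
wavelength = 550e-9        # metres (green light, mid-visible)
f_number = 2.8             # typical small fixed-aperture camera lens
spot = 2.44 * wavelength * f_number
print(spot * 1e6)          # ≈ 3.76 micrometres

# At a 1.75 um pixel pitch, one diffraction spot already covers about
# two pixels -- cramming in more, smaller pixels stops buying resolution.
pixel_pitch = 1.75e-6
print(spot / pixel_pitch)  # ≈ 2.1 pixels per diffraction spot
```

This is the "weakest link" point in concrete form: once the optics are diffraction-limited at this scale, sensor megapixels beyond a certain count add file size, not detail.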
  • Reply 171 of 189
    melgrossmelgross Posts: 33,510member
    Quote:
    Originally Posted by Máedóc View Post


    Why is nobody here discussing the very real possibility that the camera Apple has chosen could be using the Omnivision Truefocus technology as mentioned by retroneo in the 8th post on this thread?



    Truefocus (aka Wavefront Coding) is a paradigm shift in the history of imaging and optics. Originally developed by the Imaging Systems Laboratory at the University of Colorado, the technology was privatized by CDM Optics in order to develop products for the scientific and medical industries. CDM was acquired by OmniVision in 2005, opening the door for consumer applications of Wavefront Coding technology.



    From the Omnivision website:

    "A revolutionary technology called Wavefront Coding™ is set to fundamentally change the nature of digital imaging. Wavefront Coding is the joint optimization of non-traditional, specialized optics and sophisticated signal processing. Unlike conventional imaging systems, with Wavefront Coded systems optics and signal processing are closely tied together to facilitate a new and improved image formation process that produces images with significantly increased depth of field without sacrificing light gathering. This close relationship between optics (encoding) and image processing (decoding) enables digital cameras to capture images that are sharp and clear at any time and throughout the object field without the need to physically move optics to focus. Wavefront Coding enables true 'point-and-shoot' digital cameras. Because Wavefront Coded systems shift a large portion of traditional optics and the auto focus mechanics into novel fixed optics and silicon, Wavefront Coded camera systems enable reduction in overall camera module size as well as continued systems cost reductions that follow Moore's Law.



    Wavefront Coding is now for the first time available in commercial applications, more specifically mobile-phone cameras where proximity focusing, low height optics and low tolerance assembly are combined with high target yields. Marketed under the name TrueFocus™, Wavefront Coding is used to optimize optics, sensors, and processors to maximize imaging performance and minimize costs. Such advances have not been possible until now. Using TrueFocus technology offers real and instant one-click photography without delay in image capture or spoiled pictures for the end user, effectively paving the way for the next wave of affordable camera phones offering superior imaging capability at higher resolutions without the need for auto focus."




    Although OmniVision also makes traditional image sensors, I think it's highly likely that Apple will use their TrueFocus™. If that's the case, then all this talk of depth of field, sensor size, number & quality of lens elements, etc. will need to be reconsidered.





    http://www.ovt.com/products/truefocus.php



    http://www.cdm-optics.com/?section=Tutorials



    If they use it, I surely hope they can disable it. This sounds like a photographer's nightmare!



    Good for some scientific imaging, and snapshots, but for pure photography, terrible!!!
  • Reply 172 of 189
    mdriftmeyermdriftmeyer Posts: 7,503member
    This could easily compete with Verizon's new product, once their backbone is fully lit.



    Video conferencing at businesses, using video phones that can interface with projectors to conference with someone out in the field, could be another option for the iPhone OS.
  • Reply 173 of 189
    melgrossmelgross Posts: 33,510member
    Quote:
    Originally Posted by Máedóc View Post


    See my post #154

    TrueFocus™ camera systems consist of a Wavefront Coded® lens; an OV3632 3.2 Megapixel CMOS image sensor; and an OV630 TrueFocus signal processor. Modules are available in an 8.0 x 8.0 x 6.0 mm footprint, which is smaller than the AF modules currently available. It's that 6.0 mm dimension that is critical for the iPhone.



    If indeed they are going to use the TrueFocus system, the lens can be very small, but encased in the proprietary filter that 'unfocuses' the light. The whole idea of intentionally blurring and then using a signal processor to do all of the focusing seems hard to believe.



    I have found no information on the video capabilities of this sensor package; not much on the OV630. The new iPhone version will likely have an updated signal processor that handles video well; 15fps is 2006 technology so I would hope 30fps @ 720p.



    As far as I can tell, no one uses this product yet. This technology was expected to be on the market in 2007 but has been constantly delayed. OmniVision did show off a sample camera at a trade show but that is it so far.



    Here:



    http://www.prnewswire.co.uk/cgi/news/release?id=190344



    They aren't "blurring" the image. They are defocussing it in a particular way. What happens is that the light bundles come out of the "lens" in a way that is peculiar to their system, in that objects near and far have a particular relationship to each other that they can decode, thus bringing the entire image, near and far, into focus.



    They are deconstructing the wavefront. I don't know if they use Fourier analysis or something else. But that breaks up complex waveforms into their sine wave equivalents for analysis. Then you can reconstruct it in a different way. We use this in audio all the time. It's also used in many other industries.
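The decomposition described here can be shown in a few lines of generic signal-processing code: the FFT breaks a complex waveform into sine-wave components that can be inspected and reassembled. This is an illustration of the general Fourier idea, not OmniVision's actual pipeline.

```python
import numpy as np

# Build a waveform from two sine components, then recover them with the FFT.
t = np.linspace(0, 1, 512, endpoint=False)          # 1 second at 512 Hz
signal = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(512, d=1/512)

# The two strongest components come back out at exactly 5 Hz and 40 Hz
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))                                # → [5.0, 40.0]

# ...and the inverse transform reassembles the original waveform
reconstructed = np.fft.irfft(spectrum)
print(np.allclose(reconstructed, signal))           # → True
```

The same decompose/modify/reassemble cycle underlies audio equalizers and, in two dimensions, image processing.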
  • Reply 174 of 189
    maedocmaedoc Posts: 11member
    Quote:
    Originally Posted by melgross View Post


    If they use it, I surely hope they can disable it. This sounds like a photographer's nightmare!



    Good for some scientific imaging, and snapshots, but for pure photography, terrible!!!



    Granted, in quality photography and cinematography a limited depth of field is not just a consequence of the lenses, but also a desired creative effect. But in the case of a mobile phone camera, I would think you would be willing to concede that some trade-offs have to be made, not the least of which is a lens and sensor package that is less than a cm thick. I think the TrueFocus system has more advantages than disadvantages for this form factor. And you couldn't turn it off, because it's a system that uses a specialized aspheric lens.



    Quote:
    Originally Posted by melgross View Post


    They aren't "blurring" the image. They are defocussing it in a particular way.



    I initially used the made-up word 'unfocus' and then secondarily 'blurred' because I thought that would give most people a visual idea of what was going on. Your term, "defocussing", is probably more proper.



    Quote:
    Originally Posted by melgross View Post


    They are deconstructing the wavefront. I don't know if they use Fourier analysis or something else. But that breaks up complex waveforms into their sine wave equivalents for analysis.



    Although I don't know much about Fourier analysis, I think you are on the right track. A search of CDM's domain brings up several research papers that mention Fourier. Here's a link that shows a simplistic comparison of what a traditional lens vs TrueFocus does:



    http://www.cdm-optics.com/?id=17



    The wavefront coding does focus the light but not to a traditional focal point.
  • Reply 175 of 189
    maedocmaedoc Posts: 11member
    I'm channeling Nostradamus. If DigiTimes is correct, and Omnivision is going to be the camera supplier then I predict an updated version of this:



    http://www.ovt.com/uploads/newsrelea...ease_Final.pdf



    ...will be the 3.2 megapixel CMOS sensor. (OV3642 is no longer listed amongst the current products on the OmniVision website.)



    And this:



    http://www.ovt.com/products/part_detail.php?id=36



    ...will be the 5 megapixel CMOS sensor. This is one serious chip for its size! Nothing else like it in the 1/4" format as far as I know. Check out these specs for video framerate & resolution: 60fps at 720p, 30fps at 1080p, 15fps at full resolution (QSXGA; also QVGA and VGA modes)
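It's worth noting that those mode/framerate pairs all land in the same general pixel-throughput ballpark, which is presumably the readout rate the single chip is budgeted for. The frame dimensions below are the standard ones for these formats (QSXGA taken as 2592x1944, the usual 5MP grid); this is an illustrative calculation, not from an OmniVision datasheet.

```python
# Sensor pixel throughput implied by each quoted video mode.
modes = {
    "720p @ 60fps":  1280 * 720 * 60,
    "1080p @ 30fps": 1920 * 1080 * 30,
    "5MP @ 15fps":   2592 * 1944 * 15,   # QSXGA, the sensor's full grid
}
for name, pixels_per_second in modes.items():
    print(f"{name}: {pixels_per_second / 1e6:.1f} Mpixel/s")
# All three land between ~55 and ~76 Mpixel/s -- the same order of readout rate.
```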



    The marketing materials say these are SOCs (System On a Chip) so perhaps they have eliminated the need for a signal processor altogether; still not clear on that. Regardless, even if a signal or video processor is needed, it is still one less chip than most CMOS system architectures. This equates to power conservation and Apple has to be looking for that in every component of the iPhone.



    These chips have embedded TrueFocus (post #154), so I don't see why Apple would not use them unless they think it is unproven technology. However, I think this fits Apple's iPhone roadmap of updating to cutting-edge, market-differentiating hardware that will reinforce its market-leading software.
  • Reply 176 of 189
    melgrossmelgross Posts: 33,510member
    Quote:
    Originally Posted by Máedóc View Post


    Granted, in quality photography and cinematography a limited depth of field is not just a consequence of the lenses, but also a desired creative effect. But in the case of a mobile phone camera, I would think you would be willing to concede that some trade-offs have to be made, not the least of which is a lens and sensor package that is less than a cm thick. I think the TrueFocus system has more advantages than disadvantages for this form factor. And you couldn't turn it off, because it's a system that uses a specialized aspheric lens.



    The thing is that over the past 2.5 years, ever since the iPhone was first announced, people have been complaining about the 2 MP sensor, and saying how much better the various 3, 4, 5, 7, and even 8 MP sensor phones were, because they could take "real" pictures, just as good as those from compact cameras (not that they're so great).



    Well, if people want higher MP counts in their phone cameras because they want to take "real" pictures that they can enlarge, they are going to want the features that "real" cameras have.



    We're not going to get it with a phone camera, unless major changes take place, and this technology, while good for some things, is NOT good for photography, even though, as I said, it's good for snapshots, for which we don't need great phone cameras anyway.



    Quote:

    I initially used the made-up word 'unfocus' and then secondarily 'blurred' because I thought that would give most people a visual idea of what was going on. Your term, “defocussing” is probably more proper.



    It's ok. I just wanted to make the point that this was a controlled procedure, whereas "blur" tends to make people think of something random, and mixed up, which this isn't.



    Quote:

    Although I don't know much about Fourier analysis, I think you are on the right track. A search of CDM's domain brings up several research papers that mention Fourier. Here's a link that shows a simplistic comparison of what a traditional lens vs TrueFocus does:



    http://www.cdm-optics.com/?id=17



    The wavefront coding does focus the light but not to a traditional focal point.



    Yeah, that's what I thought.



    It's not really focussing it. It's difficult to explain, but some methods leave you with no actual point of focus, and this is even more complex.



    You can get some idea of what they're doing. They're creating interference patterns, which can be analyzed using the Fast Fourier Transform, broken down into its simplest representative components, and reassembled again in the way they need it.



    It's really clever.
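The encode-then-decode idea being discussed can be sketched with a 1-D toy: "encode" a sharp scene with a known blur kernel, then recover it by inverse filtering in the Fourier domain. This is a generic illustration of the principle, not OmniVision's actual algorithm (real Wavefront Coding uses a cubic phase mask and 2-D processing, and its blur is designed to be depth-invariant).

```python
import numpy as np

# A "scene" of three point sources of different brightness
n = 256
scene = np.zeros(n)
scene[60], scene[130], scene[200] = 1.0, 0.7, 0.4

# A known blur kernel (the "encoding" optics), normalized to unit energy
psf = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2)
psf /= psf.sum()

# Encode: circular convolution via the FFT
H = np.fft.fft(np.fft.ifftshift(psf))           # kernel's transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))

# Decode: divide out the known kernel (tiny epsilon for numerical stability)
recovered = np.real(np.fft.ifft(
    np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + 1e-9)))

print(np.abs(blurred - scene).max() > 0.1)       # → True: blurred image is soft
print(np.allclose(recovered, scene, atol=1e-3))  # → True: decode restores sharpness
```

Because the blur is known and deterministic rather than random, the signal processor can invert it; that is the sense in which the system "focuses in silicon".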
  • Reply 177 of 189
    melgrossmelgross Posts: 33,510member
    Quote:
    Originally Posted by Máedóc View Post


    I'm channeling Nostradamus. If DigiTimes is correct, and Omnivision is going to be the camera supplier then I predict an updated version of this:



    http://www.ovt.com/uploads/newsrelea...ease_Final.pdf



    ...will be the 3.2 megapixel CMOS sensor. (OV3642 is no longer listed amongst the current products on the OmniVision website.)



    And this:



    http://www.ovt.com/products/part_detail.php?id=36



    ...will be the 5 megapixel CMOS sensor. This is one serious chip for its size! Nothing else like it in the 1/4" format as far as I know. Check out these specs for video framerate & resolution: 60fps at 720p, 30fps at 1080p, 15fps at full resolution (QSXGA; also QVGA and VGA modes)



    The marketing materials say these are SOCs (System On a Chip) so perhaps they have eliminated the need for a signal processor altogether; still not clear on that. Regardless, even if a signal or video processor is needed, it is still one less chip than most CMOS system architectures. This equates to power conservation and Apple has to be looking for that in every component of the iPhone.



    These chips have embedded TrueFocus (post #154), so I don't see why Apple would not use them unless they think it is unproven technology. However, I think this fits Apple's iPhone roadmap of updating to cutting-edge, market-differentiating hardware that will reinforce its market-leading software.



    It's certainly interesting.



    It's been said that moving beyond 65 nm would yield new design ideas that weren't possible before. Looks as though some innovative companies are coming out with plenty.
  • Reply 178 of 189
    mdriftmeyermdriftmeyer Posts: 7,503member
    Quote:
    Originally Posted by melgross View Post


    Here:



    http://www.prnewswire.co.uk/cgi/news/release?id=190344



    They aren't "blurring" the image. They are defocussing it in a particular way. What happens is that the light bundles come out of the "lens" in a way that is peculiar to their system, in that objects near and far have a particular relationship to each other that they can decode, thus bringing the entire image, near and far, into focus.



    They are deconstructing the wavefront. I don't know if they use Fourier analysis or something else. But that breaks up complex waveforms into their sine wave equivalents for analysis. Then you can reconstruct it in a different way. We use this in audio all the time. It's also used in many other industries.



    From my perspective, they already know the entire color spectrum visible to the human eye, they know the range of their optics in that spectrum, and the luminosity is recorded. They'll definitely be using Fourier transforms, and they will most certainly be using fractal imaging to build out patterned surfaces of self-replicating curves, which combined let them digitally enhance the range of focus based upon the basic laws of optics.
  • Reply 179 of 189
    jowie74jowie74 Posts: 540member
    Quote:
    Originally Posted by ltcommander.data View Post


    Maybe the 5-megapixel camera is the new iSight for Macs.



    Why would anyone need a 5MP iSight camera? Why would you need to have an iChat conversation with a resolution several times higher than HD?



    5MP is only useful if you are taking shots with a camera. Putting 5MP into a camera whose only use is to capture face video and send it over a broadband connection in real time is a waste of time, resolution, bandwidth... basically everything. I don't buy that at all.
  • Reply 180 of 189
    doroteadorotea Posts: 323member
    Quote:
    Originally Posted by chadisawesome View Post


    there is a typo in your image. they wrote "traditional" but they meant to write "doctored"



    I think the point of the traditional picture is that in traditional photography you have to select focus on the foreground, the background, or something else. Something is out of focus. It looks like the TrueFocus picture has everything in focus.