Looking at Apple's new camera system on the iPhone XS and iPhone XS Max

The iPhone XS and iPhone XS Max have an externally similar camera assembly to the iPhone X, but under the lens is a new system. AppleInsider delves into the new camera, and what it will do for you.

[Image: iPhone XS cameras]

In terms of camera specifications, Apple's latest flagship iPhones are very much like past models, but are at the same time very different. Calling it a "new dual-camera system," Apple carries over the twin 12-megapixel shooter layout introduced with iPhone X -- a wide-angle lens and a 2X telephoto lens stacked one atop the other.

In fact, looking at the raw specs, the iPhone XS and the iPhone X appear to have largely unchanged cameras. They share identical megapixel counts, f-numbers, and other key aspects, but that only paints part of the picture. Behind those lenses are larger sensors, a faster processor, and an improved image signal processor (ISP). Those, combined with several other new features, make these the best cameras ever to grace an iPhone.

[Image: iPhone camera comparison]

The main wide-angle camera has a new, larger sensor. The module boasts a 1.4-micrometer pixel pitch, up from the 1.22-micrometer pitch found on the iPhone X. This roughly 15-percent increase in pixel pitch, which works out to about 32 percent more light-gathering area per pixel, should greatly help with light sensitivity, and indeed Apple SVP of Worldwide Marketing Phil Schiller said as much onstage on Wednesday.
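
Since light capture scales with pixel area rather than pitch, the real gain is larger than the pitch numbers alone suggest. A quick back-of-the-envelope check in Swift, using only the pitch values quoted above:

```swift
import Foundation

// Light-gathering area scales with the square of the pixel pitch.
let iPhoneXPitch: Double = 1.22  // micrometers
let iPhoneXSPitch: Double = 1.4  // micrometers

let pitchGain = iPhoneXSPitch / iPhoneXPitch  // ~1.15
let areaGain = pitchGain * pitchGain          // ~1.32

print(String(format: "Pitch: +%.0f%%, area: +%.0f%%",
             (pitchGain - 1) * 100, (areaGain - 1) * 100))
// Prints: Pitch: +15%, area: +32%
```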

[Image: Photo layers]

Apple has also tightly integrated the ISP with the newly upgraded Neural Engine found in its A12 Bionic processor. The A12 Bionic is an extremely powerful chip that incorporates a number of upgrades over its A11 Bionic predecessor, including faster and more efficient processing cores and a beefed-up GPU. More importantly, the now 8-core Neural Engine plays a larger role in capturing and processing images.

For example, the Neural Engine assists with facial recognition, facial landmark detection, and image segmentation. Image segmentation separates the subject from the background, and is likely what enables Portrait Mode photos on the iPhone XR's single-lens setup.
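
Apple doesn't publish the internals of that pipeline, but the general shape of the operation can be sketched with the Vision framework, which later gained a public person-segmentation request. A minimal sketch, using that later-iOS API purely to illustrate the single-lens segmentation idea:

```swift
import Vision
import CoreGraphics
import CoreVideo

// Minimal sketch: produce a per-pixel mask separating a person from the
// background. This illustrates the segmentation idea only; it is not
// Apple's internal Portrait Mode pipeline.
func personMask(in image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced  // .fast, .balanced, or .accurate

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Single-channel mask: values near 1 mark the subject, near 0 the background.
    return request.results?.first?.pixelBuffer
}
```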

The additional speed allows more information to be captured, which has enabled Apple to build in a new Smart HDR feature. High Dynamic Range photos have been available on iPhone for some time, but Smart HDR is more accurate and snappy thanks to the faster processor capturing even more data.
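
Apple hasn't detailed how Smart HDR merges its frames, but it belongs to the broad family of exposure-fusion techniques. A toy sketch of that family, where the mid-gray weighting scheme is an illustrative assumption, not Apple's algorithm:

```swift
// Toy exposure fusion over grayscale frames with pixels in 0...1: weight
// each frame's pixel by how close it sits to mid-gray, so well-exposed
// data dominates the merged result.
func fuse(_ frames: [[Double]]) -> [Double] {
    guard let first = frames.first else { return [] }
    var merged = [Double](repeating: 0, count: first.count)
    for i in merged.indices {
        var weightSum = 0.0
        for frame in frames {
            let value = frame[i]
            let weight = max(1.0 - abs(value - 0.5) * 2.0, 0.001)
            merged[i] += value * weight
            weightSum += weight
        }
        merged[i] /= weightSum
    }
    return merged
}

// An underexposed, a normal, and an overexposed "frame" of one pixel each:
print(fuse([[0.1], [0.5], [0.9]]))  // ~[0.5]: the well-exposed sample dominates
```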

[Image: Depth Control on iPhone XS]

Portrait Mode has quickly become one of the most popular features on iPhones, and it gets even better with the iPhone XS. A new advanced depth control feature lets users adjust the bokeh to taste.

After shooting a Portrait Mode photo, users can edit the image and adjust the simulated aperture, increasing or decreasing the amount of background blur. The lower the on-screen F value, the wider the "aperture," presenting more blur and larger bokeh in the background.
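
A rough mental model of what the slider does: blur each pixel more the farther it sits from the subject's focal plane, and less as the simulated f-number rises. The formula and scale constant below are illustrative assumptions, not Apple's actual mapping:

```swift
// Toy Depth Control: blur grows with distance from the focal plane and
// shrinks as the simulated f-number rises. Constants are illustrative.
func blurRadius(pixelDepth: Double, subjectDepth: Double,
                simulatedFNumber: Double, scale: Double = 40) -> Double {
    return scale * abs(pixelDepth - subjectDepth) / simulatedFNumber
}

// Dragging the slider from f/16 toward f/1.4 widens the background blur:
print(blurRadius(pixelDepth: 3, subjectDepth: 1, simulatedFNumber: 16))   // 5.0 px
print(blurRadius(pixelDepth: 3, subjectDepth: 1, simulatedFNumber: 1.4))  // ~57 px
```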

Lastly, a new and improved TrueTone flash helps brighten low-light photos.

Turning to video, there are fewer, but still important, improvements. When shooting at 30 frames per second rather than 60, the dynamic range of the captured video is extended. This lends itself to more detailed clips, which is especially noticeable on HDR10-enabled displays. As demoed in Apple's keynote, low-light performance is also significantly improved, with less grain and noise.

Cameras are always important to iPhones, and never more so than during the S-model years where the phones tend to lack any large physical changes to the design. AppleInsider will be going hands-on with the latest round of iPhones soon to truly put the cameras to the test.

Comments

  • Reply 1 of 16
    ...I wish they would give angle of view or 35mm focal length equivalents for the zoom(s) - as a photography enthusiast the features seem very compelling, yet are such basics as focal length a core starting point...?  I shoot 21mm equivalent now - to me that is 'wide angle' yet to others 28 or even 35 are just fine...  The unfortunately discontinued Zeiss ExoLens Pro 'lens system' add on offered 18mm equivalent, with excellent reviews (sharpness, etc) and (to me) indiscernible barrel distortion.

    Anyone...?
  • Reply 2 of 16
    In photography, there are combinations of shutter speed and f/stop that yield the same amount of light. Generally speaking, the smaller the f/stop (larger number) the greater the depth of field. However, if you change one, the other must as well. For example, f/8 at 1/500 second yields the same amount of light as f/11 and 1/250 second. The depth of field of f/11 is greater than the depth of field of f/8. Apple's trick must be using the depth information stored with the image, not actually changing the f/stop.
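
    That equivalence checks out numerically with the exposure-value formula EV = log2(N²/t), keeping in mind that a nominal f/11 is really 8·√2 ≈ 11.3. A quick sketch, not tied to any camera API:

    ```swift
    import Foundation

    // Equal EV means equal light reaching the sensor: EV = log2(N^2 / t).
    func ev(fNumber: Double, shutter: Double) -> Double {
        return log2(fNumber * fNumber / shutter)
    }

    print(ev(fNumber: 8, shutter: 1.0 / 500))                // ~14.97
    print(ev(fNumber: 8 * sqrt(2.0), shutter: 1.0 / 250))    // ~14.97, identical exposure
    ```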
  • Reply 3 of 16
    In photography, there are combinations of shutter speed and f/stop that yield the same amount of light. Generally speaking, the smaller the f/stop (larger number) the greater the depth of field. However, if you change one, the other must as well. For example, f/8 at 1/500 second yields the same amount of light as f/11 and 1/250 second. The depth of field of f/11 is greater than the depth of field of f/8. Apple's trick must be using the depth information stored with the image, not actually changing the f/stop.
    They clearly say it's a simulated f/stop; you're not actually changing it. Assuming you have your camera set to aperture priority, with the shutter speed and ISO auto-adjusting, changing the f/stop affects the depth of field, which most people understand as background blur or bokeh.

    This is just simulating that by using depth mapping to dynamically change the blur of the background in post-processing, at least as I understand it. Portrait Mode has done this for a while (simulating bokeh), but this seems both better and more customizable.
  • Reply 4 of 16
    tmay Posts: 6,328 member
    ...I wish they would give angle of view or 35mm focal length equivalents for the zoom(s) - as a photography enthusiast the features seem very compelling, yet are such basics as focal length a core starting point...?  I shoot 21mm equivalent now - to me that is 'wide angle' yet to others 28 or even 35 are just fine...  The unfortunately discontinued Zeiss ExoLens Pro 'lens system' add on offered 18mm equivalent, with excellent reviews (sharpness, etc) and (to me) indiscernible barrel distortion.

    Anyone...?
    Last year's X has a 28 mm equivalent "normal" lens, and a 56 mm equivalent "telephoto" lens. Assuming that Apple hasn't reconfigured the lenses, which they likely have, a larger imager points towards a wider FOV.
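
    For the angle-of-view question above, the diagonal field of view follows directly from a 35mm-equivalent focal length, taking the full-frame diagonal as 43.27 mm. A quick sketch:

    ```swift
    import Foundation

    // Diagonal angle of view, in degrees, from a 35mm-equivalent focal length.
    func diagonalFOV(equivalent f: Double) -> Double {
        return 2 * atan(43.27 / (2 * f)) * 180 / .pi  // full-frame diagonal ~43.27 mm
    }

    print(diagonalFOV(equivalent: 28))  // ~75 degrees: the wide-angle camera
    print(diagonalFOV(equivalent: 56))  // ~42 degrees: the 2x telephoto
    ```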
  • Reply 5 of 16
    bb-15 Posts: 283 member
    In photography, there are combinations of shutter speed and f/stop that yield the same amount of light. Generally speaking, the smaller the f/stop (larger number) the greater the depth of field. However, if you change one, the other must as well. For example, f/8 at 1/500 second yields the same amount of light as f/11 and 1/250 second. The depth of field of f/11 is greater than the depth of field of f/8. Apple's trick must be using the depth information stored with the image, not actually changing the f/stop.
    Of course it’s a simulated f/stop since the photo has already been taken.
    This is not the same as looking through a camera viewfinder, changing the f/stop (closing down/opening up the lens) with the same shutter speed and noticing the change in exposure for the camera before a picture is taken.
    - Apple’s Bokeh control simulates what it would look like if the f/stop had been changed with the shutter speed and light exposure being constant. 
    This is done in AI, where one variable can be changed.
  • Reply 6 of 16
    I wish everyone using the term “bokeh” would actually learn what it means...

    In photography, bokeh is the aesthetic quality of the blur produced in the out-of-focus parts of an image produced by a lens. Bokeh has been defined as "the way the lens renders out-of-focus points of light". Differences in lens aberrations and aperture shape cause some lens designs to blur the image in a way that is pleasing to the eye, while others produce blurring that is unpleasant or distracting ("good" and "bad" bokeh, respectively).

    Source: Wikipedia 
  • Reply 7 of 16
    hentaiboy said:
    I wish everyone using the term “bokeh” would actually learn what it means...

    In photography, bokeh is the aesthetic quality of the blur produced in the out-of-focus parts of an image produced by a lens. Bokeh has been defined as "the way the lens renders out-of-focus points of light". Differences in lens aberrations and aperture shape cause some lens designs to blur the image in a way that is pleasing to the eye, while others produce blurring that is unpleasant or distracting ("good" and "bad" bokeh, respectively).

    Source: Wikipedia 
    Not sure what you are talking about, but I've been following the discussion and everyone here seems to understand what Bokeh is.
  • Reply 8 of 16
    Since you can retroactively control the blur, seems like there's no need to ever take a photo without Portrait Mode.
  • Reply 9 of 16
    igorsky said:
    Since you can retroactively control the blur, seems like there's no need to ever take a photo without Portrait Mode.
    There are many good reasons to take a photo without Portrait Mode; here are a few. Taking a photo in regular, non-Portrait mode is more responsive, and the camera locks focus faster. Non-Portrait mode lets you take close-up photos, while Portrait Mode always requires the subject to be at a certain distance from the camera to activate the Depth Effect, requiring more time to take the shot. Photos in Portrait Mode appear grainier and noisier than regular photos, especially in low light. The focal length of Portrait Mode may not be desirable for group shots or scenery. And the cutout effect of Portrait Mode is sometimes annoying when it can't cleanly separate hair from the background, making the edges and transitions look fake and ugly.
  • Reply 10 of 16
    Since the XR only has a wide-angle lens, does Portrait Mode do a digital zoom, or does it work at wide angle?
  • Reply 11 of 16
    bb-15 said:
    In photography, there are combinations of shutter speed and f/stop that yield the same amount of light. Generally speaking, the smaller the f/stop (larger number) the greater the depth of field. However, if you change one, the other must as well. For example, f/8 at 1/500 second yields the same amount of light as f/11 and 1/250 second. The depth of field of f/11 is greater than the depth of field of f/8. Apple's trick must be using the depth information stored with the image, not actually changing the f/stop.
    Of course it’s a simulated f/stop since the photo has already been taken.
    This is not the same as looking through a camera viewfinder, changing the f/stop (closing down/opening up the lens) with the same shutter speed and noticing the change in exposure for the camera before a picture is taken.
    - Apple’s Bokeh control simulates what it would look like if the f/stop had been changed with the shutter speed and light exposure being constant. 
    This is done in AI, where one variable can be changed.
    I know. But the demo showed a change of f/stop. They should have used something else to explain the change in depth of field.
  • Reply 12 of 16
    igorsky said:
    Since you can retroactively control the blur, seems like there's no need to ever take a photo without Portrait Mode.
    Can you control it until blur is gone? Maybe it’s like controlling from high blur to low blur?
  • Reply 13 of 16
    christy_p said:
    igorsky said:
    Since you can retroactively control the blur, seems like there's no need to ever take a photo without Portrait Mode.
    There are many good reasons to take a photo without Portrait Mode; here are a few. Taking a photo in regular, non-Portrait mode is more responsive, and the camera locks focus faster. Non-Portrait mode lets you take close-up photos, while Portrait Mode always requires the subject to be at a certain distance from the camera to activate the Depth Effect, requiring more time to take the shot. Photos in Portrait Mode appear grainier and noisier than regular photos, especially in low light. The focal length of Portrait Mode may not be desirable for group shots or scenery. And the cutout effect of Portrait Mode is sometimes annoying when it can't cleanly separate hair from the background, making the edges and transitions look fake and ugly.
    I totally hate when you see the mistakes around the edges of the subject when the blur couldn’t figure out what to blur exactly.
  • Reply 14 of 16

    I've got a few questions in my mind about the improvements to the camera system. Based on what Phil said during the presentation, it looks like the iPhone potentially stores 6-12 shots for each photo taken. That would potentially add to the size of each photo.

    What happens to these additional shots when you transfer the photos to the Mac? Will Photos on the Mac also be updated to allow for the new options, like setting the DoF for shots?

    What happens when I choose to export the unmodified originals from Photos on the Mac? Will it export each of the shots as a separate photo?

  • Reply 15 of 16
    igorsky said:
    Since you can retroactively control the blur, seems like there's no need to ever take a photo without Portrait Mode.
    Can you control it until blur is gone? Maybe it’s like controlling from high blur to low blur?
    In the demo it looked like you can completely remove it. But Christy_p made some good points as to why Portrait is not always ideal. 
  • Reply 16 of 16

    I've got a few questions in my mind about the improvements to the camera system. Based on what Phil said during the presentation, it looks like the iPhone potentially stores 6-12 shots for each photo taken. That would potentially add to the size of each photo.

    It computationally combines the over-, under-, and correctly exposed images to produce the final result. It's one image, not 6-12.