Inside Apple's Deep Fusion, the iPhone 11 and Pro's computational photography feature

Posted in iPhone, edited December 2019
Apple is including a computational photography feature called 'Deep Fusion' in the iOS 13.2 beta, which can help produce highly detailed images from the iPhone 11 and iPhone 11 Pro cameras. AppleInsider explains how the groundbreaking feature functions.




Teased during the launch of the iPhone 11 and iPhone 11 Pro, Deep Fusion is an addition to the iOS camera app that combines multiple photographs into a single shot, one that boasts high levels of detail that would normally be produced via a long exposure. Apple's system is capable of optimizing the texture, details, and noise throughout a photograph, performing it all in just a second.

To do this, Apple takes advantage of the time before the photograph is taken, as well as the processing power of the A13 Bionic chip's Neural Engine, to apply "advanced machine learning" techniques to the task.

The on-stage explainer

As explained onstage during September's special event, Apple SVP of Worldwide Marketing Phil Schiller said the iPhone takes a total of nine images to produce one final shot. Before the user triggers the shutter, the iPhone has already taken "four short images and four secondary images," then takes "one long exposure" when the shutter is actually triggered.

The Neural Engine then combines the long exposure with the short images, and picks the best combination from the collection by going "pixel by pixel" through 24 million pixels. It then goes through the resulting image to optimize for detail and low noise.

Apple's Phil Schiller on stage highlighting the sweater detail in a photograph taken using the technique


The result is an image that Schiller calls "Computational Photography Mad Science," and it marks the first time the Neural Engine has been used to determine the output of a photographic process.
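Apple hasn't published how that frame selection actually works, but the broad idea of scoring candidate frames and keeping the best material can be sketched in a few lines of Swift. Everything below is an illustrative assumption rather than Apple's implementation: the grayscale buffers, the gradient-variance score, and the sharpestFrame helper.

```swift
// A toy stand-in for "picking the best" frames, not Apple's code: score each
// candidate short exposure by the variance of its horizontal gradients (a
// crude sharpness measure) and keep the highest-scoring one. Frames are
// assumed to be grayscale pixel buffers of equal size, laid out row by row.
func sharpestFrame(_ frames: [[Float]], width: Int) -> [Float]? {
    func sharpness(_ pixels: [Float]) -> Float {
        guard pixels.count > 1 else { return 0 }
        var sum: Float = 0
        var sumOfSquares: Float = 0
        var count: Float = 0
        for i in 0..<(pixels.count - 1) where (i + 1) % width != 0 {
            let gradient = pixels[i + 1] - pixels[i]   // skip row boundaries
            sum += gradient
            sumOfSquares += gradient * gradient
            count += 1
        }
        guard count > 0 else { return 0 }
        let mean = sum / count
        return sumOfSquares / count - mean * mean      // gradient variance
    }
    return frames.max(by: { sharpness($0) < sharpness($1) })
}
```

In practice the selection is reportedly made per pixel and per region by the Neural Engine rather than per frame, so this only conveys the flavor of the step.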

HDR, but not quite

Apple's decision to take multiple shots in quick succession makes sense for a number of reasons. While Schiller's description of "four short images" along with "four secondary images" sounds like eight normal photographs, they actually serve a few different purposes.

The four "short" shots refer to photographs taken with a lower exposure value than would usually be used. The lower value is on purpose, as this can enable the iPhone to get images with sharpness values it can use as part of its later processing. The "secondary" images are taken at a normal exposure, while the long exposure is done with a higher exposure value.

In effect this is similar in some ways to High Dynamic Range photography, where multiple images are taken either at the same time or in quick succession, with an under-exposed image combined with one at a higher exposure to capture a wider range of detail. However HDR photography doesn't usually rely on a combination of long and short exposure shots.
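For context, one stop of exposure-value (EV) bias corresponds to doubling or halving the light gathered, and on a phone with a fixed aperture that usually means varying the shutter speed or ISO. The toy Swift function below only illustrates that relationship; the base time and bias values are arbitrary.

```swift
import Foundation

// Exposure time scales as base * 2^EV when aperture and ISO are held fixed:
// EV -2 gathers a quarter of the light of the base exposure, EV +2 four times.
func bracketedExposureTimes(base: Double, evBiases: [Double]) -> [Double] {
    evBiases.map { base * pow(2.0, $0) }
}

// Example: a 1/60 s base exposure bracketed at -2, 0 and +2 EV
let times = bracketedExposureTimes(base: 1.0 / 60.0, evBiases: [-2, 0, 2])
// ≈ 1/240 s, 1/60 s and 1/15 s
```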

Compiling it all together

After selecting the best combination of short exposures and the long exposure, Deep Fusion processes the pair for noise, then works through the two images on a per-pixel basis to refine the shot, passing through four different steps. Each step analyzes a different portion of the image, such as sky and scenery or fabrics and skin, with each type of subject matter given a different level of processing.

This ranking of elements also tells the system how to combine the short and long exposures into the final image. While it may take the color tone and luminosity data from one image and the detail from the other, the ranking determines how far the system should lean toward detail or other data for each element.
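Apple hasn't detailed the blending math, but the idea of leaning toward one frame or the other on a per-pixel basis can be sketched as a simple weighted mix. Everything here is assumed for illustration: the grayscale buffers, the 0-to-1 weight map standing in for Deep Fusion's ranking of detail-heavy regions, and the fuse function itself.

```swift
// A toy per-pixel fusion, not Apple's algorithm. `detailFrame` stands in for
// the sharp short-exposure data, `cleanFrame` for the low-noise long exposure,
// and `weight` (0...1) for how detail-heavy each pixel's region is; higher
// weights pull the result toward the detailed frame.
func fuse(detailFrame: [Float], cleanFrame: [Float], weight: [Float]) -> [Float] {
    precondition(detailFrame.count == cleanFrame.count && cleanFrame.count == weight.count,
                 "all buffers must be the same size")
    return (0..<detailFrame.count).map { i in
        weight[i] * detailFrame[i] + (1 - weight[i]) * cleanFrame[i]
    }
}
```

In a real pipeline the weight map would come from the machine-learning pass that classifies sky, skin, fabric and so on; here it is simply an input.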

In theory, this computational photography method should turn out shots with higher levels of detail in areas users appreciate, such as skin or hair, and with cleaner edges around subjects than a standard photo could normally offer.

Easy to explain, harder to test

Deep Fusion is presently in testing, having reached the developer community on Wednesday with the first beta of iOS 13.2. AppleInsider has started looking at it, but it is a complex test, with many variables to isolate and evaluate.

In short, so far we've found only very minor differences between an updated and a not-yet-updated iPhone 11 Pro Max in photos taken with the telephoto lens. The feature only kicks in in medium and low light, and not for Night mode photos at all.

It's also not always clear when the feature is kicking in and when it isn't. But, as we said, the tests are only just beginning, and there is still some time before iOS 13.2 ships to the public.

We'll keep testing, and report back shortly on what this means in day-to-day use.

Comments

  • Reply 2 of 7
    jahblade
    "However HDR photography doesn't usually rely on a combination of long and short exposure shots."

    Yes it does. Changing the shutter speed is the primary means of capturing multiple exposures for HDR.
  • Reply 3 of 7
    sflocal Posts: 6,136 member
    In effect this is similar in some ways to High Dynamic Range photography, where multiple images are taken either at the same time or in quick succession, with an under-exposed image combined with one at a higher exposure to capture a wider range of detail. However HDR photography doesn't usually rely on a combination of long and short exposure shots.
    I'm confused by this. In the first portion you describe HDR as using multiple images, but in the last sentence you say HDR photography doesn't usually rely on that.
  • Reply 4 of 7
    fastasleep Posts: 6,453 member
    sflocal said:
    In effect this is similar in some ways to High Dynamic Range photography, where multiple images are taken either at the same time or in quick succession, with an under-exposed image combined with one at a higher exposure to capture a wider range of detail. However HDR photography doesn't usually rely on a combination of long and short exposure shots.
    I'm confused by this. In the first portion you describe HDR as using multiple images, but in the last sentence you say HDR photography doesn't usually rely on that.
    I think they meant that yes, they use multiple images, but that other forms of exposure bracketing are usually used, like varying f-stop. I'm not sure that's true, though, given that changing the aperture also affects the depth of field.
  • Reply 5 of 7
    sflocal said:
    In effect this is similar in some ways to High Dynamic Range photography, where multiple images are taken either at the same time or in quick succession, with an under-exposed image combined with one at a higher exposure to capture a wider range of detail. However HDR photography doesn't usually rely on a combination of long and short exposure shots.
    I'm confused by this. In the first portion you describe HDR as using multiple images, but in the last sentence you say HDR photography doesn't usually rely on that.
    You should. It's the same principle with a different goal. High Dynamic Range is a way to get into zones that are supposed to be out of reach for a normal camera, while Deep Fusion will get you much better detail in those zones. Deep Fusion will be much needed in low-light situations where noise reduction is normally applied, but now you'll be able to see details in those affected areas as if the lighting conditions were good. Deep Fusion will make it as if your lens is better than it actually is, while HDR will make it as if your sensor is better than it actually is.
    edited October 2019
  • Reply 6 of 7
    coolfactor Posts: 2,338 member
    But does this make the photos better than those coming out of mobile cameras with similar hardware?

    We'll see...


  • Reply 7 of 7
    kevin kee Posts: 1,289 member
    From the article all I get is Deep Fusion requires deep testing.

    Okay...