Portrait Mode on iPhone SE relies only on machine learning

Posted in General Discussion
Apple's new iPhone SE is the company's first -- and thus far, only -- iPhone to rely solely on machine learning for Portrait Mode depth estimation.

The iPhone SE can create depth maps from a single 2D image using machine learning. Credit: Halide


The iPhone SE, released in April, appears to be largely a copy of the iPhone 8, right down to its single-lens rear camera. But under the hood, there's far more going on for depth estimation than in any iPhone before it.

According to a blog post from the makers of camera app Halide, the iPhone SE is the first in Apple's lineup to use "Single Image Monocular Depth Estimation." That means it's the first iPhone that can create a portrait blur effect using just a single 2D image.
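Halide's post doesn't detail Apple's network, but the general shape of single-image depth estimation is straightforward: one RGB frame goes into an image-to-image neural network, and a per-pixel depth map comes out. Here is a minimal Swift sketch using Vision and Core ML, where "MonocularDepth" is a hypothetical bundled model standing in for whatever Apple actually ships:

```swift
import Foundation
import CoreML
import Vision

// Sketch only: "MonocularDepth.mlmodelc" is a hypothetical Core ML model
// (for example, a converted MiDaS/FCRN-style network). Apple has not
// published the model the iPhone SE actually uses.
func estimateDepth(from image: CGImage) throws -> CVPixelBuffer? {
    guard let modelURL = Bundle.main.url(forResource: "MonocularDepth",
                                         withExtension: "mlmodelc") else { return nil }
    let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))

    let request = VNCoreMLRequest(model: visionModel)
    request.imageCropAndScaleOption = .scaleFill   // let Vision resize to the model's input size

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])

    // Image-to-image models surface their output as a pixel buffer observation;
    // here that buffer would be a single-channel map of relative depth.
    return (request.results?.first as? VNPixelBufferObservation)?.pixelBuffer
}
```

On a phone, a model like this would typically be dispatched to the Neural Engine or GPU; the point is simply that a single frame is enough input.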

On past iPhones, Portrait Mode has required at least two cameras. That's because the best source of depth information has long been the comparison of two images captured from slightly different positions. Once the system compares those images, it can separate the subject of a photo from the background, allowing for the blurred-background, or "bokeh," effect.
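That two-camera approach boils down to simple geometry: the closer a subject is, the further it shifts between the two slightly offset views. A toy Swift calculation of the standard pinhole-stereo relationship, with every number illustrative rather than taken from Apple's hardware:

```swift
// Toy illustration of the stereo relationship dual-camera iPhones exploit:
// depth is inversely proportional to disparity, i.e. how far a feature shifts
// between the two views. Pinhole model: Z = f * B / d.
func depthInMeters(focalLengthPixels f: Double,
                   baselineMeters b: Double,
                   disparityPixels d: Double) -> Double {
    f * b / d
}

// A feature that shifts 12 px between lenses about 10 mm apart, with an
// illustrative focal length of 2,800 px, sits roughly 2.3 m away; nearer
// subjects shift more, which is what lets the phone cut them out.
let subjectDepth = depthInMeters(focalLengthPixels: 2800,
                                 baselineMeters: 0.010,
                                 disparityPixels: 12)
```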

The iPhone XR changed that, introducing Portrait Mode support through the use of the sensor's "focus pixels," which can produce a rough depth map. But while the new iPhone SE also has focus pixels, its older sensor lacks the coverage required for depth mapping.
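On iPhones whose sensors do have enough focus-pixel coverage (or a second lens), that depth map is handed to apps along with the photo through AVFoundation rather than being inferred afterwards. A rough Swift sketch of opting into hardware depth delivery, with capture-session setup omitted:

```swift
import AVFoundation

// Sketch: ask AVFoundation for the hardware-derived depth map alongside the
// photo on devices that support it. The SE's sensor can't provide this, which
// is why it falls back to machine learning. Capture-session setup is omitted.
let photoOutput = AVCapturePhotoOutput()
// ... add a camera input and photoOutput to a configured AVCaptureSession here ...

if photoOutput.isDepthDataDeliverySupported {
    photoOutput.isDepthDataDeliveryEnabled = true

    let settings = AVCapturePhotoSettings()
    settings.isDepthDataDeliveryEnabled = true

    // In the AVCapturePhotoCaptureDelegate callback, photo.depthData is an
    // AVDepthData map built by the sensor, not by a neural network.
    // photoOutput.capturePhoto(with: settings, delegate: self)
}
```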

"The new iPhone SE can't use focus pixels, because its older sensor doesn't have enough coverage," Halide's Ben Sandofsky wrote. An iFixit teardown revealed on Monday that the iPhone SE's camera sensor is basically interchangeable with the iPhone 8's.

Instead, the entry-level iPhone produces depth maps entirely through machine learning. That also means that it can produce Portrait Mode photos from both its front- and rear-facing cameras. That's something undoubtedly made possible by the top-of-the-line A13 Bionic chipset underneath its hood.

The depth information isn't perfect, Halide points out, but it's an impressive feat given the relative hardware limitations of a three-year-old, single-sensor camera setup. Portrait Mode on the iPhone SE also works only on people, though Halide says the new version of its own app enables bokeh effects on non-human subjects on the SE.
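However the depth map is produced, whether by two lenses, focus pixels, or a neural network, the blur itself is a separate and fairly ordinary image-processing step, which is presumably what lets Halide apply its own bokeh to non-human subjects. A minimal Core Image sketch in Swift; the filter choice and radius are illustrative stand-ins, not Halide's actual pipeline:

```swift
import CoreImage

// Sketch: given a depth map normalized so that farther pixels are brighter,
// a depth-weighted blur approximates the bokeh look. CIMaskedVariableBlur
// blurs more strongly where its mask is brighter. Values are illustrative.
func fauxBokeh(image: CIImage, depthMask: CIImage) -> CIImage? {
    guard let blur = CIFilter(name: "CIMaskedVariableBlur") else { return nil }
    blur.setValue(image, forKey: kCIInputImageKey)
    blur.setValue(depthMask, forKey: "inputMask")    // bright = far = more blur
    blur.setValue(12.0, forKey: kCIInputRadiusKey)   // maximum blur radius, in pixels
    return blur.outputImage
}
```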

Comments

  • Reply 1 of 6
KITA Posts: 409 member
    Instead, the entry-level iPhone produces depth maps entirely through machine learning. That also means that it can produce Portrait Mode photos from both its front- and rear-facing cameras. That's something undoubtedly made possible by the top-of-the-line A13 Bionic chipset underneath its hood.
Is this any different from Google's single lens portrait mode (no split pixels either) that's able to run on even mid-range Snapdragon SoCs from years ago? I don't see why you would buff the A13 Bionic otherwise; seemingly older iPhones, such as the iPhone 8, should be able to process this, one would presume.
  • Reply 2 of 6
MplsP Posts: 4,007 member
    KITA said:
    Instead, the entry-level iPhone produces depth maps entirely through machine learning. That also means that it can produce Portrait Mode photos from both its front- and rear-facing cameras. That's something undoubtedly made possible by the top-of-the-line A13 Bionic chipset underneath its hood.
Is this any different from Google's single lens portrait mode (no split pixels either) that's able to run on even mid-range Snapdragon SoCs from years ago? I don't see why you would buff the A13 Bionic otherwise; seemingly older iPhones, such as the iPhone 8, should be able to process this, one would presume.
    Agreed. There's nothing wrong with the A13, but this is image processing that can occur after the photo is taken; it doesn't need to occur in real time and thus shouldn't need a top of the line processor.

    Still, it's cool that they added the feature; since it's software based, I'm hoping they can add it to other older phones as well.
  • Reply 3 of 6
Xed Posts: 2,836 member
    MplsP said:
    KITA said:
    Instead, the entry-level iPhone produces depth maps entirely through machine learning. That also means that it can produce Portrait Mode photos from both its front- and rear-facing cameras. That's something undoubtedly made possible by the top-of-the-line A13 Bionic chipset underneath its hood.
Is this any different from Google's single lens portrait mode (no split pixels either) that's able to run on even mid-range Snapdragon SoCs from years ago? I don't see why you would buff the A13 Bionic otherwise; seemingly older iPhones, such as the iPhone 8, should be able to process this, one would presume.
    Agreed. There's nothing wrong with the A13, but this is image processing that can occur after the photo is taken; it doesn't need to occur in real time and thus shouldn't need a top of the line processor.

    Still, it's cool that they added the feature; since it's software based, I'm hoping they can add it to other older phones as well.
    I have yet to test the feature on the new SE but I assumed that it does occur in real time so you can see how it will look before you take the shot.
  • Reply 4 of 6
gatorguy Posts: 24,612 member
    Xed said:
    MplsP said:
    KITA said:
    Instead, the entry-level iPhone produces depth maps entirely through machine learning. That also means that it can produce Portrait Mode photos from both its front- and rear-facing cameras. That's something undoubtedly made possible by the top-of-the-line A13 Bionic chipset underneath its hood.
Is this any different from Google's single lens portrait mode (no split pixels either) that's able to run on even mid-range Snapdragon SoCs from years ago? I don't see why you would buff the A13 Bionic otherwise; seemingly older iPhones, such as the iPhone 8, should be able to process this, one would presume.
    Agreed. There's nothing wrong with the A13, but this is image processing that can occur after the photo is taken; it doesn't need to occur in real time and thus shouldn't need a top of the line processor.

    Still, it's cool that they added the feature; since it's software based, I'm hoping they can add it to other older phones as well.
    I have yet to test the feature on the new SE but I assumed that it does occur in real time so you can see how it will look before you take the shot.
I believe the Pixels have done so in real time since 2017, and it's been open-sourced for some time now for other smartphones. Of course the SE may accomplish it differently and the results could be even better. I've not seen any reviews of the feature, TBH.
  • Reply 5 of 6
EsquireCats Posts: 1,268 member
The core difference between this and Google's implementation is that the iPhone doesn't rely on an HDR+ shot, nor on the subtle variation in the background across the different areas of the same lens. What this means is that, with an iPhone SE, you can take a photo of an old printed photograph and still get the portrait effect - that's where the A13 becomes useful in keeping this process usably fast and power-efficient.

That said, all smartphone portrait modes have mixed results: sometimes they're good, sometimes they suck, and there are some things that can't be properly simulated (e.g. when photographing objects that have lensed the background, or a curved reflection).
  • Reply 6 of 6
gatorguy Posts: 24,612 member
The core difference between this and Google's implementation is that the iPhone doesn't rely on an HDR+ shot, nor on the subtle variation in the background across the different areas of the same lens. What this means is that, with an iPhone SE, you can take a photo of an old printed photograph and still get the portrait effect - that's where the A13 becomes useful in keeping this process usably fast and power-efficient.
    Ah, so with an SE you can take a photograph of a photograph and as long as it's a picture of an identifiable person (a dog was used in one example but not successfully) you can apply a faux-bokeh effect to that new photo that already had bokeh because it was taken with a film camera and lens. Well I suppose there might be some use case but personally I don't think that's one of them.

I did take a couple of photos of old framed pictures last year for another family member, but as they already have realistic bokeh from the time they were taken over 60 years ago, adding more would result in something fake-looking. I can't imagine ever wanting to use it in that manner myself, but it's there if someone might want to do it anyway.

Now, as far as Apple using machine learning only (out of necessity) for applying the portrait effect to three-dimensional scenes using just the one lens, that's cool, and very well done too IMO, but I'd be shocked if it requires the processing power of an A13 to do so. It's become a relatively common feature on recent lower-priced smartphones with lesser processors and resources, even on something with a Qualcomm 215 processor and just 1GB of RAM.
    edited April 2020