iPhone vs Android: Two different photography and machine learning approaches

A controversy over Samsung's phone cameras has renewed the conversation surrounding computational photography and highlights the difference between Samsung's approach and Apple's in iOS.

Apple's computational photography aims for realism


It isn't a big secret that Apple relies upon advanced algorithms and computational photography for nearly all of its iPhone camera features. However, users are beginning to ask where to draw the line between these algorithms and something more intrusive, like post-capture pixel alteration.

In this piece, we will examine the controversy surrounding Samsung's moon photos, how the company addresses computational photography, and what this means for Apple and its competitors going forward.

Computational photography

Computational photography isn't a new concept. It became necessary as people wanted more performance from their tiny smartphone cameras.

The basic idea is that computers can perform billions of operations in a moment, like after a camera shutter press, to replace the need for basic edits or apply more advanced corrections. The more we can program the computer to do after the shutter press, the better the photo can be.

This started with Apple's dual camera system on the iPhone 7 Plus. Other photographic innovations before then, like Live Photos, could be considered computational photography, but Portrait Mode was the turning point for Apple.

Apple introduced Portrait Mode in 2016, using depth data from the two cameras on the iPhone 7 Plus to create an artificial bokeh. The company claimed it was possible thanks to the dual camera system and an advanced image signal processor that performed 100 billion operations per photo.
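
To make that idea concrete, here is a minimal sketch, in Python with OpenCV and NumPy, of how depth-assisted bokeh can work in principle. This is not Apple's implementation; the function, its parameters, and the band thresholds are illustrative assumptions. Pixels whose estimated depth sits near the focal plane stay sharp, while everything else is blended with progressively stronger blur.

```python
# Illustrative sketch only -- not Apple's actual Portrait Mode pipeline.
# `depth` is a per-pixel depth map normalized to 0..1 (0 = near, 1 = far).
import cv2
import numpy as np

def synthetic_bokeh(image, depth, focus_depth, levels=4, max_kernel=21):
    """Blend progressively blurred copies of `image`, keyed off how far each
    pixel's depth is from `focus_depth` (the plane we want to keep sharp)."""
    result = image.astype(np.float32)
    defocus = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)

    for i in range(1, levels + 1):
        k = 2 * ((i * max_kernel) // (2 * levels)) + 1          # odd kernel size, grows with i
        blurred = cv2.GaussianBlur(image, (k, k), 0).astype(np.float32)
        lo = i / (levels + 1)                                   # pixels below the first band stay sharp
        hi = (i + 1) / (levels + 1) if i < levels else 1.01     # last band catches defocus == 1.0
        mask = ((defocus >= lo) & (defocus < hi)).astype(np.float32)[..., None]
        result = result * (1.0 - mask) + blurred * mask

    return result.astype(np.uint8)
```

A call like `synthetic_bokeh(img, depth_map, focus_depth=0.2)` would keep a near subject crisp and soften the background. The real system runs on dedicated silicon with learned depth and matting models, but the shape of the problem is the same.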

Needless to say, this wasn't perfect, but it was a step into the future of photography. Camera technology would continue to adapt to the smartphone form factor, chips would get faster, and image sensors would get more powerful per square inch.

Portrait mode uses computational photography to separate the foreground


In 2023, it isn't unheard of to shoot cinematically blurred video using advanced computation engines with mixed results. Computational photography is everywhere, from the Photonic Engine to Photographic Styles -- an algorithm processes every photo taken on iPhone. Yes, even ProRAW.

This was all necessitated by people's desire to capture their life with the device they had on hand -- their iPhone. Dedicated cameras have physics on their side with large sensors and giant lenses, but the average person doesn't want to spend hundreds or thousands of dollars on a dedicated rig.

So, computational photography has stepped in to enhance what smartphones' tiny sensors can do. Advanced algorithms built on large databases inform the image signal processor how to capture the ideal image, process noise, and expose a subject.

However, there is a big difference between using computational photography to enhance the camera's capabilities and altering an image based on data that the sensor never captured.

Samsung's moonshot

To be clear: Apple is using machine learning models -- or "AI," for those who prefer the popular buzzword -- for computational photography. The algorithms control multi-image captures to produce the best results or to create depth-of-field profiles.

The image processor analyzes skin tone, skies, plants, pets, and more to provide proper coloration and exposure, not pixel replacement. It isn't looking for objects, like the moon, to provide specific enhancements based on information outside of the camera sensor.
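
A rough way to picture that distinction: region-aware adjustment re-weights the pixels the sensor captured, whereas object replacement pulls in data from elsewhere. The sketch below is a hypothetical Python example (the class names and gain values are made up, not Apple's), applying per-region exposure and saturation tweaks from a segmentation mask without substituting any content.

```python
# Toy sketch of class-guided tone adjustment -- not Apple's pipeline.
# Every output pixel is derived from the captured pixels; nothing is replaced.
import cv2
import numpy as np

# Hypothetical per-class tweaks: (exposure gain, saturation gain)
CLASS_TWEAKS = {"sky": (1.05, 1.15), "skin": (1.02, 0.95), "foliage": (1.00, 1.10)}

def adjust_by_class(image_bgr, masks):
    """masks: dict mapping a class name to a boolean HxW array for that region."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    for name, mask in masks.items():
        exp_gain, sat_gain = CLASS_TWEAKS.get(name, (1.0, 1.0))
        hsv[..., 2][mask] = np.clip(hsv[..., 2][mask] * exp_gain, 0, 255)  # brightness
        hsv[..., 1][mask] = np.clip(hsv[..., 1][mask] * sat_gain, 0, 255)  # saturation
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```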

We're pointing this out because those debating Samsung's moon photos have used Apple's computational photography as an example of how other companies perform these photographic alterations. That simply isn't the case.

Samsung's moon algorithm in action. Credit: u/ibreakphotos on Reddit


Samsung has documented how its phones, since the Galaxy S10, have processed images using object recognition and alteration. The Scene Optimizer began recognizing the moon with the Galaxy S21.

As the recently published document describes, "AI" recognizes the moon through learned data, and the detail improvement engine function is applied to make the photo clearer with multi-frame synthesis and machine learning.

Basically, Samsung devices will recognize an unobscured moon and then use other high-resolution images and data about the moon to synthesize a better output. The result isn't an image captured by the device's camera but something new and fabricated.
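
For illustration only, the pattern the document describes -- recognize an object, then enhance it with learned or stored data -- can be sketched roughly like this in Python. This is a toy stand-in, not Samsung's code: it detects a bright circular blob with a Hough transform and blends in high-frequency detail from a hypothetical stored reference image (`moon_reference.png`), which is exactly the step that injects information the sensor never captured.

```python
# Toy illustration of a detect-then-enhance pipeline -- not Samsung's actual code.
import cv2
import numpy as np

def enhance_moon(photo, reference_path="moon_reference.png", detail_weight=0.6):
    gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=1000,
                               param1=100, param2=30, minRadius=20, maxRadius=400)
    if circles is None:
        return photo                      # nothing moon-like found; leave the capture alone

    x, y, r = np.round(circles[0, 0]).astype(int)
    ref = cv2.resize(cv2.imread(reference_path), (2 * r, 2 * r))
    # High-frequency detail of the reference = reference minus a blurred copy of itself
    detail = ref.astype(np.float32) - cv2.GaussianBlur(ref, (0, 0), 5).astype(np.float32)

    out = photo.astype(np.float32)
    patch = out[y - r:y + r, x - r:x + r]
    if patch.shape[:2] == detail.shape[:2]:
        # This is the contested step: adding detail the sensor never recorded
        out[y - r:y + r, x - r:x + r] = np.clip(patch + detail_weight * detail, 0, 255)
    return out.astype(np.uint8)
```

The real system reportedly uses a trained recognition model and multi-frame synthesis rather than a single stored texture, but the structure -- classify first, then add outside detail -- is the part under debate.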

Overall, this system is clever because the moon looks the same no matter where on Earth it is viewed from. The only things that change are the color of the light reflected from its surface and the phase of the moon itself. Enhancing the moon in a photo will always be a straightforward calculation.

Both Samsung and Apple devices take a multi-photo exposure for advanced computations. Both analyze multiple captured images for the best portion of each and fuse them into one superior image. However, Samsung adds an additional step for recognized objects like the moon, which introduces new data from other high-resolution moon images to correct the moon in the final captured image.
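
A crude sketch of the shared multi-frame step (the tile size and sharpness metric here are arbitrary choices, not either vendor's recipe): take a burst of already-aligned frames and, for each tile, keep the frame with the most local detail.

```python
# Rough sketch of burst fusion by per-tile sharpness selection.
# Assumes the frames are the same size and already aligned.
import cv2
import numpy as np

def fuse_burst(frames, tile=64):
    h, w = frames[0].shape[:2]
    out = np.zeros_like(frames[0])
    # One sharpness map per frame: absolute Laplacian of the grayscale image
    sharpness = [np.abs(cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_32F))
                 for f in frames]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            ys, xs = slice(y, min(y + tile, h)), slice(x, min(x + tile, w))
            scores = [s[ys, xs].mean() for s in sharpness]   # detail per candidate frame
            out[ys, xs] = frames[int(np.argmax(scores))][ys, xs]
    return out
```

Samsung's extra step, as described above, comes after a stage like this: once the frames are fused, a recognized object such as the moon receives additional correction from outside data.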

Samsung's moon algorithm explained. Credit: Samsung


This isn't necessarily a bad thing. It just isn't something Samsung has made clear in its advertising or product marketing, which may lead to customer confusion.

The problem with this process, and the reason a debate exists, is how this affects the future of photography.

Long story short, the final image doesn't represent what the sensor detected and the algorithm processed. It represents an idealized version of what might be possible but isn't because the camera sensor and lens are too small.

The impending battle for realism

From our point of view, the key tenet of iPhone photography has always been realism and accuracy. If there is a perfect middle in saturation, sharpness, and exposure, Apple has trended close to center over the past decade, even if it hasn't always remained perfectly consistent.

We acknowledge that photography is incredibly subjective, but it seems that Android photography, namely Samsung's, has leaned away from realism. Again, that's not necessarily a negative, but it is an opinionated choice made by Samsung that customers have to be aware of.

For the purposes of this discussion, Samsung and Pixel devices have slowly tilted away from that realistic, representational center. They are opting for more saturation, sharpness, or day-like exposure at night.



The example above shows how the Galaxy S22 Ultra favored more exposure and saturation, which led to a loss of detail. These are innocent, opinionated choices, but in this case, the iPhone 13 Pro goes home with a more detailed photo that can be edited later.

This difference in how photos are captured is rooted in the opinionated algorithms used by each device. As these algorithms advance, future photography decisions could lead to more opinionated choices that cannot be reversed later.

For example, by changing how the moon appears using advanced algorithms without alerting the user, that image is forever altered to fit what Samsung thinks is ideal. Sure, if users know to turn the feature off, they could, but they likely won't.

We're excited about the future of photography, but as photography enthusiasts, we hope it isn't so invisible. Like Apple's Portrait Mode, Live Photos, and other processing techniques -- make it opt-in with obvious toggles. Also, make it reversible.

Tapping the shutter in a device's main camera app should take a representative photo of what the sensor sees. If the user wants more, let them choose to add it with toggles before the shot or with edits afterward.
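
One hypothetical way to express that idea in code (purely a design sketch, not any vendor's actual format): store the straight-off-the-sensor render plus a list of named, toggleable processing steps, so any step can be switched off after the fact.

```python
# Hypothetical non-destructive photo record -- a design sketch, not a real format.
from dataclasses import dataclass, field
from typing import Callable, List
import numpy as np

@dataclass
class ProcessingStep:
    name: str                                   # e.g. "scene_optimizer"
    enabled: bool                               # the user-visible toggle
    apply: Callable[[np.ndarray], np.ndarray]   # the actual image operation

@dataclass
class CapturedPhoto:
    original: np.ndarray                        # representative sensor render
    steps: List[ProcessingStep] = field(default_factory=list)

    def render(self) -> np.ndarray:
        img = self.original.copy()
        for step in self.steps:
            if step.enabled:                    # disabled steps are simply skipped
                img = step.apply(img)
        return img
```

Turning a step off and re-rendering gets you back to what the sensor saw; turning it on gives you the vendor's ideal.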

For now, try taking photos of the night sky with nothing but your iPhone and a tripod. It works.

Why this matters

It is important to stress that there isn't any problem with replacing the ugly glowing ball in the sky with a proper moon, nor is there a problem with removing people or garbage (or garbage people) from a photo. However, it needs to be a process that is controllable, toggle-able, and visible to the user.

Computational photography is the future, for better or worse


As algorithms advance, we will see more idealized and processed images from Android smartphones. The worst offenders will outright remove or replace objects without notice.

Apple will inevitably improve its on-device image processing and algorithms. But, based on how the company has approached photography so far, we expect it will do so with respect for the user's desire for realism.

Tribalism in the tech community has always caused debates to break out among users. Those have included Mac or PC, iPhone or Android, and soon, real or ideal photos.

We hope Apple continues to choose realism and user control over photos going forward. Giving a company complete opinionated control over what the user captures in a camera, down to altering images to match an ideal, doesn't seem like a future we want to be a part of.


Comments

  • Reply 1 of 20
    retrogusto Posts: 1,109 member
    I wonder if they also use past pictures of your friends and family to optimize new pictures of the same people.

    I have to admit, in the comparison photo, the iPhone shot might be closer to reality but the Samsung shot looks more natural to me, and I think most people would prefer it, even if the average participant in the AI forum might not. It’s even less burned out on the tops of the barrels. I think a person’s eyes would adjust to the weird white balance in that room, and you’d end up seeing it more like the Samsung rendition.
  • Reply 2 of 20
    You’re not quite understanding the line that’s being crossed here:  Samsung is introducing foreign imagery to the shots you are taking.  Apple is not.  With Samsung, you cannot be sure that what you’re seeing is what YOU captured.  This is a completely different class of modification vs what’s going on with Apple’s computational photography.  Apple is enhancing or otherwise applying modifications to image features that already exist.  Samsung is introducing image features your image MAY NEVER HAVE HAD. 
  • Reply 3 of 20
    avon b7 Posts: 7,621 member
    You’re not quite understanding the line that’s being crossed here:  Samsung is introducing foreign imagery to the shots you are taking.  Apple is not.  With Samsung, you cannot be sure that what you’re seeing is what YOU captured.  This is a completely different class of modification vs what’s going on with Apple’s computational photography.  Apple is enhancing or otherwise applying modifications to image features that already exist.  Samsung is introducing image features your image MAY NEVER HAVE HAD. 
    You can be completely sure of what you get by turning off the option of scene optimisation. 

    On Huawei phones you can toggle the AI enhancements on/off even from the finished image in the Gallery.

    In the specific case of the moon, I'm sure that the vast majority of people who see a great moonscape in real life but end up with a blurry white blob in their photo, would take the 'enhanced' version every time.

    It might not be what the 'sensor/ISP' saw but it would look more like what the user actually saw. 
  • Reply 4 of 20
    charlesn Posts: 820 member
    I have to admit, in the comparison photo, the iPhone shot might be closer to reality but the Samsung shot looks more natural to me, and I think most people would prefer it, 
    It's always funny to read when someone who hasn't spoken to "most people" is sure their personal preference is what "most people would prefer." 

    Neither approach is the "right" answer here. And even though I prefer Apple's approach, it's important to note that computational photography is still making countless decisions for me to deliver an image of what I "really" saw. 
  • Reply 5 of 20
    techconc Posts: 275 member
    I think people would be less concerned if the scene optimization were applied after the image was taken, rather than presented as what the camera itself was capable of.  Also, as others mentioned, there is a difference between enhancing an image with filters and replacing the content of an image.  Again, if the system retained both versions and the enhanced version was done in post processing, I don't think this would be an issue.  As it stands, it really misrepresents what the camera system in the Samsung phones is actually capable of.
  • Reply 6 of 20
    DAalseth Posts: 2,783 member
    When I heard about what Samsung was doing, the very first thing that came to mind was the scene in Deep Space 9 ("In the Pale Moonlight") when the Romulan Ambassador Vreenak finds that the data from Sisko was not real. He holds it up and loudly proclaims, "FAKE!"

    We are dancing around the word, but it needs to be said. There is a difference between an image that has had the colour balance, or whatever, adjusted, and one that has had data added from an outside source. One is enhanced and one is FAKE. It's just computerized Photoshopping. Utter crap. I take real images and I want my camera to take real images. We need to step up and be honest enough to say that what Samsung has done is to make FAKE images.

    We are rapidly approaching the point where nothing you don't see in person, with your own eyes, will be believable, and soon half of what you do see with your own eyes will be up for question as well. The growth of computer-driven photography, video, writing, art, and all the rest is going to do severe damage to the credibility of everything, from corporate statements, to government proclamations, to scientific reports, to school work, to phone calls from family members. Nothing will be above suspicion.

    And that will be a very sick society to live in.
  • Reply 7 of 20
    Shocking and completely unexpected, considering they are two different operating systems. Doh…
  • Reply 8 of 20
    retrogusto Posts: 1,109 member
    charlesn said:
    I have to admit, in the comparison photo, the iPhone shot might be closer to reality but the Samsung shot looks more natural to me, and I think most people would prefer it, 
    It's always funny to read when someone who hasn't spoken to "most people" is sure their personal preference is what "most people would prefer." 

    Neither approach is the "right" answer here. And even though I prefer Apple's approach, it's important to note that computational photography is still making countless decisions for me to deliver an image of what I "really" saw. 
    You don’t have to speak to “most people” to have an informed opinion about what most people might prefer, and given the condescension of your comment, I don’t mind saying it’s idiotic to think otherwise. I’ve been photographing for almost 50 years, studied art, photography and cinematography in multiple well-regarded universities, including a seminar with Vittorio Storaro in Italy, won a few awards for my photography and filmmaking, etc. That’s enough to have a valid opinion about what people tend to like.
  • Reply 9 of 20
    radarthekat Posts: 3,842 moderator
    I wonder if they also use past pictures of your friends and family to optimize new pictures of the same people.

    I have to admit, in the comparison photo, the iPhone shot might be closer to reality but the Samsung shot looks more natural to me, and I think most people would prefer it, even if the average participant in the AI forum might not. It’s even less burned out on the tops of the barrels. I think a person’s eyes would adjust to the weird white balance in that room, and you’d end up seeing it more like the Samsung rendition.
    Our eyes have a great ability to adjust to the lower light, so I'm confident a person with normal vision would see the detail in the brickwork in the lower right (slide back and forth a few times to really notice how much detail the Samsung pic loses).  The post on the left, with the high-mounted light switch, also loses a lot in the Samsung photo, as one more of several easy-to-spot examples.
  • Reply 10 of 20
    radarthekat Posts: 3,842 moderator
    charlesn said:
    I have to admit, in the comparison photo, the iPhone shot might be closer to reality but the Samsung shot looks more natural to me, and I think most people would prefer it, 
    It's always funny to read when someone who hasn't spoken to "most people" is sure their personal preference is what "most people would prefer." 

    Neither approach is the "right" answer here. And even though I prefer Apple's approach, it's important to note that computational photography is still making countless decisions for me to deliver an image of what I "really" saw. 
    The difference is that Samsung is leading the user to believe that its camera sensor and adjustment algorithms are actually Better than the competition that isn't pulling this 'cheap' trick.  And then the user brags that Samsung is "so much better than Apple" and they aren't corrected on that misinformed opinion.  The moon shot is being used to represent, in the minds of those who aren't aware of the trick, the entire Samsung handset photographic capability.  And that's deceptive and unethical.  I'm pretty sure, though I don't 'know,' that most people would disapprove of being deceived. 
  • Reply 11 of 20
    avon b7 said:
    You’re not quite understanding the line that’s being crossed here:  Samsung is introducing foreign imagery to the shots you are taking.  Apple is not.  With Samsung, you cannot be sure that what you’re seeing is what YOU captured.  This is a completely different class of modification vs what’s going on with Apple’s computational photography.  Apple is enhancing or otherwise applying modifications to image features that already exist.  Samsung is introducing image features your image MAY NEVER HAVE HAD. 
    You can be completely sure of what you get by turning off the option of scene optimisation. 

    On Huawei phones you can toggle the AI enhancements on/off even from the finished image in the Gallery.

    In the specific case of the moon, I'm sure that the vast majority of people who see a great moonscape in real life but end up with a blurry white blob in their photo, would take the 'enhanced' version every time.

    It might not be what the 'sensor/ISP' saw but it would look more like what the user actually saw. 

    Not nearly good enough. Not even close. "Scene optimization"?  Sure. That's what's going on. 

    This shouldn’t be on by default. It should be explained. It frankly shouldn’t exist as a feature without a secure paper trail to go with it on every photo that’s taken. “You’re sure the vast majority”. “Might not be” what the sensor saw. “Would look more like” what the user actually saw. 

    These qualified statements should scare the living sh*t out of everyone.  
  • Reply 12 of 20
    gatorguy Posts: 24,176 member
    avon b7 said:
    You’re not quite understanding the line that’s being crossed here:  Samsung is introducing foreign imagery to the shots you are taking.  Apple is not.  With Samsung, you cannot be sure that what you’re seeing is what YOU captured.  This is a completely different class of modification vs what’s going on with Apple’s computational photography.  Apple is enhancing or otherwise applying modifications to image features that already exist.  Samsung is introducing image features your image MAY NEVER HAVE HAD. 
    You can be completely sure of what you get by turning off the option of scene optimisation. 

    On Huawei phones you can toggle the AI enhancements on/off even from the finished image in the Gallery.

    In the specific case of the moon, I'm sure that the vast majority of people who see a great moonscape in real life but end up with a blurry white blob in their photo, would take the 'enhanced' version every time.

    It might not be what the 'sensor/ISP' saw but it would look more like what the user actually saw. 

    Not nearly good enough. Not even close. "Scene optimization"?  Sure. That's what's going on. 

    This shouldn’t be on by default. It should be explained. It frankly shouldn’t exist as a feature without a secure paper trail to go with it on every photo that’s taken. “You’re sure the vast majority”. “Might not be” what the sensor saw. “Would look more like” what the user actually saw. 

    These qualified statements should scare the living sh*t out of everyone.  
    Oh geez, get over it. These aren't courtroom evidence photos.

    The 15-year-old girl shooting Android or iPhone photos only wants fast, easy, and saturated. They're going on Snapchat or Facebook. Her 45-year-old mother wants soft, glowing skin and no eye bags. She wants to look like she's in her 30s. Neither one cares if it was exactly what the scene looked like as long as it's what they PREFER it looks like. 

    I made the mistake, on a paid shoot, of presenting an elderly woman just as she looked, as that was what she had indicated as her preference. She specifically told me she didn't want to look fake, like she was in her 60s.

    Perfect studio background and a few in a shaded and landscaped patio, natural light. The lighting was very good, poses were assorted and proper, and the whole shoot went without much trouble. She was a pleasure to work with.

    She hated just about every photo.

    Too many face lines.  Shadowing under the loose skin on her neck. Spots on her arms that she swore weren't there. So, forget what she told me: she obviously had a different view of how she looked, and it was not what the camera captured. It took a few hours of post work to come up with a portfolio that met with her approval. 

    If your iPhone was delivering exactly what the hardware was capturing and no more, no less, you would not like it.  At all. You can't turn off all the algorithmic optimizations, color, shadow and highlight adjustments, skin smoothing, and overall contrast and selective sharpening your phone is doing to find out what it actually did "see," but I can guarantee it's hot garbage compared to a large-sensor camera with premium lenses. The only reason you can mention the iPhone and "professional" in the same sentence is because Apple software is stepping in to replace and/or augment the missing parts the camera could not capture.
  • Reply 13 of 20
    avon b7 Posts: 7,621 member
    avon b7 said:
    You’re not quite understanding the line that’s being crossed here:  Samsung is introducing foreign imagery to the shots you are taking.  Apple is not.  With Samsung, you cannot be sure that what you’re seeing is what YOU captured.  This is a completely different class of modification vs what’s going on with Apple’s computational photography.  Apple is enhancing or otherwise applying modifications to image features that already exist.  Samsung is introducing image features your image MAY NEVER HAVE HAD. 
    You can be completely sure of what you get by turning off the option of scene optimisation. 

    On Huawei phones you can toggle the AI enhancements on/off even from the finished image in the Gallery.

    In the specific case of the moon, I'm sure that the vast majority of people who see a great moonscape in real life but end up with a blurry white blob in their photo, would take the 'enhanced' version every time.

    It might not be what the 'sensor/ISP' saw but it would look more like what the user actually saw. 

    Not nearly good enough. Not even close. "Scene optimization"?  Sure. That's what's going on. 

    This shouldn’t be on by default. It should be explained. It frankly shouldn’t exist as a feature without a secure paper trail to go with it on every photo that’s taken. “You’re sure the vast majority”. “Might not be” what the sensor saw. “Would look more like” what the user actually saw. 

    These qualified statements should scare the living sh*t out of everyone.  
    Storm in a teacup material. 

    I don't know if it's on by default or not. They can be criticised for not being more up front about what is going on (that is reasonable) but I can guarantee you that the vast majority of users (iPhone users included) will take the finished result over the same photo of the moon without the optimisation. 

    Why? Because for most people, the non-optimised version would get deleted simply because it didn't represent what they actually saw.

    At least in these cases the argument for the optimisation is precisely because it is nudging the photo closer to what the user saw.

    Some people are claiming the feature is actually replacing the moon photo with another. It is actually quite a bit more complex than that. I saw someone repeat the situation that led to this scandal, but they pasted an image of their cat onto the moon. The finished photo clearly retained the image of the cat even if craters had been encrusted into it.

    Anyone with an interest in photography to any degree will not be using the point and shoot default settings on the phone. They will go into 'pro' mode and get things to their liking. At least on Android phones where the 'pro' mode is normally available. 

    And like I said above, if Samsung's approach is anything like Huawei's, every photo that has been AI enhanced can have the changes removed by simply clicking on the blue 'AI' label on the photo in the Gallery. Toggle on. Toggle off. 

    Apple is digitally redirecting eyes to make it look like the person is looking at 'you' and not into the camera which is invisible to you. It's manipulating the image and it's fine because the reasons behind it are warranted. At least to most people. 
  • Reply 14 of 20
    I took an image of a 🍐& 🍉 and got these ߍamp;amp; ߌᠡ.

  • Reply 15 of 20
    JP234 said:
    I took an image of a 🍐& 🍉 and got these ߍamp;amp; ߌᠡ.

    Who ASCII'ed you anyway?
    Renderman! 4 Pixar!
  • Reply 16 of 20
    MacPro Posts: 19,718 member
    I agree 100% with the article and said the same thing in the debate about this in the earlier blog. It all boils down to simply telling the public what is really happening and not fibbing.

    I'll add, I love the new AI tools available these days, especially in Photoshop and its plug-ins, but I know how and when to use them under my control with images from my pro full-frame cameras.  When I use my iPhone Pro Max, I love that I don't have to think about any of that for my vacation and family photos; it just works.
  • Reply 17 of 20
    I use iPhone for all my newspaper photos. If Mapple opted to take the path of overly jiggering photos, I would be forced to procure and tote a fancy, separate camera. 
  • Reply 18 of 20
    danox Posts: 2,799 member
    What you see is what you get when you take a picture. However, downloading a picture of the Moon, Yellowstone, Yosemite's Half Dome, or Mount Hood should not happen when you are taking a picture (snapshot). You should get the best, highest-resolution raw picture from the camera and nothing else. If you want to manipulate the picture later, that's a different story and should be up to the individual user at a later time.
  • Reply 19 of 20
    avon b7 Posts: 7,621 member
    danox said:
    What you see is what you get when you take a picture. However, downloading a picture of the Moon, Yellowstone, Yosemite's Half Dome, or Mount Hood should not happen when you are taking a picture (snapshot). You should get the best, highest-resolution raw picture from the camera and nothing else. If you want to manipulate the picture later, that's a different story and should be up to the individual user at a later time.
    Yes and no. 

    Point your phone at a beautiful moon scene. Take the photo. 

    Are you getting what you saw? 

    Very probably no. Not even close. 

    The most important thing for point and shoot, and especially in the case of the moon, is actually getting what you saw.

    Any 'purist' doubts are resolved by simply toggling the feature off. 

    Surely it's a win-win? 