Apple's new Photos app will utilize generative AI for image editing

Posted in iPadOS, edited May 14

A new teaser on Apple's website may hint at some of the company's upcoming software plans: a new version of its ubiquitous Photos app that will tap generative AI to deliver Photoshop-grade editing capabilities to the average consumer, AppleInsider has learned.

The new Clean Up feature will make removing objects significantly easier

The logo promoting Tuesday's event on Apple's website suddenly turned interactive earlier on Monday, allowing users to erase some or all of the logo with their mouse. While this was initially believed to be a nod towards an improved Apple Pencil, it could also be in reference to an improved editing feature Apple plans to unleash later this year.

People familiar with Apple's next-gen operating systems have told AppleInsider that the iPad maker is internally testing an enhanced feature for its built-in Photos application that would make use of generative AI for photo editing. The feature is dubbed "Clean Up" in pre-release versions of Apple's macOS 15, and is located inside the edit menu of a new version of the Photos application alongside existing options for adjustments, filters, and cropping.

The feature appears to replace Apple's Retouch tool available on macOS versions of the Photos app. Unlike the Retouch tool, however, the Clean Up feature is expected to offer improved editing capabilities and the option to remove larger objects within a photo.

With Clean Up, users will be able to select an area of a photo via a brush tool and remove specific objects from an image. In internal versions of the app, testers can also adjust the brush size to allow for easier removal of smaller or larger objects.
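Apple has not published any implementation details, but the brush-plus-mask workflow described above maps onto a familiar inpainting pattern: the brush paints a binary mask over the object, and the editor fills the masked pixels from their surroundings. The following is a minimal, purely illustrative sketch of that workflow using simple neighbor averaging — not Apple's actual algorithm, and far simpler than a generative model, but it shows how a brushed mask drives the fill:

```python
import numpy as np

def clean_up(image: np.ndarray, mask: np.ndarray, iters: int = 50) -> np.ndarray:
    """Fill masked pixels by repeatedly averaging their four neighbors.

    image: 2-D grayscale array. mask: boolean array, True where the user
    brushed over an object. A real editor would use a far more capable
    model; this only illustrates the mask-driven workflow.
    """
    out = image.astype(float).copy()
    known = ~mask  # pixels the brush did not touch stay fixed
    for _ in range(iters):
        # Shift the image in each direction (np.roll wraps at edges,
        # which is acceptable for a toy sketch) and average.
        up = np.roll(out, -1, axis=0)
        down = np.roll(out, 1, axis=0)
        left = np.roll(out, -1, axis=1)
        right = np.roll(out, 1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[mask] = avg[mask]       # update only the brushed region
        out[known] = image[known]   # re-pin the original pixels
    return out
```

A generative approach would instead synthesize plausible new content for the masked region from a trained model, rather than diffusing in surrounding pixel values, which is what lets it remove large objects convincingly.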

While the feature itself is being tested on Apple's next-generation operating systems, the company could also decide to preview or announce it early, as a way of marketing its new iPad models.

The Clean Up feature will replace Apple's current Retouch tool

During the "Let Loose" iPad-centric event, Apple is expected to unveil two new models of the iPad Air and iPad Pro, the latter of which is rumored to feature the company's next-generation M4 chip. The M4 could introduce greater AI capabilities via an enhanced Neural Engine with, at minimum, an increased core count.

An earlier rumor claimed that there was a strong possibility of Apple's new iPad Pro receiving the M4 system-on-chip. Apple is also expected to market the new tablet as an AI-enhanced device, after branding the M3 MacBook Air as the best portable for AI.

While Apple has been working on its in-house large language model (LLM) for quite some time, it is unlikely that we will see any text-related AI features make their debut during the "Let Loose" event. The Clean Up feature, however, would provide a way of showcasing new iPad-related AI capabilities.

Should it choose to leverage the new Clean Up feature ahead of its annual developers conference in June, Apple would have the opportunity to promote its new iPads as AI-equipped devices. Giving users the option to remove an object from a photo with their Apple Pencil would be a good way of showcasing the practical benefits of artificial intelligence.

By demonstrating the real-world use cases for AI, the company likely aims to gain a leg up on existing third-party AI solutions, many of which only utilize artificial intelligence to offer short-term entertainment value in the form of chatbots.

Apple teases its event with a new Apple Pencil erase feature

Although the feature provides some insight into what an AI-powered iPad might look like, it remains to be seen exactly when Apple will announce the Clean Up feature. Apple could instead opt to preview the feature at its Worldwide Developers Conference (WWDC) in June.

Users of Adobe's Photoshop for iPad have had access to a similar feature, called "Content-Aware Fill," since 2022. It lets users remove objects from an image by leveraging generative AI, making it appear as though the objects were never there.

The "Content-Aware Fill" feature gradually evolved into "Generative Fill," which offers additional functionality and is available across various Adobe products. In addition to Photoshop, the feature can be found in Adobe Express and Adobe Firefly.

With Generative Fill, users of Adobe's applications simply brush an area of a photo to remove objects of their choosing. Adobe's apps even offer the option to adjust brush size. Apple's new Clean Up feature bears some resemblance to Adobe's.

Clean Up is expected to make its debut alongside Apple's new operating systems in June, though there is always the possibility a mention could slip into Tuesday's iPad media event. Apple also has plans to upgrade Notes, Calculator, Calendar, and Spotlight with iOS 18.



Read on AppleInsider


Comments

  • Reply 1 of 27
    Android has had this feature already. 
    Interesting to see if this Apple one will be superior.
  • Reply 2 of 27
    Access the website on the iPad, and use a finger to erase the logo.
  • Reply 3 of 27
    Wesley HilliardWesley Hilliard Posts: 205member, administrator, moderator, editor
    Android has had this feature already. 
    Interesting to see if this Apple one will be superior.
    To be clear, iPhone has this ability too. It's not an operating system level thing. Apps can provide the feature and even Apple's Photos app for Mac has a basic repair tool already.

    The Clean Up feature will be a more advanced model based on generative image processing -- which I believe is only available via apps like Adobe's and is not a part of Android's photo editing tools.
  • Reply 4 of 27
    gatorguygatorguy Posts: 24,278member
    Android has had this feature already. 
    Interesting to see if this Apple one will be superior.
    To be clear, iPhone has this ability too. It's not an operating system level thing. Apps can provide the feature and even Apple's Photos app for Mac has a basic repair tool already.

    The Clean Up feature will be a more advanced model based on generative image processing -- which I believe is only available via apps like Adobe's and is not a part of Android's photo editing tools.
    The Google Pixel 8 and Pixel 8 Pro smartphones use a generative AI feature called Magic Editor in the Google Photos app.  It uses generative AI to make edits like repositioning and/or resizing a subject, erasing elements, changing lighting and backgrounds, repair/replace gaps that occur during editing, and changing sky tones and color. 
  • Reply 5 of 27
     Seems like Apple is finally getting off its keister and adding basic functions that should have been there for years.  At least all this AI activity in tech  is making Apple responsive again. 
  • Reply 6 of 27
    Wesley HilliardWesley Hilliard Posts: 205member, administrator, moderator, editor
    gatorguy said:
    Android has had this feature already. 
    Interesting to see if this Apple one will be superior.
    To be clear, iPhone has this ability too. It's not an operating system level thing. Apps can provide the feature and even Apple's Photos app for Mac has a basic repair tool already.

    The Clean Up feature will be a more advanced model based on generative image processing -- which I believe is only available via apps like Adobe's and is not a part of Android's photo editing tools.
    The Google Pixel 8 and Pixel 8 Pro smartphones use a generative AI feature called Magic Editor in the Google Photos app.  It uses generative AI to make edits like repositioning and/or resizing a subject, erasing elements, changing lighting and backgrounds, repair/replace gaps that occur during editing, and changing sky tones and color. 
    This is why words are important. Google might call those features AI but their base is ML, not generative image models. Google's ML for image processing may be superior to Apple's and that's a fair argument, but let's not confuse technologies.

    Google "AI" features are brand names applied to ML that has existed for over a decade. That ML has just gotten better. When I say AI, I don't mean the blanket marketing term, I mean local or large language models with generative capabilities, which were pioneered by OpenAI and used for LLMs like Bard.

    These are very separate, but Google has done consumers a terrible service by calling everything AI. It makes talking about this stuff very confusing because it makes differentiating between technologies more difficult and it serves to make Google look ahead at something and Apple behind -- which is the entire point of the misnomer.
  • Reply 7 of 27
    ralphieralphie Posts: 107member
    And still can’t get Touch ID back.
  • Reply 8 of 27
    michelb76michelb76 Posts: 636member
     Seems like Apple is finally getting off its keister and adding basic functions that should have been there for years.  At least all this AI activity in tech  is making Apple responsive again. 
    Pretty sure if these were "major" features we would have seen more platform switching.
  • Reply 9 of 27
    40domi40domi Posts: 99member
    gatorguy said:
    Android has had this feature already. 
    Interesting to see if this Apple one will be superior.
    To be clear, iPhone has this ability too. It's not an operating system level thing. Apps can provide the feature and even Apple's Photos app for Mac has a basic repair tool already.

    The Clean Up feature will be a more advanced model based on generative image processing -- which I believe is only available via apps like Adobe's and is not a part of Android's photo editing tools.
    The Google Pixel 8 and Pixel 8 Pro smartphones use a generative AI feature called Magic Editor in the Google Photos app.  It uses generative AI to make edits like repositioning and/or resizing a subject, erasing elements, changing lighting and backgrounds, repair/replace gaps that occur during editing, and changing sky tones and color. 
    This is why words are important. Google might call those features AI but their base is ML, not generative image models. Google's ML for image processing may be superior to Apple's and that's a fair argument, but let's not confuse technologies.

    Google "AI" features are brand names applied to ML that has existed for over a decade. That ML has just gotten better. When I say AI, I don't mean the blanket marketing term, I mean local or large language models with generative capabilities, which were pioneered by OpenAI and used for LLMs like Bard.

    These are very separate, but Google has done consumers a terrible service by calling everything AI. It makes talking about this stuff very confusing because it makes differentiating between technologies more difficult and it serves to make Google look ahead at something and Apple behind -- which is the entire point of the misnomer.
    Regardless, Pixel photo editing is brilliant, particularly the magic eraser & best take, well over due on Apple products!
  • Reply 10 of 27
    40domi40domi Posts: 99member
    Well overdue by Apple, however just removing objects is not enough!
    Pixel's best take, magic eraser & repositioning is the very minimal Apple needs to give us!
  • Reply 11 of 27
    gatorguygatorguy Posts: 24,278member
    gatorguy said:
    Android has had this feature already. 
    Interesting to see if this Apple one will be superior.
    To be clear, iPhone has this ability too. It's not an operating system level thing. Apps can provide the feature and even Apple's Photos app for Mac has a basic repair tool already.

    The Clean Up feature will be a more advanced model based on generative image processing -- which I believe is only available via apps like Adobe's and is not a part of Android's photo editing tools.
    The Google Pixel 8 and Pixel 8 Pro smartphones use a generative AI feature called Magic Editor in the Google Photos app.  It uses generative AI to make edits like repositioning and/or resizing a subject, erasing elements, changing lighting and backgrounds, repair/replace gaps that occur during editing, and changing sky tones and color. 
    This is why words are important. Google might call those features AI but their base is ML, not generative image models. Google's ML for image processing may be superior to Apple's and that's a fair argument, but let's not confuse technologies.

    Google "AI" features are brand names applied to ML that has existed for over a decade. That ML has just gotten better. When I say AI, I don't mean the blanket marketing term, I mean local or large language models with generative capabilities, which were pioneered by OpenAI and used for LLMs like Bard.

    These are very separate, but Google has done consumers a terrible service by calling everything AI. It makes talking about this stuff very confusing because it makes differentiating between technologies more difficult and it serves to make Google look ahead at something and Apple behind -- which is the entire point of the misnomer.
    I disagree.

    Magic Editor seems to be using what you describe as Generative AI in Apple's Mac Photo Clean-up feature.
https://pixel.withgoogle.com/Pixel_8_Pro/use-magic-editor?hl=en&country=US

    What we agree on is that words matter. Generative AI works by utilizing an ML model to learn the patterns and relationships in a dataset of human-created content. GenAI then uses those patterns learned to generate new content. So are Clean-up on a Mac and Magic Editor on an Android smartphone both using simple machine learning or Generative AI?  


    edited May 7
  • Reply 12 of 27
    Wesley HilliardWesley Hilliard Posts: 205member, administrator, moderator, editor
    gatorguy said:
    I disagree.

    Magic Editor seems to be using what you describe as Generative AI in Apple's Mac Photo Clean-up feature.
https://pixel.withgoogle.com/Pixel_8_Pro/use-magic-editor?hl=en&country=US

    What we agree on is that words matter. Generative AI works by utilizing an ML model to learn the patterns and relationships in a dataset of human-created content. GenAI then uses those patterns learned to generate new content. So are Clean-up on a Mac and Magic Editor on an Android smartphone both using simple machine learning or Generative AI?  
    I'm not sure what linking to Google's Pixel page does. Nothing in here explains the tech. It's a tutorial website.

    You're just mixing terms together. They have distinct meanings. Generative AI doesn't work by using an ML model. That's a contradictory statement.

    There's a big difference between ML and generative AI. ML has a pre-determined set of outcomes based on its decision tree. Generative AI doesn't have a set of predetermined outcomes, it can generate as many different endpoints as it wants.

    Magic Editor on Android functions similar to existing repair tools in many photo editing apps, even ones on iPhone. There's nothing special about it other than it being Google software with a big data set available for the ML decision tree.

    Generative AI like what will be used in Clean Up won't just shift pixels around based on available nearest-pixel data, which is what an ML model does. Instead, it'll determine the most logical thing that should exist behind an object using its vast understanding of images it has trained on. It could even know that the user is at a specific location and determine what should be behind the user because it knows what it should look like based on other photos in its database. ML isn't doing that with repair tools like Magic Editor.

    To put it even more simply: existing editing tools like Magic Editor rely on an algorithm that runs within an application to make decisions based on a preexisting number of factors. A model based on generative AI will be able to reference everything it has trained on to make a decision, not just the photo you're editing.

These terms are important and have meaning on their own, despite how much Google tries to unify them all under the AI umbrella to disguise what is what. It provides Google a competitive advantage because people who don't know better will think "Oh, Google does that already and Apple is behind," hence this conversation. Google's marketing worked.
  • Reply 13 of 27
    gatorguygatorguy Posts: 24,278member
    gatorguy said:
    I disagree.

    Magic Editor seems to be using what you describe as Generative AI in Apple's Mac Photo Clean-up feature.
https://pixel.withgoogle.com/Pixel_8_Pro/use-magic-editor?hl=en&country=US

    What we agree on is that words matter. Generative AI works by utilizing an ML model to learn the patterns and relationships in a dataset of human-created content. GenAI then uses those patterns learned to generate new content. So are Clean-up on a Mac and Magic Editor on an Android smartphone both using simple machine learning or Generative AI?  
    I'm not sure what linking to Google's Pixel page does. Nothing in here explains the tech. It's a tutorial website.

    You're just mixing terms together. They have distinct meanings. Generative AI doesn't work by using an ML model. That's a contradictory statement.
    It most surely does. It learns from it and then builds on it.
https://news.mit.edu/2023/explained-generative-ai-1109

Magic Editor uses Generative AI just as surely or not as Clean Up does based on what you wrote:
    Quote:" With Clean Up, users will be able to select an area of a photo via a brush tool and remove specific objects from an image. In internal versions of the app, testers can also adjust the brush size to allow for easier removal of smaller or larger objects." And that differs from Magic Editor in what way?   
    edited May 7
  • Reply 14 of 27
    Frankly, disappointed. Is that all?

    Still no improvement in photos management, metadata management, multi-photo-library management, external storage management, photo backup management, video and live photo management?

On iOS, you still can’t create smart albums, though you can use smart albums created on macOS, really?
  • Reply 15 of 27
    Wesley HilliardWesley Hilliard Posts: 205member, administrator, moderator, editor
    gatorguy said:
    It most surely does. It learns from it and then builds on it.
    https://news.mit.edu/2023/explained-generative-ai-1109

Magic Editor uses Generative AI just as surely or not as Clean Up does based on what you wrote:
    Quote:" With Clean Up, users will be able to select an area of a photo via a brush tool and remove specific objects from an image. In internal versions of the app, testers can also adjust the brush size to allow for easier removal of smaller or larger objects." And that differs from Magic Editor in what way?   
    I didn't write this one. But you have a fundamental misunderstanding of what's happening here. These are all hammers, they're going to hit a nail and push it in. But how it hits the nail is different based on the underlying implementation.

    Any photo repair tool is going to function by selecting an object and having it remove the object with a brush size selector. The question is, how will it accomplish that? Will it determine how to remove the object with ML or generative AI? ML is going to do a worse job because the algorithm will only have a limited understanding of the photo. Generative AI will do an uncanny job because it "understands" on some level what is in the photo and what you're removing so it can be very precise without leaving artifacts behind.

    Even Google says it's an ML tool on its original press release. They just changed it to AI last year to fit in with the times. How that tool works didn't change. On Google's page describing the feature it says that not all objects can be selected. That wouldn't be the case if it were generative AI.

    "It learns from it and then builds on it" Clarify that statement because that isn't how Magic Editor works.

    You can believe what you like. I'm just telling you there is a distinction here. Google has you all messed up. 
  • Reply 16 of 27
    Wesley HilliardWesley Hilliard Posts: 205member, administrator, moderator, editor
    Frankly, disappointed. Is that all?

    Still no improvement in photos management, metadata management, multi-photo-library management, external storage management, photo backup management, video and live photo management?

On iOS, you still can’t create smart albums, though you can use smart albums created on macOS, really?
    This is a post about a feature, not a summary of all the features coming in iOS 18. We'll have to wait and see what Apple does here.
  • Reply 17 of 27
    This is a post about a feature, not a summary of all the features coming in iOS 18. We'll have to wait and see what Apple does here.

    Looking forward. Exif management at Album and library levels, please, Apple.

    btw, Generative AI for Photos, please, not just algorithms (rebranded as AI). 
    edited May 7
  • Reply 18 of 27
    blastdoorblastdoor Posts: 3,338member
    many of which only utilize artificial intelligence to offer short-term entertainment value in the form of chatbots.
    That’s not really an accurate characterization of the usefulness of those tools. I use Microsoft copilot almost every day for real work purposes. 
  • Reply 19 of 27
    gatorguygatorguy Posts: 24,278member
    gatorguy said:
    It most surely does. It learns from it and then builds on it.
    https://news.mit.edu/2023/explained-generative-ai-1109

Magic Editor uses Generative AI just as surely or not as Clean Up does based on what you wrote:
    Quote:" With Clean Up, users will be able to select an area of a photo via a brush tool and remove specific objects from an image. In internal versions of the app, testers can also adjust the brush size to allow for easier removal of smaller or larger objects." And that differs from Magic Editor in what way?   
    I didn't write this one. But you have a fundamental misunderstanding of what's happening here...
    Even Google says it's an ML tool on its original press release. They just changed it to AI last year to fit in with the times. How that tool works didn't change. On Google's page describing the feature it says that not all objects can be selected. That wouldn't be the case if it were generative AI.
    Google said on the Day-one announcement of it that Magic Editor uses Generative AI.
    https://blog.google/products/photos/google-photos-magic-editor-pixel-io-2023/
    I think you've been confusing things with the old Magic Eraser from the Pixel 6 which wasn't using Google's GenAI.

    As for why certain things can't be selected for editing, it's not because it's not using GenAI:
    Google Photos' Magic Editor may not be able to select all objects because it isn't allowed to edit certain types of photos or elements in them, such as ID cards, receipts, and other documents that violate Google's GenAI terms. Magic Editor also can't edit faces, parts of people, or large selections. When a user tries to edit one of these items, an error message will appear.

    We can wait to see what Apple builds into this year's iPhone for on-device photo editing. You say it will be GenAI-driven, and thus it will do more than Magic Editor can. We can come back in a few months and revisit it. I'll let it rest until Apple actually announces something.  
    edited May 7
  • Reply 20 of 27
    Wesley HilliardWesley Hilliard Posts: 205member, administrator, moderator, editor
    gatorguy said:
    gatorguy said:
    It most surely does. It learns from it and then builds on it.
    https://news.mit.edu/2023/explained-generative-ai-1109

Magic Editor uses Generative AI just as surely or not as Clean Up does based on what you wrote:
    Quote:" With Clean Up, users will be able to select an area of a photo via a brush tool and remove specific objects from an image. In internal versions of the app, testers can also adjust the brush size to allow for easier removal of smaller or larger objects." And that differs from Magic Editor in what way?   
    I didn't write this one. But you have a fundamental misunderstanding of what's happening here...
    Even Google says it's an ML tool on its original press release. They just changed it to AI last year to fit in with the times. How that tool works didn't change. On Google's page describing the feature it says that not all objects can be selected. That wouldn't be the case if it were generative AI.
    Google said on the Day-one announcement of it that Magic Editor uses Generative AI.
    https://blog.google/products/photos/google-photos-magic-editor-pixel-io-2023/

    As for why certain things can't be selected, it's not because it's not using GenAI:
Google Photos' Magic Editor may not be able to select all objects because it won't be allowed to edit certain types of photos or elements in them, such as ID cards, receipts, and other documents that violate Google's GenAI terms. Magic Editor also can't edit faces, parts of people, or large selections. When a user tries to edit one of these items, an error message will appear.

    We can wait to see what Apple builds into this year's iPhone for on-device photo editing. You say it will be GenAI-driven, and thus it will do more than Magic Editor can. We can come back in a few months and revisit it. I'll let it rest until Apple actually announces something.  
    The thing you linked is from Google announcing Magic Eraser again in 2023. It originally launched in 2021, where they called it ML before AI was the cool term.

    https://blog.google/products/photos/magic-eraser/

    It's not generative AI. I don't know why you're so caught up on this but you do you. Google's marketing convinced you really well I guess.

I'm interested in seeing what it does too. But I don't really care who does what better. The point of this conversation is that words have meaning and Google calls things AI even when it's ML. It's so bad Apple had to start doing it in its press so it didn't appear behind to idiots who didn't know better.