Adobe demonstrates future Photoshop tool that uses machine learning to select image subjects
Adobe has teased one of the AI-based features coming to Photoshop in the future, with the 'Select Subject' function simplifying the selection of an object, person, or animal in a scene, using machine learning to determine what is likely to be the main focus of an image.
The video, published on Monday to the official Adobe Photoshop channel, demonstrates Select Subject as a quick and simple way for users to make an initial selection of an element within an image. Compared in the video to existing methods including the quick select tool's edge detection feature, selecting the negative space with the magic wand, or the pen and magnetic lasso tools, Select Subject is shown to highlight the subject in a single click.
The Select Subject tool can also handle multiple subjects in a single scene, such as a group of people on a beach. It is available within the Select and Mask workspace as well, so users can refine the selected area's edges, improving the selection for more complicated elements like fur on an animal.
For complex scenes that include complicated detail around a subject or an object-filled backdrop, Adobe suggests this isn't an issue, as the tool uses machine learning to work out what visible elements belong to the subject or the background.
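Adobe has not disclosed how Sensei performs this separation, but conceptually a subject selection of this kind reduces to turning a model's per-pixel subject-probability map into a binary mask, then cleaning up the result. The following is only an illustrative sketch in Python under that assumption: the hand-made `prob` array stands in for what a trained segmentation network would output, and a crude despeckling pass stands in for the kind of refinement the Select and Mask workspace offers.

```python
# Illustrative sketch only -- not Adobe's actual method. The `prob` map
# below is a hypothetical stand-in for a segmentation model's output.

def threshold_mask(prob, cutoff=0.5):
    """Turn a per-pixel subject-probability map into a binary mask."""
    return [[1 if p >= cutoff else 0 for p in row] for row in prob]

def despeckle(mask):
    """Crude refinement: drop any selected pixel that has no selected
    8-neighbour, removing isolated false positives in the background."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:
                continue
            neighbours = sum(
                mask[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if (ny, nx) != (y, x)
            )
            out[y][x] = 1 if neighbours > 0 else 0
    return out

if __name__ == "__main__":
    # 5x5 map: a 2x2 subject bottom-right, one noisy pixel top-left.
    prob = [
        [0.9, 0.1, 0.1, 0.1, 0.1],  # (0,0) is a false positive
        [0.1, 0.1, 0.1, 0.1, 0.1],
        [0.1, 0.1, 0.1, 0.1, 0.1],
        [0.1, 0.1, 0.1, 0.8, 0.9],
        [0.1, 0.1, 0.1, 0.9, 0.9],
    ]
    for row in despeckle(threshold_mask(prob)):
        print("".join(str(v) for v in row))
    # prints: 00000 / 00000 / 00000 / 00011 / 00011
```

A real implementation would of course use a learned model and far more sophisticated boundary refinement; the point is only that "select the subject" decomposes into per-pixel classification followed by mask cleanup.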
Adobe advises that Select Subject should be used as a starting point for selection-related image editing tasks. While the video does show some areas the tool does not fully detect, such as the clothing of one dog walker, it largely appears to correctly select the intended elements, leaving relatively few areas in need of further work.
The Select Subject tool is powered by Adobe Sensei, the firm's AI and machine learning technology used to automate tasks across its Creative Cloud platform. Introduced in November 2016, Sensei has been used to help users find appropriate stock images, and powers both the Match Font tool and Photoshop's Face-Aware Liquify operations.
It is unclear when exactly Select Subject will be available to Creative Cloud users, but the video voiceover by Photoshop Product Manager Meredith Payne Stotzner advises it will be implemented in an "upcoming release of Photoshop CC."
Adobe has put considerable effort into its AI-based services, and has previously hinted at a future where more such tasks could be handled entirely by machine learning systems. A concept video released in January depicts a tablet-based app that performs basic edits to an image, including cropping and sharing the result to Facebook, in response only to "natural language" verbal commands.
Comments
Such a stupid millennial term
Or, you know, the model’s left arm in that photo.
There's zero artificial intelligence in existence. All that exists are clever algorithms and slightly human-oriented interfaces. Calling all this crap AI or "machine learning" is utter bollocks. It's absolutely a misuse of the terms.
However, it has nothing to do with millennials. Please stop hating on a whole generation of people.
The terms can imply human qualities like curiosity and spontaneity, which machines don't have, and in that sense they are misleading, but I think they are close to what they describe. They could use other terms like "adaptive semantic mapping", but it's just a label to give people an understanding of what's going on, and from a marketing point of view it fits quite well to convey the idea that the computer can identify objects the way that people can.
The functionality is great and something that has been sorely needed in media. Cropping and masking with even advanced tools is tedious because the computer has had no understanding of the objects in the image; all the pixels were the same. Being able to identify contiguous objects will be a huge benefit for editing and metadata. It will get to a point soon, if not already, where someone will be able to talk to the computer and say things like 'remove my ex from this photo' and it will just do it, because it will know who their ex is from facial analysis. To a typical user, that will appear to be intelligent or cognisant of the user's intentions in a way that computers haven't been before.
Hey, but it correctly identified the pee-stain under the model!
I wonder what this future means for my expensive Nikon glass. You think they’ll figure out a way to capture depth through existing lenses?