Adobe concept demonstrates editing images on iPad using just your voice

Creative software maker Adobe hopes that photographs may one day be edited by voice commands alone, and has released a proof-of-concept video showing a Siri-like digital assistant on an iPad being used to make changes to an image.

The video starts with the user viewing an image on an iPad and tapping a microphone icon to summon a digital assistant. Using voice alone, the user crops the image to a square, flips it and then reverses the flip, confirms the changes to be made, and posts the result to Facebook.

The majority of the verbal commands use natural language, such as "I would like to reframe this picture" and "make it square" for cropping. After each command, the assistant speaks, confirming each action it performs and explaining how it interpreted the vocal instruction.

Notes for the video advise that the Adobe Research team's system is able to accept and interpret the verbal commands "either locally through on-device computing or through a cloud-based Natural Language understanding service."
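The core of such a system is mapping spoken phrases to concrete edit operations. As a rough illustration only, the matching step could be sketched as a toy keyword-based intent parser; all names and patterns below are hypothetical assumptions for the example, not Adobe's actual API or vocabulary.

```python
import re

# Illustrative command vocabulary: regex pattern -> (action, parameters).
# A real natural-language-understanding service would be far more flexible.
INTENTS = [
    (r"\bsquare\b", ("crop", {"aspect": "1:1"})),
    (r"\bflip\b", ("flip", {"axis": "horizontal"})),
    (r"\bundo\b", ("undo", {})),
    (r"\b(post|share)\b.*\bfacebook\b", ("share", {"service": "facebook"})),
]

def interpret(command: str):
    """Return the first (action, params) intent whose pattern matches."""
    text = command.lower()
    for pattern, intent in INTENTS:
        if re.search(pattern, text):
            return intent
    return ("unknown", {})

print(interpret("make it square"))        # ('crop', {'aspect': '1:1'})
print(interpret("post it to Facebook"))   # ('share', {'service': 'facebook'})
```

In practice the matching would run either on-device or in a cloud NLU service, as the video notes, with the resulting intent dispatched to the image-editing engine.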

Despite the demonstration in the proof of concept video, Adobe admitted that it will be a long time before users can take advantage of such an assistant.

"This is our first step towards a robust multimodal voice-based interface which allows our creative customers to search and edit images in an easy and engaging way using Adobe mobile applications," the company said.

Adobe has been working on introducing machine learning and AI to its Adobe Cloud Platform for some time, and announced in November that it was introducing a framework to automate tasks and to offer extra assistance. Named Adobe Sensei, the service would assist in Stock Visual Search and Match Font, for example, as well as making the Liquify tool in Photoshop "face-aware."

Apple, meanwhile, opened up its voice-driven personal assistant Siri to third-party developers last year with the launch of iOS 10, potentially paving the way for Adobe's concept on iPhones and iPads.

Comments

  • Reply 1 of 5
    MacPro Posts: 19,821 member
    This would be useful as a time saver on the Mac too; the potential for Siri in macOS hasn't even begun to be explored. As an aside, I have to say the face-aware feature in PS 2017 works very well. I own Portrait Professional too, which does far more, but for a quick minor tweak the new filter is all that's needed and obviously far faster.
    edited January 2017
  • Reply 2 of 5
    Because voice commands are quicker than tactile keyboard shortcuts and the fine motor skills of using a mouse. 

    Proof of concept is one thing but real world utility and application is another. 
  • Reply 3 of 5
    73dray said:
    Because voice commands are quicker than tactile keyboard shortcuts and the fine motor skills of using a mouse. 
    Depends on the complexity of the action, now doesn't it? I can tell Siri, "Schedule an appointment for me tomorrow with the Dentist at 3:00" and she will do it much, much faster than I can manually by opening the calendar, creating a new event, setting the time, entering the description, and saving it.

    So, if the image editing app is "smart" enough, I could say, "Make the sky about 5% darker." And the AI could analyze the scene, find and mask the sky, and darken it, all faster than I could do the same. Or even a simple, "remove the red eye and then post it to Facebook" could still be faster than me locating the tool, fixing the eyes, and then posting it myself.
  • Reply 4 of 5
    I can see something similar being useful in the Adobe suite of applications. Since their menus are now absurdly overpopulated, it would be nice if menu items and panels could appear with just a voice instruction; it would save regularly hunting for items that have no keyboard shortcut.
  • Reply 5 of 5
    dysamoria Posts: 3,430 member
    "Uncrop"