Google Lens visual search lands on Google Photos for all Android devices, update for iOS app coming soon
Google has started rolling out its visual search tool Google Lens to more mobile devices, expanding beyond its previous exclusivity to the company's Pixel smartphones: the feature is becoming available to Google Photos users on Android first, ahead of its arrival on the iOS version of the image management app.

Announced at Mobile World Congress, Google Lens is arriving in an update to the Google Photos app for Android, rolling out in batches. Just as on the Google Pixel, Lens will also be accessible through the Google Assistant, though not all Android devices will be able to use the feature that way.
During Mobile World Congress, the search giant confirmed Lens will also be added to Google Photos for iOS, but did not specify when to expect its arrival beyond saying in a Twitter post that it is "coming soon."
Google Lens is a visual search feature that follows on from the company's earlier effort, Google Goggles, using image recognition to provide more information about items in a photograph. If it spots a landmark, such as a building or a store, it will offer further details of the kind usually found on a card in regular Google searches, including opening hours and a brief description of the structure.
The tool can also perform text detection on an image, with the extracted text usable for other purposes, such as copying the visible words into a document in another app. Google Lens can even create a contact from a photograph of a business card, automatically filling in the details displayed on the card.
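Google has not published how Lens implements this, but the same card-to-contact idea can be approximated with public Android APIs. The Kotlin sketch below is a rough, hypothetical illustration built on Google's ML Kit Text Recognition library and the stock contacts insert intent; the helper name and the line-parsing heuristics are invented for this example and are not taken from Lens itself.

```kotlin
// Requires the ML Kit dependency: com.google.mlkit:text-recognition
import android.content.Context
import android.content.Intent
import android.graphics.Bitmap
import android.provider.ContactsContract
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Hypothetical helper: OCR a photo of a business card and pre-fill a new
// contact with whatever looks like a phone number. This is not how Lens
// works internally, just the same idea expressed with public APIs.
fun createContactFromCard(context: Context, cardPhoto: Bitmap) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromBitmap(cardPhoto, /* rotationDegrees = */ 0)

    recognizer.process(image)
        .addOnSuccessListener { result ->
            val lines = result.text.lines().map { it.trim() }.filter { it.isNotEmpty() }
            // Crude assumptions for illustration: the first line is the name,
            // and the first line with seven or more digits is the phone number.
            val name = lines.firstOrNull() ?: return@addOnSuccessListener
            val phone = lines.firstOrNull { it.count(Char::isDigit) >= 7 }

            // Hand the parsed fields to the system contacts editor.
            // (Call with an Activity context so startActivity is allowed.)
            val intent = Intent(ContactsContract.Intents.Insert.ACTION).apply {
                type = ContactsContract.RawContacts.CONTENT_TYPE
                putExtra(ContactsContract.Intents.Insert.NAME, name)
                phone?.let { putExtra(ContactsContract.Intents.Insert.PHONE, it) }
            }
            context.startActivity(intent)
        }
        .addOnFailureListener { e ->
            // OCR can fail on blurry or low-contrast shots.
            e.printStackTrace()
        }
}
```

In practice, Lens's parsing of names, numbers, and addresses is far more sophisticated than the digit-counting heuristic used here; the sketch only shows how the two building blocks, text recognition and the contacts insert intent, fit together.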
While Apple has yet to offer image-based searches through Siri in this way, the iPhone maker has included some similar elements in its own Photos app. Both the macOS and iOS versions of Photos feature object recognition, allowing searches for locations, identified people, and even objects the user has never tagged.

Comments
"It’s a jack-of-all-trades camera app capable of everything from scanning a router code to auto-log you in, to identifying landmarks in your vacation photos, to connecting with Google Assistant in real time to pull up information on a business in front of you.
And just last week Google’s VP of VR and AR Amit Singh showed how Lens is incorporating their ARCore technology to unlock even more capabilities, such as add real-size furniture to your living room to see how they’d fit, or see how a car would look with a different coat of paint."
Will it be tagged using Google's AI? If so, it is not yours any longer.
Google are not in this out of the goodness of their heart...
Cynical? You bet.
Most of the rest that were identified were wrong, and many could not be tagged at all. I decided not to bother after that.
FWIW I had always done pretty much the same as you: import into LR, tag 'em, do some basic adjustments or cropping, and then identify the ones I want to further edit in Photoshop or On1. The tagging and organization for several thousand images is a pain, as you know.
Originally I only used Google Photos as a storage bucket for my cell-phone pics. Sometime last year I began using Google Photos as a backup for every image I added to LR or initially processed as RAW in DxO, while continuing the traditional LR cataloging. In the months since, I've realized the auto-tagging and search that Google provides is actually saving me a lot of headaches, time, and mistagging, thanks to its much more forgiving search functions. As a result, my quick go-to when looking for a particular shot has become Google rather than LR. "Woman with yellow dress", "Brown dog", or "Red flowers in Clearwater" all work quite well, and much faster than trying to remember a specific LR tag.
I currently have some 3.4TB of images going back to 2001.
If you have an Amazon Prime membership, it includes Prime Photos, which does preserve your images as you upload them: the image file you upload is what you get back if you download it later. But Amazon's service is slow as molasses compared to Google's. Ironically, given Google's business strategy of sucking up every bit of data it can, Amazon does a slightly better job of making use of the metadata already in your photos when you upload them. Google seems to ignore most of that info and instead relies on its image recognition technology. (Neither does a stellar job of reading all the metadata and keywords I've added before uploading my photos and using them for searching or creating smart albums.)
Since I already had an Amazon Prime membership, I upload my original, master copies of images to Amazon after the metadata has been added; I know I can get that exact image file back from Amazon should I need to. Then, after any edits (still using Aperture), they are uploaded to Google as optimized fall-back copies (as opposed to backup copies) and for sharing across my iDevices. Between these two independent online services, local Time Machine, and remote (i.e., at the office) clones of my drive, it would take a spectacular series of failures to lose my pictures.
If I didn't already have Prime Photos for free with my Amazon Prime subscription, I probably would have just paid for the extra storage with Google to keep original quality. But I do like that I have both my originals (usually RAW) and edited copies (JPG) stored on two separate clouds. And it's not costing me any extra money.