Google has a new “multisearch” feature that makes it possible to search with text and images through Google Lens at the same time. Google first announced the feature at its Search On event in September last year, saying it would launch after further testing. The functionality is now available in the United States as a beta.
The feature is expected to be most helpful for shopping, though Google’s search director Lou Wang notes it could also be useful in other situations: “you could imagine you have something broken in front of you, don’t have the words to describe it, but you want to fix it... you can just type ‘how to fix’.”
The feature relies largely on artificial intelligence: computer vision identifies the object in the image, while the usual language processing extracts the meaning of the typed words. Results from both are then combined.
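At a very high level, combining an image signal and a text signal into one query can be pictured as blending two embedding vectors and ranking results against the blend. The sketch below is a toy illustration only: the embeddings, catalog items, and the simple weighted average are all made up for this example and bear no relation to how Google Lens actually works internally.

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def combine(image_vec, text_vec, alpha=0.5):
    # Blend the two modalities into a single query vector.
    return [alpha * i + (1 - alpha) * t for i, t in zip(image_vec, text_vec)]

# Toy embeddings: pretend dimension 0 = "dress", 1 = "green", 2 = "shoe".
catalog = {
    "green dress":  [1.0, 1.0, 0.0],
    "orange dress": [1.0, 0.0, 0.0],
    "green shoe":   [0.0, 1.0, 1.0],
}

photo_of_orange_dress = [1.0, 0.1, 0.0]  # the photo mostly says "dress"
typed_word_green      = [0.0, 1.0, 0.0]  # the typed refinement "green"

query = combine(photo_of_orange_dress, typed_word_green)
best = max(catalog, key=lambda item: cosine(query, catalog[item]))
print(best)  # "green dress": the blend matches both signals at once
```

Neither signal alone would pick the right item here (the photo alone favors “orange dress”, the word alone ties “green dress” with “green shoe”), which is the intuition behind searching with both at once.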
Google sees AI models as the beginning of a new era of search. Although this experiment is limited and does not yet use the latest MUM AI models, it looks like an innovative and helpful addition to Google Search, with the potential to take it much further.