During a livestreamed event this afternoon, Google detailed the ways it’s applying AI and machine learning to improve the Google Search experience.
Soon, Google says, users will be able to see how busy places are directly in Google Maps without having to search for a specific business, an expansion of the existing busyness metrics. The company is also adding COVID-19 safety information to business profiles across Search and Maps, indicating whether a business uses safety precautions such as temperature checks.
An algorithmic improvement to Did You Mean, Google’s spell-checking feature for Search, will enable more accurate spelling suggestions. Google says the new model contains 680 million parameters and runs in less than three milliseconds.
Beyond this, Google says it can now index individual passages from webpages as opposed to whole pages. When this rolls out fully, it’ll improve roughly 7% of search queries across all languages, the company claims. A complementary AI component will also help Search to capture the nuances of what webpages are about, leading to a wider range of results for particular search queries.
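Google has not published how its passage indexing works; as a rough illustration of the general idea, the toy sketch below scores each passage of a page separately, so a single relevant paragraph on an otherwise off-topic page can still surface. The scoring function and example page are invented for illustration.

```python
# Toy sketch of passage-level retrieval (illustrative only; this is
# not Google's ranking system).

def score(query, text):
    """Crude relevance score: count of query words present in the text."""
    query_words = set(query.lower().split())
    return sum(1 for word in text.lower().split() if word in query_words)

def best_passage(query, page):
    """Split the page into passages (blank-line separated) and
    return the highest-scoring one for the query."""
    passages = [p.strip() for p in page.split("\n\n") if p.strip()]
    return max(passages, key=lambda p: score(query, p))

# A page whose opening paragraph is off-topic but whose second
# paragraph directly answers the query.
page = (
    "Our store sells garden tools and outdoor furniture.\n\n"
    "To check if your UV lamp kills germs, look for a certified "
    "wavelength rating on the label."
)
print(best_passage("uv lamp germs", page))
```

Scoring the whole page at once would dilute the relevant paragraph with the off-topic one; scoring passages independently is what lets the second paragraph win here.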
Google is also bringing Data Commons, its open knowledge repository that links public datasets (e.g., COVID-19 statistics from the U.S. Centers for Disease Control and Prevention) through mapped common entities, to web search results. In the near future, users will be able to search for topics like “employment in Chicago” and see the relevant data in context.
On the e-commerce and shopping front, Google says it’s built cloud streaming technology that enables users to see products in augmented reality (AR). With a car, for example, they’ll be able to zoom in to view details like the steering wheel, to scale, in their driveway or on their phones. Separately, Google Lens and Google Images will let shoppers discover similar products by tapping on elements like knits, ruffled sleeves, and more.
In another addition to Search, Google says it will deploy a feature that automatically highlights points in videos that, for example, compare different products or show steps in a recipe. And Live View in Maps, a feature that taps AR to provide turn-by-turn walking directions, will enable users to quickly see information about restaurants, including how busy they tend to be and their star ratings.
Lastly, Google says it’ll let users search for songs by simply humming or whistling their melodies, initially in English on iOS and in more than 20 languages on Android. On a smartphone, opening the latest version of the Google app or Search widget, tapping the mic icon, and saying “What’s this song?” or selecting the “Search a song” button will launch the feature, which requires 10 to 15 seconds of humming or whistling.
“After you’re finished humming, our machine learning algorithm helps identify potential song matches,” Google writes in a blog post. “And don’t worry, you don’t need perfect pitch to use this feature. We’ll show you the most likely options based on the tune. Then you can select the best match and explore information on the song and artist, view any accompanying music videos or listen to the song on your favorite music app, find the lyrics, read analysis and even check out other recordings of the song when available.”
Google says that melodies hummed into Search are transformed by machine learning algorithms into a number-based sequence representing the song’s melody. The models are trained to identify songs from a variety of sources, including humans singing, whistling, or humming, as well as studio recordings. They also strip away other details, such as accompanying instruments and the timbre and tone of the voice. This leaves a fingerprint, which Google compares against thousands of songs from around the world to identify potential matches in real time.
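The matching pipeline itself is proprietary, but the core idea of reducing a melody to a key-independent number sequence and comparing it against known songs can be sketched in a few lines. Everything below, including the interval-based fingerprint and the two-song database, is a simplified stand-in, not Google's actual representation.

```python
# Toy sketch of melody fingerprinting and matching (illustrative only).

def fingerprint(pitches):
    """Reduce a pitch sequence (e.g., MIDI note numbers) to the
    intervals between successive notes. This discards absolute key,
    loosely analogous to stripping timbre and accompaniment."""
    return tuple(b - a for a, b in zip(pitches, pitches[1:]))

def edit_distance(a, b):
    """Levenshtein distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def best_match(hummed_pitches, database):
    """Return the song whose stored fingerprint is closest to the
    fingerprint of the hummed pitch sequence."""
    fp = fingerprint(hummed_pitches)
    return min(database, key=lambda song: edit_distance(fp, database[song]))

# Hypothetical mini-database of interval fingerprints.
SONGS = {
    "Happy Birthday": (0, 2, -2, 5, -1),
    "Twinkle Twinkle": (0, 7, 0, 2, 0),
}

# A hum of "Twinkle Twinkle" transposed up two semitones still
# matches, because only intervals, not absolute pitches, compare.
print(best_match([62, 62, 69, 69, 71, 71], SONGS))  # Twinkle Twinkle
```

The edit-distance comparison also tolerates a dropped or off-pitch note, which matters since hummed input is rarely exact.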
“From new technologies to new opportunities, I’m really excited about the future of search and all of the ways that it can help us make sense of the world,” Prabhakar Raghavan, head of search at Google, said.
Last month, Google announced it will begin showing quick facts related to photos in Google Images, enabled by AI. Starting this week in the U.S. in English, users who search for images on mobile might see information from Google’s Knowledge Graph — Google’s database of billions of facts — including people, places, or things germane to specific pictures. The new feature, which appears on some photos within Google Images, is intended to provide context around both images and the webpages hosting them.
Google also recently revealed it’s using AI and machine learning techniques to more quickly detect breaking news around crises like natural disasters. In a related development, Google said it launched an update using language models to improve the matching between news stories and available fact checks.
Last year, Google similarly set out to solve query ambiguities with an AI technique called Bidirectional Encoder Representations from Transformers, or BERT for short. BERT, which emerged from the tech giant’s research on Transformers, forces models to consider the context of a word by looking at the words that come before and after it. According to Google, BERT helped Google Search better understand 10% of queries in the U.S. in English — particularly longer, more conversational searches where prepositions like “for” and “to” matter a lot to the meaning.
BERT is now used in every English search, Google says. Moreover, it’s deployed across languages including Spanish, Portuguese, Hindi, Arabic, and German.
For instance, Google’s previous search algorithm wouldn’t understand that “2019 brazil traveler to usa need a visa” is about a Brazilian traveling to the U.S. and not the other way around. With BERT, which realizes the importance of the word “to” in context, Google Search provides more relevant results for the query.