
Google Hopes AI Can Turn Search Into a Conversation

Google often uses its annual developer conference, I/O, to showcase artificial intelligence with a wow factor. In 2016, it introduced the Google Home smart speaker with Google Assistant. In 2018, Duplex debuted, placing calls on a user’s behalf to schedule appointments with businesses. In keeping with that tradition, last month CEO Sundar Pichai introduced LaMDA, AI “designed to have a conversation on any topic.”

In an onstage demo, Pichai showed what it’s like to converse with LaMDA impersonating a paper airplane and the dwarf planet Pluto. For each query, LaMDA responded with three or four sentences meant to resemble a natural conversation between two people. Over time, Pichai said, LaMDA could be incorporated into Google products including Assistant, Workspace, and, most crucially, search.

“We believe LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use,” Pichai said.

The LaMDA demonstration offers a window into Google’s vision for search that goes beyond a list of links and could change how billions of people search the web. That vision centers on AI that can infer meaning from human language, engage in conversation, and answer multifaceted questions like an expert.

Also at I/O, Google introduced another AI tool, dubbed Multitask Unified Model (MUM), which can handle searches that combine text and images. VP Prabhakar Raghavan said users might someday take a picture of a pair of shoes and ask the search engine whether they would be good to wear while climbing Mount Fuji.

MUM generates results across 75 languages, which Google claims gives it a more comprehensive understanding of the world. An onstage demo showed how MUM would respond to the search query “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently?” That query is phrased differently from how you would probably search Google today, because MUM is meant to reduce the number of searches needed to find an answer. MUM can both summarize and generate text; it would know to compare Mount Adams to Mount Fuji, and that preparing for the trip may call for results on fitness training, hiking gear recommendations, and weather forecasts, a fan-out sketched below.
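
To picture that fan-out, here is a minimal Python sketch. Google has not published MUM’s interface, so the decompose_query function, the Subtask type, and the category names are hypothetical, meant only to illustrate how one complex question could expand into the several conventional searches a user would otherwise run by hand.

```python
# Hypothetical sketch of multitask query decomposition, in the spirit
# of the MUM demo. None of these names come from Google; they only
# illustrate one complex question fanning out into ordinary searches.

from dataclasses import dataclass


@dataclass
class Subtask:
    category: str  # e.g. "comparison", "gear", "weather"
    query: str     # a conventional search query


def decompose_query(question: str) -> list[Subtask]:
    """Fan a complex question out into conventional searches.

    A real system would use a learned model; this hardcodes the
    Mt. Adams / Mt. Fuji example from the I/O demo for illustration.
    """
    return [
        Subtask("comparison", "Mt. Adams vs. Mt. Fuji elevation and difficulty"),
        Subtask("training", "fitness preparation for high-altitude hiking"),
        Subtask("gear", "hiking gear recommendations for Mt. Fuji in fall"),
        Subtask("weather", "Mt. Fuji weather forecast fall"),
    ]


for task in decompose_query(
    "I've hiked Mt. Adams and now want to hike Mt. Fuji next fall, "
    "what should I do differently?"
):
    print(f"[{task.category}] {task.query}")
```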

In a paper titled “Rethinking Search: Making Experts Out of Dilettantes,” published last month, four engineers from Google Research envisioned search as a conversation with a human expert. An example in the paper considers the search “What are the health benefits and risks of red wine?” Today, Google replies with a list of bullet points. The paper suggests a future response might look more like a paragraph saying red wine promotes cardiovascular health but stains your teeth, complete with mentions of, and links to, the sources for the information. The paper shows the reply as text, but it’s easy to imagine spoken responses as well, like the experience today with Google Assistant.
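
The structure the paper mocks up, an answer whose every claim stays attached to its source, can be sketched in a few lines of Python. Nothing here comes from the paper’s actual system; the Claim type, the render_answer function, and the example sources are hypothetical placeholders.

```python
# Hypothetical sketch of an "expert" answer that keeps each claim
# attributed to a source, in the spirit of the mock-up in
# "Rethinking Search: Making Experts Out of Dilettantes".

from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    source_title: str
    source_url: str


def render_answer(claims: list[Claim]) -> str:
    """Join claims into one paragraph with numbered citations."""
    body = " ".join(f"{c.text} [{i}]" for i, c in enumerate(claims, 1))
    refs = "\n".join(
        f"[{i}] {c.source_title} - {c.source_url}"
        for i, c in enumerate(claims, 1)
    )
    return body + "\n\n" + refs


print(render_answer([
    Claim("Moderate red wine consumption has been linked to cardiovascular benefits.",
          "Example health source", "https://example.org/red-wine-heart"),
    Claim("Red wine's tannins and pigments can stain tooth enamel.",
          "Example dental source", "https://example.org/wine-teeth"),
]))
```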

But relying more on AI to decipher text also carries risks, because computers still struggle to understand language in all its complexity. The most advanced AI systems for tasks such as generating text or answering questions, known as large language models, have shown a propensity to amplify bias and to generate unpredictable or toxic text. One such model, OpenAI’s GPT-3, has been used to create interactive stories for animated characters but has also generated text about sex scenes involving children in an online game.
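
That unpredictability is easy to reproduce with an openly available model. The sketch below uses the Hugging Face transformers library with GPT-2, a smaller, openly downloadable predecessor of GPT-3, which itself is gated behind OpenAI’s API. The keyword blocklist is an assumption added for illustration, not anyone’s real safeguard, and it shows why simple post-hoc filtering is a weak defense against toxic output.

```python
# Sampling from an open language model to see how unpredictable its
# output is. GPT-2 stands in here for GPT-3, which requires API access.
# The blocklist is a toy; real toxicity is far subtler than keywords.

from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the random seed so the run is repeatable

BLOCKLIST = {"kill", "hate"}  # toy list for illustration only


def is_flagged(text: str) -> bool:
    """Naive keyword filter: flags only exact word matches."""
    return any(term in text.lower().split() for term in BLOCKLIST)


outputs = generator(
    "The new search engine answered my question by saying",
    max_new_tokens=40,
    do_sample=True,          # sample, rather than decode greedily
    num_return_sequences=3,  # three continuations of the same prompt
)

for out in outputs:
    text = out["generated_text"]
    label = "FLAGGED" if is_flagged(text) else "ok"
    print(f"[{label}] {text}\n")
```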

As part of a paper and demo posted online last year, researchers from MIT, Intel, and Facebook found that large language models exhibit biases based on stereotypes about race, gender, religion, and profession.

Rachael Tatman, a linguist with a PhD in the ethics of natural language processing, says that as the text generated by these models grows more convincing, it can lead people to believe they’re speaking with AI that understands the meaning of the words it generates, when in fact it has no common-sense understanding of the world. That becomes a problem when a model produces text that is toxic toward people with disabilities or Muslims, or that tells a user to commit suicide. Growing up, Tatman recalls, a librarian taught her how to judge the validity of Google search results. If Google combines large language models with search, she says, users will have to learn how to evaluate conversations with expert AI.

