Glue pizza? Gasoline spaghetti? Google explains what happened with its wonky AI search results

If you were on social media over the past week, you probably saw them. Screenshots of Google’s new AI-powered search summaries went viral, mainly because Google was allegedly making wild recommendations like adding glue to your pizza, cooking spaghetti with gasoline, or suggesting that you should eat rocks for optimal health. 

That was just the beginning.

Other particularly egregious examples also went viral: the rogue AI feature seemingly suggested mixing bleach and vinegar to clean a washing machine (a combination that produces potentially deadly chlorine gas) or jumping off the Golden Gate Bridge in response to the query "I'm feeling depressed."

So what happened, and why did Google’s AI Overview recommend those things?

First, Google says, the majority of what went viral wasn’t real.

Many screenshots were simply fake: “Some of these faked results have been obvious and silly. Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression.” Those AI Overviews never appeared, Google says.

Second, numerous screenshots came from people deliberately trying to provoke silly search results — like the ones about eating rocks. "Prior to these screenshots going viral," Google said, "practically no one asked Google that question." If nobody is googling a given topic, there is probably little reliable information available about it — what's known as a data void. In such cases, the only content available was satirical, and the AI interpreted it as accurate.

Google admits that a few odd or inaccurate results did appear. Even those were triggered by unusual queries, but they exposed areas that needed improvement. The company identified patterns in what went wrong and made more than a dozen technical improvements, including:

  • Better detection for nonsensical queries that shouldn’t show an AI Overview and limited inclusion of satire and humor content

  • Limited use of user-generated content in responses that could offer misleading advice

  • Triggering restrictions for queries where AI Overviews were not proving to be helpful

  • Not showing AI Overviews for hard news topics where freshness and factuality are important and for most health topics

With billions of queries coming in every day, Google says, things will sometimes get weird. The company says it's learning from these errors and promises to keep working to strengthen AI Overviews.
