Google Admits Its AI Overviews Search Feature Made Mistakes

Google’s Head of Search acknowledged in a blog post that the company has made adjustments to its new AI search feature following viral screenshots highlighting its errors.

When strange and misleading answers generated by Google’s new AI Overview feature went viral on social media last week, the company initially downplayed the issues. However, late Thursday, Liz Reid, Google’s Head of Search, admitted that the errors underscored areas needing improvement. “We wanted to explain what happened and the steps we’ve taken,” Reid wrote.

Reid’s post specifically addressed two of the most widely shared and incorrect AI Overview results. One suggested that eating rocks “can be good for you,” and another recommended adding nontoxic glue to pizza sauce to help the cheese stick.

The Rock-Eating Error

Reid explained that the AI tool treated a satirical article from The Onion, which had been reposted by a software company, as factual information. Because there is little reliable content on the topic, the satirical piece ended up informing the bizarre recommendation.

The Glue-on-Pizza Blunder

The mistake regarding glue on pizza stemmed from the AI misinterpreting sarcastic or troll content from discussion forums as genuine advice. “Forums often provide authentic, first-hand information but can sometimes lead to less helpful suggestions, like using glue to get cheese to stick to pizza,” Reid noted.

Caution Against AI-Generated Advice

It remains wise not to take AI-generated dinner suggestions at face value.

Defending AI Overviews

Reid suggested that judging Google’s new search feature based on viral screenshots was unfair. She asserted that extensive testing was conducted before the feature’s launch, and user data indicates that people value AI Overviews, often staying longer on pages discovered this way.

Reasons for the Failures

Reid attributed the high-profile mistakes to an internet-wide audit that was not always well-intentioned. “There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results,” she said.

Addressing Fake Screenshots

Google claims that some of the viral screenshots of AI Overviews were fake. WIRED’s testing supported this, with examples like the false claim about cockroaches living in human anatomy proving to be fabricated. Additionally, the New York Times corrected its reporting that erroneously attributed dangerous advice about the Golden Gate Bridge to Google’s AI Overviews.

Technical Improvements

Reid’s post also outlined that Google has made “more than a dozen technical improvements” to AI Overviews. Among the changes are better detection of nonsensical queries, reduced reliance on user-generated content from sites like Reddit, less frequent AI Overviews in unhelpful situations, and stronger safeguards against providing AI summaries on critical topics like health.

Future Monitoring and Adjustments

Google will continue to monitor user feedback and adjust the AI features as needed, though Reid’s blog post made no mention of significantly scaling back the AI summaries.