Google AI Under Fire for Bizarre Search Suggestions
Google’s experimental artificial intelligence (AI) search feature, known as “AI Overviews,” is attracting widespread criticism after providing some users with bizarre and inaccurate advice.
One user searching for tips on making cheese stick to pizza better was advised to use “non-toxic glue,” while another user was told that geologists recommend humans eat one rock per day. These erratic responses have sparked mockery across social media platforms.
A Google spokesperson downplayed the incidents, describing them as “isolated examples” and stating that the majority of AI-generated responses are accurate and useful. “The examples we’ve seen are generally very uncommon queries and aren’t representative of most people’s experiences,” Google said in a statement to the BBC. “The vast majority of AI overviews provide high-quality information, with links to dig deeper on the web.”
Google acknowledged that some of the erroneous answers may have been sourced from Reddit comments or satirical articles from sites such as The Onion. The company said it has taken action where “policy violations” were identified and is refining its systems accordingly.
This is not Google’s first misstep with AI-powered products. In February, the company paused its Gemini chatbot’s ability to generate images of people after criticism that its outputs were historically inaccurate and overly “woke.” Gemini’s predecessor, Bard, also had a troubled launch.
Google began testing AI Overviews in search results for a select group of UK users in April, and expanded the feature to all US users during its annual developer showcase in mid-May. The tool is designed to summarize search results, saving users from having to sift through numerous websites.
Despite the current issues, many industry experts see AI-driven search as the future. Google’s dominance in the search engine market—with over 90% global market share according to Statcounter—means its AI innovations are under significant scrutiny. Trust in these systems is crucial, as generative AI is prone to “hallucinations,” producing incorrect or nonsensical information.
In one particularly baffling instance, a user was advised that while gasoline should not be used to cook spaghetti faster, it could be used to make a “spicy spaghetti dish,” complete with a recipe.
Other tech giants are facing similar challenges as they integrate AI into their products. The UK’s data watchdog is investigating Microsoft for a feature that continuously takes screenshots of users’ online activity, and actress Scarlett Johansson criticized ChatGPT-maker OpenAI for using a voice similar to hers after she declined to lend her voice to the chatbot.
As AI continues to evolve and become more integrated into everyday tools, ensuring the accuracy and reliability of these technologies remains a critical challenge for companies like Google.