Use Glue To Stick Cheese To Your Pizza, Says Google AI Overview
- Several users have shared screenshots showing Google AI Overview giving incorrect answers to search queries.
- A Google spokesperson has called it a deliberate attempt by users to sabotage the feature.
- One user has built a website that produces old ‘Web’-style Google search results without AI answers.
The Google AI Overview feature doesn’t seem to be going as planned. Users have shared more than one instance of the feature producing incorrect answers to search queries.
When a user searched ‘cheese is not sticking to pizza’, Google AI Overview suggested using glue to solve the issue. Interestingly, the source was an 11-year-old Reddit comment.
Although Google has removed this source from its AI overview, it is still the top result in Google Search.
Another instance came when a user searched ‘how many feet does an elephant have’. Google AI Overview answered that elephants have two feet, with five toes on the front and four on the back.
The tool was also found to be factually wrong on political topics. For example, when a user searched ‘how many Muslim presidents in US’, Google AI Overview said that Barack Hussein Obama is considered the first Muslim president of the United States. Even Mr. Obama wouldn’t believe this.
Google’s Insensitive Response
As expected, Google has come out all guns blazing to defend its AI Overview feature.
‘The examples we’ve seen are generally very uncommon queries and aren’t representative of most people’s experience using Search.’ – Google Spokesperson
However, I think it is just a futile attempt to defend a malfunctioning AI system. With Google processing around 99,000 search queries per second, it is very difficult to say which query is ‘uncommon’. After all, there may be more than one user whose cheese isn’t sticking to the pizza.
The spokesperson even went on to say that users are deliberately attempting to trip up the technology by asking uncommon questions. This is again a very irresponsible statement from a Google representative. After all, you cannot blame the user for a faulty product.
Let’s look at another search instance. When a user searched for the health benefits of tobacco, AI Overview went so far as to promote it, saying that tobacco increases relaxation, euphoria, and alertness. It also recommended nicotine to improve concentration and memory. There was no warning about using a dangerous product like tobacco, nor did the answer caution the user about tobacco’s adverse effects.
Calling such a common search query a deliberate attempt at sabotage is an act of distrust from a tech giant like Google. It is quite possible that a person trying to quit smoking might read this answer and be encouraged to keep smoking rather than helped to quit. Yet instead of taking responsibility for such mishaps, Google is blaming its users.
Read More: Google restricts AI chatbot Gemini from responding to election-related queries
Frustrated Users
Ernie Smith, a journalist and writer, seems to have found a way around these irrelevant AI suggestions. Smith has built a website that reroutes searches through Google so that results come back without any AI-generated answers. The site has gained a lot of attention and has even surpassed the traffic of Smith’s 10-year-old blog.
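The article doesn’t spell out how Smith’s site works, but services like this commonly rely on Google’s ‘udm=14’ URL parameter, which requests the plain ‘Web’ results tab with no AI Overview. Here is a minimal sketch of that redirect idea in Python; the parameter is an observed behavior of Google Search rather than a documented API, so treat it as an assumption that could change:

```python
from urllib.parse import urlencode


def web_only_search_url(query: str) -> str:
    """Build a Google search URL that requests the plain 'Web' results tab.

    The udm=14 parameter asks Google for the links-only 'Web' view, which
    currently omits AI Overview answers. This is an observed URL parameter,
    not a documented API, and Google may change it at any time.
    """
    params = {"q": query, "udm": "14"}
    return f"https://www.google.com/search?{urlencode(params)}"


if __name__ == "__main__":
    print(web_only_search_url("cheese not sticking to pizza"))
    # -> https://www.google.com/search?q=cheese+not+sticking+to+pizza&udm=14
```

A site built on this idea only needs to take the user’s query and redirect the browser to the generated URL, which is why such workarounds are simple enough for a single writer to run alongside a blog.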
However, it is not just Google acting irresponsibly here. Users, too, have jumped on the opportunity to create fake screenshots of AI Overview. Popular artists like Lil Nas X have shared fake AI Overview results about depression.
The trend seems to be moving towards a new meme format. In such a situation, it is not possible even for a tech giant like Google to inspect each screenshot.
At a time when Google is pressing down the accelerator on AI features, such hiccups are expected. We hope Google fixes the error soon and comes up with a much improved AI Overview.