
Published 20:47 IST, February 24th 2024

Google says Gemini AI not reliable on political topics after flak over ‘biased’ PM Modi description

The IT Ministry took cognisance of a journalist’s post, which showed Google’s GenAI chatbot being asked to describe the Prime Minister

Reported by: Business Desk
Google Gemini | Image: Google

Biased response: Search engine giant Google is working rapidly to address concerns around Gemini AI’s response about Prime Minister Narendra Modi, after the IT Ministry took cognisance of the allegedly biased output.

The chatbot “may not always be reliable” in responding to certain prompts about current events and political topics, Google said on Saturday, adding that it is built as a creativity and productivity tool.

“We’ve worked quickly to address this issue… Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news,” a Google spokesperson said in response to PTI.

A journalist had posted a screenshot of a query about PM Modi put to Google’s Gemini, which returned uncharitable remarks. By contrast, the chatbot’s responses were circumspect when asked about Ukraine President Volodymyr Zelenskyy and former US President Donald Trump.

Taking cognisance of the matter, Minister of State for Electronics and Information Technology (MeitY) Rajeev Chandrasekhar said the response was in direct violation of the IT Rules as well as several provisions of the criminal code.

Not the only case

Gemini has also faced flak over its text-to-image generation feature, prompting a response from the company in a blog post.

The company acknowledged ‘inaccuracies’ in historical images generated by the feature, after several users complained that the chatbot omitted depictions of white people.

“It is clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well,” Prabhakar Raghavan, a Senior Vice President at the company, said.

Raghavan said the feature should respond accurately to targeted prompts such as “a Black teacher in a classroom” or “a white veterinarian with a dog”, as well as to requests for people in particular cultural or historical contexts.

Instead, the feature “overcompensated” for representation in some cases, leading to “embarrassing and wrong” images, he added.

Updated 20:47 IST, February 24th 2024