In a move that has sparked debate and reflection in the technology world and beyond, Google has announced a temporary suspension of its new artificial intelligence model, Gemini, from producing images of people, following criticism over its ethnic portrayal of historical figures such as Second World War German soldiers and Vikings. These portrayals, which depicted people of color in roles historically associated with white figures, raised questions about historical accuracy and bias in AI.
Google’s decision to halt people image generation with Gemini comes after social media users shared examples of images produced by the tool depicting historical figures – including popes and the founding fathers of the US – in a variety of ethnicities and genders. “We are already working to address recent issues with Gemini’s image generation feature. While we do this, we will pause the image generation of people and will soon release an improved version,” Google said in a statement.
Although Google did not refer to specific images in its statement, examples of Gemini's results were widely shared on X, accompanied by commentary on AI's problems with accuracy and bias. A former Google employee highlighted the difficulty of getting Gemini to acknowledge the existence of white people in its image output.
Jack Krawczyk, a senior director on Google’s Gemini team, admitted that the model’s image generator needed adjustments. “We are working to immediately improve these kinds of depictions,” he said. Krawczyk added that while Gemini’s AI image generation does produce a wide range of people, which is generally a good thing given its global user base, “it’s clear we missed the mark here.”
Coverage of bias in AI has documented numerous examples of harm to people of color, including a Washington Post investigation that found image generators exhibiting racial bias and sexism. Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, commented that mitigating bias is a "difficult problem in most fields of deep learning and generative AI," but said things are likely to improve over time through research and varied approaches to eliminating bias.
In conclusion, Google’s pause in generating images of people with Gemini underscores the importance of addressing and correcting bias in artificial intelligence, highlighting the need for constant attention to issues of fairness and inclusion in emerging technologies.