Google Fixes “Woke” Gemini AI Image Generation

Google is once again allowing users to generate AI images of people after months of controversy and the release of a new model. Credit: Courtesy of Google

With its latest update, Google’s AI chatbot, Gemini, will once again allow users to generate AI pictures of people. Google had previously removed this feature from Gemini after the chatbot generated historically inaccurate images that critics labeled “woke.”

The specific incident that prompted Google to pull the feature was user reports that the AI was generating pictures of “culturally diverse” Nazis.

In a press release, the tech giant announced that it would roll out the restored feature to Gemini Advanced, Business, and Enterprise users in English.

Google Gemini’s AI-generated images will be powered by Imagen 3

Earlier in August, Google rolled out Imagen 3 through its AI Test Kitchen. The Gemini update powered by this model can generate photorealistic landscapes and textured paintings, and will soon be available in all languages.

Google has also said that its Imagen 3 model ships with safeguards and performs favorably compared with other image generation models.

This has been a hot topic in the AI-generated image community, given that Grok, X’s chatbot, was producing some unhinged results earlier this month.

Google has taken this aspect of AI image generation very seriously, so much so that Gemini will not allow users to create photorealistic images of public figures, minors, bloodshed, violence, or sexual scenes.

It remains to be seen whether users will be able to craft prompts that bypass the safeguards Google has implemented.

Some critics had previously labeled the AI as “woke”

In February 2024, Google Gemini came under fire for being “woke” and historically inaccurate. The AI generated images portraying one of the founding fathers of the United States as an African American man, as well as Catholic popes as women.

Notorious alt-right user Ian Miles Cheong shared an example of the issue through his X account:

This image was generated by Ian Miles Cheong with Google Gemini Credit: @stillgray/X

This is just one example of the issues Google Gemini was facing with image generation of people.

The real trouble for Google, however, came when its AI generated portrayals of Nazi soldiers as a Black man and an Asian woman. This prompted Google to respond with a blog post titled “Gemini image generation got it wrong. We’ll do better,” written by Senior Vice President Prabhakar Raghavan.

In the blog post, Raghavan acknowledged the mistake and explained exactly what had happened.

Raghavan explained that if a user writes a prompt asking for a person of a specific ethnicity, the AI should absolutely generate what the user asked for. However, that is exactly what did not happen.

“So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. Second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely—wrongly interpreting some very anodyne prompts as sensitive,” explained Raghavan.

Nonetheless, Google says it has now rectified the issue and is confident that the AI’s behavior and capabilities match the standard of competitors such as DALL-E, Midjourney, and Grok.


