Why Google’s AI tool drew criticism for its images of people of color

Gemini’s attempt to correct for the underrepresentation of racial and ethnic minorities in AI-generated images has backfired.

In late February, Google’s generative AI tool Gemini reimagined the world, depicting the founding fathers of the United States as Black women and Ancient Greek warriors as Asian women and men.

When the new image generation tool launched, it drew confusion and curiosity on social media. When users typed in prompts to create AI-generated images of people, Gemini largely returned results depicting people of color, whether or not doing so was appropriate.


X users poked fun at the tool after repeated attempts to generate images of white people on Gemini failed. Online users found some examples amusing, while others, such as images of brown-skinned people in World War II-era Nazi uniforms bearing swastikas, caused outrage, prompting Google to temporarily disable the tool.

Here is more about Google’s Gemini and the controversy surrounding it.

What is Google Gemini?

Google’s first foray into generative AI was a chatbot called Bard.

Google CEO Sundar Pichai announced Bard on February 6, 2023, describing it as a conversational AI program, or “chatbot,” capable of simulating conversation with users. It was made available to the public on March 21, 2023.

Bard could generate essays or even code in response to a user’s written prompts, which is why such tools are described as “generative AI.”

Google announced that Gemini would replace Bard and that free and paid versions of Gemini would be made available to the public through its website and smartphone app. Gemini, Google said, would work with different types of input and output, including text, images and videos. But it is Gemini’s image generation feature that has drawn the most attention, because of the controversy around it.

What kinds of images did Gemini generate?

The images causing the most outrage were those depicting women and people of color in historical events or in roles historically held by white men. One image, for instance, showed what appeared to be a Black woman as the pope.

The Catholic Church may have had as many as three Black popes in its history, the last of whom served until 496 AD. There is no record of a female pope in the Vatican’s official history, although a medieval legend holds that a young woman, Pope Joan, disguised herself as a man and served as pope in the ninth century.

How does Gemini work?

According to Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face, Gemini is a generative AI system that combines the models behind Bard. These include LaMDA, which makes the AI conversational and intuitive, and Imagen, which converts text into images.
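
To make that division of labor concrete, here is a minimal, purely illustrative sketch of such a two-model pipeline. The function names and behavior below are hypothetical stand-ins, not Google’s actual APIs:

```python
# A highly simplified sketch of the two-model arrangement described above:
# a conversational model interprets the user's request, and a text-to-image
# model renders it. Both functions are hypothetical placeholders.
def conversational_model(user_message: str) -> str:
    """Stand-in for the LaMDA-style component: turn chat into an image brief."""
    return f"photorealistic image: {user_message}"

def text_to_image_model(image_brief: str) -> bytes:
    """Stand-in for the Imagen-style component: render the brief as pixels."""
    print(f"[rendering] {image_brief}")
    return b"<image bytes>"

# The chat layer rewrites the request; the image layer only ever sees the brief.
image = text_to_image_model(conversational_model("the pope in full regalia"))
```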

Generative AI tools are loaded with large amounts of “training data,” which they draw on to respond to questions and prompts entered by users.

In a blog post, Pichai and Demis Hassabis, CEO and co-founder of the British-American AI lab Google DeepMind, talked about how the tool can work with “text, images, audio, and more at the same time.”

Mitchell explained, “It can take text prompts as input and come up with likely responses as output.” “Likely” in this case means “statistically probable” based on what it has seen in the training data.
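
To illustrate what “statistically probable” means in practice, here is a toy sketch of weighted sampling. The vocabulary and probabilities are invented for the example; a real model learns a distribution over many thousands of tokens:

```python
import random

# Invented probabilities, purely for illustration of "likely" output.
next_token_probs = {
    "doctor": 0.45,     # common in the training data, hence "likely"
    "nurse": 0.30,
    "engineer": 0.20,
    "astronaut": 0.05,  # rare in the training data, hence "unlikely"
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sampling favors high-probability tokens, so outputs mirror whatever
# patterns (and imbalances) the training data contained.
print(random.choices(tokens, weights=weights, k=1)[0])
```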

What kind of bias does generative AI have?

Generative AI models have been criticized as biased because their algorithms either underrepresent people of color or fall back on stereotypes when producing results.

According to Ayo Tometi, co-founder of the US anti-racist movement Black Lives Matter, AI, like other technologies, has the potential to reinforce stereotypes.

Artist Stephanie Dinkins has spent seven years experimenting with AI’s ability to realistically depict Black women. Dinkins found that when AI was prompted to generate images, it tended to distort facial features and hair texture. Other artists who have tried to generate images of Black women using different systems, such as Stability AI, Midjourney or DALL-E, have run into the same problems.

Critics also say generative AI models tend to oversexualize the images of Black and Asian women they produce. Some Black and Asian women have also reported that when they use AI to generate images of themselves, the results lighten their skin.

In an episode of Al Jazeera’s Digital Dilemma, data reporter Lam Thuy Vo said these problems arise when the people supplying the training data leave out people of color or those outside “mainstream culture.” If people of different backgrounds are underrepresented in the training data for an image generation AI, the AI may “learn” skewed patterns and similarities in the images and reproduce them in the new images it creates.

Training data is also drawn from the internet, which contains a wide range of material and imagery, some of it racist and sexist. Having learned from that data, the AI may reproduce it.

As a result, people who are underrepresented in data sets end up using technology that fails to account for them or depicts them inaccurately, which can create and perpetuate discrimination.
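
As a toy illustration of the skew Vo describes, counting a hypothetical, made-up set of training captions shows how imbalanced data translates directly into imbalanced odds for the model:

```python
from collections import Counter

# Hypothetical caption metadata for a toy image training set. Real
# web-scraped datasets are vastly larger, but skew works the same way.
training_captions = [
    "portrait of a man", "portrait of a man", "portrait of a man",
    "portrait of a man", "portrait of a man", "portrait of a woman",
]

counts = Counter(training_captions)
total = sum(counts.values())

# Whatever the model treats as a "typical" portrait follows these
# frequencies, so groups underrepresented here end up underrepresented
# in generated images too.
for caption, n in counts.most_common():
    print(f"{caption}: {n / total:.0%} of training examples")
```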

Is this why Gemini produced offensive images?

Not exactly. Gemini was designed specifically to try to avoid those problems.

Other generative AI models have been criticized for producing images predominantly of light-skinned men because of skewed training data. Gemini, by contrast, was generating images of people of color, particularly women, even where doing so was inaccurate.

Mitchell explained that an AI system can be trained to append words to a user’s prompt after the user has written and submitted it, quietly altering what the model is asked to produce.
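
A minimal sketch of that prompt-rewriting idea might look like the following. The steering terms and both functions are hypothetical, since Google has not published Gemini’s actual rewriting rules:

```python
# Hypothetical steering terms appended to every image prompt; the real
# system's terms and logic are not public.
STEERING_TERMS = "diverse, inclusive"

def augment_prompt(user_prompt: str) -> str:
    """Silently append steering terms to whatever the user typed."""
    return f"{user_prompt}, {STEERING_TERMS}"

def generate_image(prompt: str) -> None:
    # Stand-in for a call to a text-to-image model.
    print(f"[image model receives] {prompt}")

# The user never sees the rewritten prompt, which is why results can
# diverge from what the prompt literally asked for.
generate_image(augment_prompt("portrait of a US founding father"))
```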

Initially, online conservatives attacked Gemini’s renders for “furthering Big Tech’s woke agenda” by depicting, for example, the Founding Fathers of the United States as men and women from racial and ethnic minority groups.

But Google also ended up offending racial minority groups by generating images such as those of Black men and women in Nazi uniforms.

What did Google say in response?

Google acknowledged last week that the images Gemini generated were the result of the company’s efforts to remove biases that had previously perpetuated stereotypical and discriminatory attitudes.

In a blog post, Google’s Prabhakar Raghavan wrote that Gemini had been calibrated to show a range of people but had failed to adjust for prompts where showing a range would be inappropriate. It had also been too “cautious,” he said, wrongly treating some very bland prompts as sensitive.

These two things led the model to overcompensate in some cases and be overly conservative in others, he said, producing images that were embarrassing and wrong.

What else did Gemini get wrong?

Users were angered by more than just the AI-generated images of people.

Gemini users also reported on X that the tool failed to generate accurate images of events such as the 1989 Tiananmen Square massacre and the 2019 pro-democracy protests in Hong Kong.

“It is important to approach this topic with respect and accuracy, and I am not able to ensure that an image generated by me would adequately capture the nuance and gravity of the situation,” Gemini responded, according to a screenshot shared on X by Stephen L. Miller, a conservative commentator in the US.

Kennedy Wong, a PhD student at the University of California, wrote on X that Gemini declined to translate into English Chinese phrases deemed sensitive by Beijing, including “Liberate Hong Kong, Revolution of Our Times” and “China is an authoritarian state.”

Arnab Ray, a journalist in India, asked the Gemini chatbot whether Narendra Modi, the prime minister of India, is a fascist. Gemini replied that Modi has been “accused of putting in place policies that some experts have called fascist.” When Ray posed the same question about former US President Donald Trump and Ukrainian President Volodymyr Zelenskyy, Gemini gave vaguer answers.

When asked about Trump, The Guardian reported, Gemini said elections are “a complicated subject with information that changes quickly” and advised users to try Google Search for the most up-to-date information. Of Zelenskyy, it said the question was “a complicated and sharply contested question with no simple answer,” adding: “It’s important to approach this topic with nuance and think about different points of view.”

This angered Modi’s supporters, and Gemini’s response drew criticism from Rajeev Chandrasekhar, India’s junior minister for information technology.

Has Google taken Gemini offline?

Google hasn’t completely shut down Gemini.

But on February 22, the company said it would temporarily stop Gemini from generating images of people.

In a letter to staff published by the news website Semafor on Tuesday, Google CEO Sundar Pichai acknowledged that Gemini had upset users. “I know that some of its responses have hurt our users and shown bias. To be clear, that’s not okay, and we did something wrong,” he wrote.

He also said Google is working to fix the mistakes, though he did not say when the image generation tool would be relaunched: “No AI is perfect, especially at this early stage of the industry’s growth, but we know the bar is high for us and we will keep at it for as long as it takes.”

Raghavan also said that the tool would be tested thoroughly before the feature could be used again.

How has the controversy affected Google?

The controversy unsettled investors on Wall Street: Alphabet, the parent company of Google, had lost about $96.9bn in market value as of February 26.

Alphabet’s share price fell about 4.5 percent, from $140.10 on February 26 to $133.78 on Tuesday.
