
How AI Image Generators Are Compounding Existing Gender and Cultural Bias Woes?

While AI image generators are causing quite a commotion in the industry, can they deliver fair results?

AI image generators are one of the hottest trends of the year, gaining traction with the release of OpenAI’s DALL-E 2 in April and the self-titled product from independent research lab Midjourney in July. Additionally, Google announced Imagen in May, and Stability AI rolled out the text-to-image generator Stable Diffusion. We were also blessed with knock-offs like the Hugging Face-hosted DALL-E Mini (later renamed Craiyon). While these innovations are already sparking controversies about ownership and copyright violation, the results also ‘display’ another challenge: perpetuating gender and cultural bias.

Hugging Face artificial intelligence researcher Sasha Luccioni developed a tool, the Stable Diffusion Explorer, to demonstrate the bias baked into text-to-image generators. Entering the phrase “ambitious CEO” produced images of men in black and blue suits, whereas entering “supporting CEO” produced results displaying roughly equal numbers of women and men.
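For readers who want to try a similar probe, the sketch below is a minimal example using the open-source diffusers library and a public Stable Diffusion checkpoint (both our assumptions; this is not Luccioni’s hosted Explorer). It generates a small batch per prompt so that any demographic skew shows up across samples rather than in a single cherry-picked image.

```python
# Minimal bias probe: generate image batches for contrasting occupation prompts.
# Assumes the `diffusers` library, a CUDA GPU, and the public
# runwayml/stable-diffusion-v1-5 checkpoint; not Luccioni's actual tool.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for prompt in ["a photo of an ambitious CEO", "a photo of a supporting CEO"]:
    # Several samples per prompt make any skew visible across the batch.
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, img in enumerate(images):
        img.save(f"{prompt.replace(' ', '_')}_{i}.png")
```

Inspecting the saved batches side by side is enough to reproduce the kind of qualitative comparison the Explorer surfaces.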

Luccioni explained to Gizmodo that the Stable Diffusion system is built on the LAION image set, which comprises billions of photos and other images scraped from the internet, including image-hosting and art sites. The way Stability AI classifies the various image categories gives rise to this gender bias, along with some racial and cultural prejudice. According to Luccioni, if the training images connected to a prompt are 90% male and 10% female, the system learns to reproduce that 90% skew in its outputs. That is the most extreme case, but the less diverse the visual data in the LAION dataset, the less likely the Stable Diffusion Explorer is to generate bias-free results.
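As a rough illustration of how such skew can be quantified, the toy sketch below counts gendered words in a handful of fabricated captions standing in for LAION entries matching a term. It is our own simplification, not Stability AI’s or LAION’s actual tooling.

```python
# Toy caption audit (fabricated data, simplified heuristic): estimate the
# gender skew among captions in a web-scraped set that match a given term.
import re
from collections import Counter

MALE = {"man", "men", "male", "he", "his", "businessman"}
FEMALE = {"woman", "women", "female", "she", "her", "businesswoman"}

def gender_counts(captions):
    counts = Counter()
    for caption in captions:
        tokens = set(re.findall(r"[a-z]+", caption.lower()))
        if tokens & MALE:
            counts["male"] += 1
        if tokens & FEMALE:
            counts["female"] += 1
    return counts

# Fabricated stand-ins for dataset captions matching the term "CEO".
captions = [
    "portrait of a businessman CEO in a dark suit",
    "CEO of a startup, a man at his desk",
    "woman CEO speaking at a tech conference",
]
print(gender_counts(captions))  # Counter({'male': 2, 'female': 1})
```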

In a Risks and Limitations document released in April, OpenAI noted that its technology can reinforce prejudices. The system generated photos that disproportionately depicted white individuals and frequently defaulted to Western culture, such as Western-style weddings. Similarly, DALL-E 2 produced results biased toward men for the prompt “builder” and toward women for the prompt “flight attendant,” despite the fact that both men and women work in both professions.

Large tech companies are generally hesitant to grant unrestricted access to their AI products out of concern that they will be misused, in this case to generate ethically questionable content. But in the last few years, several AI researchers have started building and releasing AI products to the public, a development that has led experts to raise questions about the possible exploitation of the technology. Even though Stability AI touted Stable Diffusion as a far more “open” platform than its competitors, it is actually quite unregulated. In comparison, OpenAI has been upfront about the biases in its AI image generators.

Even Google has acknowledged that Imagen encodes social prejudices, as its algorithms also produce content that is frequently racist, sexist, or otherwise harmful.

The core inspiration behind Stable Diffusion is to make AI-based image generation more accessible. However, that openness has not aged well. Though the official version of Stable Diffusion contains safeguards to prevent the production of nudity or gore, the entire code of the AI model has been made available, so others have been able to remove those restrictions, allowing the public to generate explicit content and thereby feeding more bias back into the system. The inability of AI image generators to state or showcase how well they understood a prompt is another gray area. Beyond hazy or disfigured outputs like Craiyon’s, incorrect results can be frustrating, for example, images of individuals dressed as judges returned for a “lawyer” prompt.
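To see what those safeguards look like in practice, here is a minimal sketch, again assuming the diffusers library and the public runwayml/stable-diffusion-v1-5 checkpoint: the safety checker is a separate, pluggable component of the open-source pipeline that flags outputs it deems unsafe, which is precisely why forks of the public code are able to strip it out.

```python
# Minimal sketch of Stable Diffusion's pluggable safeguard via `diffusers`
# (assumed setup: CUDA GPU, public runwayml/stable-diffusion-v1-5 weights).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

result = pipe("a portrait of a lawyer")
# The bundled safety checker reports, per image, whether it blanked the
# output as unsafe; because it is just another public module, modified
# forks can drop it entirely.
print(result.nsfw_content_detected)  # e.g., [False]
```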


Because AI image generators have improved so much over the past year, people have been motivated to quit their jobs, pursue AI art, and fantasize about a future in which artificial intelligence lets them experiment freely with creative material. The earlier generation of generative AI was built on an algorithm known as the GAN, or generative adversarial network. GANs are notorious for their ability to create deepfakes of humans by pitting two AI models against one another to see which can better produce a picture that serves a certain purpose. Today’s AI image generators instead rely on transformers, first introduced in a 2017 Google study. Transformers are trained on larger datasets, generally web-scraped online data, including user content from social networking sites. And every social media user knows that such sites can be breeding grounds for ethnic and racial hate, along with content that is sexist or misogynistic. Training transformer models on online visual data that already leans toward human socio-ethnic bias is therefore something that needs to be addressed.
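For the curious, the adversarial contest at the heart of a GAN can be boiled down to a few lines of PyTorch. The sketch below is a deliberately toy version of our own, with random tensors standing in for real images, not any production deepfake system.

```python
# Toy GAN (illustrative sketch only): a generator and a discriminator
# trained against each other on random stand-in "images".
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # noise -> fake sample
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(8, 32)  # stand-in for a batch of real images
for step in range(200):
    # Discriminator step: score real samples high, generated ones low.
    fake = G(torch.randn(8, 16)).detach()
    d_loss = loss_fn(D(real), torch.ones(8, 1)) + loss_fn(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the discriminator mistakes for real.
    fake = G(torch.randn(8, 16))
    g_loss = loss_fn(D(fake), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```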

AI image generators are not the first AI models to be criticized for biased results. A few years ago, Microsoft created an AI chatbot designed to learn how to hold conversations. While engaging with thousands of Twitter users, the software began to behave abnormally and, in a surprising turn of events, professed hatred for Jewish people and voiced allegiance to Adolf Hitler. Around the same time, Amazon’s experimental hiring tool used artificial intelligence to give job seekers ratings of one to five stars, much as customers rate purchases on Amazon. Amazon’s computer models were trained to screen candidates by studying patterns in resumes submitted to the company over a 10-year period, the majority of which came from men, echoing male dominance in the tech world. As a result, Amazon’s algorithm began to favor male applicants and penalized applications containing the phrase “women’s.”
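A toy experiment makes that mechanism concrete. The sketch below, our own fabricated miniature and in no way Amazon’s actual system, trains a simple text classifier on a “hiring history” that skews male and then inspects the weight the model attaches to the word “women.”

```python
# Toy illustration (fabricated data, not Amazon's system) of how a classifier
# trained on a male-dominated hiring history learns to penalize a gendered token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated toy resumes: historical "hired" examples skew male.
resumes = [
    "software engineer, chess club captain",     # hired
    "developer, rugby team member",              # hired
    "engineer, women's chess club captain",      # rejected
    "developer, women's coding society",         # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# A negative weight means the token lowers a candidate's score.
weight = clf.coef_[0][vec.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.2f}")
```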

There are plenty of stories in the news about how AI has failed to stop, or has contributed to, sexist advertising, misogynistic employment practices, racist criminal justice practices, and the dissemination of misleading information. AI image generators are attracting a lot of hype, as any new technology does, which is reasonable given that they are truly remarkable, ground-breaking tools. But before they become mass-market services, we need to grasp what is happening behind the scenes. Even if these research results are disheartening, at least we have started paying attention to how these biases seep into models. Some companies, such as OpenAI, have addressed the flaws and hazards of their AI image generators in documentation and research, acknowledging that the systems are prone to gender and racial prejudices.

Developers and open-source model users can use this work to identify problems and nip them in the bud before they get worse. In other words, the question is what the scientific and tech community can do to improve things.

For now, it may be better that some AI image generators remain closed-source, so that their capacity to automate image-making and spread racial and gender bias at scale can be reviewed before they are released to the public.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
