
Facebook Lands in Trouble After Its AI Wrongfully Tags Black Men as Primates

Facebook apologized for the error caused by its AI and called it unacceptable!

Facebook has again found itself in hot water: this time, its AI attached a ‘Primates’ label to a video of Black men.

The video, titled “White man calls cops on black men at marina” and posted by The Daily Mail on June 27, 2020, featured clips of Black men in altercations with white civilians and police officers. It showed how a white man was able to have a Black man arrested after claiming he had been harassed.

The problem arose when users who watched the video received an automated prompt from Facebook asking if they wanted to “keep seeing videos about Primates”. The video had nothing to do with monkeys, chimps, or gorillas, even though humans are, strictly speaking, among the many species in the primate family.

After receiving a screenshot of the recommendation, Darci Groves, a former Facebook content design manager, took to Twitter to post about the incident. “This ‘keep seeing’ prompt is unacceptable,” Ms. Groves tweeted, aiming the message at current and former colleagues at Facebook. “This is egregious.”

A product manager for Facebook Watch, the company’s video service, responded by calling the error “unacceptable” and said the company was looking into the underlying cause.

Facebook immediately disabled the topic recommendation feature responsible for the prompt.

“As we have said, while we have made improvements to our AI we know it’s not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations,” Facebook said in response to an AFP inquiry. The company also began investigating the cause of the error to prevent similar incidents in the future.

This is not the first time artificial intelligence has gone wrong in a racist way. A few years ago, a Google algorithm mistakenly classified Black people as “gorillas” in its Photos app, forcing the company to apologize and promise that the issue would be resolved. However, more than two years later, Wired found that Google’s solution was simply to censor the word “gorilla” from searches, while also blocking “chimp,” “chimpanzee” and “monkey.”

In February last year, the BBC mistakenly identified Marsha de Cordova, a Black MP, as fellow MP Dawn Butler. A couple of years earlier, a facial recognition program incorrectly identified a Brown University student as a suspect in the Sri Lanka bombings, prompting death threats against him. And according to Reuters, Chinese President Xi Jinping’s name was rendered on Facebook last year as ‘Mr Shithole’ when translated from Burmese, a Facebook-specific problem that was not reported elsewhere.


According to a recent Mozilla study, 71% of the videos that participants flagged as regrettable had been recommended by YouTube’s own AI. Most recently, last month, it was found that Twitter’s image-cropping algorithm favors faces that are younger, thinner, lighter-skinned, and more feminine.

These incidents highlight that the misidentification and mistreatment of minorities by AI systems are becoming increasingly prevalent. According to a 2018 MIT study of three commercial gender-recognition algorithms, dark-skinned women faced error rates of up to 34%, approximately 49 times higher than those for white men.

While the tragic death of George Floyd brought renewed attention to the biases perpetuated by machines and technology, much remains to be done to detect and remove bias in data. AI can help minimize human bias, but its models are only as good as their training data, which makes this something of a chicken-and-egg problem. It is therefore important to understand how we define bias, and to hold companies accountable for the misuse of AI algorithms.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she is binge-watching Netflix or F1 races!
