
Gauging public confidence in AI-based content moderation tools

With rising negativity online, we increasingly need to rely on AI content moderation tools. But can we trust them?

According to a recent study from Cornell University, people’s perceptions of a moderation decision and of the moderation system itself are shaped both by the type of moderator, human or AI, and by the “temperature” of the harassing content being moderated.

The study, just published in the journal Big Data & Society, used a purpose-built social media website where users could submit food-related content and comment on others’ posts. The site runs on a simulation engine called Truman, an open-source platform that uses pre-programmed bots, developed and managed by researchers, to replicate the activity (likes, comments, posts) of other users. The Truman platform was developed by the Cornell Social Media Lab, led by communication professor Natalie Bazarova, and is named after the 1998 film “The Truman Show.”

The Truman platform gives researchers the social and design flexibility to explore a range of research questions about human behavior on social media, letting them create a controlled yet realistic social media experience for participants. According to Bazarova, Truman has been a valuable tool for her group and other researchers to develop, implement, and test designs and dynamic treatments while gathering and tracking people’s actions on the site.
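Truman’s actual code is available on GitHub (see the end of this article) and its internals are not described in the study write-up, so the following is only a rough Python sketch of the general idea: bot actions are scripted in advance by researchers and replayed on a per-participant clock, so every participant sees the same “social” activity unfold on their own timeline. All class names, fields, and timings here are hypothetical and do not reflect Truman’s real implementation.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical illustration only: Truman's real data model and API differ.
# Each bot action is pre-scripted by researchers and replayed relative to
# the moment a participant joins the site.

@dataclass
class BotAction:
    actor: str             # bot account name
    kind: str              # "post", "comment", or "like"
    target: str | None     # post the comment/like attaches to, if any
    body: str | None       # text for posts/comments
    offset: timedelta      # delay after the participant's signup

SCRIPT = [
    BotAction("jamie_eats", "post", None, "Tried a new ramen spot today!", timedelta(minutes=5)),
    BotAction("chris.k", "comment", "jamie_eats/1", "Looks amazing", timedelta(minutes=12)),
    BotAction("chris.k", "like", "jamie_eats/1", None, timedelta(minutes=12)),
]

def due_actions(script, elapsed: timedelta):
    """Return all scripted bot actions that should be visible to a
    participant who signed up `elapsed` ago."""
    return [a for a in script if a.offset <= elapsed]

if __name__ == "__main__":
    # Fifteen minutes after signup, this participant sees all three actions.
    for action in due_actions(SCRIPT, timedelta(minutes=15)):
        print(action.actor, action.kind, action.body or action.target)
```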

For many digital and media platforms, social media websites, and e-commerce marketplaces, content moderation has become a crucial part of doing business. It involves removing irrelevant, offensive, unlawful, or otherwise inappropriate content deemed unsuitable for the general public.

While AI may not match human proficiency in flagging every piece of offensive content on social media and other websites, it is invaluable when faced with enormous volumes of online data. AI moderation is also comparatively cheap, and it spares human moderators the psychological trauma of viewing hours of inappropriate content.


Nearly 400 participants were told they would be beta testing a new social media platform. They were randomly assigned to one of six experimental conditions, which varied the type of content moderation system (other users; AI; unknown source) and the kind of harassing comment they encountered (ambiguous or clearly harassing).

Participants had to log in at least twice daily for two days. During this time, they were exposed to a harassing comment exchanged between two other users (bots), which was moderated by a person, an AI, or an unidentified source.
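For illustration, the design described above is a 2 × 3 between-subjects experiment (comment ambiguity × moderator source), which amounts to randomly assigning each participant to one of six cells. The minimal sketch below assumes a simple list of participant IDs and roughly balanced cell sizes; the researchers’ actual randomization procedure is not described in this article.

```python
import random
from itertools import product

# The two experimental factors reported in the study:
MODERATOR_SOURCE = ["users", "AI", "unknown"]      # who appears to moderate
COMMENT_TYPE = ["ambiguous", "clearly harassing"]  # tone of the harassing comment

# All six cells of the 2 x 3 between-subjects design.
CONDITIONS = list(product(MODERATOR_SOURCE, COMMENT_TYPE))

def assign(participant_ids, seed=42):
    """Randomly assign each participant to one of the six conditions.
    Shuffling a repeated, balanced list keeps cell sizes roughly equal."""
    rng = random.Random(seed)
    cells = (CONDITIONS * (len(participant_ids) // len(CONDITIONS) + 1))[:len(participant_ids)]
    rng.shuffle(cells)
    return dict(zip(participant_ids, cells))

if __name__ == "__main__":
    assignments = assign([f"p{i:03d}" for i in range(400)])
    print(assignments["p000"])  # one of the six (moderator, comment) pairs
```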

The researchers found that when the content was fundamentally ambiguous, users were more inclined to question AI moderators, particularly how much they could trust the moderation decision and hold the moderation system accountable. For a comment that was more clearly harassing in tone, confidence was roughly the same whether the moderator was an AI, a human, or an unidentified source, and there were no differences in how participants judged the fairness, objectivity, and understanding shown by the moderation process. Overall, the results show that when an AI moderator is visible, people tend to doubt both the decision and the system more, underscoring how hard it is to decide how visible automated content moderation should be in social media environments.

According to Marie Ozanne, the study’s lead author and assistant professor of food and beverage management, both trust in the moderation decision and the perception of system accountability, that is, whether the system is seen as acting in the best interests of all users, are subjective judgments. When there is ambiguity, an AI appears to be questioned more than a human or an unidentified source of moderation.

The researchers propose that future studies examine how social media users would respond if they saw people and AI moderators working together, with computers able to manage vast volumes of data and humans able to read comments and identify subtle linguistic cues. In other words, they are looking to research a hybrid moderation system to understand the complex process of content moderation. This is important because the increasing negativity in the social media landscape has led to the adoption of AI as a content moderator.
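The hybrid idea can be illustrated with a small routing sketch: an automated classifier handles the bulk of the volume, and anything it is not confident about, precisely the ambiguous cases the study suggests people distrust AI on, goes to a human review queue. The classifier, thresholds, and scoring below are placeholders invented for illustration, not part of the study or any particular platform.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    comment: str
    action: str        # "remove", "keep", or "human_review"
    decided_by: str    # "AI" or "human"

def toxicity_score(comment: str) -> float:
    """Toy stand-in for a real toxicity classifier: the fraction of words
    that appear on a tiny blocklist. A production system would use a
    trained model instead."""
    blocklist = {"idiot", "stupid", "trash"}
    words = comment.lower().split()
    return sum(w.strip(".,!?") in blocklist for w in words) / max(len(words), 1)

def moderate(comment: str, remove_above=0.9, keep_below=0.1) -> Decision:
    """The AI decides only when it is confident; ambiguous cases go to humans."""
    score = toxicity_score(comment)
    if score >= remove_above:
        return Decision(comment, "remove", "AI")
    if score <= keep_below:
        return Decision(comment, "keep", "AI")
    # Mid-range scores are the ambiguous cases routed to human reviewers.
    return Decision(comment, "human_review", "human")

if __name__ == "__main__":
    for c in ["You are an idiot and your post is trash.",
              "Interesting take, not sure I agree."]:
        print(moderate(c))
```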

It is natural that participants questioned the AI moderators when presented with ambiguous content. This largely reflects concerns that fully automated moderation would be overly blunt and would unintentionally restrict the right to create and circulate legitimate information.

While NLP has made great strides in parsing content, AI systems still struggle to interpret context. They cannot reliably grasp basic human devices such as sarcasm and irony, nor the political and cultural context, both of which shift over time and vary from region to region.


Until now, AI has aided content moderation mainly by using visual recognition to identify broad classes of objectionable content (such as nudity or graphic accidents), or by matching uploads against lists of prohibited material, such as propaganda videos, child sexual abuse imagery, and copyrighted works, an approach that requires humans to compile the list in the first place. In both cases, AI has falsely flagged posts that merely discuss ethnicity, sexual identity, and culture, eroding people’s confidence in AI-based content moderation.
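In its simplest form, the list-matching approach mentioned above is hash comparison: known prohibited items are fingerprinted once by human reviewers, and new uploads are checked against those fingerprints. The sketch below, an illustration rather than any platform’s actual system, uses plain SHA-256, which only catches byte-identical copies; real deployments typically use perceptual hashing (PhotoDNA-style) so that re-encoded or lightly edited copies still match.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of an uploaded file's bytes.
    SHA-256 catches only identical copies; perceptual hashes are needed
    to catch near-duplicates."""
    return hashlib.sha256(data).hexdigest()

# Blocklist compiled in advance by human reviewers, as noted above.
PROHIBITED_HASHES = {
    fingerprint(b"known prohibited file contents"),  # placeholder entry
}

def is_prohibited(upload: bytes) -> bool:
    """Flag an upload if its fingerprint matches a known prohibited item."""
    return fingerprint(upload) in PROHIBITED_HASHES

if __name__ == "__main__":
    print(is_prohibited(b"known prohibited file contents"))  # True
    print(is_prohibited(b"some unrelated upload"))           # False
```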

At the same time, we need AI for content moderation given the pace at which new content is created; humans cannot flag every message in a real-time conversational exchange. Further, studies have shown that constant exposure to online toxicity has detrimental effects on the human mind, ranging from PTSD among moderators to real-world violence (for example, the violence fueled by religious nationalism and Islamophobia in Myanmar).

The only solution is to build a ‘responsible, fair, trustworthy, and ethical’ AI system that is adept at content moderation with the help of humans.

Truman can be downloaded for free from its public GitHub repository, and Cornell encourages other researchers to design and run their own studies with it. The university has also released a PDF guide covering installation and use of Truman for research.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
