In a study published in Science Advances, researchers found that tweets written by artificial intelligence (AI) language models can be more convincing to people than tweets written by humans. The finding raises concerns about the potential impact of AI-generated disinformation on public perception and trust.
To conduct the study, the researchers turned to OpenAI's well-known language model, GPT-3. The model was prompted to generate tweets containing either accurate or misleading information on topics prone to misinformation, such as vaccines, 5G technology, Covid-19, and the theory of evolution. Real tweets on the same subjects were also collected for comparison.
A total of 697 people took part in an online quiz in which they had to judge whether each tweet was AI-generated or collected from Twitter, and whether its content was accurate. Surprisingly, participants were 3 percent less likely to believe false tweets written by humans than false tweets generated by AI.
Giovanni Spitale, the lead researcher from the University of Zurich, expressed uncertainty about the reasons behind this phenomenon. However, the study suggested that the way GPT-3 structures information might contribute to the credibility of AI-generated content. Notably, participants found it difficult to differentiate between AI-generated and organic tweets, highlighting the advanced capabilities of GPT-3.
Participants were better at identifying misinformation in tweets written by real Twitter users than in tweets generated by GPT-3; the model's false tweets deceived them slightly more often.
The study serves as a wake-up call: it highlights how persuasive AI-generated content can be and the challenge this poses for shaping public opinion. As AI technology continues to advance, it becomes crucial to develop strategies and safeguards against the spread of AI-generated disinformation.