Friday, November 1, 2024

Deepfake Detection Technology: Where do we stand?

The year 2022 has seen too many Deepfakes acting as vectors of misinformation. Do we stand a chance in fighting them?

A team of researchers from the Johannes Kepler Gymnasium and the University of California, Berkeley, has created an artificial intelligence (AI) application that can determine whether a video clip of a famous person is real or a deepfake.

In their study, published in Proceedings of the National Academy of Sciences, researchers Matyáš Boháček and Hany Farid describe training their AI system to recognize an individual's distinctive physical movements in order to determine whether a video is genuine. Their work builds on earlier research in which a system was trained to recognize the deepfake features and head movements of prominent political figures, including former U.S. President Barack Obama.

Deepfakes, a portmanteau of the terms “deep learning” and “fake,” first appeared on the Internet in late 2017, powered by generative adversarial networks (GANs), then an exciting new deep learning technology. Today, deepfakes are pervasive across the internet.

Consider the following scenario: a friend sends you a video of a celebrity. You see the celebrity making an absurd statement, having a dance-off, or engaging in ethically questionable activity. Whether intrigued or shocked, you forward the video to your other friends, only to discover later that it is fake. Now think back to when you first watched the video. Perhaps you assumed it was real because it looked completely authentic. Unlike earlier deepfake videos, which were quickly debunked in the previous decade, today's GANs are powerful enough to create deepfake content in which the human eye cannot discern that the media has been manipulated.

In February, a study published in the Proceedings of the National Academy of Sciences USA found that people often judge deepfake images to be more realistic than actual ones. Researchers at the Stanford Internet Observatory reported in March that they had found over 1,000 LinkedIn accounts with profile photos that appeared to be AI-generated. Such instances highlight the importance of developing tools that can identify deepfake content online.

Last month, Intel introduced new software that it claims can recognize deepfake videos instantly. With a claimed 96% accuracy rate and a millisecond response time, the company asserts that its “FakeCatcher” real-time deepfake detector is the first of its kind in the world.

In their current research, Boháček and Farid trained a computer model on more than eight hours of authentic video footage of Ukrainian President Volodymyr Zelenskyy. The work was prompted by a widely circulated deepfake showing Zelenskyy saying things he did not say; according to reports, that video was produced to help the Russian government persuade its public to believe state propaganda about the invasion of Ukraine.

Inspired by a previous research study in which AI could identify deepfakes by analyzing the jagged edges of the pupils of the human eye, Boháček and Farid noted at the outset that people have other distinctive qualities beyond physical markings or facial features, one of which is body movement. For instance, they discovered Zelenskyy's tendency to raise his right eyebrow whenever he lifts his left hand. Using this insight, they built a deep-learning AI system that analyzes a subject's physical gestures and movements by reviewing video footage of Zelenskyy. Over time, the system became more adept at identifying actions that people are unlikely to notice but that collectively are unique to the video's subject, allowing it to recognize when something does not match a person's regular patterns.

The detection system analyzes up to 780 behavioral characteristics as it examines many 10-second segments taken from a single video. If it flags many segments from the same video as fake, it alerts human experts to take a closer look.
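The screening logic described above can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the function names, the toy classifier, and both thresholds are assumptions made for the example, and the real system would extract its hundreds of behavioral features from actual video frames.

```python
from typing import Callable, List

def screen_video(segments: List[list],
                 classify_segment: Callable[[list], float],
                 segment_threshold: float = 0.5,
                 flag_fraction: float = 0.3) -> bool:
    """Return True if the video should be escalated to human experts.

    Each element of `segments` stands in for the feature vector (up to
    780 behavioral characteristics in the paper) extracted from one
    10-second clip. `classify_segment` maps a feature vector to a
    fake-probability in [0, 1]. The video is escalated when the fraction
    of segments scoring above `segment_threshold` exceeds
    `flag_fraction`. Both thresholds are illustrative assumptions.
    """
    if not segments:
        return False
    flagged = sum(1 for s in segments
                  if classify_segment(s) > segment_threshold)
    return flagged / len(segments) > flag_fraction

# Toy stand-in classifier: pretend a negative mean feature value signals
# a mismatch with the person's usual movement patterns.
toy_classifier = lambda feats: 0.9 if sum(feats) / len(feats) < 0 else 0.1

real_video = [[0.2, 0.5], [0.1, 0.3], [0.4, 0.2]]      # all segments look normal
fake_video = [[-0.3, -0.1], [-0.5, -0.2], [0.4, 0.2]]  # two of three segments flagged

print(screen_video(real_video, toy_classifier))  # False
print(screen_video(fake_video, toy_classifier))  # True
```

The per-segment design matters: a single odd clip can be noise, but many suspicious segments from the same video are a strong signal worth human review.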

The researchers then tested their system on multiple deepfake videos alongside genuine videos of various people. Comparing facial, gestural, and vocal features of an individual, separately and in combination, against various datasets, they obtained true positive rates of 95.0%, 99.0%, and 99.9%, indicating that the system reliably distinguishes real videos from deepfakes. It also correctly identified the Zelenskyy video as fabricated.
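For readers unfamiliar with the metric: a true positive rate is the fraction of actual deepfakes that the detector correctly flags. A quick illustration with made-up numbers (not data from the paper):

```python
def true_positive_rate(labels, predictions):
    """TPR = correctly flagged positives / all actual positives.

    Here 1 means "deepfake" and 0 means "real". Labels are ground truth;
    predictions are the detector's outputs.
    """
    positives = [p for label, p in zip(labels, predictions) if label == 1]
    if not positives:
        raise ValueError("no positive examples in labels")
    return sum(positives) / len(positives)

labels      = [1, 1, 1, 1, 0, 0]   # 4 deepfakes, 2 real videos
predictions = [1, 1, 1, 0, 0, 1]   # detector output: misses one deepfake
print(true_positive_rate(labels, predictions))  # 0.75
```

Note that the TPR says nothing about false alarms on real videos; a complete evaluation would also report the false positive rate.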

Though this is exciting and comforting news, there is a catch. While the success rates of deepfake detection tools are encouraging, misinformation and misleading content will not simply disappear. As AI becomes more adept at recognizing deepfakes, the same technologies are also enabling the creation of more powerful deepfakes that can evade existing detectors. These detection tools therefore offer only a partial answer to the threat, but they do give us a fighting chance to minimize the harm caused by deepfake content.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
