
Misinformation due to Deepfakes: Are we close to finding a solution?

Misinformation driven by deepfakes is on the rise every day, but are we doing enough to curb its ugly side?

Deepfakes have taken the internet by storm, sweeping up celebrities and politicians in fabricated depictions of things that never happened.

Deepfakes are made with a generative adversarial network (GAN), and the technology has advanced to the point where it is becoming nearly impossible to tell the difference between a genuine human face and one created by a GAN model. Even though this technology has some commercial potential, it also has a malevolent side that is considerably more terrifying and carries major ramifications.
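To make the adversarial setup concrete, here is a deliberately tiny PyTorch sketch of one GAN training step: a generator learns to produce samples that a discriminator cannot distinguish from real ones. The layer sizes and the random stand-in "real" data are illustrative assumptions; production face-synthesis GANs are vastly larger.

```python
# Minimal GAN training loop in PyTorch -- a toy illustration of the
# generator-vs-discriminator game, not a face-synthesis model.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for real training data
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into outputting 1.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks improve together, the generator's outputs become progressively harder to distinguish from real data, which is precisely why mature deepfakes are so convincing.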

Typically, it is easy to discern whether content found online is a deepfake or the real thing. Most deepfake videos on the internet, for instance, are made by amateurs and are unlikely to deceive anyone: deepfakes routinely produce blurring or flickering artifacts, especially when the face changes angle quickly.

Another telltale sign is that in deepfakes the eyes frequently move independently of one another. To mask these flaws, deepfake videos are usually published at low resolution.
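These visual cues can even be quantified. As a toy illustration, the sketch below computes the classic eye aspect ratio (EAR), which drops toward zero when an eye closes, and flags frames where the two eyes are implausibly out of sync. It assumes the six eye-contour landmarks per eye come from an external face-landmark detector such as dlib's 68-point model, and the 0.1 threshold is an illustrative assumption rather than a validated detector.

```python
# Eye aspect ratio (EAR) heuristic: a crude, illustrative deepfake cue.
# Landmarks are assumed to come from an external face-landmark detector
# (e.g. dlib's 68-point model); each eye is six (x, y) points ordered
# corner, upper lid x2, corner, lower lid x2.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2) with landmark coordinates."""
    vert1 = np.linalg.norm(eye[1] - eye[5])   # upper/lower lid pair 1
    vert2 = np.linalg.norm(eye[2] - eye[4])   # upper/lower lid pair 2
    horiz = np.linalg.norm(eye[0] - eye[3])   # eye-corner distance
    return (vert1 + vert2) / (2.0 * horiz)

def eyes_out_of_sync(left: np.ndarray, right: np.ndarray,
                     threshold: float = 0.1) -> bool:
    """Flag frames where one eye is far more closed than the other --
    suggestive, though never conclusive, of manipulation."""
    return abs(eye_aspect_ratio(left) - eye_aspect_ratio(right)) > threshold
```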

Unfortunately, with the advanced tools available online today, anyone with basic editing skills can produce deepfakes that are nearly perfect, or at least look real to the untrained eye. Similar face-manipulation techniques have also helped filmmakers reshape actors' features to fit a character: Samuel L. Jackson, for instance, was digitally de-aged by roughly 25 years for the movie Captain Marvel.

Deepfake apps like FaceApp can be used to make a photograph of President Biden appear feminine. Reface is another popular face-swapping application that lets users swap faces with celebrities, superheroes, and meme characters to produce humorous video clips. Earlier this year, the Israel Defense Forces' musical ensembles teamed up with a company specializing in deepfake filmography to bring photographs from the 1948 Israeli-Arab war to life for Israel's Memorial Day.

The US government is particularly wary that deepfakes may be used to propagate misinformation and commit crimes, because deepfake developers can make people appear to say or do anything they want and release the manipulated content online. This year, for instance, the Dutch parliament's foreign affairs committee was duped into conducting a video chat with someone impersonating Leonid Volkov, chief of staff to the imprisoned Russian anti-Putin politician Alexei Navalny. The potential harms go beyond distributing fake news: political unrest, a rise in cybercrime, revenge porn, phony scandals, and an increase in online harassment and abuse. Even video footage presented as evidence in court could be rendered worthless.

At the same time, deepfake videos will become a bigger issue as GPUs grow more powerful and cheaper, even though the underlying techniques are still at an early stage of research. The commercialization of AI tools will also lower the barrier to generating deepfakes, potentially enabling real-time impersonations that can bypass biometric systems.

The FBI has even issued a warning that "malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months," citing fake videos of Obama calling Donald Trump a "complete dipshit," Mark Zuckerberg bragging about having "total control of billions of people's stolen data," and a fake Tom Cruise on TikTok claiming to make music for his movies. Any modified content, visual (videos and images) or verbal (text and audio), may be classified as synthetic content, including deepfakes.

According to a recent MIT study, Americans are more inclined to believe a deepfake video than fake news in text form, though the videos had no measurable effect on their political views. The researchers are also quick to caution against drawing too many inferences from the data: the settings in which the study's trials were carried out may not be representative of the situations in which US voters are actually likely to be misled by deepfakes.

Hence, calls to develop tools for the early detection and prevention of the mass spread of treacherous deepfakes grow louder every year. Microsoft has released Video Authenticator, a tool that evaluates a still photo or video and assigns a confidence score indicating how likely it is that the material has been artificially manipulated.
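Microsoft has not published Video Authenticator's internals, but the general shape of such a tool, scoring sampled frames with a manipulation-confidence model and aggregating the results, might look something like the sketch below. The classify_frame placeholder and the max-score aggregation are assumptions for illustration, not Microsoft's implementation.

```python
# Illustrative per-frame manipulation scoring -- NOT Microsoft's
# Video Authenticator, whose internals are not public.
import cv2          # pip install opencv-python
import numpy as np

def classify_frame(frame: np.ndarray) -> float:
    """Placeholder: a trained detector would return the probability
    that this frame is manipulated. Stubbed to 0.0 here."""
    return 0.0

def score_video(path: str, sample_every: int = 10) -> float:
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            scores.append(classify_frame(frame))
        idx += 1
    cap.release()
    # Report the max rather than the mean: one clearly manipulated
    # stretch should dominate an otherwise clean video.
    return max(scores) if scores else 0.0
```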

In September 2019, Google released a large dataset of visual deepfakes as part of the FaceForensics++ benchmark, with the goal of improving deepfake identification. Since then, the dataset has been widely used to build deepfake detection systems in deep learning research. FaceForensics++ focuses on two particular families of manipulation: facial expression and facial identity. While the reported results appear encouraging, researchers discovered a problem: when the same models were applied to real-world deepfakes found on YouTube (i.e., data not included in the paper's dataset), their accuracy dropped drastically. In other words, the detectors fail on deepfakes created with techniques they were never trained to recognize.
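Detection systems built on FaceForensics++ are commonly frame-level classifiers fine-tuned on face crops labelled real or fake. The sketch below shows a minimal version of that recipe; the frames/{real,fake} directory layout, the ResNet-18 backbone (published baselines often use XceptionNet instead), and the hyperparameters are all assumptions.

```python
# Fine-tuning a frame-level real/fake classifier -- a minimal sketch
# of the common FaceForensics++ recipe, not any specific paper's code.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: frames/real/*.png and frames/fake/*.png, pre-cropped faces.
tf = transforms.Compose([transforms.Resize((224, 224)),
                         transforms.ToTensor()])
data = datasets.ImageFolder("frames", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # real-vs-fake head
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

Notably, exactly this kind of classifier exhibits the generalization gap described above: it can score well on held-out frames from its own dataset yet fall apart on deepfakes produced by unseen techniques.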

A few months ago, Facebook also unveiled a sophisticated AI-based system that can not only identify deepfakes but also reverse-engineer the generative model used to produce the manipulated media. Built in collaboration with academics from Michigan State University (MSU), the innovation is noteworthy because it could help Facebook track down the bad actors distributing deepfakes across its social media platforms. Such content may include disinformation as well as non-consensual pornography, an all-too-common use of deepfake technology. The work is still at the research stage and is not yet ready for deployment.

The reverse-engineering technique starts with image attribution and then moves on to detecting properties of the model that was used to produce the image. These properties, referred to as hyperparameters, must be tuned for each machine learning model, and they leave a distinct fingerprint on the generated image that can be used to identify its origin.
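Facebook and MSU have not released this pipeline as code, but the fingerprint intuition can be illustrated with a toy analogue: isolate the high-frequency residual a generator leaves behind by subtracting a denoised copy of the image, then correlate that residual against reference fingerprints of known generators. Everything below, from the Gaussian denoiser to the correlation threshold, is an illustrative assumption.

```python
# Toy analogue of fingerprint-based attribution -- not the FB/MSU system.
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_fingerprint(image: np.ndarray) -> np.ndarray:
    """image: 2-D grayscale array (assumed). The residual left after
    subtracting a smoothed copy carries the high-frequency patterns
    where generative models tend to leave characteristic traces."""
    residual = image - gaussian_filter(image, sigma=2.0)
    return residual / (np.linalg.norm(residual) + 1e-8)

def attribute(image: np.ndarray, references: dict) -> str:
    """Return the known generator whose reference fingerprint correlates
    best with this image, or 'unknown' below an (assumed) threshold."""
    fp = extract_fingerprint(image)
    best_name, best_corr = "unknown", 0.25   # illustrative threshold
    for name, ref in references.items():
        corr = float(np.sum(fp * ref))       # normalized cross-correlation
        if corr > best_corr:
            best_name, best_corr = name, corr
    return best_name
```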

At the same time, Facebook argues that addressing deepfakes requires going a step beyond current practice. Reverse engineering is not a new notion in machine learning: existing techniques can recover a model by evaluating its input and output data, along with hardware statistics such as CPU and memory consumption. Those strategies, however, rely on prior knowledge about the model, which limits their usefulness when no such information is available.

The winners of Facebook's Deepfake Detection Challenge, which concluded last June, developed a system that detects manipulated videos with an average accuracy of 65.18 percent. At the same time, deepfake detection technology is not always accessible to the general public, and it cannot be integrated across every platform where people consume media.


Amid these concerns and developments, deepfakes are, surprisingly, not subject to any dedicated rules or regulations in the majority of countries. Nonetheless, legislation such as the Privacy Act, the Copyright Act, and the Human Rights Act, along with guidelines on the ethical use of AI, offers some protection.

Though researchers around the globe are not yet close to a complete solution, they are working around the clock on robust mitigation technology to tackle the proliferation of deepfakes. Even if deepfakes have not so far caused major harm in shaping the political opinion of the masses, it is better to have tools in the arsenal to detect such content in the future.

It is true that deepfakes are becoming easier to make and more difficult to detect. However, organizations and individuals alike should be aware that technologies are being developed not only to detect harmful deepfakes but also to make it harder for malicious actors to propagate them. Even though existing tools post modest average accuracy rates, their ability to detect coordinated deepfake attacks and to identify the origins of deceitful deepfakes indicates that progress is being made in the right direction.

