On Wednesday, Facebook and Michigan State University (MSU) announced a new artificial intelligence technique that can detect deepfake images and videos and identify the generative model used to create them.
Deepfakes are fake images or videos created from existing media using deep learning techniques. The technology has become so effective that it is often nearly impossible to tell whether a picture or video is genuine or a deepfake.
Although Facebook banned deepfakes in January 2020, they remain a threat to user security. As the most widely used social media platform, Facebook is a natural home for deepfakes.
Facebook's new detection method is called the 'Fingerprint Estimation Network'. It is based on the generative fingerprints left behind when a deepfake is created. Much like human fingerprints, these are unique and can be traced back to the generative model that produced a particular deepfake. The technique picks them up from a deepfake image, or even a single video frame, and traces them back to the originating model.
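To make the idea concrete, here is a toy sketch of fingerprint estimation and matching. The real system uses a trained neural network; this stand-in simply treats the high-frequency residual of an image as its "fingerprint" and matches it against reference fingerprints by correlation. All function names and the residual-based approach are illustrative assumptions, not Facebook's actual implementation.

```python
import numpy as np

def estimate_fingerprint(image: np.ndarray) -> np.ndarray:
    """Toy fingerprint estimator: return the high-frequency residual
    left after subtracting a 3x3 box-blurred copy of the image.
    (The real method uses a learned fingerprint estimation network.)"""
    padded = np.pad(image, 1, mode="edge")
    # Average the 9 shifted views to get a 3x3 box blur.
    blurred = sum(
        padded[i:i + image.shape[0], j:j + image.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return image - blurred

def match_model(fingerprint: np.ndarray, known: dict) -> str:
    """Return the name of the known generative model whose reference
    fingerprint correlates most strongly with the estimated one."""
    def corr(a, b):
        a = a.ravel() - a.mean()
        b = b.ravel() - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(known, key=lambda name: corr(fingerprint, known[name]))
```

In this sketch, attribution reduces to a nearest-match search over reference fingerprints, which mirrors the article's claim that fingerprints can be "traced back" to the model that produced them.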
Once the generative fingerprints are detected, the technique uses a reverse-engineering method called the 'model parsing approach' to identify the original generative model. Since most of the generative models used to fabricate deepfakes are known, the method works well; even when the generative model is unknown, the AI can still characterize it, says the Facebook research team.
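A minimal sketch of what "model parsing" might look like, under heavy simplifying assumptions: summarize a fingerprint with a few hand-picked statistics, then look up the nearest known model family and return its hyperparameters. The feature choices, family names, and hyperparameter fields below are all hypothetical; the real system predicts architecture and training details with a learned parsing network.

```python
import numpy as np

def fingerprint_features(fingerprint: np.ndarray) -> np.ndarray:
    """Summarize a fingerprint with three simple statistics:
    standard deviation, mid/high-frequency energy ratio, and skewness."""
    spectrum = np.abs(np.fft.fft2(fingerprint))
    h, w = spectrum.shape
    high = spectrum[h // 4: 3 * h // 4, w // 4: 3 * w // 4].sum()
    centered = fingerprint - fingerprint.mean()
    skew = (centered ** 3).mean() / ((centered ** 2).mean() ** 1.5 + 1e-12)
    return np.array([fingerprint.std(), high / (spectrum.sum() + 1e-12), skew])

def parse_model(features: np.ndarray, reference: dict) -> dict:
    """Nearest-centroid lookup: return the hyperparameter record of the
    known model family whose feature centroid is closest. For an unseen
    model this yields the closest known family rather than failing,
    echoing the claim that unknown models can still be characterized."""
    best = min(reference,
               key=lambda k: np.linalg.norm(features - reference[k]["centroid"]))
    return reference[best]["hyperparams"]
```

The design choice worth noting is that nearest-centroid matching always returns *some* answer, so an unseen generative model is described in terms of its closest known relative rather than rejected outright.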
To test the efficacy of the new AI technique, Facebook ran 100,000 deepfakes, created using 100 different generative models, through the system. Facebook says the methodology, developed together with MSU, performs significantly better than earlier approaches to detecting and removing deepfakes.
The Facebook and MSU research teams have also said they are considering open-sourcing the datasets, code, and trained models used to identify generative models, in order to advance research into deepfake detection.