Wang Weimin, a research scientist from Singapore, has won a $100,000 prize in a challenge to develop artificial intelligence (AI) models that can detect deepfakes.
Wang single-handedly beat 469 other participants to win the award. The Trusted Media Challenge was organized by AI Singapore, the office of the National Research Foundation's national artificial intelligence programme.
It was a five-month-long competition that required participating teams to build AI models for detecting deepfakes or digitally modified videos.
Wang, a National University of Singapore graduate, developed an algorithm that distinguished original videos from those with digitally altered faces or audio with an accuracy of 98.53 percent.
Wang was also offered a $300,000 start-up grant to commercialize the technology he developed. Deepfakes are fabricated media in which a person's appearance in an existing photograph or video is replaced with someone else's using artificial intelligence and machine learning.
This technology has gained immense popularity over the years, and Wang says, “Deepfakes, good or bad, is an emerging technology that you simply cannot ignore.” Deep learning is used to make deepfakes, and it involves training generative neural network designs like autoencoders.
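The autoencoder training mentioned above can be sketched in miniature. The following is an illustrative example only, not Wang's detection model or any production deepfake pipeline: it shows the forward pass of a toy autoencoder that compresses a flattened face image into a small latent code and reconstructs it. All names and sizes here are assumptions chosen for the sketch; a real system would train the weights by minimizing reconstruction error, and face-swap pipelines typically pair one shared encoder with per-identity decoders.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, latent = 64 * 64 * 3, 128  # flattened 64x64 RGB image, small code

# Randomly initialised weights for the sketch; a real pipeline would
# learn these by gradient descent on reconstruction loss.
W_enc = rng.normal(scale=0.01, size=(dim, latent))
W_dec = rng.normal(scale=0.01, size=(latent, dim))

def encode(x):
    # Compress images to a low-dimensional latent code (ReLU activation).
    return np.maximum(x @ W_enc, 0.0)

def decode(z):
    # Reconstruct pixel values in [0, 1] from the latent code (sigmoid).
    return 1.0 / (1.0 + np.exp(-(z @ W_dec)))

x = rng.random((4, dim))            # a batch of four stand-in "images"
recon = decode(encode(x))
loss = np.mean((recon - x) ** 2)    # reconstruction objective
print(recon.shape)                  # → (4, 12288)
```

Swapping which decoder receives the latent code is what lets such systems re-render one person's face with another's appearance.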
However, apart from being used to create entertainment content, the technology has also been used multiple times for spreading misinformation across the globe.
Wang said, “Technology is not just part of the problem, it can also be part of the solution. However, technology cannot be the only solution to misinformation. It must be accompanied by a broader set of measures across society.”
Wang said he decided to enter the competition because the challenges currently facing the media align with his research interests, and because he is passionate about using AI to solve real-world problems.