On Friday, China’s internet regulator, the Cyberspace Administration of China (CAC), announced draft guidelines for content providers that modify face and voice data, the latest step in the country’s fight against “deepfakes.” The CAC also called for the creation of a cyberspace that supports Chinese socialist ideals. The draft regulations are open for public comment through February 28, and the final version remains subject to revision.
According to the CAC’s statement, fraudsters will be increasingly motivated to employ digitally generated voice, video, chatbots, and facial or gesture manipulation content in the coming years. The proposal therefore prohibits the use of such fakes in any application that might upset the social order, infringe on people’s rights, spread false information, or portray sexual activity. It also advises obtaining permission before what China refers to as “deep synthesis” can be used for legal purposes. The draft defines deep synthesis as “Using deep learning and virtual reality to generate and synthesize algorithms to produce text, images, audio, video, virtual scenes, and other information.”
The proposal, titled the “Internet Information Service Deep Synthesis Management Regulations,” vows to control technologies that generate deepfakes. Under the draft, deepfake service providers must authenticate their users’ identities before granting them access to relevant services. Companies are also obliged to follow the correct political direction and respect social morality and ethics. The regulations make it illegal to create deepfakes without the consent of the person or persons who appear in them. The proposal further includes a user complaints system and procedures to prevent deepfakes from being used to spread false information, and providers of deep synthesis technology can be required to suspend or delete their apps.
Under the draft, deep synthesis service providers are also required to strengthen training data management, ensure legal and proper data processing, and take the necessary steps to safeguard data security.
According to the proposal, if the training data contains personal information, providers must also adhere to the corresponding personal information protection regulations, and personal information must not be processed illegally. As per Article 12 of the draft, “Where a deep synthesis service provider provides significant editing functions for biometric information such as face and human voice, it shall prompt them (provider) to notify and obtain the individual consent of the subject whose personal information is being edited.”
For first-time violators, the draft rules mandate fines of 10,000 to 100,000 yuan (US$1,600 to US$16,000), although violations can also result in civil and criminal lawsuits.
China has been grappling with the challenge of regulating deepfakes, which have taken the nation by storm in the past few years. For instance, in August 2019, an app called ZAO went viral by allowing users to swap their faces with those of celebrities. Meanwhile, some Chinese individuals have paid for deepfake videos in which a face of their choosing, whether a celebrity or a person they know, is superimposed on the body of a porn star. Avatarify, a Russian AI app that turns static portrait images into videos, became popular on Douyin, China’s equivalent of TikTok, in February last year. Chinese users were quick to find creative ways to exploit the software for humorous videos, including one in which Elon Musk and Jack Ma appeared to be singing the famous tune Dragostea Din Tei in unison.
According to a deepfake white paper issued by Nisos, a cybersecurity intelligence organization, the three countries where deepfake developers are most prominently based are Russia, Japan, and China.
Alarmed by the growing popularity of deepfakes, Chinese regulators last March summoned 11 domestic technology companies, including Alibaba Group, Tencent, and ByteDance, for talks on the use of ‘deepfake’ technologies on their content platforms. Regulators also instructed the companies to conduct their own security assessments and submit reports to the government when adding new functionalities or new information services that “have the ability to mobilize society.”
While the current draft won’t immediately trigger action against deepfakes and deep synthesis service providers, it positions the government to respond in an age of manipulated, misleading content.