
China releases Guidelines on AI ethics, focusing on User data control

It is a no-brainer that China has begun its crusade to outshine the rest of the world in the global AI tech industry. As tensions with the USA continue, both nations are striving to become the world leader in AI by rapidly developing breakthrough technologies that will revolutionize the field and secure their dominance. While China earlier prioritized innovation, it recently released its first set of ethical guidelines governing artificial intelligence. Not only do the guidelines emphasize protecting user rights and preventing risks, they also align with Beijing’s goals of reining in Big Tech’s influence and becoming the global AI leader by 2030.

The guidelines, titled “New Generation Artificial Intelligence Ethics Specifications,” were prepared by an AI governance group created under China’s Ministry of Science and Technology (MOST) in February 2019. In June of that year, the group presented a set of guiding principles for AI governance, which were significantly shorter and broader than the recently disclosed criteria. The document was issued last Sunday.

In 2017, China announced its AI Development Plan (AIDP), aiming to make itself an AI powerhouse by 2030, surpassing its competitors to become “the world’s top artificial intelligence innovation hub.” The country also wants to turn AI into a trillion-yuan business and become the driving force behind the development of ethical AI rules and standards. Following that, the Chinese government backed its AI initiatives with substantial funding. All these factors were pivotal in boosting China’s global share of AI research publications from 4.26 percent (1,086) in 1997 to 27.68 percent (37,343) in 2017, outpacing every other country in the world, including the United States.

But simply laying out a bevy of milestones is not enough. SenseTime, Unisound, iFLYTEK, and Face++ are just a few of China’s world-leading businesses in computer vision, speech recognition, and natural language processing. The country also benefits from its vast population, which presents an enormous potential workforce and unique opportunities to train AI systems, such as large patient datasets for training software to predict disease. While the United States has open-source platforms like TensorFlow and Caffe to drive innovation, Baidu’s PaddlePaddle is primarily used in China for the rapid development of AI products.

However, China falls behind in hardware and has been widely criticized for using AI to monitor citizens, especially the Uighur Muslim community in Xinjiang.

In recent years, many governments, research groups, and tech behemoths have released ethical standards, principles, and suggestions for the ethical use of AI. It is critical to create AI governance technologies, such as AI interpretation, rigorous AI safety testing and verification, and AI ethical evaluation, to enforce these principles in existing AI systems and products. This is necessary because many AI technologies are still in the early stages of development and are not yet ready for widespread commercial use. Some of the factors that go into building ethical AI can be found in guidelines and legal literature; security and privacy, safety and dependability, openness, responsibility, and fairness are among them.


While the trade war between the USA and China is likely to continue in the coming years, both nations need to find common ground when addressing ethical AI concerns.

In 2019, the Beijing AI Principles were released by the Beijing Academy of Artificial Intelligence (BAAI), supported by the Chinese Ministry of Science and Technology and the Beijing city government. They stated that “human privacy, dignity, freedom, autonomy, and rights should be properly respected” as guiding principles for AI research and development.

The Cyberspace Administration of China (CAC), China’s internet watchdog, published proposed regulations in August this year to govern the use of algorithmic recommender systems by online information services. The recommendations are the most thorough effort to govern recommender systems by any country to date, and they might serve as a model for other countries considering similar laws. Unfortunately, this three-year plan is also an attempt to limit the use of algorithms, signaling Beijing’s latest move to strictly control the country’s internet economy.

The latest move by the Chinese government is another attempt to exercise control over the nation’s tech sector without putting user security and privacy at risk. The idea is to give users control over how their interactions with AI are handled. Hence, data security, personal privacy, and the right to opt out of AI-driven decision-making are also covered in the document.

The document states that preventing risks necessitates identifying and fixing technical and security vulnerabilities in AI systems while ensuring that relevant organizations are held accountable and that AI product quality management and control are enhanced. The guidelines also prohibit AI products and services from engaging in unlawful actions or placing national, public, or manufacturing security at risk. They should also not be allowed to undermine the public interest, according to the document.

To summarize, protecting user privacy and data should be paramount for every nation, and China is beginning to take its first steps in this direction.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
