European Union policymakers reached an agreement on the AI Act, a comprehensive law aimed at regulating Artificial Intelligence (AI). This groundbreaking legislation serves as a global standard, balancing the potential benefits of AI with efforts to mitigate associated risks such as job automation, online misinformation, and threats to national security.
The law focuses on regulating the riskiest applications of AI in both corporate and government settings, particularly in law enforcement and essential services such as water and energy.
The AI Act introduces new transparency requirements for creators of major general-purpose AI systems. Additionally, the guidelines specify that any content created by AI, such as deepfakes, must be clearly labeled as AI-generated.
The law also restricts the use of facial recognition technology, a longstanding concern for the EU, by law enforcement and governments, except in specific safety and national security scenarios. Companies found violating these regulations could face fines of up to 7 percent of their global sales.
The final agreement followed three days of negotiations in Brussels, including a 22-hour session that began Wednesday afternoon and stretched into Thursday, but its full text was not immediately disclosed. Further discussions were anticipated to finalize technical details, potentially delaying the ultimate approval process.
The legislation requires votes in both the European Parliament and the European Council, which represents the union's 27 member states. Regulating the responsible use of AI has been a longstanding goal of the EU, which first announced its intention to draft such a policy in August 2022.