EU reaches the world's first provisional agreement providing comprehensive regulation of AI
The European Union (EU) recently celebrated another policymaking victory. After extensive negotiations, on Friday, December 8, the EU reached a provisional agreement on the Artificial Intelligence Act (AIA). When enacted, the AIA will become the world’s first comprehensive regulation of artificial intelligence (AI), setting a new global benchmark for the rapidly evolving technology.
Below are key elements of the AIA worth highlighting:
Artificial Intelligence Act definition of AI:
The AIA sets out a broad definition of AI. The definition is intended to be “future proof,” able to accommodate technological advancements. The AIA would apply not only to AI providers and users in the EU, but also to those located outside of the EU if the output of their AI system is used in the EU.
Risk categories of AI:
The AIA also categorizes AI systems into four risk categories, each with different regulatory requirements. Unacceptable-risk and high-risk systems are of most concern.
The AIA proposes a ban on AI practices deemed to pose an unacceptable risk, including AI systems that exploit the vulnerabilities of specific groups of people. The prohibited practices include:
• Biometric categorization systems that use sensitive information, such as political affiliation, religious beliefs, philosophical beliefs, sexual orientation, and race.
• Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
• Emotion recognition in the workplace and educational institutions.
• Social scoring based on social behavior or personal characteristics.
• AI systems that manipulate human behavior to circumvent users’ free will.
• AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic status.
AI systems categorized as high risk must be designed and developed to manage biases effectively, ensuring non-discrimination and respect for fundamental rights. The emphasis is on adherence to ethical principles and the protection of fundamental rights, including privacy, nondiscrimination, and human dignity.
Limited risk: The AIA imposes transparency obligations on AI systems that interact with humans or are used to detect emotions or classify biometric data. Systems subject to these specific transparency obligations are categorized as limited risk and must notify users that they are interacting with an AI system, not a human.
Minimal- and low-risk systems, such as email spam filters, can be used without any additional safeguards.
The Artificial Intelligence Act also sets out a governance framework to ensure compliance with its provisions, including the establishment of national supervisory authorities and a European Artificial Intelligence Board. It also proposes penalties of up to 7% of global annual turnover, or 35 million euros, for prohibited AI violations, and up to 3% of global annual turnover, or 15 million euros, for most other violations.
Though the AIA has drawn criticism, particularly for its broad and sometimes vague definitions, it represents a significant step toward the legal and ethical governance of AI technologies. The AIA is likely to have a global impact by setting standards that other countries may follow.
If you have any questions about your company’s compliance with cyber regulations, concerns about vulnerability to attacks or other breaches, or if you want to learn more about proactive cybersecurity defense, contact a member of McDonald Hopkins’ national data privacy and cybersecurity team.