Published Date: August 2, 2024
The European Union's Artificial Intelligence Act (EU AI Act) has been published in the Official Journal of the European Union, marking a significant milestone in the regulation of artificial intelligence (AI) across the EU. The Act establishes a comprehensive, horizontal legal framework for AI systems, taking a risk-based approach that scales regulatory requirements to the level of risk an AI system poses.
The EU AI Act will enter into force on August 1, 2024, with the majority of its provisions becoming enforceable on August 2, 2026. Although that timeline may appear generous, developing an AI compliance program is an intricate and time-intensive process. Businesses should therefore begin their compliance efforts promptly to ensure they are adequately prepared to meet the regulatory requirements.
The Act has broad extraterritorial implications, extending its reach to providers who place AI systems on the EU market or put them into service within the EU, regardless of where those providers are located. A number of U.S. businesses will therefore fall within its scope, depending on their exact role in developing or using AI. The Act also applies to providers and deployers established outside the EU if the output of their AI systems is used within the EU.
The EU AI Act covers deployers, importers, and affected individuals within the EU, though it lacks clarity regarding distributors. Certain exemptions are specified within the Act. It does not apply to AI systems developed and used solely for scientific research and development. Activities involving research, testing, and development of AI are exempt from the Act's provisions until the AI is placed on the market or put into service, although testing in real-world conditions is not covered by this exemption. AI systems released under free and open-source licenses are also exempt unless they qualify as high-risk, prohibited, or generative AI.
The EU AI Act adopts a risk-based approach, assigning regulatory requirements according to the level of risk an AI system presents. It sorts AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.
Unacceptable Risk: AI practices that pose a clear threat to fundamental rights are prohibited. This includes AI systems that manipulate behavior or exploit vulnerabilities (e.g., based on age or disability) to distort a person's actions. Prohibited AI also includes certain biometric systems, such as emotion recognition in the workplace or real-time categorization of individuals.
High Risk: AI systems classified as high risk must adhere to stringent requirements, including risk-mitigation strategies, high-quality data sets, activity logging, detailed documentation, human oversight, and high standards of robustness, accuracy, and cybersecurity. Examples of high-risk AI include systems used in critical infrastructure (e.g., energy and transport), in medical devices, and in determining access to education or employment.
Limited Risk: AI systems posing limited risk, such as chatbots, must be designed to inform users that they are interacting with AI. Deployers of AI that generates or manipulates deepfakes must disclose the artificial nature of the content.
Minimal Risk: AI systems posing minimal risk, such as AI-enabled video games or spam filters, face no restrictions. Companies may opt to follow voluntary codes of conduct.
Medical Uses: AI intended for medical purposes is already regulated as a medical device in Europe and the United Kingdom and must undergo a thorough conformity assessment before being marketed, in accordance with the EU Medical Device Regulation (MDR) and the EU In Vitro Diagnostic Medical Devices Regulation (IVDR). Under the Act, any AI system that is a Class IIa or higher medical device, or that uses an AI system as a safety component, is classified as high risk.
Recent Updates: The European Commission has established a new EU-level regulator, the European AI Office, which will operate within the Directorate-General for Communications Networks, Content and Technology. The AI Office will be responsible for overseeing and enforcing compliance with the AI Act's requirements for general-purpose AI (GPAI) models and systems across all 27 EU member states.
Timeline of Developments: With the publication in the Official Journal, the compliance dates are now confirmed. The Act will enter into force on August 1, 2024, with the majority of its provisions becoming enforceable on August 2, 2026.
Next Steps: Once the Act becomes operative on August 1, the milestones set out in Article 113 will follow. Under Article 56, the Codes of Practice must be finalized within nine months of the Act's entry into force. The European Commission will then have an additional three months, 12 months in total, to approve or reject these Codes via an implementing act, based on the advice of the AI Office and the Board.
Barnes & Thornburg LLP is a national law firm that provides transactional, regulatory, and litigation services to businesses and individuals. With over 700 attorneys and other legal professionals, the firm is one of the largest in the United States.
About the Authors: Kaitlyn Stone and Michael Zogby are attorneys in the data security and privacy practice group of Barnes & Thornburg LLP. Aury Quezada, summer law clerk, assisted with this alert.
Q: When does the EU AI Act enter into force?
A: The EU AI Act enters into force on August 1, 2024.
Q: What is the scope of the EU AI Act?
A: The EU AI Act has broad extraterritorial implications, extending its reach to providers who place AI systems on the EU market or put them into service within the EU, regardless of their location. It also applies to providers and deployers established outside the EU if the output of their AI systems is used within the EU.
Q: What are the four risk categories under the EU AI Act?
A: The EU AI Act categorizes AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.
Q: What are the requirements for high-risk AI systems under the EU AI Act?
A: High-risk AI systems must adhere to stringent requirements, including implementing risk-mitigation strategies, using high-quality data sets, maintaining activity logs, providing detailed documentation, ensuring human oversight, and achieving high standards of robustness, accuracy, and cybersecurity.
Q: What is the role of the European AI Office?
A: The European AI Office will be responsible for overseeing and enforcing compliance with the AI Act's requirements for general-purpose AI (GPAI) models and systems across all 27 EU member states.