EU's Landmark AI Regulation: What You Need to Know About the AI Act

Published: 27/05/2024

The European Union's AI Act will soon enter into force, introducing a risk-based approach to regulating artificial intelligence systems. Learn about the four categories of risk and the implications for AI providers.

The European Union has taken a significant step toward regulating artificial intelligence (AI) with the Council's final approval of the AI Act on May 21, 2024. This landmark legislation takes a risk-based approach, categorizing AI systems into four levels of risk: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk.

The AI Act prohibits certain "unacceptable" uses of AI, such as social scoring systems and the untargeted scraping of facial images to build facial recognition databases. At the other end of the scale, minimal-risk uses, such as email spam filters, are left unregulated. The High Risk category covers AI systems that affect the health, safety, or fundamental rights of individuals, while the Limited Risk category covers systems such as chatbots and generators of deepfakes and so-called shallow fakes.
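For readers who think in code, the Act's tiered structure can be sketched as a simple classification scheme. The mapping below is a minimal illustration using only the example uses named above; the tier names are real, but the assignments are simplified and do not constitute a legal determination of any system's actual risk tier.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before and after market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

# Illustrative mapping of the example uses mentioned above to their tiers.
# Real classification requires legal analysis of the system and its context.
EXAMPLE_USES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "untargeted facial recognition database": RiskTier.UNACCEPTABLE,
    "system affecting health, safety, or fundamental rights": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.name} ({tier.value})")
```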

Providers of High Risk AI systems must establish a risk management system, implement data governance practices, design their systems for record-keeping, provide instructions for use, ensure human oversight, and achieve appropriate levels of accuracy, robustness, and cybersecurity. They must also register their AI systems in the EU database established under the AI Act.
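As a rough aid to internal planning, these obligations could be tracked as a simple readiness checklist. The sketch below is hypothetical: the field names merely mirror the obligations summarized above, not the Act's full legal text, and a real compliance program would be far more granular.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """Hypothetical readiness checklist for a provider of a High Risk
    AI system, mirroring the obligations summarized above (simplified)."""
    risk_management_system: bool = False
    data_governance: bool = False
    record_keeping_design: bool = False
    instructions_for_use: bool = False
    human_oversight: bool = False
    accuracy_robustness_security: bool = False
    eu_database_registration: bool = False

    def outstanding(self) -> list[str]:
        """Return the obligations not yet marked as satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

status = HighRiskCompliance(risk_management_system=True, data_governance=True)
print("Outstanding obligations:", status.outstanding())
```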

General purpose AI (GPAI) models and systems can fall into any of the four risk categories. Providers of GPAI models must document training, testing, and evaluation results; provide downstream providers with information and documentation; put in place a policy to comply with the EU Copyright Directive; and publish a detailed summary of the content used for training.

In addition, providers of GPAI models that pose systemic risk must track, document, and report serious incidents and possible corrective measures to the EU AI Office and relevant national competent authorities without "undue delay." Such incidents may include instances where an AI system generates discriminatory results or inadvertently manipulative content.

The AI Act has significant implications for AI providers looking to offer services in the EU. They will need to prepare for and satisfy bias testing designed to identify algorithmic discrimination in their systems. The requirements set out in the AI Act also offer a strong preview of what may be expected from future federal or state legislation in the United States.

The AI Act is the world's first comprehensive regulation of AI systems. Its implementation will be phased, with obligations taking effect on a rolling schedule after the Act enters into force.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

FAQs:

1. What is the AI Act, and when does it take effect?

The AI Act is landmark legislation regulating artificial intelligence systems in the European Union. It enters into force 20 days after its publication in the Official Journal of the European Union.

2. What are the four categories of risk under the AI Act?

The four categories of risk are Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk.

3. What are some examples of High Risk AI systems?

High Risk AI systems are those that affect the health, safety, or fundamental rights of individuals, including systems used in critical infrastructure, education, employment, migration, democratic processes and elections, and the environment.

4. What are the requirements for providers of High Risk AI systems?

Providers of High Risk AI systems must establish a risk management system, implement data governance practices, design their systems for record-keeping, provide instructions for use, ensure human oversight, and achieve appropriate levels of accuracy, robustness, and cybersecurity. They must also register their systems in the EU database established under the AI Act.

5. How does the AI Act affect AI providers looking to offer services in the EU?

AI providers looking to offer services in the EU will need to prepare for and satisfy bias testing designed to identify algorithmic discrimination in their systems.