Published Date: 11/08/2024
On 1 August 2024, the EU Artificial Intelligence Act entered into force, aiming to protect people from harms posed by AI technology created or used in the EU. The Act categorizes AI systems based on their risk level.
Unacceptable risk AI systems are prohibited, as they pose a significant risk to human rights. This includes using deception to impair decision-making or exploiting vulnerabilities related to age, disability, or socio-economic status. Biometric systems that infer sensitive attributes like race or political beliefs are banned, except when used lawfully by law enforcement. Social scoring, which negatively impacts individuals based on their behavior or traits, is also not allowed.
High-risk AI systems will be subject to specific requirements. High-risk uses include biometric identification (excluding mere verification), critical infrastructure management, education and vocational training, employment and worker management, access to public services, law enforcement, migration and border control, and the administration of justice and democratic processes. Providers of high-risk AI systems must establish risk management, ensure accurate data governance, maintain technical documentation, enable record-keeping, allow for human oversight, and ensure accuracy and cybersecurity.
Specific transparency risk AI systems, like chatbots, must clearly notify users that they are interacting with AI. AI-generated content, such as deepfakes, must be properly labeled, and users must be informed when biometric categorization or emotion recognition technologies are being used.
Minimal risk AI systems, which include examples such as AI-enabled suggestion systems, video games that utilize AI, and spam filters, are unregulated by the EU. Companies may choose to establish their own codes of conduct in order to provide transparency and accountability.
Additionally, there are specific rules for general-purpose AI (GPAI) models – AI models that can perform a wide variety of tasks on their own or when incorporated into other technologies, such as models that generate human-like text. These models will be subject to strict transparency requirements in order to mitigate possible risks.
EU Member States have until 2 August 2025 to designate national competent authorities to oversee the application of AI rules and conduct market surveillance. Companies that fail to comply with the Artificial Intelligence Act will face substantial fines.
The Artificial Intelligence Act is an important step toward regulated and safe AI, and will hopefully serve as an inspiration for countries outside the EU as well.
The EU Artificial Intelligence Act was proposed in 2021 as part of the EU's Digital Strategy. It aims to promote the development and use of AI in the EU while ensuring that AI systems are safe, transparent, and respect human rights. The European Union (EU) is a political and economic union of 27 European countries that aims to promote peace, stability, and economic growth in Europe.
Q: What is the EU Artificial Intelligence Act?
A: The EU Artificial Intelligence Act is a regulation that aims to promote the development and use of AI in the EU while ensuring that AI systems are safe, transparent, and respect human rights.
Q: What types of AI systems are prohibited under the Act?
A: Unacceptable risk AI systems, which pose a significant risk to human rights, are prohibited under the Act. This includes AI systems that use deception to impair decision-making or exploit vulnerabilities related to age, disability, or socio-economic status.
Q: What requirements must providers of high-risk AI systems meet?
A: Providers of high-risk AI systems must establish risk management, ensure accurate data governance, maintain technical documentation, enable record-keeping, allow for human oversight, and ensure accuracy and cybersecurity.
Q: What is the deadline for EU Member States to designate national competent authorities to oversee the application of AI rules?
A: EU Member States have until 2 August 2025 to designate national competent authorities to oversee the application of AI rules and conduct market surveillance.
Q: What are the consequences for companies that fail to comply with the Artificial Intelligence Act?
A: Companies that fail to comply with the Artificial Intelligence Act will face substantial fines.