Published Date: 27/08/2024
In this blog post, we will focus on obligations that the European Union’s Artificial Intelligence Act (AI Act) sets for deployers, providers, importers, and distributors regarding high-risk AI systems. The AI Act’s overall risk-based approach means that, depending on the level of risk, different requirements apply. In total, there are four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.
Key Players
---------
The AI Act identifies and defines the following key players, all of which can be natural or legal persons.
Deployers use AI systems under their authority in the course of their professional activities.
Providers develop AI systems with a view to placing them on the market or putting them into service under their own name or trademark, whether for payment or free of charge.
Importers are located or established in the European Union and place on the market AI systems bearing the name or trademark of a natural or legal person established outside the European Union.
Distributors are players in the supply chain, other than the provider or the importer, that make an AI system available on the EU market.
Obligations for Deployers of High-Risk AI Systems
------------------------------------------------
Instructions for use. Deployers must take appropriate technical and organizational measures to ensure they use high-risk AI systems in accordance with the instructions for use. EU or national law can impose additional obligations in this respect.
Monitoring. Deployers must monitor the operation of the system on the basis of the instructions for use. Where relevant, deployers must inform providers.
Risk to health, safety or fundamental rights. Where deployers have reasons to believe that using the system in accordance with the instructions may adversely affect individuals’ health, safety or fundamental rights, they must, without undue delay, inform the provider or distributor and the relevant market surveillance authority.
Serious incident. Where deployers have identified a serious incident, they must immediately inform first the provider, and then the importer or distributor, as well as the relevant market surveillance authorities.
Logs. Deployers of high-risk AI systems must retain the logs automatically generated by the system, to the extent that such logs are within their control, for a duration appropriate to the system’s intended purpose but of at least six months, unless provided otherwise in applicable EU or national law (a retention sketch follows this list).
Input data. If the deployer exercises control over the input data, it must ensure that such data is relevant and sufficiently representative in view of the intended purpose of the system.
Human oversight. Deployers must assign human oversight to individuals who have the necessary competence, training, authority and support.
Workplace. Before putting into service or using a high-risk AI system in the workplace, deployers that are employers must inform workers’ representatives and the affected workers that they will be subject to the use of a high-risk AI system.
Transparency. Deployers of specific high-risk AI systems listed in the AI Act that make decisions or assist in making decisions related to natural persons must inform these persons that they are subject to the use of the high-risk AI system.
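The log-retention duty mentioned above is one of the few deployer obligations that maps directly onto an operational control. The following Python snippet is a minimal, illustrative sketch, not a statement of what the AI Act prescribes technically: it assumes logs are stored as plain files in a directory, approximates six months as 183 days, and uses a hypothetical function name and directory path.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Assumption: the AI Act's six-month minimum is approximated as 183 days.
# This is a hard floor; a longer, purpose-specific period may apply.
MINIMUM_RETENTION = timedelta(days=183)

def purge_expired_logs(log_dir: Path, retention: timedelta) -> list[Path]:
    """Delete automatically generated log files older than the retention period.

    The effective period is never shorter than the six-month statutory minimum.
    """
    effective = max(retention, MINIMUM_RETENTION)
    cutoff = datetime.now(timezone.utc) - effective
    removed = []
    for log_file in log_dir.glob("*.log"):
        mtime = datetime.fromtimestamp(log_file.stat().st_mtime, tz=timezone.utc)
        if mtime < cutoff:
            log_file.unlink()  # remove only logs past the effective window
            removed.append(log_file)
    return removed

# Hypothetical usage: a deployer whose documented intended purpose
# calls for retaining logs for one year.
# purge_expired_logs(Path("/var/log/hr-screening-ai"), timedelta(days=365))
```

Note that the retention period itself is a legal determination: the snippet only enforces whatever period the deployer has documented, subject to the six-month floor, and any longer period required by applicable EU or national law would need to be reflected in the value passed in.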
Obligations for Providers of High-Risk AI Systems
------------------------------------------------
Put in place a quality management system to ensure compliance.
Keep the required documentation for 10 years after the system has been placed on the market or put into service in the European Union.
Keep the logs automatically generated by the system to the extent they are under providers’ control.
Ensure that the system undergoes the conformity assessment procedure before being placed on the market or put into service in the European Union.
Draw up an EU declaration of conformity with the requirements associated with high-risk AI systems.
Affix the CE marking to the system or, where that is not possible, to its packaging or its accompanying documentation to indicate conformity with the AI Act.
Comply with the registration obligations.
Ensure post-market monitoring.
Report serious incidents.
Ensure follow-up on reporting serious incidents.
Take the necessary corrective actions and provide the required information.
Ensure that the high-risk AI system complies with accessibility requirements for certain products and services.
Cooperate with competent authorities.
Appoint an authorized representative where the provider is established outside the European Union.
-----------
WilmerHale is a leading international law firm that helps its clients achieve great results in high-stakes controversies, transactions, and regulatory matters. With 1,000 lawyers across 13 offices worldwide, the firm provides unparalleled legal representation and counsel in a wide range of industries and disciplines.
Q: What are the four levels of risk under the AI Act?
A: Unacceptable risk, high risk, limited risk, and minimal risk.
Q: Who are the key players under the AI Act?
A: Deployers, providers, importers, and distributors.
Q: What are the obligations of deployers of high-risk AI systems?
A: Using the system in accordance with the instructions for use, monitoring its operation, reporting risks to health, safety or fundamental rights, reporting serious incidents, retaining logs, ensuring relevant and representative input data, assigning human oversight, informing workers before workplace use, and informing natural persons subject to the system’s decisions.
Q: What are the obligations of providers of high-risk AI systems?
A: Put in place a quality management system, keep required documentation, keep logs, ensure conformity assessment, draw up an EU declaration of conformity, affix the CE marking, comply with registration obligations, ensure post-market monitoring, report serious incidents, ensure follow-up, take corrective actions, ensure accessibility, cooperate with authorities, and appoint an authorized representative where applicable.
Q: What is the purpose of the CE marking?
A: To indicate conformity with the AI Act.