
Article 6

Classification Rules for High-Risk AI Systems

Updated on Feb 6th 2024 based on the version endorsed by Coreper I on Feb 2nd.

1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

  (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex II;
  (b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
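To illustrate the cumulative test in paragraph 1, here is a minimal sketch of the two-condition logic in Python. The `AISystem` dataclass, its field names, and `high_risk_under_paragraph_1` are hypothetical illustrations for this commentary, not an official schema or tool.

```python
# Hypothetical sketch of the Article 6(1) test: conditions (a) and (b)
# must BOTH be fulfilled. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AISystem:
    is_safety_component: bool           # part of condition (a)
    is_product_itself: bool             # part of condition (a)
    covered_by_annex_ii: bool           # Annex II harmonisation legislation applies
    needs_third_party_assessment: bool  # condition (b)

def high_risk_under_paragraph_1(s: AISystem) -> bool:
    # Condition (a): a safety component of a product, or itself a product,
    # covered by the Union harmonisation legislation listed in Annex II.
    condition_a = (s.is_safety_component or s.is_product_itself) and s.covered_by_annex_ii
    # Condition (b): a third-party conformity assessment is required.
    condition_b = s.needs_third_party_assessment
    return condition_a and condition_b
```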

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.

2a. By derogation from paragraph 2, AI systems shall not be considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. This shall be the case if one or more of the following criteria are fulfilled:

  (a) the AI system is intended to perform a narrow procedural task;
  (b) the AI system is intended to improve the result of a previously completed human activity;
  (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
  (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

Notwithstanding the first subparagraph of this paragraph, an AI system shall always be considered high-risk if the AI system performs profiling of natural persons.
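Read together, paragraphs 2 and 2a and the profiling override form a three-step decision: an Annex III system is high-risk by default, a system meeting any one of criteria (a) to (d) is derogated, and profiling of natural persons always restores the high-risk classification. The sketch below encodes that ordering in Python; all parameter names are hypothetical labels for the criteria, not terms defined by the Regulation.

```python
# Hypothetical sketch of the paragraph 2 / 2a logic for Annex III systems.
# The boolean parameters mirror criteria (a)-(d); names are illustrative.
def high_risk_under_paragraph_2(in_annex_iii: bool,
                                performs_profiling: bool,
                                narrow_procedural_task: bool,
                                improves_prior_human_activity: bool,
                                detects_patterns_only: bool,
                                preparatory_task_only: bool) -> bool:
    if not in_annex_iii:
        return False   # paragraph 2 only reaches systems listed in Annex III
    if performs_profiling:
        return True    # profiling of natural persons: always high-risk
    # Derogation (paragraph 2a): any one fulfilled criterion suffices.
    derogation_applies = (narrow_procedural_task
                          or improves_prior_human_activity
                          or detects_patterns_only
                          or preparatory_task_only)
    return not derogation_applies
```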

2b. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service.

Such provider shall be subject to the registration obligation set out in Article 51(1a). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

2c. The Commission shall, after consulting the AI Board, and no later than 18 months after the entry into force of this Regulation, provide guidelines specifying the practical implementation of this Article, together with a comprehensive list of practical examples of high-risk and non-high-risk use cases of AI systems, in accordance with the conditions set out in Article 82a.

2d. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the criteria laid down in points (a) to (d) of the first subparagraph of paragraph 2a.

The Commission may adopt delegated acts adding new criteria to those laid down in points (a) to (d) of the first subparagraph of paragraph 2a, or modifying them, only where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III but that do not pose a significant risk of harm to the health, safety and fundamental rights.

The Commission shall adopt delegated acts deleting any of the criteria laid down in the first subparagraph of paragraph 2a where there is concrete and reliable evidence that this is necessary for the purpose of maintaining the level of protection of health, safety and fundamental rights in the Union.

Any amendment to the criteria laid down in points (a) to (d) of the first subparagraph of paragraph 2a shall not decrease the overall level of protection of health, safety and fundamental rights in the Union.

When adopting the delegated acts, the Commission shall ensure consistency with the delegated acts adopted pursuant to Article 7(1) and shall take account of market and technological developments.

[Previous version]

1. An AI system that is itself a product covered by the Union harmonisation legislation listed in Annex II shall be considered high-risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above-mentioned legislation.

2. An AI system intended to be used as a safety component of a product covered by the legislation referred to in paragraph 1 shall be considered high-risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above-mentioned legislation. This provision shall apply irrespective of whether the AI system is placed on the market or put into service independently from the product.

3. AI systems referred to in Annex III shall be considered high-risk unless the output of the system is purely accessory in respect of the relevant action or decision to be taken and is therefore not likely to lead to a significant risk to the health, safety or fundamental rights.

In order to ensure uniform conditions for the implementation of this Regulation, the Commission shall, no later than one year after the entry into force of this Regulation, adopt implementing acts to specify the circumstances where the output of AI systems referred to in Annex III would be purely accessory in respect of the relevant action or decision to be taken. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74, paragraph 2.
