
Article 29a


Removed on May 8th 2024 based on the version and article numbering in the EU Parliament's 'Corrigendum' version dated April 19th 2024.

[Previous version]

Removed on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

Updated on Feb 6th 2024 based on the version endorsed by the Coreper I on Feb 2nd 2024.

Fundamental Rights Impact Assessment for High-Risk AI Systems

1. Prior to deploying a high-risk AI system as defined in Article 6(2), with the exception of AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law or private operators providing public services, and operators deploying high-risk AI systems referred to in Annex III, point 5, (b) and (ca), shall perform an assessment of the impact on fundamental rights that the use of the system may produce. For that purpose, deployers shall perform an assessment consisting of:

  (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
  (b) a description of the period of time and frequency with which each high-risk AI system is intended to be used;
  (c) the categories of natural persons and groups likely to be affected by its use in the specific context;
  (d) the specific risks of harm likely to impact the categories of persons or groups of persons identified pursuant to point (c), taking into account the information given by the provider pursuant to Article 13;
  (e) a description of the implementation of human oversight measures, according to the instructions for use;
  (f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.

2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or on existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the factors listed in paragraph 1 have changed or are no longer up to date, the deployer shall take the necessary steps to update the information.

3. Once the impact assessment has been performed, the deployer shall notify the market surveillance authority of the results of the assessment, submitting the filled-out template referred to in paragraph 5 as part of the notification. In the case referred to in Article 47(1), deployers may be exempted from these obligations.

4. If any of the obligations laid down in this article are already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 shall be conducted in conjunction with that data protection impact assessment.

5. The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with the obligations of this Article in a simplified manner.
