
Article 7

Amendments to Annex III

Updated on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

1. The Commission shall adopt delegated acts in accordance with Article 97 to amend Annex III by adding or modifying use-cases of high-risk AI systems where both of the following conditions are fulfilled:

  (a) the AI systems are intended to be used in any of the areas listed in Annex III;
  (b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

2. When assessing the condition under paragraph 1, point (b), the Commission shall take into account the following criteria:

  (a) the intended purpose of the AI system;
  (b) the extent to which an AI system has been used or is likely to be used;
  (c) the nature and amount of the data processed and used by the AI system, in particular whether special categories of personal data are processed;
  (d) the extent to which the AI system acts autonomously and the possibility for a human to override a decision or recommendations that may lead to potential harm;
  (e) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated, for example, by reports or documented allegations submitted to national competent authorities or by other reports, as appropriate;
  (f) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect multiple persons or to disproportionately affect a particular group of persons;
  (g) the extent to which persons who are potentially harmed or suffer an adverse impact are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
  (h) the extent to which there is an imbalance of power, or the persons who are potentially harmed or suffer an adverse impact are in a vulnerable position in relation to the deployer of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age;
  (i) the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account the technical solutions available to correct or reverse it, whereby outcomes having an adverse impact on health, safety or fundamental rights shall not be considered to be easily corrigible or reversible;
  (j) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety;
  (k) the extent to which existing Union law provides for:
    (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
    (ii) effective measures to prevent or substantially minimise those risks.

3. The Commission shall adopt delegated acts in accordance with Article 97 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:

  (a) the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2;
  (b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.

[Previous version]

Updated on Feb 6th 2024 based on the version endorsed by Coreper I on Feb 2nd 2024.

1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex III by adding or modifying use cases of high-risk AI systems where both of the following conditions are fulfilled:

  (a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;
  (b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:

  (a) the intended purpose of the AI system;
  (b) the extent to which an AI system has been used or is likely to be used;
  (ba) the nature and amount of the data processed and used by the AI system, in particular whether special categories of personal data are processed;
  (bb) the extent to which the AI system acts autonomously and the possibility for a human to override a decision or recommendations that may lead to potential harm;
  (c) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated for example by reports or documented allegations submitted to national competent authorities or by other reports, as appropriate;
  (d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or to disproportionately affect a particular group of persons;
  (e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
  (f) the extent to which there is an imbalance of power, or the potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age;
  (g) the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account the technical solutions available to correct or reverse it, whereby outcomes having an adverse impact on health, safety or fundamental rights shall not be considered as easily corrigible or reversible;
  (gb) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety;
  (h) the extent to which existing Union legislation provides for:
    (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
    (ii) effective measures to prevent or substantially minimise those risks.

2a. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:

  (a) the high-risk AI system(s) concerned no longer pose any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2;
  (b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.