Article 55

Obligations for Providers of General-Purpose AI Models with Systemic Risk

Updated on May 8th 2024 based on the version and article numbering in the EU Parliament's 'Corrigendum' version dated April 19th 2024.

1. In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall:

  1. perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;
  2. assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk;
  3. keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;
  4. ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.

2. Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission.

3. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78.

[Previous version]

Updated on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

1. In addition to the obligations listed in Article 53, providers of general-purpose AI models with systemic risk shall:

  1. perform model evaluation in accordance with standardised protocols and tools reflecting the state-of-the-art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risk;
  2. assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk;
  3. keep track of, document and report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;
  4. ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.

2. Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Providers who are in compliance with a European harmonised standard shall be presumed to be in compliance with the obligations set out in paragraph 1 of this Article. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice shall demonstrate alternative adequate means of compliance for approval by the Commission.

3. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in compliance with the confidentiality obligations set out in Article 78.

Updated on Feb 6th 2024 based on the version endorsed by the Coreper I on Feb 2nd 2024.

Measures for Deployers, in Particular SMEs, including Start-Ups

1. Member States shall undertake the following actions:

  1. provide SMEs, including start-ups, having a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes, to the extent that they fulfil the eligibility conditions and selection criteria. The priority access shall not preclude SMEs, including start-ups, other than those referred to in the first subparagraph from accessing the AI regulatory sandbox, provided that they fulfil the eligibility conditions and selection criteria;
  2. organise specific awareness-raising and training activities on the application of this Regulation tailored to the needs of SMEs, including start-ups, users and, as appropriate, local public authorities;
  3. utilise existing dedicated channels and, where appropriate, establish new ones for communication with SMEs, including start-ups, users, other innovators and, as appropriate, local public authorities to provide advice and respond to queries about the implementation of this Regulation, including as regards participation in AI regulatory sandboxes;
  4. facilitate the participation of SMEs and other relevant stakeholders in the standardisation development process.

2. The specific interests and needs of the SME providers, including start-ups, shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size, market size and other relevant indicators.

2a. The AI Office shall undertake the following actions:

  1. upon request of the AI Board, provide standardised templates for the areas covered by this Regulation;
  2. develop and maintain a single information platform providing easy-to-use information in relation to this Regulation for all operators across the Union;
  3. organise appropriate communication campaigns to raise awareness about the obligations arising from this Regulation;
  4. evaluate and promote the convergence of best practices in public procurement procedures in relation to AI systems.