
Article 53

Obligations for Providers of General-Purpose AI Models

Updated on May 8th 2024 based on the version and article numbering in the EU Parliament's 'Corrigendum' version dated April 19th 2024.

1. Providers of general-purpose AI models shall:

  (a) draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities;
  (b) draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall:
    (i) enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and
    (ii) contain, at a minimum, the elements set out in Annex XII;
  (c) put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790;
  (d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.

2. The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks.

3. Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this Regulation.

4. Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission.

5. For the purpose of facilitating compliance with Annex XI, in particular points 2 (d) and (e) thereof, the Commission is empowered to adopt delegated acts in accordance with Article 97 to detail measurement and calculation methodologies with a view to allowing for comparable and verifiable documentation.

6. The Commission is empowered to adopt delegated acts in accordance with Article 97(2) to amend Annexes XI and XII in light of evolving technological developments.

7. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78.

[Previous version]

Updated on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

1. Providers of general-purpose AI models shall:

  (a) draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the elements set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities;
  (b) draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to respect and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall:
    (i) enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and
    (ii) contain, at a minimum, the elements set out in Annex XII;
  (c) put in place a policy to comply with Union copyright law, and in particular to identify and comply with, including through state of the art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790;
  (d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.

2. The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released under a free and open licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks.

3. Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this Regulation.

4. Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Providers who are in compliance with a European harmonised standard shall be presumed to be in compliance with the obligations set out in paragraph 1 of this Article. Providers of general-purpose AI models who do not adhere to an approved code of practice shall demonstrate alternative adequate means of compliance for approval by the Commission.

5. For the purpose of facilitating compliance with Annex XI, in particular points 2 (d) and (e) thereof, the Commission shall adopt delegated acts in accordance with Article 97 to detail measurement and calculation methodologies with a view to allowing for comparable and verifiable documentation.

6. The Commission shall adopt delegated acts in accordance with Article 97(2) to amend Annexes XI and XII in the light of evolving technological developments.

7. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in compliance with the confidentiality obligations set out in Article 78.

Updated on Feb 6th 2024 based on the version endorsed by the Coreper I on Feb 2nd 2024.

AI Regulatory Sandboxes

1. Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational 24 months after entry into force. This sandbox may also be established jointly with one or several other Member States’ competent authorities. The Commission may provide technical support, advice and tools for the establishment and operation of AI regulatory sandboxes.

The obligation established in the previous paragraph can also be fulfilled by participation in an existing sandbox insofar as this participation provides an equivalent level of national coverage for the participating Member States.

1a. Additional AI regulatory sandboxes at regional or local levels or jointly with other Member States' competent authorities may also be established.

1b. The European Data Protection Supervisor may also establish an AI regulatory sandbox for the EU institutions, bodies and agencies and exercise the roles and the tasks of national competent authorities in accordance with this chapter.

1c. Member States shall ensure that competent authorities referred to in paragraphs 1 and 1a allocate sufficient resources to comply with this Article effectively and in a timely manner. Where appropriate, national competent authorities shall cooperate with other relevant authorities and may allow for the involvement of other actors within the AI ecosystem.

This Article shall not affect other regulatory sandboxes established under national or Union law. Member States shall ensure an appropriate level of cooperation between the authorities supervising those other sandboxes and the national competent authorities.

1d. AI regulatory sandboxes established under Article 53(1) of this Regulation shall, in accordance with Articles 53 and 53a, provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific sandbox plan agreed between the prospective providers and the competent authority. Such regulatory sandboxes may include testing in real world conditions supervised in the sandbox.

1e. Competent authorities shall provide, as appropriate, guidance, supervision and support within the sandbox with a view to identifying risks, in particular to fundamental rights, health and safety, testing, mitigation measures, and their effectiveness in relation to the obligations and requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.

1f. Competent authorities shall provide providers and prospective providers with guidance on regulatory expectations and how to fulfil the requirements and obligations set out in this Regulation.

Upon request of the provider or prospective provider of the AI system, the competent authority shall provide written proof of the activities successfully carried out in the sandbox. The competent authority shall also provide an exit report detailing the activities carried out in the sandbox and the related results and learning outcomes. Providers may use such documentation to demonstrate their compliance with this Regulation through the conformity assessment process or relevant market surveillance activities. In this regard, the exit reports and the written proof provided by the national competent authority shall be taken positively into account by market surveillance authorities and notified bodies, with a view to accelerating conformity assessment procedures to a reasonable extent.

1fa. Subject to the confidentiality provisions in Article 70 and with the agreement of the sandbox provider or prospective provider, the European Commission and the Board shall be authorised to access the exit reports and shall take them into account, as appropriate, when exercising their tasks under this Regulation. If both the provider or prospective provider and the national competent authority explicitly agree to this, the exit report can be made publicly available through the single information platform referred to in this Article.

1g. The establishment of AI regulatory sandboxes shall aim to contribute to the following objectives:

  1. improve legal certainty to achieve regulatory compliance with this Regulation or, where relevant, other applicable Union and Member States legislation;
  2. support the sharing of best practices through cooperation with the authorities involved in the AI regulatory sandbox;
  3. foster innovation and competitiveness and facilitate the development of an AI ecosystem;
  4. contribute to evidence-based regulatory learning;
  5. facilitate and accelerate access to the Union market for AI systems, in particular when provided by small and medium-sized enterprises (SMEs), including start-ups.

2. National competent authorities shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities, and those other national authorities are associated to the operation of the AI regulatory sandbox and involved in the supervision of those aspects to the extent of their respective tasks and powers, as applicable.

3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities supervising the sandboxes, including at regional or local level. Any significant risks to health and safety and fundamental rights identified during the development and testing of such AI systems shall result in an adequate mitigation. National competent authorities shall have the power to temporarily or permanently suspend the testing process, or participation in the sandbox if no effective mitigation is possible and inform the AI Office of such decision. National competent authorities shall exercise their supervisory powers within the limits of the relevant legislation, using their discretionary powers when implementing legal provisions to a specific AI sandbox project, with the objective of supporting innovation in AI in the Union.

4. Providers and prospective providers in the AI regulatory sandbox shall remain liable under applicable Union and Member States liability legislation for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective provider(s) respect the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. To the extent that other competent authorities responsible for other Union and Member States’ legislation have been actively involved in the supervision of the AI system in the sandbox and have provided guidance for compliance, no administrative fines shall be imposed regarding that legislation.

4b. The AI regulatory sandboxes shall be designed and implemented in such a way that, where relevant, they facilitate cross-border cooperation between national competent authorities.

5. National competent authorities shall coordinate their activities and cooperate within the framework of the Board.

5a. National competent authorities shall inform the AI Office and the Board of the establishment of a sandbox and may ask for support and guidance. A list of planned and existing AI sandboxes shall be made publicly available by the AI Office and kept up to date in order to encourage more interaction in the regulatory sandboxes and cross-border cooperation.

5b. National competent authorities shall submit annual reports to the AI Office and to the Board, starting one year after the establishment of the AI regulatory sandbox and then every year until its termination, and a final report. Those reports shall provide information on the progress and results of the implementation of those sandboxes, including best practices, incidents, lessons learnt and recommendations on their setup and, where relevant, on the application and possible revision of this Regulation, including its delegated and implementing acts, and other Union law supervised within the sandbox. Those annual reports or abstracts thereof shall be made available to the public, online. The Commission shall, where appropriate, take the annual reports into account when exercising its tasks under this Regulation.

6. The Commission shall develop a single and dedicated interface containing all relevant information related to sandboxes to allow stakeholders to interact with regulatory sandboxes, to raise enquiries with competent authorities, and to seek non-binding guidance on the conformity of innovative products, services and business models embedding AI technologies, in accordance with Article 55(1)(c). The Commission shall proactively coordinate with national competent authorities, where relevant.
