Article 61

Informed Consent to Participate in Testing in Real World Conditions Outside AI Regulatory Sandboxes

Updated on May 8th 2024 based on the version and article numbering in the EU Parliament's 'Corrigendum' version dated April 19th 2024.

1. For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding:

  1. the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to their participation;
  2. the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject or subjects' participation;
  3. their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having to provide any justification;
  4. the arrangements for requesting the reversal or the disregarding of the predictions, recommendations or decisions of the AI system;
  5. the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 60(4) point (c), and the contact details of the provider or its legal representative from whom further information can be obtained.

2. The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative.

[Previous version]

Updated on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

1. For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding:

  1. the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to their participation;
  2. the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject or subjects' participation;
  3. their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having to provide any justification;
  4. the arrangements for requesting the reversal or the disregard of the predictions, recommendations or decisions of the AI system;
  5. the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 60(4) point (c), and the contact details of the provider or its legal representative from whom further information can be obtained.

2. The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative.

Updated on Feb 6th 2024 based on the version endorsed by the Coreper I on Feb 2nd 2024.

Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.

2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law enforcement authorities.

3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by six months before the entry into application of this Regulation.

4. For high-risk AI systems covered by the legal acts referred to in Annex II, Section A, where a post-market monitoring system and plan is already established under that legislation, in order to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice to integrate, as appropriate, the necessary elements described in paragraphs 1, 2 and 3, using the template referred to in paragraph 3, into the system and plan already existing under the Union harmonisation legislation listed in Annex II, Section A, provided that this achieves an equivalent level of protection.

The first subparagraph shall also apply to high-risk AI systems referred to in point 5 of Annex III placed on the market or put into service by financial institutions that are subject to requirements regarding their internal governance, arrangements or processes under Union financial services legislation.
