
Article 62

Measures for Providers and Deployers, in Particular SMEs, including Start-Ups

Updated on May 8th 2024 based on the version and article numbering in the EU Parliament's 'Corrigendum' version dated April 19th 2024.

1. Member States shall undertake the following actions:

  1. provide SMEs, including start-ups, having a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes, to the extent that they fulfil the eligibility conditions and selection criteria; the priority access shall not preclude other SMEs, including start-ups, other than those referred to in this paragraph from access to the AI regulatory sandbox, provided that they also fulfil the eligibility conditions and selection criteria;
  2. organise specific awareness raising and training activities on the application of this Regulation tailored to the needs of SMEs including start-ups, deployers and, as appropriate, local public authorities;
  3. utilise existing dedicated channels and where appropriate, establish new ones for communication with SMEs including start-ups, deployers, other innovators and, as appropriate, local public authorities to provide advice and respond to queries about the implementation of this Regulation, including as regards participation in AI regulatory sandboxes;
  4. facilitate the participation of SMEs and other relevant stakeholders in the standardisation development process.

2. The specific interests and needs of the SME providers, including start-ups, shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size, market size and other relevant indicators.

3. The AI Office shall undertake the following actions:

  1. provide standardised templates for areas covered by this Regulation, as specified by the Board in its request;
  2. develop and maintain a single information platform providing easy to use information in relation to this Regulation for all operators across the Union;
  3. organise appropriate communication campaigns to raise awareness about the obligations arising from this Regulation;
  4. evaluate and promote the convergence of best practices in public procurement procedures in relation to AI systems.

[Previous version]

Updated on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

1. Member States shall undertake the following actions:

  1. provide SMEs, including start-ups, having a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes, to the extent that they fulfil the eligibility conditions and selection criteria. The priority access shall not preclude other SMEs including start-ups other than those referred to in the first subparagraph from access to the AI regulatory sandbox, provided that they also fulfil the eligibility conditions and selection criteria;
  2. organise specific awareness raising and training activities on the application of this Regulation tailored to the needs of SMEs including start-ups, users and, as appropriate, local public authorities;
  3. utilise existing dedicated channels and where appropriate, establish new ones for communication with SMEs including start-ups, users, other innovators and, as appropriate, local public authorities to provide advice and respond to queries about the implementation of this Regulation, including as regards participation in AI regulatory sandboxes;
  4. facilitate the participation of SMEs and other relevant stakeholders in the standardisation development process.

2. The specific interests and needs of the SME providers, including start-ups, shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size, market size and other relevant indicators.

3. The AI Office shall undertake the following actions:

  1. provide standardised templates for areas covered by this Regulation, as specified by the Board in its reasoned request;
  2. develop and maintain a single information platform providing easy to use information in relation to this Regulation for all operators across the Union;
  3. organise appropriate communication campaigns to raise awareness about the obligations arising from this Regulation;
  4. evaluate and promote the convergence of best practices in public procurement procedures in relation to AI systems.

Updated on Feb 6th 2024 based on the version endorsed by Coreper I on Feb 2nd 2024.

Reporting of Serious Incidents

1. Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred.

1a. As a general rule, the period for the reporting referred to in paragraph 1 shall take account of the severity of the serious incident.

1b. The notification referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident.

1c. Notwithstanding paragraph 1b, in the event of a widespread infringement or a serious incident as defined in Article 3(44) point (b), the report referred to in paragraph 1 shall be provided immediately, and not later than 2 days after the provider or, where applicable, the deployer becomes aware of that incident.

1d. Notwithstanding paragraph 1b, in the event of death of a person the report shall be provided immediately after the provider or the deployer has established or as soon as it suspects a causal relationship between the high-risk AI system and the serious incident but not later than 10 days after the date on which the provider or, where applicable, the deployer becomes aware of the serious incident.

1e. Where necessary to ensure timely reporting, the provider or, where applicable, the deployer, may submit an initial report that is incomplete followed up by a complete report.

1a. Following the reporting of a serious incident pursuant to the first subparagraph, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident and corrective action. The provider shall co-operate with the competent authorities and where relevant with the notified body concerned during the investigations referred to in the first subparagraph and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action.

2. Upon receiving a notification related to a serious incident referred to in Article 3(44)(c), the relevant market surveillance authority shall inform the national public authorities or bodies referred to in Article 64(3). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1. That guidance shall be issued 12 months after the entry into force of this Regulation, at the latest, and shall be assessed regularly.

2a. The market surveillance authority shall take appropriate measures, as provided in Article 19 of Regulation (EU) 2019/1020, within 7 days from the date it received the notification referred to in paragraph 1 and follow the notification procedures as provided in that Regulation.

3. For high-risk AI systems referred to in Annex III that are placed on the market or put into service by providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this Regulation, the notification of serious incidents shall be limited to those referred to in Article 3(44)(c).

3a. For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746 the notification of serious incidents shall be limited to those referred to in Article 3(44)(c) and be made to the national competent authority chosen for this purpose by the Member States where that incident occurred.

3a. National competent authorities shall immediately notify the Commission of any serious incident, whether or not it has taken action on it, in accordance with Article 20 of Regulation (EU) 2019/1020.
