
Recital 46

Updated on May 8th 2024 based on the version and article numbering in the EU Parliament's 'Corrigendum' version dated April 19th 2024.

High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. On the basis of the New Legislative Framework, as clarified in the Commission notice “The ‘Blue Guide’ on the implementation of EU product rules 2022” [20], the general rule is that more than one legal act of Union harmonisation legislation, such as Regulations (EU) 2017/745 [21] and (EU) 2017/746 [22] of the European Parliament and of the Council or Directive 2006/42/EC of the European Parliament and of the Council [23], may be applicable to one product, since the making available or putting into service can take place only when the product complies with all applicable Union harmonisation legislation. To ensure consistency and avoid unnecessary administrative burdens or costs, providers of a product that contains one or more high-risk AI systems, to which the requirements of this Regulation and of the Union harmonisation legislation listed in an annex to this Regulation apply, should have flexibility with regard to operational decisions on how to ensure compliance of a product that contains one or more AI systems with all applicable requirements of the Union harmonisation legislation in an optimal manner. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation should minimise any potential restriction to international trade.

[20] OJ C 247, 29.6.2022, p. 1.

[21] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1).

[22] Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).

[23] Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24).

[Previous version]

Updated on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. Based on the New Legislative Framework, as clarified in the Commission notice “The ‘Blue Guide’ on the implementation of EU product rules 2022” [21], the general rule is that Union harmonisation legislation, such as Regulations (EU) 2017/745 [22] and (EU) 2017/746 [23] of the European Parliament and of the Council and Directive 2006/42/EC of the European Parliament and of the Council [24], may be applicable to one product, since the making available or putting into service can take place only when the product complies with all applicable Union harmonisation legislation. To ensure consistency and avoid an unnecessary administrative burden or unnecessary costs, providers of a product that contains one or more high-risk AI system, to which the requirements of this Regulation or of the Union harmonisation legislation listed in an annex to this Regulation apply, should be flexible with regard to operational decisions on how to ensure compliance of a product that contains one or more AI systems with all applicable requirements of the Union harmonisation legislation in an optimal manner. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade.

[21] OJ C 247, 29.6.2022, p. 1.

[22] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1).

[23] Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).

[24] Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24).

Updated on Feb 6th 2024 based on the version endorsed by the Coreper I on Feb 2nd 2024.

Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to enable traceability of those systems, verify compliance with the requirements under this Regulation, as well as monitoring of their operations and post market monitoring. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements and facilitate post market monitoring. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system and drawn up in a clear and comprehensive form. The technical documentation should be kept up to date, appropriately throughout the lifetime of the AI system. Furthermore, high-risk AI systems should technically allow for automatic recording of events (logs) over the duration of the lifetime of the system.
