Recital 42a

Removed on April 10th, 2024, based on the version and article numbering approved by the EU Parliament on March 13th, 2024.

[Previous version]

Updated on Feb 6th, 2024, based on the version endorsed by Coreper I on Feb 2nd, 2024.

The risk management system should consist of a continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system. This process should be aimed at identifying and mitigating the relevant risks of AI systems to health, safety and fundamental rights. The risk management system should be regularly reviewed and updated to ensure its continuing effectiveness, and any significant decisions and actions taken subject to this Regulation should be justified and documented. This process should ensure that the provider identifies risks or adverse impacts and implements mitigation measures for the known and reasonably foreseeable risks of AI systems to health, safety and fundamental rights, in light of their intended purpose and reasonably foreseeable misuse, including the possible risks arising from the interaction between the AI system and the environment within which it operates. The risk management system should adopt the most appropriate risk management measures in light of the state of the art in AI. When identifying the most appropriate risk management measures, the provider should document and explain the choices made and, when relevant, involve experts and external stakeholders. In identifying the reasonably foreseeable misuse of high-risk AI systems, the provider should cover uses of AI systems which, while not directly covered by the intended purpose and provided for in the instructions for use, may nevertheless be reasonably expected to result from readily predictable human behaviour in the context of the specific characteristics and use of a particular AI system. Any known or foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to health and safety or fundamental rights, should be included in the instructions for use supplied by the provider. This is to ensure that the deployer is aware of them and takes them into account when using the high-risk AI system. Identifying and implementing risk mitigation measures for foreseeable misuse under this Regulation should not require the provider to undertake specific additional training measures for the high-risk AI system in order to address such misuse. However, providers are encouraged to consider such additional training measures to mitigate reasonably foreseeable misuse, as necessary and appropriate.
