1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall comprise the following steps:
(a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;
(b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;
(c) the evaluation of other possibly arising risks, based on the analysis of data gathered from the post-market monitoring system;
(d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).
The risks referred to in this paragraph shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
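By way of illustration, the iterative steps above could be operationalised as a recurring cycle over a documented risk register. The data structures, field names and scoring scale in the following Python sketch are assumptions made for illustration, not anything prescribed by this Article.

```python
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    # Hypothetical record for one identified risk (step (a)).
    hazard: str
    affected_interest: str                              # e.g. "health", "safety", "fundamental rights"
    severity: Level
    likelihood: Level
    measures: list[str] = field(default_factory=list)   # targeted measures adopted under step (d)


def evaluate(risk: Risk) -> Level:
    # Crude estimation and evaluation (step (b)): take the worse of severity and likelihood.
    return Level(max(risk.severity.value, risk.likelihood.value))


def management_cycle(register: list[Risk], post_market_findings: list[Risk]) -> list[Risk]:
    # One iteration of the continuous process: fold in risks observed after
    # deployment (step (c)) and flag high risks that still lack measures (step (d)).
    register = register + post_market_findings
    for risk in register:
        if evaluate(risk) is Level.HIGH and not risk.measures:
            risk.measures.append("adopt a targeted risk management measure")
    return register
```

Each lifecycle iteration would rerun such a cycle with newly identified risks and post-market findings, reflecting the requirement for regular systematic updating.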
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Chapter 2, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI system, is judged to be acceptable.
In identifying the most appropriate risk management measures, the following shall be ensured:
(a) elimination or reduction of the identified and evaluated risks, as far as technically feasible, through adequate design and development of the high-risk AI system;
(b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;
(c) provision of the information required pursuant to Article 13 and, where appropriate, training to users.
With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education and training to be expected by the user, and to the environment in which the system is intended to be used.
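The residual-risk judgement required in paragraph 4 above could be recorded per hazard and then aggregated into an overall judgement. The acceptability flag and the illustrative scale in this sketch are assumptions, since the acceptability criteria themselves are left to the provider's risk management system.

```python
from dataclasses import dataclass


@dataclass
class ResidualRisk:
    # Hypothetical per-hazard record of the risk remaining after design measures,
    # mitigation and control measures, and the information provided to users.
    hazard: str
    residual_level: str        # e.g. "low", "medium", "high" -- illustrative scale
    judged_acceptable: bool    # documented judgement for this individual hazard


def overall_residual_risk_acceptable(register: list[ResidualRisk]) -> bool:
    # Both each per-hazard residual risk and the overall residual risk of the
    # system must be judged acceptable; here the overall judgement is simply
    # that no individual hazard was judged unacceptable.
    return len(register) > 0 and all(entry.judged_acceptable for entry in register)
```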
5. High-risk AI systems shall be tested in order to ensure that they perform in a manner that is consistent with their intended purpose and that they are in compliance with the requirements set out in this Chapter.
6. Testing procedures may include testing in real world conditions in accordance with Article 54a.
7. The testing of high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be carried out against previously defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.
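One way to read "previously defined metrics and probabilistic thresholds" is that the acceptance criteria are fixed before test results are evaluated, and every test run is checked against them. The metric names and threshold values in the following sketch are purely illustrative assumptions.

```python
# Acceptance criteria fixed in advance of testing and appropriate to the
# intended purpose; the metrics and values below are illustrative only.
ACCEPTANCE_THRESHOLDS = {
    "accuracy": 0.95,              # minimum proportion of correct outputs
    "false_positive_rate": 0.02,   # maximum tolerated rate
}


def test_run_passes(measured: dict[str, float]) -> bool:
    # A development or pre-market test run passes only if every previously
    # defined metric meets its threshold.
    if measured["accuracy"] < ACCEPTANCE_THRESHOLDS["accuracy"]:
        return False
    if measured["false_positive_rate"] > ACCEPTANCE_THRESHOLDS["false_positive_rate"]:
        return False
    return True


# Example: a run measured during development, before placing on the market.
print(test_run_passes({"accuracy": 0.97, "false_positive_rate": 0.01}))  # True
```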
8. The risk management system described in paragraphs 1 to 7 shall give specific consideration to whether the high-risk AI system is likely to be accessed by or have an impact on persons under the age of 18.
9. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under relevant sectoral Union law, the aspects described in paragraphs 1 to 8 may be part of the risk management procedures established pursuant to that law.