
32a

Removed on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

[Previous version]

Updated on Feb 6th 2024 based on the version endorsed by the Coreper I on Feb 2nd

It is also important to clarify that there may be specific cases in which AI systems referred to in pre-defined areas specified in this Regulation do not lead to a significant risk of harm to the legal interests protected under those areas, because they do not materially influence the decision-making or do not harm those interests substantially. For the purposes of this Regulation, an AI system not materially influencing the outcome of decision-making should be understood as an AI system that does not impact the substance, and thereby the outcome, of decision-making, whether human or automated. This could be the case if one or more of the following conditions are fulfilled. The first criterion should be that the AI system is intended to perform a narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a large number of applications. These tasks are of such narrow and limited nature that they pose only limited risks, which are not increased through use in a context listed in Annex III. The second criterion should be that the task performed by the AI system is intended to improve the result of a previously completed human activity that may be relevant for the purpose of the use case listed in Annex III. Considering these characteristics, the AI system only provides an additional layer to a human activity, with consequently lowered risk. For example, this criterion would apply to AI systems that are intended to improve the language used in previously drafted documents, for instance in relation to professional tone, academic style of language or by aligning text to a certain brand messaging. The third criterion should be that the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns.
The risk would be lowered because the use of the AI system follows a previously completed human assessment which it is not meant to replace or influence without proper human review. Such AI systems include, for instance, those that, given a certain grading pattern of a teacher, can be used to check ex post whether the teacher may have deviated from the grading pattern, so as to flag potential inconsistencies or anomalies. The fourth criterion should be that the AI system is intended to perform a task that is only preparatory to an assessment relevant for the purpose of the use case listed in Annex III, thus making the possible impact of the output of the system very low in terms of representing a risk for the assessment to follow. For example, this criterion covers smart solutions for file handling, which include various functions from indexing, searching, text and speech processing or linking data to other data sources, or AI systems used for translation of initial documents. In any case, AI systems referred to in Annex III should be considered to pose significant risks of harm to the health, safety or fundamental rights of natural persons if the AI system implies profiling within the meaning of Article 4(4) of Regulation (EU) 2016/679, Article 3(4) of Directive (EU) 2016/680 and Article 3(5) of Regulation (EU) 2018/1725. To ensure traceability and transparency, a provider who considers that an AI system referred to in Annex III is not high-risk on the basis of the aforementioned criteria should draw up documentation of the assessment before that system is placed on the market or put into service, and should provide this documentation to national competent authorities upon request. Such a provider should be obliged to register the system in the EU database established under this Regulation.
With a view to providing further guidance for the practical implementation of the criteria under which AI systems referred to in Annex III are exceptionally not high-risk, the Commission should, after consulting the AI Board, provide guidelines specifying this practical implementation, complemented by a comprehensive list of practical examples of high-risk and non-high-risk use cases of AI systems.

[Previous version]

Classification of Safety Components by Intended Purpose

As regards high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence, and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. It is also important to clarify that within the high-risk scenarios referred to in Annex III there may be systems that do not lead to a significant risk to the legal interests protected under those scenarios, taking into account the output produced by the AI system. Therefore, only when such output has a high degree of importance (i.e. is not purely accessory) in respect of the relevant action or decision, so as to generate a significant risk to the legal interests protected, should the AI system generating such output be considered high-risk. For instance, when the information provided by an AI system to the human consists of the profiling of natural persons within the meaning of Article 4(4) of Regulation (EU) 2016/679, Article 3(4) of Directive (EU) 2016/680 and Article 3(5) of Regulation (EU) 2018/1725, such information should not typically be considered of accessory nature in the context of high-risk AI systems as referred to in Annex III. However, if the output of the AI system has only negligible or minor relevance for human action or decision, it may be considered purely accessory, including, for example, AI systems used for translation for informative purposes or for the management of documents.
