In each of the areas listed under points 1-8, the AI systems specifically mentioned under each indent are considered to be high-risk AI systems pursuant to Article 6(3):
1. Biometrics:
- Remote biometric identification systems.
2. Critical infrastructure:
- AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.
3. Education and vocational training:
- AI systems intended to be used to determine access or admission to, or to assign natural persons to, educational and vocational training institutions or programmes at all levels;
- AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions or programmes at all levels.
4. Employment, workers management and access to self-employment:
- AI systems intended to be used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
- AI systems intended to be used to make decisions on promotion and termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, and to monitor and evaluate the performance and behaviour of persons in such relationships.
5. Access to and enjoyment of essential private services and essential public services and benefits:
- AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
- AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by providers that are micro and small-sized enterprises as defined in the Annex of Commission Recommendation 2003/361/EC for their own use;
- AI systems intended to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by firefighters and medical aid;
- AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance with the exception of AI systems put into service by providers that are micro and small-sized enterprises as defined in the Annex of Commission Recommendation 2003/361/EC for their own use.
6. Law enforcement:
- AI systems intended to be used by law enforcement authorities or on their behalf to assess the risk of a natural person for offending or reoffending or the risk for a natural person to become a potential victim of criminal offences;
- AI systems intended to be used by law enforcement authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
- [deleted]
- AI systems intended to be used by law enforcement authorities or on their behalf to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences;
- AI systems intended to be used by law enforcement authorities or on their behalf to predict the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;
- AI systems intended to be used by law enforcement authorities or on their behalf to profile natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences.
- [deleted]
7. Migration, asylum and border control management:
- AI systems intended to be used by competent public authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
- AI systems intended to be used by competent public authorities or on their behalf to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
- [deleted]
- AI systems intended to be used by competent public authorities or on their behalf to examine applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
8. Administration of justice and democratic processes:
- AI systems intended to be used by a judicial authority or on its behalf to interpret facts or the law and to apply the law to a concrete set of facts.
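For readers building internal compliance tooling, the eight areas above can be encoded as a simple lookup table keyed by point number. The sketch below is purely illustrative and forms no part of the Regulation; the abbreviated area titles and the function name `high_risk_area` are the author's own shorthand, and any real compliance check must consult the full legal text of each indent, not just the area heading.

```python
# Illustrative sketch only: the eight Annex III areas as a lookup table.
# Titles are abbreviated paraphrases of the headings above, not legal text.
ANNEX_III_AREAS = {
    1: "Biometrics",
    2: "Critical infrastructure",
    3: "Education and vocational training",
    4: "Employment, workers management and access to self-employment",
    5: "Access to essential private and public services and benefits",
    6: "Law enforcement",
    7: "Migration, asylum and border control management",
    8: "Administration of justice and democratic processes",
}


def high_risk_area(point: int) -> str:
    """Return the abbreviated Annex III area title for a point number (1-8)."""
    try:
        return ANNEX_III_AREAS[point]
    except KeyError:
        raise ValueError(f"Annex III has no point {point}; valid points are 1-8")


print(high_risk_area(6))  # Law enforcement
```

A checklist built on such a table would still need one entry per indent within each point, since classification under Article 6(3) turns on the specific intended use described in the indent, not on the area heading alone.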