
Recital 70

Transparency and Notification Obligations for AI Interactions with Natural Persons

Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. When implementing such obligation, the characteristics of individuals belonging to vulnerable groups due to their age or disability should be taken into account to the extent the AI system is intended to interact with those groups as well. Moreover, natural persons should be notified when they are exposed to systems that, by processing their biometric data, can identify or infer the emotions or intentions of those persons or assign them to specific categories. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and interests, or to other aspects such as sexual or political orientation. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. Compliance with the information obligations referred to above should not be interpreted as indicating that the use of the system or its output is lawful under this Regulation or other Union and Member State law, and should be without prejudice to other transparency obligations for users of AI systems laid down in Union or national law. Furthermore, it should also not be interpreted as indicating that the use of the system or its output impedes the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, in particular where the content is part of an evidently creative, satirical, artistic or fictional work or programme, subject to appropriate safeguards for the rights and freedoms of third parties.
