Transparency Obligations

Introduction

Transparency is a key priority of the EU AI Act because it enables citizens to understand how AI systems are designed and used, and it enables accountability for decisions made by companies and public authorities. Transparency is also essential for creating public trust in AI systems and ensuring their responsible deployment. To meet these aims, the EU AI Act mandates the disclosure of certain information to individuals and to the public. However, implementing these obligations can prove difficult in practice, not least because of the ambiguities discussed below.

The EU AI Act contains several provisions, each governing a different aspect of transparency for a different category of AI system.

Transparency as a principle applicable to all AI systems

To begin with, the latest version of the EU AI Act, the Parliament position adopted on 14 June 2023, lists transparency among the general principles applicable to all AI systems under Article 4a and stipulates that, for the purposes of this provision, transparency means that “AI systems shall be developed and used in a way that allows appropriate traceability and explainability while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights”.

Transparency within the meaning of this provision relates to the technical infrastructure of AI systems. Transparency obligations in this sense require AI-based systems to be transparent in their functioning so that users can understand how decisions are made and the logic behind them. This includes providing an explanation of how an AI system arrived at its decisions, as well as information on the data used to train the system and on the system’s accuracy.

Despite the wording suggesting that this principle applies to all AI systems, the EU AI Act itself provides that, for high-risk AI systems, this principle is given effect through the special transparency and provision-of-information requirements under Article 13. The EU AI Act also states that the application of this principle may be established through Articles 28 and 52, the application of harmonised standards, or codes of conduct.

Transparency and provision of information as a high-risk AI system requirement

Article 13 of the EU AI Act sets out the requirement of transparency and provision of information for high-risk AI systems, according to which “high-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system’s functioning”.

In essence, this provision mandates that high-risk AI systems be designed and developed in such a way that their operation is sufficiently transparent for users to interpret the system’s output and use it appropriately. This is supplemented by the requirement for appropriate human-machine interface tools to enable oversight, provided under Article 14.

Transparency obligations for certain AI systems

Another reference to transparency within the EU AI Act is the set of transparency-focused obligations for certain AI systems under Article 52. These obligations are not exclusive to high-risk AI systems but apply to any AI system that falls within one of the groups listed in Article 52, according to which:

  • for AI systems intended to interact with natural persons, the natural persons exposed to the system must be informed that they are interacting with an AI system,
  • for emotion recognition or biometric categorisation systems, the natural persons exposed to the system must be informed and asked for their consent prior to the processing of their biometric data, and
  • for deep fake-generating systems, it must be disclosed in a clearly visible manner that the content has been artificially generated or manipulated.

All this information must be provided to the exposed natural persons at the latest at the time of their first interaction with or exposure to the system.
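To make this timing requirement concrete, the following is a minimal Python sketch of how a provider of a conversational system might surface the AI-interaction notice no later than the first interaction. All class and method names are hypothetical: the Act prescribes the disclosure itself, not any particular implementation.

class DisclosingChatSession:
    """Wraps a hypothetical AI chat backend so that the AI-interaction
    notice is delivered before any model output reaches the user."""

    DISCLOSURE = "Notice: you are interacting with an AI system, not a human."

    def __init__(self, backend):
        self._backend = backend      # any object with a reply(str) -> str method
        self._disclosed = False

    def send(self, user_message: str) -> list[str]:
        messages = []
        if not self._disclosed:
            # Disclose at the latest at the time of first interaction.
            messages.append(self.DISCLOSURE)
            self._disclosed = True
        messages.append(self._backend.reply(user_message))
        return messages

class EchoBackend:
    """Stand-in for a real model; illustrative only."""
    def reply(self, text: str) -> str:
        return f"(model output for: {text})"

session = DisclosingChatSession(EchoBackend())
print(session.send("Hello"))   # the first turn carries the disclosure
print(session.send("Thanks"))  # later turns do not repeat it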

Transparency obligations of the providers of foundation models

The Parliament’s position introduced the concept of a “foundation model” into the EU AI Act and set out additional transparency-related requirements, particularly for providers of foundation models specifically intended to generate content, which the text also refers to as “generative AI”.

Pursuant to Article 28b of the EU AI Act, providers of generative foundation models, or of generative AI systems built by specialising such a foundation model, must:

  • comply with the transparency obligations under Article 52,
  • take the necessary measures to prevent the generation of content violating Union law, and
  • document and make a summary of the use of copyrighted training data publicly available.
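As an illustration of the third obligation, a provider might publish a machine-readable summary of the copyrighted material in its training data. The Python sketch below uses an entirely hypothetical JSON structure: the Act requires a publicly available summary but does not prescribe any format.

import json

# Hypothetical summary of copyrighted works used in training; the field
# names and level of granularity are assumptions, not requirements of the Act.
training_data_summary = {
    "model": "example-foundation-model-v1",
    "copyrighted_sources": [
        {"source": "licensed news archive", "rights_basis": "commercial licence"},
        {"source": "stock photography corpus", "rights_basis": "content partnership"},
    ],
}

# Write the summary to a file that could then be made publicly available.
with open("training_data_summary.json", "w") as f:
    json.dump(training_data_summary, f, indent=2)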

The interplay between different transparency-related provisions

The transparency requirements under the EU AI Act for a given AI system vary depending on the system’s risk level. The applicable rules must be identified on a case-by-case basis, after a meticulous examination that takes the specific circumstances into account. Leaving aside harmonised standards and codes of conduct, however, the identification procedure may be summarised as follows:

[Flowchart: identifying the transparency provisions applicable to a given AI system]
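The same procedure can be approximated in code. The Python sketch below expresses, as simple rules, the mapping described in this article between a system’s characteristics and the transparency provisions it engages; the field names are hypothetical, and a real assessment would of course be a legal rather than a programmatic exercise.

def applicable_transparency_provisions(system: dict) -> list[str]:
    """Map a system's (hypothetical) characteristics to the transparency
    provisions discussed in this article."""
    # Article 4a applies to all AI systems.
    provisions = ["Article 4a (general transparency principle)"]

    if system.get("high_risk"):
        provisions.append("Article 13 (transparency and provision of information)")

    if (
        system.get("interacts_with_natural_persons")
        or system.get("emotion_recognition_or_biometric_categorisation")
        or system.get("generates_deep_fakes")
        # Article 28b in turn requires generative models to comply with Article 52.
        or system.get("generative_foundation_model")
    ):
        provisions.append("Article 52 (transparency obligations for certain AI systems)")

    if system.get("generative_foundation_model"):
        provisions.append("Article 28b (obligations of foundation model providers)")

    return provisions

print(applicable_transparency_provisions(
    {"high_risk": True, "interacts_with_natural_persons": True}
))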

Transparency in data governance

The EU AI Act also provides transparency requirements in data governance. According to Article 10, high-risk AI systems must implement data governance measures concerning certain issues, one of them being “transparency as regards the original purpose of data collection”. This means that where the data within an AI system’s training, validation, or testing sets was initially collected for another purpose, that original purpose must be made transparent.
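A provider could operationalise this with a simple provenance record per dataset. The schema below is a hypothetical Python sketch of one way to keep the original purpose of collection traceable; the Act does not mandate any particular data structure.

from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetProvenance:
    dataset_id: str
    split: str                # "training", "validation", or "testing"
    original_purpose: str     # purpose for which the data was first collected
    current_purpose: str      # purpose within this AI system

records = [
    DatasetProvenance(
        dataset_id="customer-support-logs-2021",
        split="training",
        original_purpose="customer service quality review",
        current_purpose="fine-tuning an intent classifier",
    ),
]

for r in records:
    if r.original_purpose != r.current_purpose:
        print(f"{r.dataset_id}: repurposed data, original purpose must be transparent")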

The database as a systematic transparency measure

The AI Act includes a potentially powerful mechanism for ensuring systematic transparency: a public database for high-risk AI systems, created and maintained by the Commission. This could be an important tool for providing a centralised resource to explore and critique high-risk AI systems in the EU without imposing an inordinate burden on companies. The obligation to register high-risk AI systems applies both to providers and to certain deployers of high-risk systems, as well as to providers of foundation models.

The lack of clarity

In addition to the above, it must be emphasised that the draft legislation is silent as to the level of transparency and interpretability that will be required of AI systems. There is also a lack of consensus on the meaning of interpretability, and a lack of clarity on how exactly the provision of information will enable it.
