Key Issues

Risk-Based Approach

The AI Act promises a proportionate risk-based approach that imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. Targeting specific sectors and applications, the regulation has shifted from the binary low-risk vs. high-risk framework proposed in the Commission’s White Paper on AI to a four-tiered framework that classifies risk into four categories: ‘unacceptable risks’, which lead to prohibited practices; ‘high risks’, which trigger a set of stringent obligations, including a conformity assessment; ‘limited risks’, which carry transparency obligations; and ‘minimal risks’, for which stakeholders are encouraged to develop codes of conduct. These categories apply irrespective of whether providers are established in the EU or in a third country. Which systems are considered to pose high or unacceptable levels of risk (for example, systems used for social scoring, or those that interact with children in the context of personal development or education, respectively) will be one key issue under consideration by the Parliament and the Council. The hope is that this approach will limit regulatory oversight to sensitive AI systems only, resulting in fewer restrictions on the trade and use of AI within the single market.

There is broad global consensus in support of a risk-based approach to AI regulation. In the U.S., the National Institute of Standards and Technology has developed an Artificial Intelligence Risk Management Framework that could facilitate alignment on approaches to identifying and assessing risk. However, concerns have been raised that some applications could fall through the cracks of the EU AI Act’s classification between low and unacceptable risk. Of particular concern is that the criteria for determining whether an AI system poses an unacceptable risk are unclear. For example, the inclusion of systems that manipulate people through subliminal techniques appears intuitive, but in practice it is unclear how harm is defined and which applications may be subject to prohibition. The threshold for manipulation may therefore have to be clarified through future interpretive guidance, alongside the scope of existing provisions such as the General Data Protection Regulation (GDPR) and consumer protection legislation.

A similar set of considerations applies to the definition of high-risk applications of AI under the EU AI Act. The list and criteria proposed in the latest text have been subject to lively debate, especially given that the list, originally described by the Commission as potentially covering only a small subset of future AI systems on the market (around 5 to 10%), keeps expanding. This is well illustrated by a proposal in the draft report of two European Parliament committees to qualify any AI system used in insurance and medical triage, as well as AI systems that interact with children or affect democratic processes, as high-risk. It is important that the Proposal addresses this gap in the requirements for high-risk AI systems, especially when accounting for data protection requirements.

Further, Article 35 of the GDPR contains rules for performing a Data Protection Impact Assessment (DPIA), and paragraph 3 sets out the cases in which a DPIA is required. It should therefore be clarified whether classifying an AI system as high-risk under the AI Act would, by default, also render it high-risk under the GDPR, and what legal consequences this would have. Although the regulation has not yet been passed, there is a pressing need for this categorisation to be made clear as soon as possible, so that businesses can anticipate the level of regulation that will apply to their systems and adapt their planning for the coming years.

Moreover, in the Commission proposal, most of the transparency measures envisioned for high-risk AI systems – notably in relation to their registration in the EU-wide high-risk AI systems database – apply to the developers of these systems, not to the actors deploying them. Stakeholders have argued that there should be greater transparency in relation to deployers. Indeed, a recent Mozilla report argues that ‘deployers must therefore be obligated to disclose AI systems they use for high-risk use cases and provide meaningful information on the exact purpose for which these high-risk AI systems are used as well as the context of deployment’.

Once the AI Act is passed, the European Commission will have the power to amend the list of high-risk AI systems accompanying its proposal. However, additions to this list can only be made within eight pre-defined high-risk areas. Leaving room for adjustment is important, as AI is being developed and deployed rapidly across an increasing number of sectors and use cases, and unknown and unanticipated risks will inevitably arise.

Perhaps the most significant gap is that the AI Act does not consider the risk arising from interactions between multiple AI systems. For example, several AI systems with individually low-risk profiles could end up interacting and generating significant risks for individuals and society as a whole. These so-called interactive risks of AI are for now excluded from the scope of the Act and might instead be tackled by the proposed EU legislative initiative on AI liability. However, the AI Act is focused entirely on a linear risk-based approach, with only an isolated discussion of the role played by the citizens who consume such services. As a result, the proposal has been criticised, with some advocating for greater consideration of fundamental rights, including through human rights impact assessments.
