

Removed on May 8th 2024 based on the version and article numbering in the EU Parliament's 'Corrigendum' version dated April 19th 2024.

[Previous version]

Removed on April 10th 2024 based on the version and article numbering approved by the EU Parliament on March 13th 2024.

Updated on Feb 6th 2024 based on the version endorsed by Coreper I on Feb 2nd 2024.

It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose AI models with systemic risk. Since systemic risks result from particularly high capabilities, a general-purpose AI model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, or a significant impact on the internal market due to its reach. High-impact capabilities in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. The full range of capabilities in a model could be better understood after its release on the market or when users interact with the model. According to the state of the art at the time of entry into force of this Regulation, the cumulative amount of compute used for the training of the general-purpose AI model, measured in floating point operations (FLOPs), is one of the relevant approximations for model capabilities. The amount of compute used for training cumulates the compute used across the activities and methods that are intended to enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of FLOPs should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be supplemented with benchmarks and indicators for model capability. To inform this, the AI Office should engage with the scientific community, industry, civil society and other experts.
Thresholds, as well as tools and benchmarks for the assessment of high-impact capabilities, should be strong predictors of the generality of general-purpose AI models, their capabilities and the associated systemic risk, and could take into account the way the model will be placed on the market or the number of users it may affect. To complement this system, there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such a model has capabilities or impact equivalent to those captured by the set threshold. This decision should be taken on the basis of an overall assessment of the criteria set out in Annex YY, such as the quality or size of the training data set, the number of business and end users, its input and output modalities, its degree of autonomy and scalability, or the tools it has access to. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk, the Commission should take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks.
