AI System is defined in the EU AI Act (Article 3(1)) as a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Artificial Intelligence (AI): the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity by analysing its environment and taking action – with some degree of autonomy – to achieve specific goals.
Certain AI system: Specific artificial intelligence applications identified by regulatory authorities that are subject to additional scrutiny, due to their unique characteristics, significant impacts, or potential risks, in order to mitigate potential harm.
Deployer is defined in the EU AI Act (Article 3(4)) and means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
EU AI Act means Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
General-Purpose Artificial Intelligence (GPAI): Artificial Intelligence systems designed to execute a range of tasks without the need for specialisation or customisation between varying tasks.
GPAI Model is defined in the EU AI Act (Article 3(63)) as an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. The definition explicitly excludes AI models that are used for research, development or prototyping activities before they are placed on the market.
General-Purpose AI with systemic risk: Artificial intelligence systems designed to execute a range of tasks without the need for specialisation or customisation between varying tasks, and which pose significant threats of causing widespread disruption or harm across multiple sectors, leading to adverse effects on societal, economic, or operational stability. GPAI with systemic risk is subject to stricter compliance obligations.
Generative AI: Artificial intelligence systems capable of creating new content such as text, imagery, or audio, often using existing information and data as input.
High risk system: Systems that negatively affect safety or fundamental rights. These fall into two categories: AI used in products falling under the scope of EU product safety legislation, and AI systems in specific areas that must be registered in an EU database (critical infrastructure; education and vocational training; employment; essential private services and public services and benefits; law enforcement; migration; legal assistance).
Prohibited: Under the Act's risk-based approach, systems deemed to pose an 'unacceptable risk' because the nature of their use contravenes European Union values, for example by violating fundamental rights. Such systems are banned.
Provider is defined in the EU AI Act (Article 3(3)) and means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Systemic risk: The potential for AI systems to cause widespread disruption or harm across multiple sectors leading to adverse effects on societal, economic, or operational stability.
Unacceptable risk: Systems that are considered a threat to people and society. This includes cognitive behavioural manipulation, social scoring, biometric categorisation based on sensitive characteristics, and real-time remote biometric identification in publicly accessible spaces. Exceptions have been made for specific law enforcement cases.