As European lawmakers reach provisional agreement on the final text of the EU Artificial Intelligence Act, Jackie Hennessy and Dani Michaux delve into the potential risks businesses may face
In December 2023, European lawmakers announced a provisional agreement on the final text of a new Artificial Intelligence Act (AI Act), giving developers and users of AI systems the first chance to consider in detail what the proposed new framework could mean for them.
Businesses are now in a position to consider the role AI plays in their organisation and how to mitigate the potential risks arising from this new legislation.
What is an AI system?
According to the Act, an AI system is a “machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as content, predictions, recommendations, or decisions that can influence physical or virtual environments”.
Why do we need this Act?
The AI Act distinguishes between the following categories of AI system:
- Unacceptable-risk AI systems are prohibited outright by the Act. Prohibited practices include systems that target vulnerable people or groups of persons, systems that materially distort a person’s behaviour, the use of biometric categorisation and identification systems, and systems that classify natural persons in ways that lead to unjustified detrimental treatment.
- High-risk AI systems are those that, based on their intended purpose, pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence.
- General Purpose AI (GPAI) systems display significant generality and can competently perform a wide range of distinct tasks, regardless of how the model is placed on the market. They can be integrated into a variety of downstream systems and applications.
The Act is intended to ensure better conditions for the development and use of AI and is a pillar of the EU’s digital strategy. Furthermore, the Act takes aim at the emerging issue of ‘deepfake’ technology.
Deepfakes are defined as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful”.
The Act places a requirement on deployers of this technology to disclose that the content has been artificially generated or manipulated.
Who will the Act affect?
The Act will affect both developers and deployers of AI systems and will legislate for the following:
- Human oversight measures for high-risk AI systems;
- Obligations for organisations planning to deploy AI in the workplace;
- Testing of AI systems in real-world conditions; and
- Codes of practice to help providers of General Purpose AI systems comply with their obligations under the regulation.
The Act represents a major overhaul for businesses developing or deploying AI systems. Businesses that do either should consider how their AI systems can be brought into compliance with the EU AI Act and what impact this might have on the business and its operational performance.
Jackie Hennessy is the Risk Consulting Partner at KPMG
Dani Michaux is EMA Cyber Leader at KPMG International