With the EU AI Act now in effect, businesses must navigate its regulatory challenges while unlocking AI's potential. Keith Power outlines three key strategies to ensure compliance and drive innovation
With the introduction of the EU Artificial Intelligence Act, roughly three years after its first draft, organisations now face the challenge of understanding the business impact of the new regulation and determining the appropriate measures to take.
Adding to this dynamic, most organisations are weighing the risk and compliance implications of AI at the same time as they are exploring its business potential. Here are three steps to address both challenges.
Step 1: Map your landscape of current and expected AI applications
- Top-down: Define current and foreseeable business opportunities and issues and compare these with the potential that generative AI technology offers.
- Bottom-up: Run a brainstorming session with appropriate representation from the relevant business functions to identify potential AI use cases. The key to a successful brainstorm is not to overthink it.
- Combine both categories of AI use cases and plot these against two dimensions:
- overall business impact; and
- implementation effort required.
- Highlight your ‘quick wins’ (high business impact, low implementation effort) and ‘high potentials’ (high business impact, high implementation effort) to get a strategic landscape of AI applications.
- Create an inventory of your current AI applications, both in use and in development, and add them to the strategic landscape. Don’t forget third-party applications.
The inventory should at least capture the following (a minimal record structure is sketched after this list):
- the purpose and intended use of each AI system;
- the data it uses;
- its core functionality/workings;
- the processes, functions and (in)direct stakeholders it affects; and
- a risk categorisation consistent with the EU AI Act (unacceptable, high, limited or minimal risk).
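To make this concrete, here is a minimal sketch of what one inventory entry could look like, combining the fields above with the impact/effort quadrant from the strategic landscape. The four risk tiers reflect the EU AI Act's risk-based approach; the field names, the 1-5 scoring scale and the example values are illustrative assumptions, not prescribed by the Act:

```python
# A minimal sketch of one AI inventory entry, not a definitive template.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Risk categories under the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # largely unregulated

@dataclass
class AISystemRecord:
    name: str
    purpose: str                      # purpose and intended use
    data_sources: list[str]           # the data it uses
    core_functionality: str           # how it works, in brief
    affected_stakeholders: list[str]  # processes, functions, (in)direct stakeholders
    risk_tier: RiskTier               # categorisation consistent with the AI Act
    business_impact: int              # 1 (low) to 5 (high)
    implementation_effort: int        # 1 (low) to 5 (high)
    third_party: bool = False         # don't forget vendor applications

    def quadrant(self, threshold: int = 3) -> str:
        """Place the system on the impact/effort grid from Step 1.

        The label for low-impact systems is an assumption; the article
        only names 'quick wins' and 'high potentials'.
        """
        if self.business_impact >= threshold:
            return ("high potential" if self.implementation_effort >= threshold
                    else "quick win")
        return "low priority"

# Example entry:
chatbot = AISystemRecord(
    name="Customer support chatbot",
    purpose="Answer routine customer questions",
    data_sources=["FAQ knowledge base", "product catalogue"],
    core_functionality="Retrieval-augmented generative model",
    affected_stakeholders=["customer service", "customers"],
    risk_tier=RiskTier.LIMITED,
    business_impact=4,
    implementation_effort=2,
    third_party=True,
)
print(chatbot.name, "->", chatbot.quadrant())  # Customer support chatbot -> quick win
```

Capturing entries in a structured form like this makes it straightforward to filter the inventory by risk tier for the regulatory impact analysis and by quadrant for the AI strategy.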
Result: A robust starting point for an AI strategy and a regulatory impact analysis.
Step 2: Raise awareness and upskill employees
For every job, function or role, the question is not whether AI will change it, but when.
The absence of an AI strategy is no reason to delay offering employees upskilling opportunities, or to put off creating a safe learning environment in which they can build skills in using AI and managing the risks of the technology.
The latter is especially important because employees can start working with generative AI on their own initiative.
Agile is the keyword here.
Applying the latest generation of AI technology is like learning to work with a new colleague – you have to spend time together to get attuned to each other.
What upskilling should focus on for now:
- Introduction to generative AI and its principles: This topic provides an overview of generative AI and explains its fundamental principles and applications. Employees will learn to understand the potential benefits and challenges associated with using generative AI.
- Responsible use of generative AI: This topic highlights the importance of responsible and ethical AI use. Employees will learn about risk considerations, including human impact, ethics, bias, fairness, privacy and transparency, in the context of AI applications and the consequences of their use. They will gain an understanding of the need to ensure that AI systems are developed and deployed in a responsible and accountable manner, in accordance with the new legal requirements under the AI Act.
- Prompt engineering: This topic focuses on the concept of prompt engineering, which involves designing effective prompts or instructions to direct the behaviour of a generative AI model. Employees will learn how to craft prompts that produce desired outputs while avoiding unintended biases or undesirable outcomes. They will gain an understanding of the significance of prompt engineering for achieving reliable and ethical AI results (illustrated below).
By covering these three key topics, organisations can provide employees with a comprehensive understanding of generative AI, responsible AI use, and the importance of prompt engineering for effective and ethical AI application.
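For the prompt engineering topic in particular, here is a minimal sketch of the difference it makes: the same request written first as a vague prompt and then as a structured one that states the role, task, constraints and output format explicitly. The wording and field names are illustrative assumptions, not a prescribed template:

```python
# A minimal illustration of prompt engineering: a vague prompt versus a
# structured one built from explicit, reviewable components. The template
# below is an illustrative assumption, not mandated by the AI Act.

VAGUE_PROMPT = "Summarise this customer complaint."

def build_structured_prompt(complaint_text: str) -> str:
    """Assemble a structured prompt from explicit components."""
    role = "You are a customer-service analyst at a regulated firm."
    task = "Summarise the complaint below in three bullet points."
    constraints = (
        "Do not include personal data such as names, addresses or account "
        "numbers; if the complaint mentions them, replace them with "
        "[REDACTED]. Do not speculate about facts that are not in the text."
    )
    output_format = "Return only the three bullet points, nothing else."
    return "\n\n".join(
        [role, task, constraints, output_format, f"Complaint:\n{complaint_text}"]
    )

if __name__ == "__main__":
    print(build_structured_prompt("Example complaint text goes here."))
```

Writing prompts as explicit components like this also makes it easier to review where bias, privacy or transparency issues could creep in, tying the prompt engineering topic back to responsible use.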
Result: A workforce equipped to execute the (future) AI strategy, to handle AI responsibly, and to shape, implement and comply with legal requirements.
Step 3: Implement responsible use guidelines
Responsible use of AI revolves around desired business conduct.
First, it requires awareness and clarity about what that conduct is; second, the ability to recognise the associated risks in practice and to respond to them effectively.
Organisations should establish simple but clear and workable responsible use guidelines. These guidelines address what should always be done and/or what should never happen (i.e. the ‘non-negotiables’) when it comes to the use of AI and data.
To determine the working principles for daily use, organisations can draw inspiration from ethical AI principles, such as transparency, accountability, human oversight, and societal and environmental well-being, as formulated in 2019 by the European Commission’s High-Level Expert Group on Artificial Intelligence. These principles provide broad guidance and usually need to be further operationalised to be workable in daily practice.
When developing these guidelines for the organisation, it is important to strike an appropriate balance between setting boundaries and leaving room for innovation.
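To illustrate how such non-negotiables might be operationalised, here is a minimal sketch that encodes a few example rules as a machine-checkable policy. The rules and field names are illustrative assumptions, not an official rule set from the AI Act or any organisation:

```python
# A minimal sketch of turning 'non-negotiables' into a machine-checkable
# policy. The example rules are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class AIUseRequest:
    """A proposed use of an AI tool, as an employee might describe it."""
    description: str
    uses_personal_data: bool
    output_reviewed_by_human: bool
    discloses_ai_to_users: bool

def check_non_negotiables(request: AIUseRequest) -> list[str]:
    """Return the list of violated non-negotiables (empty means allowed)."""
    violations = []
    if request.uses_personal_data:
        violations.append("Personal data must not be entered into the tool.")
    if not request.output_reviewed_by_human:
        violations.append("AI output must be reviewed by a human before use.")
    if not request.discloses_ai_to_users:
        violations.append("Users must be told when they interact with AI.")
    return violations

if __name__ == "__main__":
    req = AIUseRequest(
        description="Draft replies to customer emails with a chatbot",
        uses_personal_data=True,
        output_reviewed_by_human=True,
        discloses_ai_to_users=False,
    )
    for violation in check_non_negotiables(req):
        print("Blocked:", violation)
```

Even if the guidelines are never automated, writing them in this yes/no form is a useful test of whether they are simple, clear and workable enough for daily practice.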
Result: Clear criteria to guide the AI strategy and its execution, end-to-end through the organisational AI lifecycle.
Keith Power is Partner at PwC