The EU AI Act – sweeping regulation brings opportunity and challenge
Aug 02, 2024
The European Union’s new Artificial Intelligence Act brings opportunities for businesses but will not be without challenge, writes Keith Power
Just seven percent of Irish businesses currently have governance structures in place for artificial intelligence (AI) or generative AI (GenAI).
Despite this, the overwhelming majority (91%) believe that GenAI will increase cybersecurity risks in the year ahead. This is according to PwC’s latest GenAI Business Leaders survey, published in June 2024.
The European Union’s Artificial Intelligence Act (EU AI Act) is a sweeping new regulation aimed at ensuring that businesses have the appropriate AI governance and control mechanisms in place to deliver safe and secure outcomes.
Indeed, a large majority (84%) of our survey respondents welcomed the introduction of the EU AI Act, saying regulation is necessary to prevent the potential negative impact of AI in the future.
The new EU AI Act will also bring challenges, however. Its aim is to protect businesses, consumers and citizens in the EU from the potential risks AI poses to health, safety, fundamental rights, democracy, the rule of law and the environment.
By introducing standards and providing legal certainty, the Act also seeks to foster innovation, growth and competitiveness in the EU’s internal market.
It is the EU’s first comprehensive legal framework for AI and will level the playing field for businesses using the technology.
The Act adopts a risk-based approach, with its most demanding compliance requirements applying to “high-risk” AI systems.
These requirements include addressing data governance concerns, mitigating bias, ensuring transparency and implementing a system of quality management.
The Act also requires that users be informed when they are interacting with chatbots, and that any AI-generated content be clearly identifiable as such.
Several compliance risks are specific to the EU AI Act, including the failure to identify all uses of AI across a business and the potential for inaccurate risk classification of those uses.
The Act also obliges organisations to assess all of their AI use cases. This may prove an onerous and time-consuming task given how dispersed the use of AI is within many companies.
The risk of misclassification is also high, as risk classifications may change as an organisation’s use of AI evolves over time.
This necessitates appropriate ongoing governance and control procedures to maintain compliance, which brings its own challenges.
There is also a risk that the focus on compliance may lead to a drag on innovation.
The nuanced nature of some of the language used in the Act, coupled with risk classifications and role designations being subject to change, may prove problematic for some organisations.
The use of AI systems by third parties acting on behalf of organisations may also add a degree of complexity.
Irish businesses have much to consider to ensure they will be compliant with the new EU AI Act.
It will bring competitive opportunities, but complying with the new regulations will be a complex process.
Keith Power is a Partner with PwC Ireland
*Disclaimer: The views expressed in this column published in the August/September 2024 issue of Accountancy Ireland are the author’s own. The views of contributors to Accountancy Ireland may differ from official Institute policies and do not reflect the views of Chartered Accountants Ireland, its Council, its committees, or the editor.