Introduction
The present enthusiasm for AI is rational, writes Emmet Kelly, but it needs to be balanced with proper governance, legal compliance, risk management, and human responsibility.
Irish businesses are embracing artificial intelligence (AI) with enthusiasm. Across professional services, finance, retail, manufacturing and logistics, as well as the public sector, there is a widespread perception that AI is a decisive productivity tool, capable of accelerating analysis, improving decision-making, and reducing operational friction.
Most companies do not ‘adopt AI’ as a single initiative. Instead, they accumulate multiple AI systems over time – some explicit, some embedded, some user-driven – each with distinct risk profiles, data dependencies, and governance requirements. AI is not a single tool, nor a discrete system that can be cleanly ‘adopted’ or ‘switched on’. It is now a pervasive layer embedded across the software stack, the internet, and the everyday tools used in the world of work.
The principal challenge for companies, therefore, is not technical capability but coherence: understanding where AI is present, what role it plays in decision-making, and how accountability is maintained across this fragmented landscape.
The enthusiasm for AI is rational. AI systems continually demonstrate an impressive ability to summarise complex information, generate plausible text, detect patterns at scale, and automate tasks that previously required substantial human effort.
However, this enthusiasm and use frequently outpace a realistic appreciation of the complexity, opacity, and governance challenges that accompany AI deployment. For the accounting profession in particular – for which judgement, verification and accountability remain foundational – this imbalance presents material risks.
This article explores how Irish businesses are encountering AI in practice, why its complexity is often underestimated, and what recent research conducted by Amárach reveals about readiness among SMEs. Effective governance will require not only regulatory compliance, but a clearer understanding of the optimum relationship between humans and machines, one that preserves responsibility rather than attempting to outsource it.
The illusion of simplicity
For many organisations, the first encounter with AI is through publicly accessible systems such as OpenAI’s ChatGPT, Claude by Anthropic, or Gemini from Google. These large language models (LLMs) present AI as conversational, accessible, and apparently intuitive. Users ask a question, using ‘prompts’, and receive a fluent, structured response that appears to be a single, confident synthesis of vast amounts of underlying information.
This interaction, or ‘chat’, creates a powerful illusion of simplicity. The complexity of the system, the scale of its training data, the probabilistic nature of the outputs, the constraints imposed by prompts, and the absence of any true understanding remain largely invisible. The AI does not understand anything: it matches the words, numbers and context of the chat against whatever best fits the data it references. What appears to be a dialogue is a statistical process that predicts likely continuations of text based on patterns in data. The capacity to understand and interpret the quality and value of the AI response, or output, remains the responsibility of the human user.
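To make the point concrete, consider a deliberately simplified sketch, using invented example text, of how a continuation can be chosen purely from frequency counts. Real LLMs are vastly more sophisticated, but the underlying principle illustrated here is the same: the output is the statistically likely continuation, not a verified statement of fact.

```python
# A deliberately tiny illustration of "prediction, not understanding":
# the continuation is chosen purely from observed frequencies in past text.
from collections import Counter, defaultdict

corpus = (
    "the audit was completed on time . "
    "the audit was delayed by a week . "
    "the audit was completed on time ."
).split()

# Count which word tends to follow each word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt_word: str, length: int = 5) -> str:
    """Extend the prompt by repeatedly picking the most frequent next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("audit"))
# -> "audit was completed on time ." : fluent-looking, but chosen only
#    because that continuation was most frequent, not because it is true.
```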
For professionals accustomed to contextual awareness, critical analysis and judgement, this distinction matters. LLMs do not ‘know’ when nuance is missing, when assumptions are incorrect, or when an answer is incomplete. They summarise, average, and generalise. In doing so, they may lose minority positions, edge cases, and context-specific considerations that are often critical in accounting, audit, tax, and governance work.
AI in office and productivity software
Beyond these publicly visible systems, AI is now embedded in many workplace productivity tools. Platforms provided by Microsoft, Google, and Apple increasingly incorporate AI-driven features: document drafting, spreadsheet analysis, email prioritisation, meeting summarisation, and even predictions of where trends in numeric data are likely to lead over time.
Because these capabilities are integrated into familiar software, they are often perceived as incremental enhancements rather than as AI systems per se. Yet the AI governance implications still apply. For example, automated summarisation of discussions may omit critical qualifications or nuances. Predictive suggestions may reinforce historical biases. Decision-support features may subtly shape professional judgement without being formally recognised as decision-making inputs.
The risk here is not malicious intent, but unexamined reliance. When AI-generated outputs are treated as neutral or authoritative simply because they are embedded within trusted tools, accountability can become blurred.
AI in enterprise systems
AI’s influence extends further into enterprise systems that underpin organisational operations. Enterprise resource planning (ERP), customer relationship management (CRM), and accounting platforms from providers such as SAP, Salesforce and Sage now deploy AI for forecasting, anomaly detection, credit assessment, inventory optimisation, and workflow prioritisation.
In these contexts, AI outputs can directly influence financial reporting, risk classification, and operational decisions. Yet they are often treated as system features rather than as models with assumptions, limitations, and potential points of failure. For accountants and finance leaders, this raises critical questions: Who validates these models? How are errors detected? What documentation exists? And how does professional responsibility apply when an AI-driven recommendation is followed?
In-house AI models
Larger organisations increasingly develop AI models ‘in-house’, using proprietary data to support functions such as fraud detection, credit-risk assessment, demand forecasting, and operational optimisation. These systems may be customised and powerful, and can confer competitive advantage, but they also concentrate risk.
In-house AI models depend entirely on the quality, scope, and representativeness of the data used to train them. They may reflect historical practices that are no longer appropriate, or embed organisational biases. Without robust governance, policies, procedures, documentation, and ongoing monitoring, such systems can quickly drift away from legal compliance and ethical acceptability.
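One practical monitoring control is to check, on a regular basis, whether the data the model sees in production still resembles the data it was trained on. The sketch below is illustrative only: the feature, figures and 0.2 alert threshold are invented assumptions, and a real control would be agreed with the model owner and recorded in the risk register.

```python
# A minimal sketch of one ongoing-monitoring control for an in-house model:
# compare the distribution of a live input feature against the training data
# and raise an alert when it drifts beyond an agreed threshold.
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """Crude PSI: how far the live distribution has moved from training."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log-of-zero problems for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Invented example data: invoice amounts at training time vs. in production today.
training_values = np.random.default_rng(1).normal(50_000, 10_000, 5_000)
live_values = np.random.default_rng(2).normal(65_000, 12_000, 1_000)

psi = population_stability_index(training_values, live_values)
if psi > 0.2:  # a common rule-of-thumb threshold for material drift
    print(f"ALERT: input drift detected (PSI={psi:.2f}) - schedule model review")
```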
Regulation: the EU AI Act in context
The European Union (EU) has sought to address these risks through the EU AI Act, which introduces a risk-based framework for AI governance. The EU AI Act emphasises transparency, human oversight, data quality, and accountability, principles that align closely with professional standards in accounting, auditing and assurance.
However, regulation alone cannot resolve the underlying issues, as AI is no longer confined to discrete, easily identifiable systems. It pervades software, services, and information flows within an organisation, and often beyond. Organisations may be using dozens of AI-enabled tools without explicitly recognising them as such. Compliance, therefore, cannot be treated as a one-off assessment; it must become an ongoing capability.
Are Irish businesses AI-ready?
Recent AI-readiness research conducted by Amárach Research in collaboration with InstaComply provides a clear picture of this structural gap. The findings indicate strong enthusiasm for AI adoption among Irish SMEs, many of which are deploying AI at speed, alongside significant weaknesses in governance readiness. While many companies are experimenting with and deploying AI, far fewer have established the clear ownership, policies, controls, and governance structures required to manage these systems safely, transparently, and in compliance with existing and emerging regulation.
For example, only 37% have appointed a policy owner responsible for AI and data governance, while just 32% maintain risk registers that include AI-related risks. More than one-third have none of the basic structures that the EU AI Act will expect businesses to maintain.
These findings do not show a failure of intent, but a structural gap. Use of AI is moving from an experimental phase to the operational core, yet the governance mechanisms needed to control it remain underdeveloped. The EU AI Act is not simply another compliance obligation – it requires a fundamental shift in how organisations must design, monitor, and document their automated systems.
Taking responsibility
Arguably, and as can be seen from our research findings, Irish businesses are engaging in “conversations with machines”, most often using LLMs, without fully understanding the mechanisms underlying the ‘conversation’ or the operations and quality of the machine with which the user is conversing. LLMs respond blindly based on the level, quality, and structure of the data that informs them. They do not challenge objectives, interrogate ethical implications, or assume responsibility for outcomes.
Where complexity is poorly understood, responses tend to polarise. Some users may become distrustful, focusing on AI’s errors and limitations and rejecting its utility. Others may move in the opposite direction, treating AI outputs as authoritative and implicitly transferring responsibility to the system.
Both kinds of behaviour present problems and risk. AI does not absolve individuals or organisations of responsibility, nor should it be dismissed as inherently unreliable. A useful analogy is that of tools in the physical world. The saying, “a bad workman blames his tools”, holds true for the use of AI, and a driver should not blame their car for their negligent driving. The workman remains bad, the driver negligent, and the tools and machines, just that: tools and machines. Responsibility remains with the human agent and the organisation deploying the tools.
A new RACI model for human–AI collaboration
What is required is a new articulation of responsibility, effectively, a new ‘RACI’ model that clarifies who is responsible, accountable, consulted, and informed when AI systems are used. This approach reflects a broader shift: compliance must move from static documentation to dynamic, operational governance, which embeds EU AI Act requirements, such as data quality, traceability, and human oversight, directly into development and operational processes as code.
Human-in-the-loop approaches are not merely a regulatory preference; they are a practical necessity for maintaining standards, managing risks, and sustaining businesses at the frontier of a rapidly changing AI landscape brimming with possibilities.
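What ‘governance as code’ and a human-in-the-loop gate might look like in practice is sketched below. The roles, risk threshold and decision flow are illustrative assumptions for a hypothetical internal workflow, not a prescribed EU AI Act implementation; the point is that the RACI assignment and the human sign-off are recorded alongside the AI output, so accountability and traceability are built in rather than bolted on.

```python
# A sketch of governance as code: a RACI record attached to an AI-assisted
# decision, plus a human-in-the-loop gate that holds higher-risk outputs
# for review. Roles, thresholds and field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    system: str
    output: str
    risk_score: float                      # e.g. model confidence or impact rating
    responsible: str = "finance analyst"   # R: reviews and acts on the output
    accountable: str = "head of finance"   # A: owns the outcome
    consulted: str = "AI governance lead"  # C: advises on model use and limits
    informed: str = "audit committee"      # I: kept up to date
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_approved: bool = False           # traceability: has a person signed off?

def apply_human_oversight(record: AIDecisionRecord, risk_threshold: float = 0.5) -> str:
    """Route higher-risk AI outputs to the responsible human before any action."""
    if record.risk_score >= risk_threshold and not record.human_approved:
        return (f"HOLD: review required by {record.responsible} "
                f"(accountable: {record.accountable})")
    return "PROCEED: output may be actioned; record retained for audit"

record = AIDecisionRecord(system="credit-risk model v2",
                          output="decline application",
                          risk_score=0.8)
print(apply_human_oversight(record))  # -> HOLD: review required by finance analyst ...
```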
Conclusion: learning to drive the machine
AI represents an extraordinary technological advance, and Irish businesses are right to explore its potential. But power without understanding presents risks. The accountancy profession, with its long-standing emphasis on judgement, accountability, and assurance, is well placed to lead a more mature, responsible and strategy-led engagement with AI.
The challenge is not to slow innovation, but to learn to ‘drive’ these machines responsibly – within the limits of the law, ethics, business sense and professional judgement. AI is a tool, not an actor. Recognising that distinction will be central to protecting customers, clients, organisations, and public trust in the years ahead.
Emmet Kelly is an AI data governance and compliance expert, and CEO of InstaComply, which empowers organisations to navigate regulatory complexity with smart automation.