David Codd delves into strategies for effectively integrating AI into the workplace, ensuring both technological advantage and operational safety
Artificial intelligence (AI) systems offer immense potential, but they can also introduce new and significant risks.
When considering integrating AI into the workplace, there will be elements that do not typically feature in other IT investment proposals.
Equally, commercial realities may be obscured by the excitement arising from the eye-catching power of this technology and its potential to cut out so much work.
Due diligence and risk management should be to the fore, especially when considering new AI technologies.
So, what should those in governance and finance teams look out for?
Should we move faster and invest more right now?
The cost reductions that AI can enable in many situations are transformative. So, if the business is efficient and can handle change effectively, pushing the pace could stretch its lead over competitors.
Many proposals will envisage a cautious, phased roll-out because AI represents unknown territory and is expensive.
The key questions to consider are whether you should take on more risk to achieve a quicker roll-out, and whether doing so offers a chance to grow and capture significant cost savings.
Will customer service improvements result in increased market share?
AI systems can improve service quality and speed in the short to medium term. However, while your proprietary data is your own, the technology itself is widely available to those who can afford it, meaning it is unlikely to underpin a unique long-term competitive advantage.
Recent business cases claiming increased market share arising from the roll-out of an AI solution should be treated with scepticism. They may really be “me too” projects.
Nevertheless, AI investment might still be needed just to keep pace and retain share.
How much change does our operating model need and is the cost understood?
Most business cases will include the obvious costs arising from a technology-enabled process change. However, other substantial and costly business changes may be necessary – mature data classification and quality control, for example.
Expertise will be needed to carry out tasks such as message auditing and defining and implementing guardrails on an ongoing basis, to prevent bias creeping in through “data drift”.
This expertise can be expensive, and the associated costs should be built into project planning.
Do we understand the risks and when will we be ready to mitigate and control them?
Responsible AI is not simply a question of steering away from the deployment of high-risk systems as defined by the European Union’s Artificial Intelligence Act.
AI brings privacy, explainability and bias risks which are exacerbated by the plausibility of the output of large language models.
Risk governance is not merely an extension of current practices. Early use cases can present challenges while risk governance is recalibrated.
This can slow projects down and the timing of realising benefits in proposals should take account of this risk.
Understanding all the risks
Business cases should reflect the fact that AI is different to previous technologies in terms of potential, risk and operational impact.
Those in governance and finance teams can make a valuable contribution by ensuring the full implications are reflected in investment proposals.
David Codd is an Independent Non-executive Director and Transformation Advisor