Artificial intelligence is ushering in a new era of tech-enabled efficiency in many sectors, but its widespread adoption also throws up ethical dilemmas. Dr Susan McKeever digs into the details
Dr Susan McKeever is Head of Discipline for Data Science and Artificial Intelligence (AI) at Technological University Dublin’s School of Computer Science. Here, McKeever talks to Accountancy Ireland about the benefits AI is bringing to sectors reliant on data and how regulators, Chartered Accountants and other professions must ensure its ethical adoption as it continues to evolve at a rapid pace.
How is the emergence of AI impacting the world of accounting and other professions and sectors?
Any profession, function or industry reliant on large amounts of data and repetitive data-related tasks traditionally carried out by people will be impacted by the advent of AI, if it is not being impacted already.
These repetitive tasks might involve data entry, data assessment and the generation of reports and correspondence based on this data.
AI is very “friendly” to these kinds of tasks and well suited to taking them over. It is really good at getting to grips with large amounts of data, interpreting and analysing that data, and generating knowledge from it.
The medical sector is one example of an AI-friendly sector, as are the legal and insurance sectors. Accountancy is, in a sense, data-driven, but it uses a very specific kind of data that needs to be assessed and interpreted, so it is quite specialist.
You can train AI to do simple, repetitive, data-related tasks in accounting. It won’t get tired and it won’t forget what it has already learned.
You can continue to re-train AI as the world moves along, or as the situation changes, and it will continue to build on its existing knowledge and become more and more intelligent.
People are excited about the emergence of AI, but also fearful – is this fear well-founded?
One of the fears surrounding AI is the general concept that it will “take over” in certain fields.
I do believe that the widespread uptake of AI across industries will displace certain kinds of repetitive jobs further down the value chain – the kind of roles that can easily be automated.
The silver lining – and I do truly believe this – is that, as a result, we will see an uptick in higher-value roles.
If you take accountancy, we will likely see a shift away from the very granular, detail-driven examination of individual transactions, for example.
Instead, with AI gathering and analysing this data, the accountant will be able to focus on higher-value work, spotting interesting patterns or anomalies of immediate value to their organisation.
My advice to accountants, as with all professions, is to go with it. AI is here to stay.
ChatGPT really seeded the concept of AI in the public imagination. It is just one of the many large language models out there, but it happens to be the one that has really landed in the public consciousness.
You have all sorts of people already using ChatGPT to write letters, draft CVs and so on. Change is inevitable. The widespread use of AI is inevitable.
My advice to all professionals is to adapt and prepare. Re-train or upskill if you need to. Try not to resist it too much.
What else should we be concerned about when it comes to the widespread adoption of AI?
There is a fear out there that AI will start to make decisions we, as humans, used to own.
What is really important here – and this needs to be enshrined in legislation – is that, at all times, humans must be responsible for any decisions made.
So, while AI may be by your side, acting as an “intelligent” support to you in your work as an accountant, you – the human – must always be responsible for any decisions made.
Once you move away from this principle, you enter problematic territory. AI must be accountable to humans. People must maintain ownership of any and all decisions made, always.
We train AI based on existing data and data sets – does this carry its own risk?
Machine learning is the subset of AI in which models are trained using previous examples. It uses algorithms to interpret large amounts of data, and it learns from experience.
So, if you train a machine learning model to recognise suspicious transactions, for example, you might give it a dataset of 1,000 transactions in which 100 are suspicious.
The model will start to figure out the pattern of what makes a transaction suspicious, where a human might not have been able to decipher the “rules” underpinning these suspicious transactions.
If you train your model on 1,000 transactions, it might reach a certain level of accuracy. If you up this training to a larger dataset comprising 100,000 examples, your model will start to get really good at recognising the patterns in suspicious transactions.
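To make this concrete, here is a minimal sketch of what that training step can look like in practice. It uses Python with scikit-learn; the features, data and numbers are invented purely for illustration and are not drawn from any real accounting system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Hypothetical dataset: 1,000 transactions described by three numeric
# features (e.g. amount, hour of day, account age). Real data would
# carry genuine signal; random values here just keep the sketch short.
X = rng.random((1000, 3))

# Labels: 100 of the 1,000 transactions are marked suspicious (1).
y = np.array([1] * 100 + [0] * 900)
rng.shuffle(y)

# Hold out 20% of the examples to test the model on data it has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# The model learns the patterns separating suspicious from normal
# transactions from the labelled examples, not from hand-written rules.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# How well do the learned patterns generalise to unseen transactions?
print(classification_report(y_test, model.predict(X_test)))
```

The point about dataset size applies directly here: the same sketch trained on 100,000 labelled examples would typically pick up the underlying patterns far more reliably than one trained on 1,000.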
One issue with this kind of machine learning is bias. If you are training your AI algorithm on what has gone before, you are also embedding biases that have existed over time.
You are enshrining the world as it is, or was, in the training examples you use. You have to be very careful that you do this well.
Already, we have seen how the use of AI-driven CV evaluation systems has brought bias to the hiring process based on race, gender, age and other factors. It is something we need to be very aware of.
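For a sense of what checking for this kind of bias can involve, here is a minimal sketch of one common test: comparing the rate at which a screening model selects candidates from different groups. The groups, predictions and the “four-fifths” threshold below are illustrative assumptions, not details from the interview.

```python
import numpy as np

# Hypothetical outputs of a CV-screening model for 200 candidates:
# 1 = shortlisted, 0 = rejected, with a protected attribute per candidate.
predictions = np.array([1] * 30 + [0] * 70 + [1] * 15 + [0] * 85)
groups = np.array(["A"] * 100 + ["B"] * 100)

# Selection rate per group: the share of each group the model shortlists.
rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
for g, rate in rates.items():
    print(f"Group {g}: selection rate {rate:.2f}")

# One rough rule of thumb (the "four-fifths" rule): flag the model for
# review if one group's selection rate is below 80% of another's.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f} (below 0.80 warrants review)")
```

A check like this does not remove the bias, but it makes the pattern visible so the training data or the model can be corrected before it affects real candidates.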
Are we doing enough to regulate and legislate for the safe and ethical use of AI now and in the future?
The effective regulation of AI is something I feel very strongly about. This technology, like so many others, is already shaping our society and will continue to do so in the future.
Our legislation is lagging behind the rapid evolution and deployment of AI in Ireland and across the world. We are behind the wave, and this is a problem.
In the European Union, the Digital Services Act came into full effect in February and the Artificial Intelligence Act is also coming down the line. The AI Act aims to ensure that AI systems placed on the European market, and used in the EU, are safe and respect fundamental rights and EU values.
These regulations are welcome, but their introduction is too slow. Legislation is not keeping pace with AI. Our legislators are falling behind, and this has to be addressed.
Otherwise, we could be looking at a society that is framed by technology instead of the democratic and legislative code that should prevail.
This is not to paint an entirely negative picture. AI can be used for so much good. There is so much to be positive about in this extraordinary technology.
It is up to us to make sure that it is used for good, however, and that the necessary controls are in place to make sure that we continue to have the kind of society we want. To do this, the legislation needs to get in front of the technology, and this is something we need to prioritise today.