Michael Diviney and Níall Fitzgerald explore the ethical challenges arising from artificial intelligence (AI), particularly ‘narrow’ AI, and highlight the importance of ethics and professional competence in its deployment
Earlier this year, AI industry leaders, prominent researchers and influencers signed a succinct statement and warning:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Was this a publicity stunt? Probably not: the generative AI chatbot ChatGPT was already the fastest-adopted application in history, so publicity was hardly needed.
Was this an over-the-top, alarmist statement by a group possibly trying to steal a march on self-regulation of a rapidly emerging technology and growing industry?
Again, this is unlikely when one considers the warnings of pioneering thinkers such as Nick Bostrom, Max Tegmark, Stephen Hawking and Astronomer Royal Martin Rees. They concur that there is an existential threat to humankind if human-level or ‘general’ AI is developed and the ‘singularity’ is reached, the point at which AI surpasses human intelligence.
Autonomous weapons and targeting are a clear risk, but more broadly, unless we can ensure that the goals of a future superintelligence are aligned and remain aligned with our goals, we may be considered superfluous and dispensable by that superintelligence.
As well as the extinction threat, general AI presents other potential ethical challenges.
For example, if AI attains subjective consciousness and is capable of suffering, does it then acquire rights? Do we have the right to interfere with these, including the right to attempt to switch it off and end its digital life?
Will AI become a legal entity and have property rights? After all, much of our economy is owned by companies, another form of artificial ‘person’.
Ethical challenges from ‘narrow’ AI
Until general AI is here, however – and there is informed scepticism about its possibility – the AI tools currently in use are weak or ‘narrow’ AI. They are designed to perform a specific task or a group of related tasks and rely on algorithms to process data on which they have been trained.
Narrow AI presents various ethical challenges:
Unfairness arising from bias and opacity (e.g. AI used in the initial screening of job candidates can reflect a gender bias based on historical data – in the past, more men were hired; see the sketch after this list);
Infringement of the right to privacy (e.g. AI trained on data gathered without the consent of the data subjects);
Threats to physical safety (e.g. self-driving vehicles);
Intellectual property and moral rights, plagiarism and passing-off issues in the use of generative AI tools like ChatGPT and Bard; and
Threats to human dignity from the hollowing out of work and loss of purpose.
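To make the bias risk concrete, the short Python sketch below trains a simple screening model on synthetic ‘historical’ hiring data in which men were hired at a higher rate for the same scores. The data, model choice and library (scikit-learn) are illustrative assumptions, not a description of any real recruitment system; the point is that a model trained on biased outcomes reproduces that bias for otherwise identical candidates.

```python
# Minimal, hypothetical sketch of how historical bias propagates into a
# screening model. Synthetic data; not any real recruitment system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "historical" data: equally qualified candidates, but past
# recruiters hired men at a higher rate for the same score.
gender = rng.integers(0, 2, n)             # 0 = female, 1 = male
score = rng.normal(60, 10, n)              # interview/aptitude score
p_hire = 1 / (1 + np.exp(-(score - 65) / 5)) + 0.15 * gender
hired = rng.random(n) < np.clip(p_hire, 0, 1)

# Train a screening model on the biased outcomes, gender included as a feature.
X = np.column_stack([score, gender])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Score a fresh cohort with an identical score distribution for both genders.
new_score = rng.normal(60, 10, 2_000)
for g, label in [(0, "female"), (1, "male")]:
    Xg = np.column_stack([new_score, np.full(2_000, g)])
    print(f"predicted shortlist rate ({label}): {model.predict(Xg).mean():.1%}")
# The model reproduces the historical gap even though the two cohorts are
# identical on merit - the bias is inherited from the training data.
```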
Regulation vs. ethics
Such issues arising from the use of AI, particularly related to personal data, mean that regulation is inevitable.
We can see this, for example, with the EU’s landmark AI Act, due to apply by the end of 2025, which aims to regulate AI’s potential to cause harm and to hold companies accountable for how their systems are used.
However, as Professor Pat Barker explained at a recent Consultative Committee of Accountancy Bodies (CCAB) webinar, until such laws are in place, and in the absence of clear rules, ethics are required for deciding on the right way to use AI.
Even when regulation is in place, there are likely to be cases and dilemmas it has not anticipated or about which it is unclear.
Legal compliance should not be assumed to cover every ethical issue and, as AI is evolving so quickly, new ethical questions and choices will inevitably emerge.
Ethics involves the application of a decision-making framework to a dilemma or choice about the right thing to do. While such a framework or philosophy can reflect one’s values, it must also be objective, considered, universalisable and not just based on an instinctual response or what may be expedient.
Established ethics frameworks include:
the consequentialist or utilitarian approach, which asks, in the case of AI, whether it maximises benefits for the greatest number of people; and
the deontological approach, which is based on first principles, such as the inalienable rights of the individual (an underlying philosophy of the EU’s AI Act).
(The Institute’s Ethics Quick Reference Guide, found on the charteredaccountants.ie website, outlines five steps to prepare for ethical dilemmas and decision-making.)
A practical approach
While such philosophical approaches are effective for questions like “Should we do this?” and “Is it good for society?”, as Reid Blackman argues in Harvard Business Review, businesses and professionals may need a more practical approach, asking: “Given that we are going to [use AI], how can we do it without making ourselves vulnerable to ethical risks?”
Clear protocols, policies, due diligence and an emphasis on ethical risk management and mitigation are required – for example, responsible AI clauses in agreements with suppliers.
In this respect, accountants arguably have a competitive advantage as members of a profession: they can access and apply an existing ethical framework, one that is evolving and adapting as the technology and its opportunities and challenges change.
The Code of Ethics
The International Ethics Standards Board for Accountants (IESBA) recently revised the Code of Ethics for Professional Accountants (Code) to reflect the impact of technology, including AI, on the profession. The Chartered Accountants Ireland Code of Ethics will ultimately reflect these revisions.
IESBA has identified the two types of AI likely to have the most impact on the ethical behaviour of accountants:
Assisted intelligence, or robotic process automation (RPA), in which machines carry out tasks previously done by humans, who continue to make the decisions; and
Augmented intelligence, which involves collaboration between human and machine in decision-making.
The revisions also include guidance on how accountants might address the risks presented by AI to ethical behaviour and decision-making in performing their role and responsibilities.
Professional competence and due care
The Code requires an accountant to ensure they have an appropriate level of understanding relevant to their role and responsibilities and the work they undertake. The revisions acknowledge that the accountant’s role is evolving and that many of the activities they undertake can be impacted by AI.
The degree of competency required in relation to AI will be commensurate with the extent of an accountant’s use of and/or reliance on it. While programming AI may be beyond the competency of many accountants, they have the skill set to:
identify and articulate the problem the AI is being used to solve;
understand the type, source and integrity of the data required; and
assess the utility and reasonableness of the output.
This makes accountants well placed to advise on aspects of the use of AI.
The Code provides some examples of risks and considerations to be managed by professional accountants using AI, including:
The data available might not be sufficient for the effective use of the AI tool. The accountant needs to consider the appropriateness of the source data (e.g. relevance, completeness and integrity) and of other inputs, such as the decisions and assumptions the AI relies on. This includes identifying any underlying bias so that it can be addressed in final decision-making (a simple data-quality sketch follows this list).
The AI might not be appropriate for the purpose for which the organisation intends to use it. Is it the right tool for the job and designed for that particular purpose? Are users of the AI tool authorised and trained in its correct use within the organisation’s control framework? (One chief technology officer has suggested not only considering the capabilities of the AI tool but also its limitations to be better aware of the risks of something going wrong or where its use may not be appropriate.)
The accountant may lack the ability – or access to an expert with that ability – to understand and explain the AI and its appropriate use.
The AI might not have been appropriately tested and evaluated for its intended purpose.
Controls over the source data and the AI’s design, implementation and use, including user access, might be inadequate.
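As a simple illustration of the data consideration above, the sketch below runs basic relevance, completeness and integrity checks on an extract before it is fed to an AI tool. The column names, thresholds and use of pandas are assumptions made for the example; real controls would be tailored to the organisation’s own data and tools.

```python
# Hypothetical pre-use checklist for source data feeding an AI tool,
# illustrating a "relevance, completeness and integrity" test.
import pandas as pd

def check_source_data(df: pd.DataFrame) -> list[str]:
    """Return findings to resolve before relying on the data as AI input."""
    findings = []

    # Relevance: are the fields the tool actually needs present?
    required = {"invoice_id", "amount", "posting_date", "account_code"}
    missing_cols = required - set(df.columns)
    if missing_cols:
        findings.append(f"missing required fields: {sorted(missing_cols)}")

    # Completeness: how much of each column is missing?
    incomplete = df.isna().mean()
    for col, share in incomplete[incomplete > 0.05].items():
        findings.append(f"{col}: {share:.0%} of values missing")

    # Integrity: duplicates and out-of-range values undermine the output.
    if "invoice_id" in df.columns and df["invoice_id"].duplicated().any():
        findings.append("duplicate invoice_id values found")
    if "amount" in df.columns and (df["amount"] <= 0).any():
        findings.append("non-positive amounts found")

    return findings

# Example run on a small, deliberately flawed extract.
sample = pd.DataFrame({
    "invoice_id": [1, 2, 2],
    "amount": [120.0, -40.0, 95.0],
    "posting_date": ["2023-01-05", None, "2023-02-11"],
})
for finding in check_source_data(sample):
    print("FINDING:", finding)
```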
So, how does the accountant apply their skills and expertise in this context?
Accountants are expected to draw on many of the established skills for which the profession is known when assessing the input and interpreting the output of an AI tool: interpersonal, communication and organisational skills, as well as the technical knowledge relevant to the activity at hand, whether an accounting, tax, auditing, compliance, strategic or operational business decision is being made.
Data and confidentiality
According to the Code, when an accountant receives or acquires confidential information, their duty of confidentiality begins. AI requires data, usually lots of it, with which it is trained. It also requires decisions by individuals in relation to how the AI should work (programming), when it should be used, how its use should be controlled, etc.
The use of confidential information with AI presents several confidentiality challenges for accountants. The Code includes several considerations for accountants in this regard, including:
Obtaining authorisation from the source (e.g. clients or customers) for the use of confidential information, whether anonymised or otherwise, for purposes other than those for which it was provided. This includes whether the information can be used for training AI tools.
Considering controls to safeguard confidentiality, including anonymising data, encryption and access controls, and security policies to protect against data leaks (see the pseudonymisation sketch after this list).
Ensuring controls are in place for the coding and updating of the AI used in the organisation. Outdated code, bugs and irregular updates to the software can pose a security risk. Reviewing the security certification of the AI tool and ensuring it is up to date can offer some comfort.
Many data breaches result from human error. For example, inputting confidential information into an open-access, web-based application is a breach of confidentiality if that information is saved, stored and later used by the application. Staff need to be trained in the correct use and purpose of AI applications and in safeguarding confidential information.
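By way of illustration only, the sketch below pseudonymises client names and masks obvious identifiers before text is sent to an external AI service. The salt handling, patterns and tokenisation scheme are assumptions made for the example; in practice, vetted PII-detection and data-loss-prevention tooling, alongside encryption and access controls, would be used rather than ad hoc regexes.

```python
# Minimal sketch: pseudonymise client identifiers before text leaves the
# organisation. Illustrative only - not a complete confidentiality control.
import hashlib
import re

SALT = b"rotate-and-store-this-secret-separately"  # assumption: managed secret

def pseudonymise(text: str, client_names: list[str]) -> str:
    """Replace known client names and obvious identifiers with stable,
    non-reversible tokens before the text is sent to an external service."""
    for name in client_names:
        token = hashlib.sha256(SALT + name.encode()).hexdigest()[:8]
        text = text.replace(name, f"CLIENT_{token}")
    # Crude masks for emails and IBAN-like strings; real controls would use
    # vetted PII-detection tooling rather than ad hoc regexes.
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    text = re.sub(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b", "[IBAN]", text)
    return text

note = "Query from Acme Ltd (finance@acme.ie), account IE29AIBK93115212345678."
print(pseudonymise(note, ["Acme Ltd"]))
# -> Query from CLIENT_xxxxxxxx ([EMAIL]), account [IBAN].
```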
Dealing with complexity
The Code acknowledges that technology, including AI, can help manage complexity.
AI tools can be particularly useful for performing complex analysis or financial modelling to inform decision-making, or for alerting the accountant to developments or changes that require a reassessment of a situation. In the process, vast amounts of data are collected and used by the AI, and checking and verifying the integrity of that data introduces another level of complexity.
The Code makes frequent reference to “relevancy” in relation to the analysis of information, scenarios, variables, relationships, etc., and highlights the importance of ensuring that data is relevant to the problem or issue being addressed.
IESBA was mindful, when revising the Code, that there are various conceivable ways AI tools can be designed and developed to use and interpret data.
For example, objectivity can be challenged when divergent views, each supported by data, make it difficult to come to a decision. AI can present additional complexity for accountants, but the considerations set out in the Code are useful reminders of the essential skills needed to manage it.
Changing how we work
As well as its hugely beneficial applications in, for example, healthcare and science, AI is proving to be transformative as a source of business value.
With significant new tools launched daily – for everything from personal effectiveness to analysis and process optimisation – AI is changing how we work. These are powerful tools, but with power comes responsibility.
For the professional accountant, certain skills will be brought to the fore, including adaptability, change and risk management, and leadership amidst rapidly evolving work practices and business models. Accountants are well placed to provide these skills and support the responsible and ethical use of AI.
Rather than fearing being replaced by AI, accountants can prepare to meet expectations to provide added value and be at the helm of using AI tools for finance, management, strategic decision-making and other opportunities.
Michael Diviney is Executive Head of Thought Leadership at Chartered Accountants Ireland
Níall Fitzgerald is Head of Ethics and Governance at Chartered Accountants Ireland