Ethics Resource Centre


Ethics articles

On this page we present special articles on ethics, a selection of relevant articles from Accountancy Ireland, and recent news from across Chartered Accountants Ireland in relation to ethics.

Ethics and Governance

Navigating the ethics of AI

Michael Diviney and Níall Fitzgerald explore the ethical challenges arising from artificial intelligence (AI), particularly ‘narrow’ AI, and highlight the importance of ethics and professional competence in its deployment.

Earlier this year, artificial intelligence (AI) industry leaders, leading researchers and influencers signed a succinct statement and warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Was this a publicity stunt? Well, probably not, as the generative AI ChatGPT was already the fastest-adopted application in history.

Was this an over-the-top, alarmist statement by a group possibly trying to steal a march on self-regulation of a rapidly emerging technology and growing industry? Again, this is unlikely if one considers the warnings of pioneer thinkers like Nick Bostrom, Max Tegmark, Stephen Hawking and Astronomer Royal Martin Rees. They concur that there is an existential threat to humankind if human-level or ‘general’ AI is developed and the ‘singularity’ is reached when AI surpasses human intelligence.

Autonomous weapons and targeting are a clear risk, but more broadly, unless we can ensure that the goals of a future superintelligence are aligned and remain aligned with our goals, we may be considered superfluous and dispensable by that superintelligence.

As well as the extinction threat, general AI presents other potential ethical challenges. For example, if AI attains subjective consciousness and is capable of suffering, does it then acquire rights? Do we have the right to interfere with these, including the right to attempt to switch it off and end its digital life? Will AI become a legal entity and have property rights? After all, much of our economy is owned by companies, another form of artificial ‘person’.
Ethical challenges from ‘narrow’ AI

Until general AI is here, however – and there is informed scepticism about its possibility – the AI tools currently in use are weak or ‘narrow’ AI. They are designed to perform a specific task or a group of related tasks and rely on algorithms to process data on which they have been trained.

Narrow AI presents various ethical challenges:

• Unfairness arising from bias and opacity (e.g. AI used in the initial screening of job candidates may include a gender bias based on historical data – in the past more men were hired);
• The right to privacy (AI trained with data without the consent of the data subjects);
• Threats to physical safety (e.g. self-driving vehicles);
• Intellectual property and moral rights, plagiarism and passing-off issues in the use of generative AI like ChatGPT and Bard; and
• Threats to human dignity from the hollowing out of work and loss of purpose.

Regulation vs. ethics

Such issues arising from the use of AI, particularly those related to personal data, mean that regulation is inevitable. We can see this, for example, with the EU’s landmark AI Act, due to apply by the end of 2025, which aims to regulate AI’s potential to cause harm and to hold companies accountable for how their systems are used.

However, as Professor Pat Barker explained at a recent Consultative Committee of Accountancy Bodies (CCAB) webinar, until such laws are in place, and in the absence of clear rules, ethics are required for deciding on the right way to use AI. Even when regulation is in place, there are likely to be cases and dilemmas that it has not anticipated or about which it is unclear. Legal compliance should not be assumed to have all the ethical issues covered, and as AI is evolving so quickly, new ethical issues and choices will inevitably emerge.

Ethics involves the application of a decision-making framework to a dilemma or choice about the right thing to do.
While such a framework or philosophy can reflect one’s values, it must also be objective, considered and universalisable, and not just based on an instinctual response or what may be expedient. Established ethics frameworks include:

• the consequentialist or utilitarian approach – in the case of AI, does it maximise benefits for the greatest number of people?; and
• the deontological approach, which is based on first principles, such as the inalienable rights of the individual (an underlying philosophy of the EU’s AI Act).

(The Institute’s Ethics Quick Reference Guide, found on the charteredaccountants.ie website, outlines five steps to prepare for ethical dilemmas and decision-making.)

A practical approach

While such philosophical approaches are effective for questions like “Should we do this?” and “Is it good for society?”, as Reid Blackman argues in Harvard Business Review, businesses and professionals may need a more practical approach, asking: “Given that we are going to [use AI], how can we do it without making ourselves vulnerable to ethical risks?”

Clear protocols, policies, due diligence and an emphasis on ethical risk management and mitigation are required; for example, responsible AI clauses in agreements with suppliers. In this respect, accountants arguably have a competitive advantage in being members of a profession: they can access and apply an existing ethical framework, which is evolving and adapting as the technology, its opportunities and challenges change.

The Code of Ethics

The International Ethics Standards Board for Accountants (IESBA) recently revised the Code of Ethics for Professional Accountants (Code) to reflect the impact of technology, including AI, on the profession. The Chartered Accountants Ireland Code of Ethics will ultimately reflect these revisions.
IESBA has identified the two types of AI likely to have the most impact on the ethical behaviour of accountants:

• Assisted intelligence or robotic process automation (RPA), in which machines carry out tasks previously done by humans, who continue to make decisions; and
• Augmented intelligence, which involves collaboration between human and machine in decision-making.

The revisions also include guidance on how accountants might address the risks presented by AI to ethical behaviour and decision-making in performing their role and responsibilities.

Professional competence and due care

The Code requires an accountant to ensure they have an appropriate level of understanding relevant to their role and responsibilities and the work they undertake. The revisions acknowledge that the accountant’s role is evolving and that many of the activities they undertake can be impacted by AI.

The degree of competency required in relation to AI will be commensurate with the extent of an accountant’s use of and/or reliance on it. While programming AI may be beyond the competency of many accountants, they have the skill set to:

• identify and articulate the problem the AI is being used to solve;
• understand the type, source and integrity of the data required; and
• assess the utility and reasonableness of the output.

This makes accountants well placed to advise on aspects of the use of AI. The Code provides some examples of risks and considerations to be managed by professional accountants using AI, including:

• The data available might not be sufficient for the effective use of the AI tool. The accountant needs to consider the appropriateness of the source data (e.g. relevance, completeness and integrity) and other inputs, such as the decisions and assumptions being used as inputs by the AI. This includes identifying any underlying bias so that it can be addressed in final decision-making.
• The AI might not be appropriate for the purpose for which the organisation intends to use it.
Is it the right tool for the job and designed for that particular purpose? Are users of the AI tool authorised and trained in its correct use within the organisation’s control framework? (One chief technology officer has suggested considering not only the capabilities of the AI tool but also its limitations, to be better aware of the risks of something going wrong or where its use may not be appropriate.)
• The accountant may not have the ability, or have access to an expert with that ability, to understand and explain the AI and its appropriate use.
• The AI might not have been appropriately tested and evaluated for the purpose intended.
• The controls relating to the source data and the AI’s design, implementation and use, including user access, might be inadequate.

So, how does the accountant apply their skills and expertise in this context? It is expected that accountants will use many of the established skills for which the profession is known to assess the input and interpret the output of an AI tool, including interpersonal, communication and organisational skills, but also technical knowledge relevant to the activity they are performing, whether it is an accounting, tax, auditing, compliance, strategic or operational business decision that is being made.

Data and confidentiality

According to the Code, when an accountant receives or acquires confidential information, their duty of confidentiality begins. AI requires data, usually lots of it, with which it is trained. It also requires decisions by individuals in relation to how the AI should work (programming), when it should be used, how its use should be controlled, etc.

The use of confidential information with AI presents several confidentiality challenges for accountants. The Code includes several considerations for accountants in this regard, including:

• Obtaining authorisation from the source (e.g. clients or customers) for the use of confidential information, whether anonymised or otherwise, for purposes other than those for which it was provided. This includes whether the information can be used for training AI tools.
• Considering controls to safeguard confidentiality, including anonymising data, encryption and access controls, and security policies to protect against data leaks.
• Ensuring controls are in place for the coding and updating of the AI used in the organisation. Outdated code, bugs and irregular updates to the software can pose a security risk. Reviewing the security certification of the AI tool and ensuring it is up to date can offer some comfort.

Many data breaches result from human error; for example, inputting confidential information into an open-access web-based application is a confidentiality breach if that information is saved, stored and later used by that application. Staff need to be trained in the correct use and purpose of AI applications and the safeguarding of confidential information.

Dealing with complexity

The Code acknowledges that technology, including AI, can help manage complexity. AI tools can be particularly useful for performing complex analysis or financial modelling to inform decision-making, or for alerting the accountant to any developments or changes that require a re-assessment of a situation. In doing so, vast amounts of data are collected and used by AI, and the ability to check and verify the integrity of the data introduces another level of complexity.

The Code makes frequent reference to “relevancy” in relation to the analysis of information, scenarios, variables, relationships, etc., and highlights the importance of ensuring that data is relevant to the problem or issue being addressed. IESBA was mindful, when revising the Code, that there are various conceivable ways AI tools can be designed and developed to use and interpret data.
For example, objectivity can be challenged when faced with the complexity of divergent views supported by data, making it difficult to come to a decision. AI can present additional complexity for accountants, but the considerations set out in the Code are useful reminders of the essential skills necessary to manage complexity.

Changing how we work

As well as its hugely beneficial applications in, for example, healthcare and science, AI is proving to be transformative as a source of business value. With a range of significant new tools launched daily, from personal effectiveness to analysis and process optimisation, AI is changing how we work.

These are powerful tools, but with power comes responsibility. For the professional accountant, certain skills will be brought to the fore, including adaptability, change and risk management, and leadership amidst rapidly evolving work practices and business models. Accountants are well placed to provide these skills and support the responsible and ethical use of AI.

Rather than fearing being replaced by AI, accountants can prepare to meet expectations to provide added value and be at the helm of using AI tools for finance, management, strategic decision-making and other opportunities.

Michael Diviney is Executive Head of Thought Leadership at Chartered Accountants Ireland.

Níall Fitzgerald is Head of Ethics and Governance at Chartered Accountants Ireland.

Aug 02, 2023
Ethics

Championing ethical leadership amid competing pressures

A recording of the 11 May 2023 event, “Championing Ethical Leadership Amid Competing Pressures”, is now available. Run by Economist Impact in association with the Global Accounting Alliance, of which Chartered Accountants Ireland is a member, the event included contributions from:

• Emily O’Reilly, European Ombudsman, European Union;
• Audrey Morin, Group Compliance Director, Schneider Electric;
• Amanda Belcher, Senior Vice President, Edelman Global Advisory;
• Elia Yi Armstrong, Director, Ethics Office, United Nations; and
• Barry Melancon, Chair, Global Accounting Alliance.

Some key takeaways include:

• For organisations that want to be successful for all stakeholders, doing the right thing means having integrity, being aware of what must be done (in accordance with regulations, etc.) and what should be done, and balancing differing stakeholder expectations.
• Awareness of ethical issues is increasing, and while the ability to do the right thing is not generation-specific, some participants suggested that the younger generations are more active in questioning behaviour and decisions.
• Developing a code of ethics and ensuring it is embedded across the organisation and integrated into decision-making is essential for building trust. Some contributors provided insights on how their organisations have developed codes, one referring to it as their “Trust Charter”.
• Insights from global standard-setters and regulators on driving ethical behaviours, and on how private sector entities can interact to further progress initiatives in this area.
• The panel provided good advice for global organisations dealing with competing, or inconsistent, regulatory frameworks: ‘think through’ their fundamental values and allow these to guide decision-making.
• A discussion on the degree to which Milton Friedman’s statement “the business of business is business” resonates today. While the principles of business remain similar, the purpose and objectives of business have evolved.
• Insights on how to increase the effectiveness of organisational ethics and compliance programmes, including: ethics training (bespoke to the organisation); confidential ethics helplines; robust protected disclosure policies and procedures; supply chain and partnership controls; and embedding an organisational culture of psychological safety that allows for frank discussion without risk of repercussion.
• A note of caution was shared about the risks of highlighting an organisation’s ethics strategy in marketing campaigns before properly embedding it within the culture.

Watch the event in full here.

Jun 22, 2023
Ethics

CCAB launch ethics resources for professional accountants in Ireland and UK

The Consultative Committee of Accountancy Bodies (CCAB), which includes Chartered Accountants Ireland, launched new resources on ethics, along with a series of webcast interviews with professionals with diverse business experience, at a fully booked webinar on Thursday 15 June 2023.

The webinar, “Resilience Under Pressure”, presented highlights from a CCAB survey which revealed significant pressures on professional accountants to act unethically. It also included a panel discussion, moderated by Iain Lowson, Chair of the CCAB Ethics Group, that explored a range of issues raised by the audience with Professor Pat Barker; Sam Ennis, Head of Tax in financial services; Sue Allan, CFO at Willerby Group; Carol Colley, Deputy Chief Executive and City Treasurer at Manchester City Council; and Ann Buttery, Head of Ethics, Policy Leadership at ICAS, who also presented guidance for professional accountants on speaking up.

Professor Pat Barker also contributed to the webcast interview series, alongside Barry Doyle, Deputy President of Chartered Accountants Ireland; Níall Fitzgerald, Head of Ethics and Governance at Chartered Accountants Ireland; Dominic Hall, Group Head of Ethical Business Conduct at BAE Systems Plc; Malcolm Bacchus, Interim Finance Director; and Professor Chris Cowton, Associate Director at the Institute of Business Ethics.
Examples of the issues discussed, including questions raised by the audience, include:

• The most common sources of pressure to act unethically for professional accountants;
• The impact of such pressures on professional and personal life;
• The most common unethical behaviours experienced by professional accountants;
• The role of regulation and personal responsibilities in driving ethical behaviours;
• Advice on addressing common issues such as toxic leaders, sharp practices or managing ethical conflicts;
• Making ethical decisions and promoting an ethical culture; and
• The ethical challenges posed by technology, including artificial intelligence, and the ethics of sustainability.

Watch a recording of the webinar and access other resources on the CCAB website.

Jun 15, 2023

Accountancy Ireland articles on ethics

Ethics and Governance

The crucial role of accountants in the age of AI

Accountants will be the profession best placed to bring the necessary rigour to the analysis and governance of critical data in the age of AI, writes Sharon Cotter.

Canadian philosopher Marshall McLuhan suggested: “We become what we behold. We shape our tools, and thereafter our tools shape us.” This is important to remember today, when the spotlight is on the potential consequences, intended and unintended, of the artificial intelligence (AI) tools being shaped by humans.

The rise of AI

AI encompasses a vast range of computer science research. Since the 1950s, scientists have pursued the goal of building machines capable of completing tasks that normally require intelligent human behaviour. Machine learning (ML), a subset of AI, enables machines to extract knowledge from data and to learn from it autonomously.

In the past decade, the exponential increase in the volume of data generated, captured, stored and available for analysis, coupled with advances in computing power, has created the impetus and means to rapidly advance ML, which in turn has facilitated the development of narrow AI applications.

In essence, narrow AI applications are computer programs, or algorithms, specifically trained, using very large datasets, to carry out one task or a limited number of tasks. Best suited to tasks that do not require complex thought, narrow AI algorithms can often accomplish such tasks better and more swiftly than humans.

Most of the AI capability we use today is narrow AI – from Alexa and Siri, which carry out human voice commands, to ChatGPT and Bard, which generate output based on conversational text prompts, and Dall-E 2, which generates visual images based on text prompts, to name but a few.

In the field of accounting, we can utilise coding languages and software tools such as Python, ‘R’ and Alteryx to generate predictive forecasts and models. We often use these tools without realising that we are using elements of narrow AI.
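As an illustration of the kind of predictive forecast these tools automate, here is a minimal sketch in Python (standard library only, with hypothetical sales figures rather than any real dataset): fit a straight line to past periods by ordinary least squares, then project it forward.

```python
def fit_line(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Six past periods of (hypothetical) sales
periods = [1, 2, 3, 4, 5, 6]
sales = [100.0, 104.0, 109.0, 115.0, 118.0, 124.0]

slope, intercept = fit_line(periods, sales)
forecast_next = slope * 7 + intercept  # projection for period 7
```

In practice a package such as Python's statsmodels, or the regression tools built into ‘R’ or Alteryx, would do this fitting (and much more) behind the scenes; the point is that the underlying model is just an algorithm trained on past data.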
For example, these programming languages and software tools embed many of the statistical algorithms that allow us to easily carry out linear regression analysis, a common method of predicting future outcomes based on past data.

Adapting to broaden our role

The word ‘computer’ was first coined by the English poet Richard Brathwaite in 1613 to describe a person who carried out calculations or computations. For the next 350 years or so, most humans who needed to perform calculations used mental arithmetic, an abacus or slide rules, until the widespread availability of electronic handheld calculators in the 1970s.

As accountants, we have seamlessly adapted to the tools available to us – whether these are an abacus, double-analysis paper, a totting machine, or computer software tools like Excel and Alteryx. The use of these tools, and the time saved by their use, have allowed us to broaden our role from recording, summarising and presenting the underlying economic transactions to providing a much wider range of useful information to decision-makers both within, and outside, organisations.

This is reflected in commentary from the professional accountancy bodies emphasising the importance of good organisational decision-making and suggesting that the core purpose of our profession should be to facilitate better decisions and identify the business problems that better decisions will resolve.

Asking the right questions

In 1968, Pablo Picasso is reputed to have said: “Computers are useless. They can only give you answers.” While the remark may have been dismissive of the then cumbersome mainframe computer, it does encapsulate the notion that the real skill lies in figuring out the right question to ask, as this requires both judgement and creativity.

Useful, timely and relevant information for decision-making can only be produced if the right question is asked of the right data at the right time.
On the face of it, this seems simple and straightforward, but in practice it is often much more difficult to achieve.

Deciding what question to ask requires knowledge of the business context and an understanding of the issue being addressed, as well as an ability to clearly articulate the issue. Critical thinking is key to identifying what answers are needed to identify the range of solutions for the issue at hand.

Deciding what data is appropriate to use in the analysis requires an understanding of what data is available, where it is stored, how it is stored, what each data element selected represents, how compatible it is with other data, and how current that data is. It also requires knowledge of the limitations posed by using particular sets of data.

Being able to generate the answer to the right question using the right data is only relevant if it can be produced at the point at which this information is needed. Sometimes, not all the data needed to answer the question is readily available, or available in the required format. Data from several sources may need to be combined and, where data is incomplete, judgement will be needed on the assumptions necessary to generate a relevant and timely set of data.

Accountants are well-positioned

The skills, experience and mindsets we develop as part of our professional training position accountants well to provide the best possible decision-enabling information to decision-makers.

Scepticism is a key tenet of our profession. We look to spot anomalies in data and information, and to question the information by asking “does it make sense?” We are trained to be methodical, thorough and to look beyond the obvious. Training and experience enable us to develop our professional judgement, which we apply when determining what is relevant, appropriate and faithfully represents the underlying economic transactions.
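The first pass of that “does it make sense?” screen is often mechanical enough to automate. A minimal sketch in Python (the figures and the three-standard-deviation-style cutoff are hypothetical choices for illustration, not a prescribed procedure) flags values that sit unusually far from the rest of a dataset, leaving the accountant to judge whether they are errors or genuine outliers:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` sample standard deviations
    from the mean: a crude first-pass sense-check on a dataset."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Hypothetical monthly expense postings; one looks out of line
expenses = [1020, 980, 1005, 995, 1010, 9950, 1000, 990]
suspicious = flag_anomalies(expenses)
```

A screen like this only surfaces candidates; deciding whether a flagged figure faithfully represents an underlying transaction remains a matter of professional judgement.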
We are adaptable and flexible in the tools we use, and aware of the need to stay up to date with the law and regulation applying to the storage and use of data. In short, we are valued problem-solvers and critical thinkers.

Accountants’ ‘jurisdiction’

In his book The System of Professions: An Essay on the Division of Expert Labor, Andrew Abbott uses the term ‘jurisdiction’ to represent the link between a profession and its work. Jurisdiction is an important concept, as the acknowledged owner of a task is likely to be able to shape the characteristics of that task.

In the context of accountants’ work, the term ‘jurisdiction’ means the extent to which organisations, and society, accept that, due to their professional expertise, only specific roles and responsibilities should be carried out by accountants. Within organisations, accountants’ jurisdiction is not static. The roles and responsibilities that fall within their remit can, and do, change. The jurisdiction of accountants can be encroached upon: others within the organisation may also have expertise allowing them to claim work once exclusively identified with accountants.

Challenges to jurisdiction

The emergence of new roles, such as data or information specialists, who collect, clean and analyse data, has meant that complex analysis of financial information can now be done by non-accountants. Some organisations have explored ways in which operational managers and decision-makers can be given direct access to financial systems.

Known as ‘self-service’ menus, such direct access to information allows decision-makers to drill down into the detail of transactions – for example, to identify the underlying causes of deviations from budget – all without the need to consult with their colleagues in the finance department. If an organisation transfers responsibility for data analysis and decision support to data specialists and/or decision-makers, then the jurisdiction of the accountant may be narrowed or reduced.
Opportunities for role expansion

Equally, however, accountants’ roles and responsibilities can be increased, resulting in their jurisdiction being broadened or expanded. The expansion of an accountant’s role can result either from increased job tasks and responsibilities, or from changes in the tools and technologies available to carry out those tasks and responsibilities.

Recent research and professional body commentary has, for example, explored the extent to which management accountants have embraced changes in their role or taken on wider responsibilities, such as business partnering. Multiple elements, such as role identity, the ability to embrace change in a positive way and the development of strong communication skills, to name but a few, all contribute to the successful adoption of additional responsibility.

Futureproofing with digital fluency

The rapid and ongoing development, enhancement and availability of software tools that can be used to capture, store, identify, slice and dice data, and present information in visual graphics, are forcing accounting professionals to consider the level of IT competency required to operate efficiently and effectively in today’s digital world. Professional accountancy bodies emphasise the importance of digital skills in futureproofing the accountant’s role, while many of the larger multinational companies espouse the need for finance staff to have good digital fluency.

Challenges and opportunities

Both encroachments on, and expansions to, the jurisdiction of accountants bring their own set of challenges and opportunities. Maintaining, and expanding, accountants’ jurisdiction over the integrity of data, and the provision of information for decision-making, should be a key part of the profession’s strategy in the digital age.

I believe that the ‘governance’ of data, rather than the use of specific AI tools, should be the focus of the accountancy profession when formulating strategies for its future direction.
In addition to enhancing our digital skills, we need to consider strategies such as adapting the role of the chief financial officer to include overall direct responsibility for data analytics. The governance, management and analysis of data should be as important as traditional responsibilities in finance.

Governance of data requires rigour and objectivity to ensure that its integrity is preserved. We should visibly stake our claim as the profession best placed to bring that rigour and objectivity to the governance and analysis of data used for decision-making. Failure to consider such strategies increases the risk that encroachments, rather than expansions, to our role – our jurisdiction – will become a reality. We should strive to ensure that our future role is shaped by us rather than by these new digital tools and techniques.

Sharon Cotter, FCA, lectures in accounting and finance at the University of Galway

Oct 06, 2023
Ethics and Governance

Navigating the ethics of AI

Michael Diviney and Níall Fitzgerald explore the ethical challenges arising from artificial intelligence (AI), particularly 'narrow' AI, and highlight the importance of ethics and professional competence in its deployment

Earlier this year, artificial intelligence (AI) industry leaders, leading researchers and influencers signed a succinct statement and warning: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Was this a publicity stunt? Well, probably not, as the generative AI ChatGPT was already the fastest-adopted application in history. Was this an over-the-top, alarmist statement by a group possibly trying to steal a march on self-regulation of a rapidly emerging technology and growing industry? Again, this is unlikely if one considers the warnings of pioneer thinkers like Nick Bostrom, Max Tegmark, Stephen Hawking and Astronomer Royal Martin Rees. They concur that there is an existential threat to humankind if human-level or 'general' AI is developed and the 'singularity' is reached, when AI surpasses human intelligence.

Autonomous weapons and targeting are a clear risk but, more broadly, unless we can ensure that the goals of a future superintelligence are aligned and remain aligned with our goals, we may be considered superfluous and dispensable by that superintelligence.

As well as the extinction threat, general AI presents other potential ethical challenges. For example, if AI attains subjective consciousness and is capable of suffering, does it then acquire rights? Do we have the right to interfere with these, including the right to attempt to switch it off and end its digital life? Will AI become a legal entity and have property rights? After all, much of our economy is owned by companies, another form of artificial 'person'.
Ethical challenges from 'narrow' AI

Until general AI is here, however – and there is informed scepticism about its possibility – the AI tools currently in use are weak or 'narrow' AI. They are designed to perform a specific task or a group of related tasks and rely on algorithms to process data on which they have been trained.

Narrow AI presents various ethical challenges:

Unfairness arising from bias and opacity (e.g. AI used in the initial screening of job candidates includes a gender bias based on historical data – in the past more men were hired);
The right to privacy (e.g. AI trained with data without the consent of the data subjects);
Threats to physical safety (e.g. self-driving vehicles);
Intellectual property and moral rights, plagiarism and passing-off issues in the use of generative AI like ChatGPT and Bard; and
Threats to human dignity from the hollowing out of work and loss of purpose.

Regulation vs. ethics

Such issues arising from the use of AI, particularly those related to personal data, mean that regulation is inevitable. We can see this, for example, with the EU's landmark AI Act, due to apply by the end of 2025, which aims to regulate AI's potential to cause harm and to hold companies accountable for how their systems are used.

However, as Professor Pat Barker explained at a recent Consultative Committee of Accountancy Bodies (CCAB) webinar, until such laws are in place, and in the absence of clear rules, ethics are required for deciding on the right way to use AI. Even when regulation is in place, there are likely to be cases and dilemmas that it has not anticipated or about which it is unclear. Legal compliance should not be assumed to have all the ethical issues covered and, as AI is evolving so quickly, new ethical issues and choices will inevitably emerge.

Ethics involves the application of a decision-making framework to a dilemma or choice about the right thing to do.
While such a framework or philosophy can reflect one's values, it must also be objective, considered and universalisable, not just based on an instinctual response or what may be expedient. Established ethics frameworks include the consequentialist or utilitarian approach (in the case of AI, does it maximise benefits for the greatest number of people?) and the deontological approach, which is based on first principles, such as the inalienable rights of the individual (an underlying philosophy of the EU's AI Act). (The Institute's Ethics Quick Reference Guide, found on the charteredaccountants.ie website, outlines five steps to prepare for ethical dilemmas and decision-making.)

A practical approach

While such philosophical approaches are effective for questions like "Should we do this?" and "Is it good for society?", as Reid Blackman argues in Harvard Business Review, businesses and professionals may need a more practical approach, asking: "Given that we are going to [use AI], how can we do it without making ourselves vulnerable to ethical risks?"

Clear protocols, policies, due diligence and an emphasis on ethical risk management and mitigation are required – for example, responsible AI clauses in agreements with suppliers. In this respect, accountants arguably have a competitive advantage in being members of a profession: they can access and apply an existing ethical framework, which is evolving and adapting as the technology, its opportunities and challenges change.

The Code of Ethics

The International Ethics Standards Board for Accountants (IESBA) recently revised the Code of Ethics for Professional Accountants (the Code) to reflect the impact of technology, including AI, on the profession. The Chartered Accountants Ireland Code of Ethics will ultimately reflect these revisions.
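The hiring-bias risk mentioned earlier can be made concrete with a simple selection-rate check on a screening tool's decisions, using the widely cited "four-fifths" heuristic. This is a minimal sketch: the candidate records and the 80 percent threshold applied here are illustrative assumptions, not taken from the article or any specific tool.

```python
# Minimal disparate-impact check on an AI screening tool's decisions.
# The records below are hypothetical; in practice they would be the
# tool's actual pass/fail outcomes per candidate.
from collections import defaultdict

decisions = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

passed = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    passed[group] += ok

rates = {g: passed[g] / total[g] for g in total}

# "Four-fifths" heuristic: flag any group whose selection rate falls
# below 80% of the most-favoured group's rate.
best = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * best}

print(rates)    # {'male': 0.75, 'female': 0.25}
print(flagged)  # {'female': 0.25}
```

A check like this examines only the tool's outputs; as the article notes, the underlying historical training data would still need to be questioned in final decision-making.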
IESBA has identified the two types of AI likely to have the most impact on the ethical behaviour of accountants:

Assisted intelligence, or robotic process automation (RPA), in which machines carry out tasks previously done by humans, who continue to make decisions; and
Augmented intelligence, which involves collaboration between human and machine in decision-making.

The revisions also include guidance on how accountants might address the risks presented by AI to ethical behaviour and decision-making in performing their role and responsibilities.

Professional competence and due care

The Code requires an accountant to ensure they have an appropriate level of understanding relevant to their role and responsibilities and the work they undertake. The revisions acknowledge that the accountant's role is evolving and that many of the activities they undertake can be impacted by AI. The degree of competency required in relation to AI will be commensurate with the extent of an accountant's use of and/or reliance on it.

While programming AI may be beyond the competency of many accountants, they have the skill set to:

identify and articulate the problem the AI is being used to solve;
understand the type, source and integrity of the data required; and
assess the utility and reasonableness of the output.

This makes accountants well placed to advise on aspects of the use of AI. The Code provides some examples of risks and considerations to be managed by professional accountants using AI, including:

The data available might not be sufficient for the effective use of the AI tool. The accountant needs to consider the appropriateness of the source data (e.g. relevance, completeness and integrity) and other inputs, such as the decisions and assumptions being used as inputs by the AI. This includes identifying any underlying bias so that it can be addressed in final decision-making.
The AI might not be appropriate for the purpose for which the organisation intends to use it.
Is it the right tool for the job and designed for that particular purpose? Are users of the AI tool authorised and trained in its correct use within the organisation's control framework? (One chief technology officer has suggested considering not only the capabilities of the AI tool but also its limitations, to be better aware of the risks of something going wrong or where its use may not be appropriate.)
The accountant may not have the ability, or have access to an expert with that ability, to understand and explain the AI and its appropriate use.
Whether the AI has been appropriately tested and evaluated for the purpose intended.
The controls relating to the source data and the AI's design, implementation and use, including user access.

So, how does the accountant apply their skills and expertise in this context? It is expected that accountants will use many of the established skills for which the profession is known to assess the input and interpret the output of an AI tool – interpersonal, communication and organisational skills, but also technical knowledge relevant to the activity they are performing, whether it is an accounting, tax, auditing, compliance, strategic or operational business decision that is being made.

Data and confidentiality

According to the Code, when an accountant receives or acquires confidential information, their duty of confidentiality begins. AI requires data, usually lots of it, with which it is trained. It also requires decisions by individuals in relation to how the AI should work (programming), when it should be used, how its use should be controlled, etc.

The use of confidential information with AI presents several confidentiality challenges for accountants. The Code includes several considerations for accountants in this regard, including:

Obtaining authorisation from the source (e.g. clients or customers) for the use of confidential information, whether anonymised or otherwise, for purposes other than those for which it was provided. This includes whether the information can be used for training AI tools.
Considering controls to safeguard confidentiality, including anonymising data, encryption and access controls, and security policies to protect against data leaks.
Ensuring controls are in place for the coding and updating of the AI used in the organisation. Outdated code, bugs and irregular updates to the software can pose a security risk. Reviewing the security certification of the AI tool and ensuring it is up to date can offer some comfort.
Recognising that many data breaches result from human error, e.g. inputting confidential information into an open-access web-based application is a confidentiality breach if that information is saved, stored and later used by that application. Staff need to be trained in the correct use and purpose of AI applications and the safeguarding of confidential information.

Dealing with complexity

The Code acknowledges that technology, including AI, can help manage complexity. AI tools can be particularly useful for performing complex analysis or financial modelling to inform decision-making, or for alerting the accountant to developments or changes that require a re-assessment of a situation. In doing so, vast amounts of data are collected and used by AI, and the ability to check and verify the integrity of that data introduces another level of complexity.

The Code makes frequent reference to "relevancy" in relation to the analysis of information, scenarios, variables, relationships, etc., and highlights the importance of ensuring that data is relevant to the problem or issue being addressed. IESBA was mindful, when revising the Code, that there are various conceivable ways AI tools can be designed and developed to use and interpret data.
For example, objectivity can be challenged when faced with the complexity of divergent views supported by data, making it difficult to come to a decision. AI can present additional complexity for accountants, but the considerations set out in the Code are useful reminders of the essential skills necessary to manage complexity.

Changing how we work

As well as its hugely beneficial applications in, for example, healthcare and science, AI is proving to be transformative as a source of business value. With a range of significant new tools launched daily, from personal effectiveness to analysis and process optimisation, AI is changing how we work. These are powerful tools, but with power comes responsibility.

For the professional accountant, certain skills will be brought to the fore, including adaptability, change and risk management, and leadership amidst rapidly evolving work practices and business models. Accountants are well placed to provide these skills and support the responsible and ethical use of AI. Rather than fearing being replaced by AI, accountants can prepare to meet expectations to provide added value and be at the helm of using AI tools for finance, management, strategic decision-making and other opportunities.

Michael Diviney is Executive Head of Thought Leadership at Chartered Accountants Ireland

Níall Fitzgerald is Head of Ethics and Governance at Chartered Accountants Ireland

Aug 02, 2023
Ethics and Governance

Roadmap to Corporate Sustainability Reporting

The roadmap for the EU Commission's milestone Corporate Sustainability Reporting Directive is taking shape and now is the time to start preparing for a brave new era in non-financial reporting, writes Conor Holland

With the Corporate Sustainability Reporting Directive (CSRD) now approved by the European Council, entities in the EU must begin to invest significant time and resources in preparing for a new era in non-financial reporting, one that places the public disclosure of environmental, social and governance (ESG) matters on a par with financial information.

Under the CSRD, entities will have to disclose much more sustainability-related information about their business models, strategy and supply chains than they have to date. They will also need to report ESG information in a standardised format that can be assured by an independent third party.

For those charged with governance, the CSRD will bring further augmented requirements. Audit committees will need to oversee new reporting processes and monitor the effectiveness of the systems and controls set up. Along with monitoring an entity's ESG reporting process, and evaluating the integrity of the sustainability information reported by that entity, audit committees will need to:

Monitor the effectiveness of the entity's internal quality control and risk management systems and internal audit functions;
Monitor the assurance of annual and consolidated sustainability reporting;
Inform the entity's administrative or supervisory body of the outcome of the assurance of sustainability reporting; and
Review and monitor the independence of the assurance provider.

The CSRD stipulates the requirement for limited assurance over the reported information. However, it also includes the option for assurance requirements to evolve to reasonable assurance at a later stage.
The EU estimates that 49,000 companies across the EU will fall under the requirements of the new directive, compared to the 11,600 companies that currently have reporting obligations. The EU has confirmed that the implementation of the CSRD will take place in three stages:

1 January 2024 for companies already subject to the non-financial reporting directive (reporting in 2025 for the financial year 2024);
1 January 2025 for large companies that are not presently subject to the non-financial reporting directive (reporting in 2026 for the financial year 2025); and
1 January 2026 for listed SMEs, small and non-complex credit institutions, and captive insurance undertakings (reporting in 2027 for the financial year 2026).

A large undertaking is defined as an entity that exceeds at least two of the following criteria:

A net turnover of €40 million;
A balance sheet total of €20 million;
250 employees on average over the financial year.

The final text of the CSRD also sets timelines for when the Commission should adopt further delegated acts on reporting standards, with 30 June 2023 set as the date by which the Commission should adopt delegated acts specifying the information that undertakings will be required to report.

European Financial Reporting Advisory Group

In tandem, the European Financial Reporting Advisory Group (EFRAG) is working on a first set of draft European Sustainability Reporting Standards (ESRS). These draft standards will be ready for consideration by the Commission once the Parliament and Council have agreed a legislative text. The current draft standards provide an outline of the depth and breadth of what entities will be required to report. Significantly, the ESRS should be considered analogous to accountancy standards, with detailed disclosure requirements (qualitative and quantitative), a conceptual framework and associated application guidance.
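The "large undertaking" size test quoted earlier (exceeding at least two of the three thresholds) is mechanical enough to sketch in code. The helper below is a minimal illustration: the function name and example figures are hypothetical, and only the three thresholds restate the criteria in the directive as described above.

```python
# Hypothetical helper applying the CSRD "large undertaking" size test:
# an entity qualifies if it exceeds at least two of the three thresholds.
TURNOVER_EUR = 40_000_000       # net turnover
BALANCE_SHEET_EUR = 20_000_000  # balance sheet total
AVG_EMPLOYEES = 250             # average over the financial year

def is_large_undertaking(turnover: float, balance_sheet: float, employees: float) -> bool:
    exceeded = [
        turnover > TURNOVER_EUR,
        balance_sheet > BALANCE_SHEET_EUR,
        employees > AVG_EMPLOYEES,
    ]
    return sum(exceeded) >= 2

# Exceeds the turnover and headcount thresholds, but not balance sheet:
print(is_large_undertaking(55_000_000, 18_000_000, 300))  # True
# Exceeds only the turnover threshold:
print(is_large_undertaking(45_000_000, 5_000_000, 40))    # False
```

The same two-of-three pattern is how entities would typically screen group subsidiaries when scoping which companies fall into each implementation stage.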
Readers should take note: the ESRS are much more than a handful of metrics supplementary to the financial statements. They represent a step change in what corporate reporting entails, moving non-financial information toward an equilibrium with financial information. Moreover, the reporting boundaries would be based on the financial statements but expanded significantly for the upstream and downstream value chain, meaning an entity would need to capture material sustainability matters that are connected to the entity by its direct or indirect business relationships, regardless of its level of control over them.

While the standards and associated requirements are now largely finalised, in early November 2022 EFRAG published a revised iteration of the draft ESRS, introducing certain changes to the original draft standards. While the broad requirements and content remain largely the same, some notable changes include:

The structure of the reporting areas has been aligned with the TCFD (Task Force on Climate-Related Financial Disclosures) and ISSB (International Sustainability Standards Board) standards; specifically, the ESRS will be tailored around "governance", "strategy", "management of impacts, risks and opportunities", and "metrics and targets".
The definition of financial materiality is now more closely aligned to the ISSB standards.
Impact materiality is more commensurate with the GRI (Global Reporting Initiative) definition of impact materiality.
Time horizons are now just a recommendation; entities may deviate and would disclose the entity-specific time horizons used.
One governance standard has been incorporated into the cross-cutting standard requirements on the reporting area of governance.
There is a slight reduction in the number of data points required within the disclosure requirements.
ESRS and international standards

By adopting double materiality principles, the proposed ESRS consider a wider range of stakeholders than the IFRS® Sustainability Disclosure Standards or the proposal published by the US Securities and Exchange Commission (SEC). They aim to meet public policy objectives as well as the needs of capital markets.

It is the ISSB's aim to create a global baseline for sustainability reporting standards that allows local standard setters to add additional requirements (building blocks), rather than face the coexistence of multiple separate frameworks. The CSRD requires EFRAG to take account of global standard-setting initiatives to the greatest extent possible. In this regard, EFRAG has published a comparison with the ISSB's proposals and committed to joining an ISSB working group to drive global alignment. In the short term, however, entities and investors may have to deal with three sets of sustainability reporting standards in setting up their reporting processes, controls and governance.

Key differences

The proposed ESRS list detailed disclosure requirements for all ESG topics. The proposed IFRS Sustainability Disclosure Standards would also require disclosure in relation to all relevant ESG topics, but the ISSB has to date only prepared a detailed exposure draft on climate, asking preparers to consider general requirements and other sources of information to report on other sustainability topics. The SEC focused on climate in its recent proposal.

The proposed ESRS are more prescriptive, and the number of disclosure requirements significantly exceeds those in the proposed IFRS Sustainability Disclosure Standards. Whereas the proposed IFRS Sustainability Disclosure Standards are intended to focus on the information needs of capital markets, the ESRS also aim to address the policy objectives of the EU by addressing wider stakeholder needs.
Given the significance of the directive, and the time remaining to get ready for it, entities should now start preparing for its implementation. It is important that entities develop plans to understand the full extent of the CSRD requirements and the implications for their reporting infrastructure. As such, they should take some immediate steps to prepare, and consider:

Performing a gap analysis, i.e. what the entity reports today contrasted with what will be required under the CSRD. This is a useful exercise to inform entities on where resources should be directed, including how management identify sustainability-related information and what KPIs they will be required to report on.
Undertaking a 'double materiality' analysis to identify which topics would be considered material from an impact and financial perspective, as required under the CSRD.
Getting 'assurance ready': entities will need to be comfortable that processes and controls exist to support ESG information, and that the information can ultimately be assured.

The Corporate Sustainability Reporting Directive represents a fundamental change in the nature of corporate reporting. The time to act is now, and the first deadline is closing in.

Dec 02, 2022

Ethics news

News

The centrality of ethics to the accountancy profession

Ethical conduct is not a "nice to have" for accountants, but a crucial professional competence, writes Professor Patricia Barker

Global Ethics Day will be celebrated on 16 October 2024. This initiative, founded by the Carnegie Council for Ethics in International Affairs, is now in its eleventh year. This year's theme is "Ethics Empowered".

The Consultative Committee of Accountancy Bodies (CCAB) Ethics Group believes it is important to reflect on the significance of ethics for the accountancy profession and to emphasise three key messages:

1. Empower through education and self-reflection
Ethics should be viewed as a professional competence. This requires accountants to undertake regular CPD on ethics, engage in self-reflection, and familiarise themselves with frameworks to guide their ethical decision-making.

2. Be true to ethical values and model ethical behaviour
Compliance should not be confused with ethical behaviour.

3. Follow your North Star
Accountants should always use the five fundamental ethics principles, as set out by bodies such as Chartered Accountants Ireland, together with the duty to act in the public interest, as their constant navigation tool when facing an ethical dilemma.

Ethics vs compliance

In every sphere of professional activity, accountants, and the clients they work for, must deal with an ever-increasing tide of regulation. In addition to financial reporting and auditing standards – and alongside legislation governing taxation, anti-money laundering and sanctions – the profession is expected to be familiar with legislation, standards and regulations ranging from those relating to employment, competition and procurement to sustainability, data protection and corporate governance. This is the price to pay for being a trusted advisor. So great is the volume and weight of regulation today, however, that it pervades much of the profession's decision-making and innovation.
More than just compliance

It is important that accountants do not become complacent and that they remember that professional ethics is about much more than mere compliance. Indeed, they may be so preoccupied with gathering evidence of compliance that they fail to reflect properly on the rightness and wrongness of the actions and decisions they take.

Dilemmas facing accountants can be regarded, broadly, as either regulatory or judgemental in nature. Law and regulation provide the framework for ensuring compliance with regulatory issues. As the body of rules and regulations grows unevenly across different jurisdictions, however, opportunities for regulatory arbitrage increase, potentially distorting markets. More importantly, not all dilemmas can be dealt with directly by a clear regulation. Ethical issues that fall outside clear rules must be judged in the context of the value framework the individual professional believes in. This framework is provided by the ethical education and self-awareness of the accountant, supported by a professional Code of Ethics and experiential/reflective learning.

The role of personal values

In determining how to deal with any ethical dilemma, the accountant will be strongly influenced by their individual moral perspective. When considering whether a particular action is potentially good or bad, some accountants may prefer to emphasise the ultimate outcome, taking the view that the end will justify the means. Others may believe that the action itself must be judged, rather than its consequences. Still others may believe that humans are inherently self-centred and competitive, and will make decisions in their own interests, albeit complying with the law.

Ethical behaviour, therefore, requires that each professional accountant undertakes detailed self-reflection to fully understand how their values influence their approach to decision-making and how they are likely to react under pressure.
When there is a conflict between our conscience, our ethical reasoning, the requirements of our workplace and our limited ability to influence outcomes, cognitive dissonance is inevitable. Ethical self-reflection, and close scrutiny of the guidance provided by the Code of Ethics for Professional Accountants, can help the professional accountant forge a trajectory to ethical decision-making when under pressure.

Importance of the Code of Ethics for professional accountants

Professional accountants who are members of one of the bodies comprising the CCAB must adhere to the Code of Ethics for Professional Accountants, which includes the International Independence Standards issued by the International Ethics Standards Board for Accountants (the Code). Perhaps inevitably, to accommodate the increase in regulation and standards, the Code has expanded exponentially in recent years. However, it is important to remember that the application material and more detailed sections of the Code are simply an expansion of the five fundamental ethics principles. Professional accountants should be guided not merely by the terms but also by the spirit of the Code.

These principles, together with the overarching professional duty to act in the public interest set out in the Code, are broad enough to deal with most of the challenges accountants face in their daily professional lives – particularly when combined with informed ethical self-reflection.

This article was written by Professor Patricia Barker, FCA, Lecturer of Business Ethics at Dublin City University, on behalf of the Consultative Committee of Accountancy Bodies

Sep 19, 2024
Professional Standards

Amendments to the approach to confirming compliance with CPD/Code of Ethics

Recent amendments to the Institute's CPD Regulations have simplified how members confirm compliance with CPD requirements and the Institute's Code of Ethics[1]. Henceforth, by paying the annual membership subscription, permitting it to be paid on their behalf, or otherwise renewing their membership, members automatically acknowledge CPD compliance and awareness of their Code of Ethics obligations. As a consequence, members generally will no longer have to submit an annual declaration (the Individual Annual Return) in respect of these matters.

Further information on the Institute's CPD requirements is on the CPD Support & Guidance webpage. Documents on this page also set out the circumstances in which members may apply for an exemption from CPD requirements; there are no changes in this regard. Members who have such exemptions are considered compliant with the Institute's CPD Regulations as they are availing of a waiver in accordance with those Regulations.

Similarly, there is no change to the Institute's current approach to substantive testing of CPD compliance, whereby a sample of member CPD records is selected for review on an annual basis. Responsible Individuals (statutory auditors) in audit firms registered by the Institute remain subject to a separate CPD compliance regime based on company law and IAASA requirements.

If you have any further queries in relation to the above, please contact us at professionalstandards@charteredaccountants.ie.

[1] Additional requirements continue to apply to members holding Practising Certificates and to those who are Responsible Individuals (statutory auditors).

Sep 05, 2024
Comment

The ethics and governance of AI

The ethical use of AI, and how it is governed today and as it continues to evolve in the years ahead, is top of mind for many in the profession. Accountancy Ireland asks three Chartered Accountants for their take on the ethics of AI

Owen Lewis
Head of AI and Management Consulting, KPMG in Ireland

It is crucial for all of us in the profession to ensure the integrity and transparency of solutions driven by artificial intelligence (AI). We must audit and validate AI algorithms to ensure they comply with regulatory standards and ethical guidelines. Monitoring systems for biases and inaccuracies is also crucial to ensuring that financial data and decisions remain fair and reliable. By providing independent oversight, we can help to maintain trust in AI-driven financial processes and outcomes for clients.

Where AI is used to inform large-scale decisions, it should be supplemented with significant governance measures, such as explainability, transparency, human oversight, data quality, and model robustness and performance requirements. This technology is continuing to advance rapidly, and we need to be open to both its current and potential capabilities. By putting the correct governance mechanisms and controls in place – beginning with low-risk test applications and building from there – organisations can adopt AI safely and obtain real benefits from its use.

I am working with organisations to help them think through what AI means for them, develop strategies for its adoption, put the necessary governance and controls in place, scale solutions sensibly and ensure business leaders get real value from their investment. Whatever their goal may be – more efficient operations, accelerated content generation or improved engagement with stakeholders – we help organisations decide if AI can help and, if it can, how to use it in the right way.
Bob Semple
Experienced Director, Governance and Risk Management

Artificial intelligence (AI) is one of the most misunderstood, yet transformative, technologies impacting the way we work today. Here are 10 essential steps Chartered Accountants should take to navigate the AI landscape effectively:

1. Take a leadership role – If we don’t take the lead, we risk missing the golden opportunity AI presents.
2. Conduct an AI “stocktake” – According to a recent Microsoft survey, 75 percent of employees are already using AI. Identifying current AI usage within your organisation is essential.
3. Assess the downside risks of AI – Legislative and regulatory requirements are expanding rapidly (e.g. NIS 2, the AI Act, DORA and more) and risks abound (AI bias, explainability, privacy, IP, GDPR, cyber security, resilience, misuse, model drift and more). Organisations must act on their AI responsibilities.
4. Conduct a dataset stocktake – Just as the Y2K challenge was about identifying IT systems, today’s challenge is to catalogue all datasets, as these are crucial for AI functionality.
5. Draft appropriate policies and procedures – Establish clear responsibilities and accountability for AI initiatives. Pay special attention to how AI impacts decision-making processes.
6. Strengthen data curation – Implement new processes to improve how data is collected and used.
7. Identify opportunities for the smart use of AI – Brainstorm and prioritise AI use cases that can drive efficiency and innovation.
8. Provide training – Ensure that board members, management and staff are all adequately trained on AI principles and applications.
9. Manage the realisation of benefits – Safeguard against excessive costs and subpar returns by carefully managing the implementation of AI projects.
10. Update audit and assurance approaches – Seek independent assurance on AI applications and leverage AI to enhance risk, control and audit processes.

As we adopt AI, it is critical that we pay particular attention to distorted agency – that is, giving too much agency to, or relying unduly on, AI outputs while doubting our own agency to make the most important decisions. Exercising professional judgement is the key to minimising the risks associated with AI and realising its benefits, and that surely is the strength of every Chartered Accountant.

*Note: GPT-4 was used to assist in drafting this article.

Níall Fitzgerald
Head of Ethics and Governance, Chartered Accountants Ireland

Artificial intelligence (AI) is proving to be transformative, impacting competitiveness and how business is done. Chartered Accountants Ireland has engaged with members working in various finance and C-suite positions, including chief executives, chief financial officers and board members, to understand how AI is impacting their day-to-day work. One thing is clear: AI is being used in some shape or form in many businesses across the country.

In 2023, the Institute’s response to the UK Financial Reporting Council’s proposals on introducing governance requirements for the use of AI noted several governance mechanisms that are likely to be affected by AI, currently or in the very near future, in many organisations. We highlighted the focus on corporate purpose and how market forces, and emerging threats and opportunities driven by AI, may challenge the purpose of an organisation and its long-term objectives. AI may influence how organisations decide on their strategic focus in terms of how they deliver their product or service and, indeed, how their product or service is designed in the first instance. It may also impact these organisations’ values as they consider how to deploy and use AI in an ethical manner.

The EU AI Act, which enters into force on 1 August 2024 on a phased basis, introduces requirements for the development of codes of conduct, risk and impact assessments, and staff training to ensure adequate human oversight of the use of AI systems within organisations.
This has specific resonance for Chartered Accountants, who are members of a profession bound by a code of ethics governing objectivity, confidentiality, integrity, professional behaviour, and competence and due care. Chartered Accountants must now ensure that they understand how AI uses, analyses and then outputs data. Organisations must ensure that any AI-driven information they share, and how they deploy the technology itself, satisfies the principles of integrity, honesty and transparency. Chartered Accountants, with their ethical mindsets, are well positioned to ensure the integrity of AI systems and their use within organisations.

Aug 02, 2024
