Hosts
Liz Riley, Deputy Editor, and Michael Diviney, Head of Thought Leadership, both from Chartered Accountants Ireland.
Guests
Daniel Milan, Assistant Professor in Business Ethics and Director of the Corporate Governance Lab at Trinity Business School, Trinity College Dublin.
Barry Scannell, a leading solicitor in the technology department of law firm William Fry.
Producer
TinPot Productions
Published
November 2023
Transcript
Liz Riley: Hi there, I'm Liz Riley, and this is the fourth episode, after a bit of a break, of the Accountancy Ireland podcast Thought Leadership series. The Head of Thought Leadership at Chartered Accountants Ireland, Michael Diviney, and I, with the help of knowledgeable experts, will discuss different aspects of what business and finance professionals can anticipate in the future. Coming up today, artificial intelligence is certainly the 2023 buzzword, but what does it mean for business ethics?
Here to discuss are Daniel Milan, Assistant Professor in Business Ethics and director of the Corporate Governance Lab at Trinity Business School, Trinity College Dublin, and Barry Scannell, a leading solicitor in the technology department of law firm William Fry, specialising in AI, copyright, IP, technology law and data protection. Thank you all for being with myself and Michael today. Just to start off, in its recent 2023 CEO outlook, KPMG reports that despite a willingness to push forward with their investments, global CEOs recognise that emerging technologies can introduce risks that should be addressed.
57% cite ethical challenges as the top concern when it comes to implementing generative AI, followed closely by a lack of regulation. As scrutiny and regulation of AI increases, organisations may need policies and practices they can articulate and apply with confidence. So clearly, organisations want to apply and gain from the benefit of AI, specifically generative AI, but it seems to have ethical concerns about its use.
Daniel, what is it about AI that has brought business ethics to the surface and the top of agendas? And what do you think are the pros and cons and benefits and downsides of this technology that has come about so quickly?
Daniel Milan: I think we want to know whether something is good or bad. In philosophy, there's something called the open question argument, which says that no matter what you talk about, you can always end the conversation with the question: but is it good? And AI is new, and we want to know, is it good or bad?
And that's an ongoing discussion. It reflects to some extent the good versus evil debate that's been part of humanity. And there are risks, there are opportunities and we are grappling with that at the moment because it's new. I think the benefits are there for all to see. It makes our lives easier. Just this morning, I saw the announcement of Microsoft's new AI assistant that will attend team meetings on your behalf and send you summaries of the meetings afterwards.
I get reminded, and sometimes frustrated, by my phone when I didn't respond to an email. And then, you know, inevitably, it's correct – I didn't respond to the email. The maps that brought me here, everything like that. It makes our lives easier. AI, to a large extent, is very consumer driven. But at the same time, that brings the downside, where people get concerned about privacy, about whether we still have free choice, whether we are still in control.
And those are obviously where the ethical concerns and the downside would come into play.
Liz Riley: Barry, can you explain what legislation is proposed at EU level to address the regulatory gap they were talking about?
Barry Scannell: For sure. So, in the EU, the AI Act has been proposed to regulate AI, and this was put forward by the Commission in 2021. And a lot has changed in the AI world since 2021. Now the AI Act is currently going through the EU's trilogue process. So the Commission, the Council and the Parliament have all proposed their texts, and now they're negotiating with each other in order to find a common text.
Hopefully that will be decided at the vote on the 6th of December, and we'll have an agreed text then. And there will be a two-year transition period before the Act comes into force. So I think the important thing to recognise about the AI Act is that it's a piece of product regulatory legislation. So it's the same as anything you can imagine that carries a CE mark, which is governed by some element of product regulatory legislation. And it's going to be the same with the AI Act: there is going to be a need for certain AI systems to carry that CE mark and to be regulated in that sense. And another important aspect of the AI Act is that it takes a risk-based approach to AI, where you'll have some AI systems that are unacceptable risks and will be prohibited.
And those would be systems that, for example, carry out subliminal emotional manipulation, which thankfully is banned and will be banned in the EU. It's not banned in other countries – it's actively deployed in other countries.
Liz Riley: Are you thinking of any countries in particular? Yeah. Right. [Laughter from Liz and Barry]
Barry Scannell: And then there are other systems called high-risk AI systems. Now, when the EU Commission did their impact assessment for the AI Act way back in the day, they reckoned that about 35% of AI systems would be covered by the AI Act, but now it's looking like it's going to be considerably more.
To give a very brief example of why it would be considerably more: Annex Three of the AI Act deals with the various high-risk systems that will be caught by the Act, and one of those is AI systems used in HR. So it's AI systems used in recruitment processes, AI systems used to create targeted adverts, AI systems used to carry out performance monitoring. And very few HR functions these days that use software as a service or third-party vendors like that are using HR systems that don't incorporate some element of AI.
So there's potential that many organisations across a massive spectrum might actually be considered users of high-risk AI systems under the AI Act. And that's just one example. So, all companies are going to need to look more closely.
And it's not just generative AI, but it's all different types of AI systems that companies need to start considering.
Liz Riley: What do you think the level of awareness of these new laws is among business leaders, senior executives, boards and, seemingly, HR departments?
Barry Scannell: Well, very fortunately, only this week the Institute of Directors published findings of a survey they carried out, which said that 75% of senior executives in Ireland are not familiar with what their obligations will be under the AI Act. And certainly, in terms of the work we do at William Fry, one of the big pieces of work we're doing is that education piece – meeting with boards, meeting with executives and workshopping what the various elements and impacts of the AI Act will be on their businesses.
But so far, it's only the biggest companies that are actually that forward-thinking and looking at those aspects. So you're talking about major PLCs and even major multinationals that want that type of assistance. But there will come a point where smaller Irish companies and SMEs are going to need to get to grips with this, because this piece of legislation is basically the GDPR on steroids.
Liz Riley: And when would they need to be ready by, do you think?
Barry Scannell: Well, there's a two-year transition period. It's expected, or hoped, to pass in December of this year, so it would be the end of 2025. But here's the thing, though: with GDPR, there was this mad rush – any technology lawyer or data protection lawyer will have a thousand-yard stare when they talk about the time before GDPR came into effect, because there was a massive rush.
And I guess the issue is that you're not going to be able to do this, in my opinion, three months in advance. You're not going to be protected by a mere paper shield, because it's one thing to come along and say, okay, these are my procedures and we're fine. But with the AI Act, the obligations, I think, require a much more operationalised approach.
It's not just having the policies in place – you actually have to have technical measures in place, and operationalised approaches in relation to, you know, logging of information, technical information, bias monitoring. And even if you're just a user of AI systems, there are going to be elements you may need to consider beyond just having a lawyer come in and say, okay, there's your policy, off you go. So I think, you know, it's time to start thinking now. And as we've been talking about the ethics side as well, I think the AI Act is the perfect template for any company looking to implement a responsible AI framework, because anything you would consider to be ethical or the right thing to do – the responsible thing to do – is actually set out in explicit detail in the AI Act.
Liz Riley: Can you summarise the discussion or the debate in the business ethics literature about AI, particularly since the emergence of generative AI? It was about a year ago – not a year ago today, but about a year ago – that we got our first look at ChatGPT. What is the discussion and debate around it?
Daniel Milan: Well, because generative AI is so new, as you point out, it hasn't really found its way into the academic literature. Unfortunately, in my field, you know, if I submit an article today, it could take up to two years for it to go through a peer review process before it's published. So I would say, just anecdotally, for generative AI it's almost a more philosophical question at the moment, in terms of originality – that's what keeps people busy.
This has been generated by a computer now, through an algorithm. In the educational environment where I work, obviously this has huge implications for questions about plagiarism. Because it's new, it's almost a new offence that's been created: you didn't plagiarise anything, because it's new, but you didn't create it. So there's a void that has to be filled there.
But overall, I mean, obviously a lot has been written in the academic literature about AI more generally. And I recently looked at this – I just took 2015 as a cut-off point and asked, what's been written about AI ethics in top academic journals since then? And through an analysis of that, a couple of issues came out. Trust is number one.
That's the issue that is mentioned most often in these articles, followed by bias, which is clearly one of the concerns about unfairness – the fact that these algorithms would prevent people from getting jobs, that people would receive unfair treatment. The next one was job security, which is linked to some extent to that. Privacy was number four – concerns about, you know, handing over your data, benefiting from AI, but then having to give something in return. And that's the data that we don't know how it's being used by the big companies. And accountability was number five. From those, just very briefly, I have identified three main themes, if you like. What's being written about, on the one hand, is design issues – what are the ethics implicit in the technology itself?
And that's where things like bias would come in. How do you define the algorithms? How do you prevent bias from being designed into systems? So that's the design side. The second theme is the impact of the technology. That's where things like job security would come in. Even way before AI, if you think about the broader concept of frontier technology and robotics, there's this idea that it helped people up to a point – in automotive companies, for example, not having to lift heavy stuff. So they were happy that the robots were there to help them. But now they don't have a job anymore, because the robots can do it all.
So that's the second one: impact. And the third one, again linking with what Barry said, is regulation. How do you protect consumers? How do you make sure that this very powerful technology is used in a responsible way, and in a way that benefits not only the companies that are due to make a lot of money from employing it, but also consumers and society at large?
Liz Riley: Not that these aren't legitimate concerns, but do you think there is anything really to worry about here when it comes to the ethical side of it? I know you brought up that, obviously, you're in academia and so you have plagiarism to be concerned about. But in a business context, do you think there are things to be concerned about?
Daniel Milan: Absolutely. I think, you know, similar to many other functional areas in business, everything can be submitted to ethical scrutiny – and I think it should be. So it's absolutely essential that we ask these very difficult questions. The answers are not easy, because inevitably it's going to be a double-edged sword – the technology itself is agnostic, in a way.
It's how you apply it. It's what your intentions are when you use something, either for good or bad. So I would be a big advocate for maintaining a very, very strong ethical focus on this, alongside discussions about regulation. That ethical-versus-legal quandary is always going to be with us, and I think it's also relevant here.
Liz Riley: Earlier this year, some AI industry leaders, researchers and influencers signed a statement warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". What do you guys think of that?
Barry Scannell: I think it's unmitigated nonsense. I can't put it strongly enough – what a monumental, monumental waste of time and intellectual effort it is considering those questions at this point. I'd question the motives sometimes, and I'd take them with a pinch of salt. I think there are far more pressing issues to be concerned about in relation to AI, such as the impact on the creative industries, as just one example of how employees might be impacted. As another example, your listeners are going to hear a new phrase in the next few months that's absolutely going to eclipse the likes of ChatGPT, which is autonomous agents. And autonomous agents – that's what's really going to change things. Autonomous agents are like – imagine a little swarm of very clever ants who can talk and have access to your emails.
Basically, they're autonomous digital agents. Let's say you wanted to organise a work night out – and it's such a pain to organise the work night out, because you have to email everyone to find a date, you have to email the venue, you have to book the venue, you have to figure out what people would like to do. Whereas with autonomous agents, you just tell the autonomous agent to organise the work night out and it'll do everything in the background: it'll send the emails, it'll make the bookings, it'll identify what people will be happy to do. And that sounds extraordinary, but the technology exists, and it very much exists. There are going to be issues with that, potentially.
I mean, you know, what if an autonomous agent tries to be helpful and sends somebody's mistress a bouquet of flowers, and it appears on a credit card statement or something like that? So, you know, there are far more mundane, pressing concerns, and I think talk of extinction and the existential threat of AI is very much tied to sensationalism and clickbait. And it's what grabs the headlines. But, look, if you give AI access to launching nuclear missiles, I totally agree that things might actually happen, but I don't see anybody rushing to do that. So I think those concerns are very much overwrought. I think, like any technology, it should be regulated.
As it happens, it's going to become one of the most regulated pieces of technology in history.
Liz Riley: So, you're on the fence.
Barry Scannell: Yeah.
Michael Diviney: But you would recognise that there's a sense of caution, if not fear, out there – from a practical perspective, what do you think is behind that?
Barry Scannell: Sensationalist media headlines saying that the world is going to end because of AI might have something to contribute to the fear around AI. Like, you know, we're at the coalface dealing with clients who are deploying AI across their companies, and they're not worried that the AI is going to become sentient and take over humanity.
They're worried about PR risks. They're worried about data protection concerns. They're worried about, you know, the hallucination factor of generative AI. They're worried about IP issues. And yeah, there is a fear and there is a concern because there's a new technology. But it's not stopping companies. It's, as you say, giving companies a cautious approach, which is the correct approach to take, I think, with new technologies.
But I think the other side of the coin is that everybody's going ahead with it. So, I think, you know, moving forward but cautiously is the correct approach to take.
Michael Diviney: So, there's no way of avoiding AI. I mean, we mentioned SMEs and smaller companies earlier and, you know, some people might say, you know, I'm just going to not use AI, you know, and therefore, you know, avoid the reach of the legislation or the need to comply with it. But it's going to be pretty impossible to avoid it, I think.
Barry Scannell: I think a lot of SMEs didn't use the internet.
Liz Riley: How are they doing?
Michael Diviney: When we were chatting the other day, Barry, you said that, for a lot of enterprises, adopting AI feels like handing the keys of their new Porsche to their 17-year-old for driving lessons. Is that still a good analogy for you?
Barry Scannell: You know, I think it doesn't really matter what car you're driving, as long as you start the driving lessons in a car park, somewhere far away from where you can cause any damage. And if you're sitting there and there's proper guidance – call it guardrails – in place, I think that's appropriate. But I think you have to be practical about these things. Of course, a Porsche isn't going to be much use to most 17-year-olds doing driving lessons. And likewise for companies: you're not going to be getting the Porsche equivalent of an AI system, you're going to be getting the Ford Fiesta that's able to do the necessary tasks you need it to do.
Daniel Milan: From my side as well, I fully agree with Barry that this whole extinction debate is complete nonsense. It's an over-exaggeration. It goes back to what I said earlier, that we have this inherent debate about good versus evil, and sometimes people will over-exaggerate the evil part of this. And it's built into what I would almost call a fallacy – that one day we're going to find out, you know, whether this was really good or bad, and then these people will say, well, we warned you and we told you so. But that's nonsense, because it's not either/or. As I said before, it's both good and bad.
It will continue to be good and bad. That's why we need regulation. That's why we need standards and policies in companies as well. Because a lot of the people who use this, or the programmers who build bias into algorithms – they're not evil. It's a lack of training, a lack of understanding. And another big tension, I think, is that people who understand the ethical implications of these issues quite often don't understand the technology at all. So it's very difficult to transplant your ethical knowledge to this whole new area. And at the same time, engineers and programmers and people like that were never trained, with few exceptions, in thinking structurally about ethical issues, morality, business ethics, applied ethics, etc. So there's a gap that has to be bridged, and I think many universities and organisations are trying to do that as well.
Liz Riley: Do you guys think this statement was a PR stunt, maybe? Some of the people who signed that statement – when you take a step back and look at them and their history of business and ethics, you go, well, if he is saying this, then there must be a real issue with this. We will not name names. Do you think that this was all just a stunt? What could have inspired such a statement? Michael, do you have any thoughts on this? We have two experts here saying this isn't actually true, this is nonsense.
Michael Diviney: There are, you know, quite a few leading thinkers – Max Tegmark springs to mind, and Stephen Hawking has famously been quoted about this extinction risk. But this is very far in the future, and there are question marks as to whether it can ever happen – this so-called superintelligence tipping point, where machines actually attain what's called artificial general intelligence, which is human-level intelligence, and can actually teach themselves, and we get to the point where we don't understand what's going on in their consciousness. I've seen that movie. Yeah. So you could say it's far-fetched, but I think Barry's right.
I think our immediate, near-term – even medium-term – concern should be about the practical regulation and ethical issues around the AI that we have now, which is generative AI.
Liz Riley: Turning to the KPMG CEO survey and the CEOs' concerns reflected in it: until regulations are in place, what practical measures can organisations take to manage and mitigate the ethical and reputational risks around the use of AI? And Daniel, even when such laws are in force, do you think they can anticipate all of these behaviours and pitfalls?
Daniel Milan: I don't think you can ever anticipate everything that's going to happen, but I would suggest that, from a company perspective, one of the most important things would be to ensure that you have the requisite skills and knowledge inside the company, particularly at board level, because that's where strategy has to be set. That's where you have to get final sign-off on issues around risks and controls inside the company. So you need people on the board and in executive positions who understand – not the technicalities, but at a very high level – what's happening here, the speed at which it is happening, and what the major risks and also opportunities are.
Secondly, I would argue that you have to move beyond this sort of 'simply comply' attitude. So yes, we have to monitor what's coming up in the regulation; we have to anticipate it; we have to make sure as a company that we will comply. But if that's your end goal, I think you're going to miss out on some of the opportunities that these technologies present. And that's why I think that AI, and technology more broadly, should be viewed as a strategic issue inside organisations, alongside compliance and risk management.
And finally – I'm not a lawyer – I think you should adhere to some of the basic principles. There's a lot of activity at the moment, in terms of collaboration between different communities, to ask: can we come up with a set of ethical AI principles? I think those principles underpin the regulation and legislation that we see, around fairness and transparency, etc. If you can ensure that you stick to those minimum ethical requirements, even without regulation, then you should be okay.
Liz Riley: You brought up trust earlier as a major concern. Barry, how do you think AI can gain the trust of the public?
Barry Scannell: I think it's a question of how it's being perceived publicly – and again, we're going back to sensationalist media headlines. But I think the way AI is really going to become ingrained in society isn't necessarily directly via consumers: the less obvious it is, the more ingrained it becomes. And I think it's just a question then of assimilation, of becoming used to it, of seeing what its benefits are and seeing that, look, it's just another type of data processing. And we're so familiar with data protection legislation, we're so familiar with all of the issues that come along with protecting data, that there shouldn't be any difference. But I think from a company's point of view, if we're talking about the public in a commercial enterprise setting, there are really mundane and fundamental aspects that, I think, need to be considered.
And, you know, when there's something so new and so exciting, I think the instinct is to go to big thinking – thinking around concepts and looking at the regulation. But I think what companies actually need to do, first and foremost, is bring it right down to a granular level. So what you need to consider as a company: do your contracts adequately protect your IP? And if you're not able to rely on copyright, say, are you going to be able to look at other aspects, such as commercial secrets and confidentiality, for things like the outputs of AI? That's on the output side. I think you also have to consider that all of the AI companies that are getting sued at the moment are getting sued for copyright infringement.
So what you need to consider as a company, if you are using an AI system – and we're seeing cloud-based machine learning platform-as-a-service type systems, and we're also seeing very bespoke, fine-tuned proprietary models based on open-source foundation models and large language models – is: do the contracts that deal with these have the proper third-party infringement indemnities and liability caps in place? Are the reps and warranties in those contracts sufficient to deal with it? So it's actually the really mundane, boring stuff that is currently the most important thing, because the mundane and boring stuff isn't what companies have started thinking about. And I'll give a final example: when you're on the phone to customer service at the bank or your insurance company, whatever, and you get that recording – 'please note that these calls may be recorded for training and verification purposes' – well, one of the main uses for AI and large language models in businesses at the moment is deployment in customer service. So you're going to have AI systems that are trained and fine-tuned on interactions between customers and customer service agents to create help bots. So many companies that are listening are going to be considering this, if they're not actively doing it already. But have you got a new recording saying, oh, and this might be used to train AI systems? And moving a little bit further from that, does your privacy notice cover the fact that company data, employee data and customer data might be used to train AI systems? Do your standard contractual clauses permitting transfers of data outside the EU cover it? Do your data processing agreements cover it? And the answer is invariably no, because, as we said at the start, this technology isn't even a year old. So no matter how advanced your data protection regime was, it's unlikely to be so advanced that it has already incorporated AI.
So, to get back to your original question – how do you build trust? You have to look at the basic building blocks. It's about baby steps and building the fundamentals of the framework first. And once you have the fundamentals in place, then you can feel safe, because from there, risk mitigation and so on flows. And the less harm that's being done, the more trust there will be.
Michael Diviney: Isn't there a new proposed AI Liability Directive from the EU, which is going to support, I think, the practical aspects of what you're saying there?
Barry Scannell: Well, you know, this is something that companies need to be particularly aware of. So yeah, the AI Liability Directive is coming in tandem with the AI Act. The AI Act is a piece of European regulation that will have direct effect, like the GDPR. The directive is the more typical piece of European legislation, where you need national transposing legislation to bring it into force.
Basically, what the AI Liability Directive does is make it significantly easier to sue for damages caused by AI.
Liz Riley: Is there any possibility that there can be a globally agreed standard, an ISO standard, that would align with what appears to be a stringent framework that is being proposed by the EU?
Barry Scannell: I think there are currently 22 ISO standards – it might be 29 – in the works to deal with AI. Do I think it'll be globally agreed? In short, no, but I think the Brussels effect will have a significant impact, in much the same way that we saw with the GDPR. And I think it's already starting, because if you look at the bipartisan framework that has been put before the US Congress for their AI regulation, I think it very closely mirrors the provisions of the AI Act.
Another thing to consider is that if you want to make your products available in the EU – one of the most valuable markets there is for technology and technology goods – you just have to comply with it. And it doesn't matter if you're based in Berlin or in Boston, you're just going to have to comply. Now, you can ignore the European market – good luck to you if you do – and that's part of the approach the EU has taken. I do think that if we are to address it fully and properly, there needs to be more joined-up thinking, but creating a global convention – a copyright convention like the Berne Convention or the WIPO convention, for example – takes decades, literally.
So we're nowhere near something like that. And I'm not even sure the appetite's there for it, because in the United States, with the current raft of copyright and infringement cases going through the courts against AI companies, it's looking like the US doctrine of fair use might actually end up being used to permit text and data mining in the US as well.
So yeah, there's a very fragmented international framework. And yes, it's problematic, but I don't think there's a solution on the horizon.
Michael Diviney: There has been some pushback from companies in the business sector as the Act – the EU Act – has gone through, arguing that it's too stringent, that it's going to hamper business in this region. Is it fair to say that it will? Or do you think it's an advantage having stricter regulation?
Barry Scannell: I think you could name literally any piece of European legislation and what you've said would be accurate – oh, there's been pushback by companies in relation to this piece of European legislation. I'm sure if there were a piece of European legislation coming through saying, you know, don't eat babies, there would be a company lobbying on the other side saying, well, yeah.
But all jokes aside, there is that question of whether it might stifle innovation. And I think something that we've seen – and it's been the theme of this conversation – is that regulation is actually something that's helping companies deploy AI, deploy it safely and feel comfortable doing it. Sam Altman, the CEO of OpenAI, was before Congress in the United States calling for more regulation around AI.
Also, I think you need to consider what the AI Act is actually doing. And when you look at it, it's not putting European companies at a disadvantage, because it's just about AI systems that will be available on the European market. So whether you're an American company or an Irish company, you still have to comply with the same provisions if you want to make your system available in the European market. So I think having these guardrails in place actually helps innovation. I think it gives the framework that companies are striving for and looking for. And, you know, being at the coalface, we see the early issues companies have when they're deploying AI across their systems. So, for example, bias, as you mentioned, Daniel, is a huge issue – a huge issue.
And, you know, companies are saying, what do we do? How do we do this? And what we're doing as well – we've developed an AI impact assessment which, you know, does an assessment of the AI, how it impacts stakeholders, customers, employees and so on. But it also incorporates a fundamental rights impact assessment, which is actually required under the AI Act, and a data protection impact assessment, which may also be required under GDPR and the AI Act. And these are helping companies put in place these systems with the level of comfort that they can address bias issues.
Michael Diviney: And we're hearing that from CEOs. For example, in the survey we mentioned earlier, they're looking for regulation – they're looking for, as you say, the guardrails. And there's potential, I think, for the EU AI Act to be an international benchmark, or a benchmark for regulation elsewhere.
Daniel Milan: Yeah, I agree with what you've said. And perhaps coming at it from a slightly different angle: I think the risk of a global agreement is always that – and that's just the reality – it's going to be high-level and not enforceable, if you think of examples like the UN Declaration of Human Rights. A more interesting possibility might be the UN Global Compact, which is the world's biggest corporate responsibility initiative. And there's actually precedent there, because they used to have only nine principles, and then they added the tenth one on anti-corruption. So perhaps they could add an eleventh one on responsible technology or AI, and get companies to sign up to that.
But ultimately, I would say the ideal would be for companies to simply adhere to those sort of basic principles – that would be the guarantee that, even with the guardrails, they stay within them. An extreme example, but perhaps a useful one: I'm using a platform called Midjourney at the moment, which generates great AI images. You simply prompt it and say, imagine this and this, and then you get the image. And if you look at their code of conduct, the first line is: don't be a jerk. That's naïve in a sense, but I think it is helpful. Obviously you can't stop there, but if people adhere to some of those very, very basic ethical principles, in terms of fairness and transparency, you can build on that with the regulation – but you need that foundation.
Liz Riley: Don't be a jerk. Yeah, yeah. Before we go – because we're running out of time here and we've taken up enough of our experts' time – Daniel, can you tell us about your campus start-up, Integrity IQ, and its use of AI to help organisations address and prevent ethical failures?
Daniel Milan: Yeah, Integrity IQ is going to become a Trinity spin-out company. We received a commercialisation grant from Enterprise Ireland and are working with the Innovate Centre, which is a centre hosted by Trinity. It's described as an intelligent, integrated integrity system. And it will use all these things that we spoke about, in a responsible and ethical way, to provide personalised ethics training at scale for organisations. So, using many of these tools, and using a framework that I developed – I won't go into detail – called ABCD, which looks at assessment, behaviour, compliance and disclosure as the main integrity management components. The idea is that no matter how big the organisation is, each employee will get their own personalised ethics training based on their perceptions, their experience, their level of seniority and their functional area inside the organisation, in an immersive way – they create avatars – and it will be fun to do at the same time.
Michael Diviney: So, it has the potential to use AI to help people be ethical with AI, is that it?
Daniel Milan: Absolutely! And, you know, it will be dilemma-based, so there will be many scenarios. We're working on those at the moment – I've collected thousands of them over the years working with companies – and clearly technology and AI will be built into the content as well. We're working on that at the moment.
We hope to start pilots early in 2024. And the company will be spun out by the end of next year.
Liz Riley: That sounds brilliant. And, Barry, what is your vision about how all of this disruption will play out and settle down in the long-term when it comes to AI?
Barry Scannell: It'll be grand.
Liz Riley: "It'll be grand." That's the most Irish answer.
Barry Scannell: Yeah. And it will be. I think it's just about, you know, getting to grips with new technology and educating people and learning about it. I think the most immediate hurdle is that education piece: learning about the technology, learning what it's capable of doing, learning how it impacts the business and what the legal implications are.
I think most companies don't realise how beneficial the technology actually is. But, you know what, in Ireland we're so well placed, because Ireland really has an opportunity to become the world leader when it comes to AI and AI regulation – when you consider that nine of the top ten technology companies in the world have offices in Dublin, and all of the big AI companies in the world have offices in Dublin. And as you've seen with the GDPR, so much of the regulation goes through Dublin because of where we sit in terms of those technology companies. And I think with the AI Act it'll be the same. Actually, there have just been reports in the news that the Irish government is considering looking to host the AI Office – the EU's AI Office – which will oversee European regulation in relation to AI.
And I think that Ireland has a very good chance of getting that. So with the Brussels effect and everything else, I think Ireland could become an international hub for AI development and AI regulation. The really important aspect of why Ireland is so important in this space is that we've got the expertise: Ireland, per capita, has the most AI expertise in the world. And, you know, we talk about the big-picture ideas – myself and Daniel here talking about grand concepts – but there are people in industrial estates in Galway and Kerry working on really incredible AI technologies, and they're really forging the future for Ireland's place in relation to AI. So I think the future is extraordinarily bright.
Look, I'm already introducing my kids to AI, so I pay the €20 a month for ChatGPT Plus, or whatever it's called. And you have the app on your phone and you can actually have conversations with it. So at bedtime, sometimes, the kids will have a conversation with the AI and tell the AI system what bedtime story they'd like it to tell them.
So, you know, daddy turns into a baby, or there's a poop monster, or something like that. And the AI knows that it's dealing with young children, and it'll create a very entertaining story that I can then read to them. So even things like that – the educational function, the conversational aspect of ChatGPT, for example. Something we do at William Fry at the moment – it's the trainee milk-round season, all the applications are due in and we've been visiting the colleges; I was in Trinity this week, actually, and I was in UCD recently – is a Turing test with the students. So the Turing test is they have to guess: we ask ten questions and there are three answers – two from a human and one from an AI. And it's all done live and interactive, and the students have to vote on which one was the AI. And in both UCD and Trinity, despite the incredible expertise and talented students, overwhelmingly ChatGPT was able to convince them that it was human – like 80% of the time. And the students gave feedback as well on how they're using it.
And there are really good ways that students can use it. I was thinking, gosh, if I'd had that in college – you could feed ChatGPT your notes, and as you're walking in, you have your earphones in and you're having a conversation with ChatGPT, and it'll ask you questions and help you revise, and you can actually discuss it. And I've been trying it out myself. I was saying, okay, it's been ages since I looked at the law of negligence – can you just go through some of the most important principles and cases with me? And then when we're done, ask me questions.
Another example: I said, okay, I want you to teach me about astronomy, but I actually know a little bit about it, so I want you to ask me a series of questions to gauge my knowledge before you start teaching me. And it started asking me questions that got increasingly more difficult. And then I said, right, well, we'll go in at this level. So, you know, especially when we look at our children and the educational aspects of AI, I think the future's looking extremely bright, and I think Ireland is going to play a really important part.
Liz Riley: I don't know how we get more positive than that. Closing out. So I want to say thank you to both Daniel and Barry for taking the time to chat with Michael and me today, and you can read more about this topic on our website, accountancyireland.ie, as well as going to our Thought Leadership Hub on charteredaccountants.ie.
And you can listen and subscribe to the Accountancy Ireland podcast at accountancyireland.ie, on Apple Podcasts and on Spotify. So thank you for listening and bye for now.
TinPot: Today's podcast was recorded by TinPot Productions. The producer is Darren Moorhouse and the series editor is Liz Riley. This programme is published by Chartered Accountants Ireland.