Organisations adopting AI to streamline processes must provide clear guidance to staff on the dos and don’ts of using the technology, writes Moira Grassick
Artificial intelligence (AI) has gone mainstream this year. It seems that everyone has a story about how they have used ChatGPT, the generative AI tool, to make their personal or working life easier.
From an employer’s perspective, however, the rapid progress of AI raises difficult questions. Although a chatbot on the company website can be a valuable tool for interacting with customers, there are tricky ethical questions and business risks to consider.
Employers are grappling with issues such as whether staff should be permitted to use AI to make their jobs easier, data protection concerns, and whether the outputs generated by AI tools are accurate enough to rely on.
For employers, the key risk to assess is the scale of any damage their business might suffer if staff do not use the technology correctly.
Many people are familiar with the US lawyer who used ChatGPT to help him prepare a case, with disastrous results. The lawyer cited several cases in court filings that had been fabricated by the AI, unaware that the technology could generate fictitious precedents and produce inaccurate information.
To avoid the embarrassment of making a similar mistake, employers can take some prudent actions to protect their business against the risks posed by employees using AI tools.
Develop an AI policy
A good starting point is to develop a formal AI policy for your business.
This policy can address specific risks affecting your business. Some of the most common issues arising from the use of AI in the workplace are:
Protection of confidential client and employee information
While many of the tasks that typically involve AI do not pose any obvious risks, employees must understand that sensitive company data should never be submitted to AI tools.
AI tools analyse vast amounts of data to generate responses to queries, and it’s important that no personal information about your employees or customers is disclosed to them.
If an employee submits confidential information to ChatGPT or any other AI tool, your business is exposed to a range of privacy, commercial and data protection risks.
Your AI policy needs to clearly define what types of data employees can submit to AI tools.
Intellectual property risks
You also need to consider intellectual property risks. If your business publishes content online, it is important to ensure that AI-generated content does not infringe anyone else’s copyright.
AI tools typically do not cite the sources of the content they create. The output may draw on existing material from the internet rather than being genuinely original work, so organisations cannot easily verify whether publishing AI-generated content will breach someone else’s intellectual property rights.
If an AI tool reproduces someone else’s work and an organisation publishes it as its own, the organisation is exposed to accusations of plagiarism and the reputational damage that follows.
Safeguarding the organisation
With AI becoming mainstream, now is the time to start preparing your AI policy.
To get the most out of AI technology, you must inform staff about how to use the tools responsibly.
With a strong policy in place, you can ensure your business can reap the benefits of this powerful new technology while safeguarding your operations against confidentiality, intellectual property and data protection risks.
Moira Grassick is Chief Operating Officer at Peninsula Ireland