The explosive growth of AI has transformative potential but also raises critical privacy concerns that must be addressed, writes Pat Moran
The world of artificial intelligence (AI) took a massive leap forward with the emergence of ChatGPT in November 2022. Since then, there has been a surge in the design and implementation of AI use cases across industries including healthcare, retail, financial services and manufacturing.
While AI is transformative, this powerful tool is not without its challenges, particularly the profound privacy concerns it raises.
As organisations eagerly harness the potential of AI, it is vital to understand the associated privacy risks, including:
- Data collection and breaches – As AI models evolve, their training datasets will likely grow, increasing the risk of personal and special category data being included. These datasets must be stored and processed securely while training AI systems.
- Algorithmic bias and discrimination – Algorithms trained on skewed data may inadvertently perpetuate bias and lead to decisions that negatively impact certain groups of people, even where the organisation has no intention to discriminate.
- Data subject requests – Once AI systems are trained and deployed, responding to certain data subject requests, such as erasure or rectification, becomes increasingly difficult because personal data may be embedded in the trained model itself.
- Transparency – As AI systems become commonplace in organisations, users will increasingly interact with these systems without knowing it, including instances where they are subject to automated decision-making.
- Regulatory requirements and industry standards – Even though AI is considered a novel technology, there are existing and upcoming regulations and standards that define and guide its usage. Organisations must demonstrate compliance with these regulations and standards to maintain customer trust and meet procurement standards in the market.
- Misuse of personal data in AI-enabled cyberattacks – Malicious actors have begun leveraging personal data, such as audio clips, to create deepfake content for advanced phishing attempts and other scams.
- Inaccurate responses – Generative AI programs commonly respond based on statistical patterns in their training data rather than verified facts, a phenomenon often called ‘hallucination’. This can result in inaccurate responses and may cause issues if users do not verify the authenticity of the system’s output.
Organisational changes for AI
To successfully navigate the concerns listed above while developing and integrating AI systems, organisations should consider the following best practices:
- AI governance: AI governance should be developed by an interdisciplinary group drawing on AI development, legal, privacy, information security, customer success and other teams.
- Privacy by design: The foundation of responsible AI lies in the concept of ‘privacy by design’, which requires that data protection and privacy considerations be embedded throughout the development lifecycle of any AI system. This includes incorporating privacy-enhancing technologies, ensuring appropriate security, and complying with regulatory requirements and other privacy principles. Some AI systems have a ‘black box’-like nature, which makes ethical, privacy and regulatory issues harder to detect and fix once deployed, increasing the need for privacy by design. Further, some processes may pose too high a risk to automate through AI and will require controls such as a ‘human in the loop’.
- Transparency: Users must be given clear and transparent communication, through privacy notices and other means, including: confirmation that AI systems are used to process their data (including details of any automated decision-making); how their data is collected and processed; how long it will be stored; and an outline of their rights. This information enables users to give informed consent and builds trust in both the AI system and the organisation.
- Fairness: An important step is to perform regular audits of AI systems to test their performance and check for bias or discrimination against users. The review should cover the automated decision-making algorithm, and the process by which the algorithm reaches decisions should be transparent and explainable. A minimal example of such a check appears after this list.
- Data management: Ensure data ingested by the AI system during training is lawfully obtained and of high quality, and that rigorous vetting and anonymisation have been performed. Technologies such as pseudonymisation or data aggregation should be implemented to comply with the data minimisation and retention principles (see the pseudonymisation sketch after this list). Up-to-date records of processing activities should also be maintained so that data is managed effectively throughout its lifecycle. Remember, organisations cannot use publicly available data to train AI systems without a valid lawful basis.
- Risk management, compliance and information security: A risk-based approach, including a data protection impact assessment, should be applied to assess the level of risk before AI systems are deployed, and the organisation should sign off on the risk levels, controls and mitigations. AI compliance monitoring should be incorporated into the organisation’s regulatory compliance or privacy programme. The wider information security programme should cover AI systems and their underlying data to prevent data breaches and malicious attacks, with technical and organisational measures such as encryption, data masking, password management, access controls and network security (an encryption sketch follows this list).
- Employee training: As AI is a new technology, employees must be trained periodically on responsible AI usage. Training should include the privacy impact of AI systems, compliance with data protection regulations while using AI, misuse of personal data in AI-enabled cyberattacks and how to guard against it, and data protection best practices.
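To illustrate the fairness audits mentioned above, the following is a minimal, hypothetical sketch in Python: it compares approval rates across demographic groups, a demographic-parity check. The function names, decision log and threshold are illustrative assumptions; real audits use richer metrics, statistical tests and human review.

```python
# Minimal demographic-parity check over an AI system's decision log.
# Illustrative only: production audits need statistical tests and domain review.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs from the system's output log."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Gap between the highest and lowest group approval rates."""
    return max(rates.values()) - min(rates.values())

# Hypothetical decision log: (demographic group, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rates(decisions)
if parity_gap(rates) > 0.2:  # illustrative threshold, set via policy and testing
    print("Potential bias detected:", rates)
```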
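For the data management practices above, here is a minimal pseudonymisation sketch using Python’s standard library: direct identifiers are replaced with keyed hashes before data enters a training pipeline. The key value and record layout are assumptions for illustration. Note that pseudonymised data generally remains personal data under the GDPR, because anyone holding the key could re-link pseudonyms to individuals.

```python
# Minimal pseudonymisation sketch: replace direct identifiers with keyed hashes
# (HMAC-SHA256) before data enters a training pipeline.
import hashlib
import hmac

# Assumption: in practice this key lives in a key-management system, not in code.
SECRET_KEY = b"replace-with-a-key-from-your-kms"

def pseudonymise(identifier: str) -> str:
    """Return a stable, non-reversible token for a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase": "laptop"}
record["email"] = pseudonymise(record["email"])
print(record)  # the email is now a token that cannot be reversed without the key
```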
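Finally, as one example of the technical measures listed under information security, the sketch below encrypts a field of personal data at rest. It assumes the third-party cryptography package is installed; its Fernet API provides authenticated symmetric encryption. Key storage and rotation, the hard parts in practice, are out of scope here.

```python
# Minimal encryption-at-rest sketch. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, load from a key vault instead
fernet = Fernet(key)

token = fernet.encrypt(b"date_of_birth=1990-01-01")  # ciphertext safe to store
print(fernet.decrypt(token))  # only holders of the key can recover the data
```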
Conclusion
The advent of AI may be compared to the invention of the combustion engine: organisations can move faster, but they also need stronger brakes. Building those brakes means addressing the multifaceted concerns above through a holistic approach that combines technological innovation, ethical practices, user empowerment and regulatory adherence.
Organisations’ responsibility will be to innovate and ensure that innovation aligns with the values of privacy, ethics and user trust.
Pat Moran is the Leader of Cybersecurity Practice at PwC.