Nicola Flannery outlines how organisations can navigate the expanding landscape of AI by focusing on ethical deployment, regulatory compliance, and building consumer trust for sustainable growth and innovation
The true societal impact of Artificial Intelligence (AI) systems is yet to be fully realised. However, many already see AI as an engine for productivity and economic growth.
As organisations compete to be the first to unlock AI's full potential, governments and regulators worldwide have begun the challenging task of creating legislation and regulatory frameworks around a constantly evolving technology.
While there is still uncertainty around the risks posed by AI technologies, some caution must be exercised to truly understand them, particularly where risks and harms to individuals may arise. In addition, privacy and security concerns remain the leading factors limiting investment in AI-based solutions.
However, given the current buzz around AI, even organisations not currently considering it will be inclined to do so as the technologies evolve and mature. From this perspective, it is important to start thinking about AI use cases for your business and to be ready to implement such solutions in a manner that builds customer confidence and aligns with regulatory requirements.
There is no doubt that companies that mishandle how and where they deploy AI technologies will suffer significant reputational damage.
Trustworthy AI
While the risks of AI technology do exist, there is also no doubt about the benefits that can be realised.
However, the social and economic opportunities of AI may not be fully gained if the public’s concerns about the risks of AI outweigh their perception of the benefits. Therefore, it is crucial to ensure that AI technologies evolve and are deployed in ways that consumers and users can reasonably trust.
Trustworthy AI, also known as ethical or responsible AI, has been proposed to mitigate the risks and enhance consumer/user trust in such systems.
This is an umbrella term consolidating several components. According to the independent High-Level Expert Group on AI established by the European Commission, Trustworthy AI must be:
- lawful, respecting all applicable laws and regulations;
- ethical, respecting ethical principles and values; and
- robust, from a technical perspective, but also considering the social environment.
Applying a human-centric, trustworthy AI-by-design approach will go a long way towards propelling innovative AI efforts while being aware of the risks that must be mitigated.
Six dimensions for trustworthy AI
Fair and impartial
AI systems should make decisions that follow a consistent process and apply rules fairly. They should also incorporate internal and external checks to remove biases that might lead to discriminatory or differential outcomes, helping ensure results that are not merely technically correct but considerate of the social good.
Transparent, documented and explainable
AI systems should not operate as “black boxes”; all parties engaging with an AI system should be informed that they are doing so and should be able to ask how and why the system makes its decisions.
Responsible and accountable
The increasing complexity and autonomy of AI systems may obscure the ultimate responsibility and accountability of the companies and people behind these systems’ decisions and actions. Policies should be in place to clearly assign liability and help ensure that parties impacted by AI can seek appropriate recourse.
Safe and secure
As AI systems take greater control of more critical processes, the danger of cyberattacks and other harms grows significantly. Appropriate security measures should be implemented to help ensure the integrity and safety of the data and algorithms that drive AI.
Respectful of privacy
As AI systems often rely on gathering large amounts of data to accomplish their tasks effectively, we should ensure that all data is collected appropriately, with full awareness and consent, and then securely deleted or otherwise protected from unsanctioned use.
Robust and reliable
Just as we currently depend on the consistent performance of professionals to help ensure that our daily activities are safe and healthy, we should be able to depend on equivalent or even greater reliability as we integrate AI into our systems.
Nicola Flannery is Director of Data Privacy & Internet Regulation at Deloitte