The proliferation of AI-specific risks has become a significant concern. As new technology trends emerge, it is crucial to acknowledge both the potential for harm and the ways threat actors can exploit our use of technology. This blog post is part of the AI Governance Alliance’s Agenda series, which advocates for the responsible global design, development, and deployment of inclusive AI systems.
-
With Improved Technology Come New Risks
There has been extensive deliberation about the risks specific to AI. One prominent example is fairness and bias in AI systems, which can stem from the initial training data or from the model's ongoing evolution and learning. There are also concerns about transparency and alignment: it is often difficult to understand how an AI system arrives at its outputs, especially when those outputs prove unreliable, or to verify that the system's behavior matches our values.
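To make the bias concern concrete, here is a minimal sketch of the kind of check a team might run over model predictions: it computes the gap in positive-prediction rates between demographic groups. The data, group labels, and the 0.1 tolerance are hypothetical, chosen purely for illustration.

```python
# Minimal sketch: measuring a demographic parity gap in model
# predictions. All data and the 0.1 tolerance are hypothetical.

def demographic_parity_gap(predictions, groups, positive_label=1):
    """Largest difference in positive-prediction rates between groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (pred == positive_label))
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Toy example: predictions for applicants from two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # hypothetical tolerance a governance policy might set
    print(f"Warning: parity gap of {gap:.2f} exceeds tolerance")
```

A gap like this does not prove unfairness on its own, but it is the kind of measurable signal that turns an otherwise opaque bias discussion into something actionable.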
Furthermore, organizations face new vulnerabilities because attackers can exploit the AI system itself. These vulnerabilities are already well known: adversaries can manipulate training datasets and steer model evolution to produce unsafe or misaligned models that serve their malicious intentions, a class of techniques often called data poisoning. The sketch below illustrates the idea.
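Here is a deliberately simplified label-flipping example on a toy spam-classification set; the data and flip rate are hypothetical, and real poisoning attacks are typically far subtler and harder to detect.

```python
import random

# Minimal sketch of a label-flipping poisoning attack on a toy
# spam-classification training set. The data and the (extreme)
# flip rate are hypothetical; real attacks are far subtler.

training_data = [
    ("win a free prize now", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("claim your reward today", "spam"),
    ("lunch on friday?", "ham"),
]

def poison_labels(dataset, flip_rate=0.5, seed=0):
    """Flip the label on a fraction of examples, as an attacker
    with write access to the training pipeline might."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < flip_rate:
            label = "ham" if label == "spam" else "spam"
        poisoned.append((text, label))
    return poisoned

# A model trained on the poisoned set quietly inherits the
# attacker's chosen bias.
for text, label in poison_labels(training_data):
    print(f"{label:>4}: {text}")
```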
The susceptibility of AI systems extends beyond data manipulation and model shifts, however. They will also suffer runtime software errors, similar to those that have plagued computing systems since their inception. If the AI environment is compromised, these errors can give attackers control over local machines and a launching pad for infiltrating the broader business infrastructure (see the sketch after this paragraph). Ultimately, the potential harms of AI systems, like those of any other digital system, will depend on the contexts in which they are used: they can affect individuals, societies, human agency, and democratic processes, as well as the governance of sectors and businesses ranging from government agencies and financial institutions to burgeoning online casinos.
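One well-known instance of such a runtime foothold is deserializing untrusted model files. Python's pickle format, still widely used for sharing model weights, executes whatever code the file contains when loaded, so an attacker who can swap the file controls the host. The sketch below contrasts the unsafe pattern with a safer one; the function names and the JSON weights format are assumptions for illustration.

```python
import hashlib
import json
import pickle

# DANGEROUS: pickle.load runs arbitrary code embedded in the file,
# so loading an attacker-supplied model file compromises the host.
def load_model_unsafely(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Safer pattern: store weights in a data-only format and verify
# the file's integrity against a known digest before loading.
def load_model_safely(path, expected_sha256):
    with open(path, "rb") as f:
        blob = f.read()
    if hashlib.sha256(blob).hexdigest() != expected_sha256:
        raise ValueError("model file failed integrity check")
    return json.loads(blob)  # plain data, no code execution on load
```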
-
A New Breed of Cyber Attacks
AI-system vulnerabilities and risks are on the rise, and the cybersecurity profession is actively developing models to counter these threats. A significant capability gap remains, however, in protecting our AI systems and the business processes they support. This development is unfolding within a broader geopolitical and technological context.
Ongoing conflicts between nations, such as the suspected theft of United States intellectual property by China, are driving innovation and capacity-building in both offensive and defensive cyber techniques. These advancements will likely extend to other domains, including cybercrime, in the near future. We can therefore anticipate more significant cyber threats, with threat actors possessing enhanced skills and a growing support ecosystem.
Global investment in technologies and business models such as the Internet of Things and cloud services presents a substantial opportunity for the development of AI capabilities. This includes access to algorithms, models, and training datasets, as well as AI-as-a-service offerings and solutions that make the technology more accessible and user-friendly.
-
Why AI Governance Oversight Is in Order
The threat level we face is increasing, and our reliance on digital technologies and services is creating systemic cyber risk. As we employ AI technologies to develop new solutions for some of humanity's most critical challenges, we have a fiduciary and ethical responsibility to ensure that our investments in this technology are not exposed to an unacceptable or unmanageable level of cyber risk.
We can expect the cybersecurity profession to provide ideas, practices, and tools, but only if there is market demand. Business will play a central role in the solution, and we will need leaders to guide us toward a future that is safe and secure with the help of AI.
Senior leadership should start by ensuring that existing risk controls, which are already funded, measured, and performing well, can be extended to an enterprise model that incorporates AI technology. Our insurance providers, investors, customers, and regulators will expect such a stance, and we must have operational controls that not only allow oversight but also defend effectively against motivated threats.