An agent shield? Using AI Agents to Improve the Cybersecurity of Digital Public Infrastructure

By Sarosh Nagar and David Eaves

Growing global interest in digital transformation and artificial intelligence represents a tremendous opportunity to transform the relationship between citizens and their governments, as highlighted in the recent joint press release from the past, present, and future G20 presidencies ahead of this year’s summit. Digital Public Infrastructure (DPI), which aims to stimulate public and private innovation by providing public digital systems essential to society, is at the heart of this push toward digital transformation. DPI acts as a platform layer that ensures inclusion and access, facilitating core activities that can in turn enable services ranging from e-commerce to telehealth. To date, DPI has encompassed several digital solutions, the most common being digital payment, identification, and data exchange systems. Equally important, DPI requires a set of safeguards and rules to ensure these systems are inclusive and safe. The UN’s universal safeguards for digital public infrastructure present some of the most comprehensive technical and legal models defining what such safeguards should look like.

As DPI systems become more widespread, they also increasingly become targets for hostile actors seeking to undermine their functions. Improving DPI cybersecurity is thus a vital priority for governments and businesses around the world, and AI agents are a particularly promising tool in this regard. AI agents are a subset of AI systems capable of taking autonomous actions without human involvement, such as interpreting data and reacting to their environment. For example, an AI system that can autonomously plan its user’s vacation would be considered an agent. To date, most of the discussion about agents has concentrated on systems built by companies for use by consumers or businesses. In contrast, relatively little discussion has focused on what agents might mean for digital public infrastructure, particularly for improving DPI security. And although the UN universal DPI safeguards remain silent on the use of AI as a tool, one of their operating principles states that “DPI should integrate and continually upgrade security measures, such as encryption or pseudonymization, to protect personal data. A legal framework should fill gaps where technical design may be insufficient to ensure data security.” This article presents a vision of how agents could help uphold this principle, improving the security of DPI to facilitate its broader public value.

AI as a shield for DPI

DPI creates public value by facilitating valuable interactions such as digital payments and identification. However, the use of DPI in these important areas also makes it a lucrative target in an ever-changing cybersecurity threat landscape. Hostile state or non-state actors could launch cyberattacks against DPI by facilitating the execution of unauthorized code or exploiting software vulnerabilities to expose individuals’ personally identifiable information (PII) and freeze vital financial transactions, as in the former case, which occurred in India in September 2023. In turn, the scale of DPI as national infrastructure means that such large-scale attacks could cause significant damage to national economies and government structures. On a smaller, day-to-day scale, there are also risks that authorized users will abuse DPI for their own ends. This was the case in Estonia, for example: X-Road, the Estonian DPI platform that helps government agencies and the private sector exchange sensitive data, has strict security protocols to ensure that only authorized users can access sensitive data, but it has experienced limited incidents in which authorized users abused their authority, such as when a medical worker abused their access to help the police obtain their spouse’s records. These incidents, although limited, create breaches of trust that risk making citizens more hesitant to use DPI.

In these areas, however, AI agents could serve as a powerful “shield” to enhance DPI security. Against the first class of large-scale attacks, AI agents could serve as a preventative tool to identify critical vulnerabilities before hostile actors exploit them. Google DeepMind, for example, recently used an AI agent to detect a zero-day vulnerability, a security flaw that even the software’s developers had not found. Furthermore, building on Anthropic’s work with today’s large language models (LLMs), AI agents could also red-team other software systems, running simulated attacks that help organizations better prepare for real-world attacks. Beyond this preventive role, AI agents could also defend DPI in real time against hostile cyber operations using their autonomous capabilities, similar to what the Japanese firm Fujitsu has done. Such real-time defense agents could identify unauthorized code executions and isolate compromised services in response, limiting the disruption caused by attacks on DPI.
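To make the real-time defense idea concrete, here is a minimal sketch in Python of the containment step such an agent might run. Everything in it is an illustrative assumption on our part, not a description of Fujitsu’s system or any real DPI platform: the ServiceEvent type, the allowlist of code hashes, and the quarantine stand-in.

```python
"""Minimal sketch of a real-time defense agent for a DPI platform.
All names here are hypothetical illustrations, not a real DPI API."""
from dataclasses import dataclass

# Hypothetical allowlist of code hashes the platform is expected to run.
KNOWN_GOOD_HASHES = {"sha256:aaaa1111", "sha256:bbbb2222"}


@dataclass
class ServiceEvent:
    service_id: str   # e.g., "payments-gateway"
    binary_hash: str  # hash of the code the service just executed
    source_ip: str    # where the execution request came from


def quarantine(service_id: str) -> None:
    """Stand-in for isolating a compromised service (closing its network
    routes, revoking its tokens) so one breach cannot spread through DPI."""
    print(f"[agent] isolating {service_id} pending human review")


def inspect(event: ServiceEvent) -> None:
    """One step of the agent loop: flag and contain unauthorized code."""
    if event.binary_hash not in KNOWN_GOOD_HASHES:
        print(f"[agent] unknown code on {event.service_id} "
              f"from {event.source_ip}")
        quarantine(event.service_id)


if __name__ == "__main__":
    # An event with an unrecognized hash triggers isolation.
    inspect(ServiceEvent("payments-gateway", "sha256:deadbeef", "203.0.113.7"))
```

In practice the allowlist check would be one signal among many (behavioral baselines, anomaly scores), but the design point stands: detection and containment happen in the same autonomous loop, without waiting for a human to triage the alert first.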

Meanwhile, on a smaller but still important day-to-day scale, AI agents could secure sensitive data exchanged between DPI systems, building citizen trust and driving adoption. For example, to counter abuse by authorized users, as happened in Estonia, AI agents could be deployed on data exchange systems to autonomously examine access patterns and alert individuals, businesses, and governments when users attempt to access sensitive data in areas unrelated to their line of work. Some open source software (OSS) offerings already perform this function, but integrating AI agents into these workflows would significantly speed up the process, allowing these actors to block unauthorized actions as they occur. AI agents could also perform similar functions beyond monitoring user access to sensitive data, such as scanning digital identification and payment transactions to identify fraudulent actors. The result of these measures would be to increase citizens’ confidence in the security of the data they share via DPI, thereby encouraging wider adoption of these systems.
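As a rough illustration of this monitoring role, the sketch below checks each data-exchange request against the requester’s declared line of work, in the spirit of the Estonian X-Road example above. The role-to-domain mapping and request schema are hypothetical stand-ins for whatever a real DPI data-exchange layer would expose.

```python
"""Minimal sketch of an agent watching a DPI data-exchange log for
out-of-scope access. Schema and role mapping are hypothetical."""
from dataclasses import dataclass

# Hypothetical mapping from a user's role to data domains it may touch.
ROLE_SCOPES = {
    "medical_worker": {"health_records"},
    "tax_officer": {"tax_filings", "payment_history"},
}


@dataclass
class AccessRequest:
    user_id: str
    role: str
    data_domain: str  # e.g., "health_records"


def review(request: AccessRequest) -> bool:
    """Return True if the access is in scope; otherwise raise an alert.
    A deployed agent would also weigh context (time, volume, history)
    rather than rely on a static allowlist alone."""
    allowed = ROLE_SCOPES.get(request.role, set())
    if request.data_domain in allowed:
        return True
    print(f"[alert] {request.user_id} ({request.role}) tried to read "
          f"{request.data_domain}: outside their line of work")
    return False


if __name__ == "__main__":
    # A medical worker probing police records would be flagged, not served.
    review(AccessRequest("u-1042", "medical_worker", "police_records"))
```

The speed-up the paragraph describes comes from closing the loop: instead of a quarterly audit finding the abuse after the fact, the flagged request can be blocked or escalated at the moment it is made.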

Of course, there are a number of obstacles to realizing this vision. Most immediately, AI agents used in cyber defense are themselves vulnerable to their own classes of hostile cyber operations. For example, backdoor attacks against the retrieval-augmented generation (RAG) mechanisms that allow agents to recall useful knowledge could significantly hamper their ability to identify improper access to sensitive data or detect unauthorized code executions. Malicious authorized users could also launch prompt injection attacks against AI agents, feeding the system prompts designed to cause the agent to fail. Even without hostile interference, there is a risk that AI agents will hallucinate and fail to perform a function for their user or, worse, inadvertently perform a harmful one. The impact of this latter risk also scales with the number of tasks an AI agent is asked to perform autonomously, as each additional task is another chance for the agent to fail without human supervision. There are also more mundane concerns, such as interoperability issues between agents, but these examples highlight some of the risks that can emerge when AI agents are used in cybersecurity.
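A quick back-of-the-envelope calculation (ours, not drawn from the article’s sources) shows why this scaling matters: even a highly reliable agent is likely to fail somewhere in a long enough unsupervised run.

```python
# If an agent completes each autonomous task correctly with probability p,
# the chance of completing n tasks in a row without an unsupervised
# failure is p**n, which falls off quickly as n grows.
for p in (0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"p={p}, n={n}: chance of a flawless run = {p ** n:.3f}")
```

At 99% per-task reliability, a run of 100 autonomous tasks succeeds end to end only about 37% of the time, which is the arithmetic behind keeping early deployments small and supervised.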

What governments should do

While the recent $56 million fundraising round by the startup /dev/agents shows that the private sector is waking up to the immense potential of AI agents, governments should now do the same, especially to secure DPI systems. Although technical challenges may hamper the most ambitious applications, and data on AI agents in DPI remains limited, government teams like the U.S. Digital Service should begin creating and piloting simple AI agents for tasks such as securing data exchange systems or red-teaming broader DPI systems. To mitigate risks, these agents should be tested in low-risk use cases and with clear safeguards: for example, if a cybersecurity agent repeatedly fails at a given task, it may be wise to build control systems that automatically halt its autonomous operation. Over time, this iterative learning process will help states gradually deploy ever more capable agents to secure their digital public infrastructure.
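One way to picture that safeguard is a simple circuit breaker that suspends an agent’s autonomy after repeated failures and hands control to a human operator. This is a sketch under our own assumptions (the task callable and failure threshold are illustrative), not a prescription for any particular deployment.

```python
"""Minimal sketch of a 'halt after repeated failure' safeguard:
a circuit breaker around an agent's autonomous task loop."""
from typing import Callable


class AgentCircuitBreaker:
    def __init__(self, task: Callable[[], bool], max_failures: int = 3):
        self.task = task                  # returns True on success
        self.max_failures = max_failures  # consecutive failures tolerated
        self.failures = 0
        self.halted = False

    def run_once(self) -> None:
        if self.halted:
            return                        # autonomy already suspended
        if self.task():
            self.failures = 0             # a success resets the counter
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.halted = True        # stop acting autonomously
                print("[breaker] agent halted; escalating to a human operator")


if __name__ == "__main__":
    # An always-failing task trips the breaker on the third attempt.
    breaker = AgentCircuitBreaker(task=lambda: False, max_failures=3)
    for _ in range(4):
        breaker.run_once()
```

The design choice worth noting is that the breaker fails closed: when the agent proves unreliable, the default becomes inaction plus escalation, which keeps the pilot in the low-risk envelope the paragraph recommends.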

At the same time, governments and businesses should recognize that deploying AI agents in cybersecurity highlights the need to better understand and secure multi-agent systems themselves. States and businesses should fund more research into multi-agent evaluations and agent red-teaming to ensure that cyber defense agents are robust against hostile actors and capable of performing their primary functions. Much agentic research remains to be done in this regard, so significant public or private support could play a critical role in catalyzing the growth of this field. Together, these efforts would help ensure that AI agents can form a valuable shield to defend DPI and secure its benefits.