How the law can tame artificial intelligence

When you scroll through your social media feed or let your favorite music app create the perfect playlist, you may feel like artificial intelligence is improving your life by learning your preferences and responding to your needs. But behind this convenient facade lies a growing concern: algorithmic harm.

These harms are neither obvious nor immediate. They are insidious, developing over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming an increasingly significant threat to privacy, equality, autonomy and security.

AI systems are integrated into almost every facet of modern life. They suggest what shows and movies you should watch, help employers decide whom to hire, and even influence judges in deciding who qualifies for a sentence. But what happens when these systems, often considered neutral, start making decisions that disadvantage certain groups or, even worse, cause real harm?

The often overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I study the intersection of law and technology, and I have outlined a legal framework to do exactly that.

Slow burns

One of the most striking aspects of algorithmic harm is that its cumulative impact often goes unnoticed. These systems generally do not directly attack your privacy or autonomy in ways you can easily perceive. They gather large amounts of data about people – often without their knowledge – and use that data to shape the decisions that affect people’s lives.

Sometimes this results in minor annoyances, such as an advertisement following you across websites. But as AI systems keep operating without redress for these repeated harms, the harms can compound, causing significant cumulative damage to diverse groups of people.

Take the example of social media algorithms. They are ostensibly designed to promote beneficial social interactions. However, behind their seemingly beneficial facade, they silently track users’ clicks and compile profiles of their political beliefs, professional affiliations and personal lives. The data collected is used in systems that make consequential decisions, such as whether you are identified as a jaywalking pedestrian, considered for a job or flagged as a suicide risk.

Worse, their addictive design traps adolescents in cycles of overuse, leading to an escalation of mental health crises, including anxiety, depression and self-harm. By the time you realize the full extent of it, it’s too late: your privacy has been violated, your opportunities shaped by biased algorithms, and the safety of the most vulnerable compromised, all without your knowledge.

That’s what I call “intangible, cumulative harm”: AI systems operate in the background, but their impacts can be devastating and invisible.

Researcher Kumba Sennaar describes how AI systems perpetuate and exacerbate bias.

Why regulation is lagging behind

Despite these growing dangers, legal frameworks around the world are struggling to keep pace. In the United States, a regulatory approach emphasizing innovation has made it difficult to impose strict standards on how these systems are used in multiple contexts.

Courts and regulators are accustomed to dealing with concrete harms, like physical injury or economic loss, but algorithmic harm is often more subtle, cumulative and difficult to detect. Regulations often fail to take into account the broader effects that AI systems can have over time.

Social media algorithms, for example, can gradually erode users’ mental health, but because these harms accumulate slowly, they are difficult to remedy within the limits of current legal standards.

Four types of algorithmic harm

Building on existing knowledge in AI and data governance, I categorized algorithmic harms into four legal areas: privacy, autonomy, equality and security. Each of these areas is vulnerable to the subtle but often uncontrolled power of AI systems.

The first type of harm is the erosion of privacy. AI systems collect, process and transfer large amounts of data, infringing on people’s privacy in ways that may not be immediately obvious but have long-term implications. For example, facial recognition systems can track people in public and private spaces, making mass surveillance the norm.

The second type of harm is the infringement of autonomy. AI systems often subtly undermine your ability to make autonomous decisions by manipulating the information you see. Social media platforms use algorithms to show users content that maximizes a third party’s interests, subtly shaping the opinions, decisions and behaviors of millions of users.

The third type of harm is the diminution of equality. AI systems, although designed to be neutral, inherit biases present in their data and algorithms. This reinforces societal inequalities over time. In one infamous case, a facial recognition system used by retail stores to detect shoplifters disproportionately misidentified women and people of color.

The fourth type of harm concerns security. AI systems make decisions that affect people’s safety and well-being. When these systems fail, the consequences can be catastrophic. But even when they work as designed, they can still cause harm, like the cumulative effects of social media algorithms on adolescents’ mental health.

Because these cumulative harms often arise from AI applications protected by trade secret laws, victims have no way of detecting or tracing the harm. This creates an accountability gap. When a biased hiring decision or a wrongful arrest is made by an algorithm, how does the victim know? Without transparency, it is almost impossible to hold companies to account.

This UNESCO video features researchers from around the world explaining issues related to the ethics and regulation of AI.

Closing the accountability gap

Categorizing the types of algorithmic harm delineates the legal boundaries of AI regulation and suggests possible legal reforms to close this accountability gap. Changes that I think would help include mandatory algorithmic impact assessments, requiring companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and security – before and after deployment. For example, companies using facial recognition systems would have to evaluate those systems’ impacts throughout their lifecycle.

Another useful change would be strengthening individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt-in. For example, companies using facial recognition systems could be required to obtain users’ opt-in consent before processing their data and to allow users to opt out at any time.

Finally, I suggest requiring companies to disclose their use of AI technology and its anticipated harms. For example, this may include notifying customers about the use of facial recognition systems and the anticipated harms in the areas outlined in the typology.

As AI systems are increasingly used in critical societal functions – from healthcare to education and employment – the need to regulate the harms they can cause becomes more pressing. Without intervention, these invisible harms will likely continue to accumulate, affecting almost everyone while disproportionately hitting the most vulnerable.

As generative AI multiplies and exacerbates AI harms, I think it is important for policymakers, courts, technology developers, and civil society to recognize the legal harms of AI. This requires not only better laws, but also a more thoughtful approach to cutting-edge AI technology – one that prioritizes civil rights and justice in the face of rapid technological progress.

The future of AI is incredibly bright, but without the proper legal frameworks, it could also entrench inequalities and erode the very civil rights it is, in many cases, intended to strengthen.

Sylvia Lu, university researcher and visiting assistant professor of law, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.