
Trinity creates AI Accountability Lab – News and events

Published on: November 28, 2024

The AI Accountability Lab (AIAL), led by Dr Abeba Birhane, will be hosted at the ADAPT Research Ireland centre in Trinity's School of Computer Science and Statistics.

The new research group, designed to advance AI accountability research, launches today. Its focus will range from broad critical questions, such as examining opaque technology ecologies, to specific ones, such as auditing individual models and training datasets.

As AI technologies continue to shape society, the AIAL will examine their broader impacts, hold powerful entities accountable for technological harm, and advocate for evidence-based policy. Its research will focus in particular on potential corporate capture of ongoing regulatory processes, justice-oriented evaluation of models, and audits of deployed systems, especially those used on vulnerable groups.

Dr Abeba Birhane and Trinity Provost and President Dr Linda Doyle on Front Square.

Speaking of the work of the AIAL, Dr Birhane said: “The AI Accountability Lab aims to foster transparency and accountability in the development and use of AI systems. We take a broad and comprehensive view of AI accountability. This includes better understanding and critical examination of the wider AI ecology – for example, through systematic studies of possible corporate capture, through to the evaluation of models, tools and AI training datasets.”

The AIAL is supported by a grant of just under €1.5 million from three funders: the AI Collaborative, an initiative of the Omidyar Group; Luminate; and the John D. and Catherine T. MacArthur Foundation.

AI technologies, despite their purported potential, have been shown to encode and exacerbate existing societal norms and inequalities, disproportionately affecting vulnerable groups. In sectors such as healthcare, education and law enforcement, deploying AI systems without thorough evaluation can have catastrophic consequences for individuals and groups, and can alter the social fabric itself.

For example, in healthcare, a liver-allocation algorithm used by the United Kingdom's National Health Service (NHS) has been found to discriminate by age: regardless of illness severity, patients under 45 currently appear unable to receive a transplant because of the predictive logic behind the algorithm.

Additionally, integrating AI algorithms without proper evaluation affects people directly and indirectly. A decision-support algorithm used by Danish child protection services, for instance, was deployed without formal assessment and was found to suffer from numerous problems, including information leakage, inconsistent risk scores and age-based discrimination.

Furthermore, errors in facial-recognition technologies have led to the misidentification and wrongful arrest of innocent people in the United Kingdom and the United States. In education, the use of student data for purposes beyond schooling has drawn criticism in the UK, where secret data-sharing deals allowing authorities to monitor benefit claims have raised fears of increased surveillance, erosion of public trust in technology and disproportionate targeting of low-income families (source: Schools Week).

These examples illustrate the need for transparency, accountability and rigorous oversight of AI systems – central concerns that the AI Accountability Lab seeks to address through research and evidence-driven policy advocacy.

The AIAL will be hosted in Trinity's School of Computer Science and Statistics. Professor Gregory O'Hare, Professor of Artificial Intelligence and Director of the School of Computer Science and Statistics at Trinity, said: “The new dawn of AI, coupled with generative AI, has heralded an unprecedented speed of AI adoption. The provenance of such systems is, however, fundamental.

“The AI Accountability Lab will be at the forefront of research that examines such systems; through algorithmic auditability, it will create a national and European centre of excellence in this area, providing thought leadership and illuminating best practice.”

Professor John D Kelleher, Director of ADAPT and Chair of Artificial Intelligence at Trinity, added: “We are proud to welcome the AI Accountability Lab to ADAPT's dynamic community of multidisciplinary experts, all dedicated to addressing the critical challenges and opportunities that technology presents. By integrating the AIAL into our ecosystem, we reaffirm our commitment to advancing AI solutions that are transparent, equitable and beneficial to society, industry and government.

“With the support of ADAPT’s collaborative environment, the Lab will be well-positioned to conduct impactful research that protects individuals, shapes policy, and ensures that AI responsibly serves society.”

In its initial stages, the AIAL will leverage empirical evidence to inform policy; challenge and dismantle harmful technologies; hold responsible organisations accountable for the harmful consequences of their technology; and pave the way for a just and equitable AI future. The group's research goals include addressing structural inequities in AI deployment, examining power dynamics within AI policymaking, and advancing justice-focused auditing standards for AI accountability.

The lab will also collaborate with research and policy organizations in Europe and Africa, such as Access Now, to strengthen international accountability measures and policy recommendations.