FBI and DEA Deployment of AI Raises Privacy and Civil Rights Concerns

A mandated audit has found that efforts by the Drug Enforcement Administration (DEA) and Federal Bureau of Investigation (FBI) to integrate AI, such as biometric facial recognition and other emerging technologies, raise important privacy and civil rights concerns that require careful scrutiny of both agencies’ initiatives.

The 34-page audit report – which the National Defense Authorization Act of 2023 required the Inspector General (IG) of the Department of Justice (DOJ) to conduct – found that the integration of AI by the FBI and DEA is fraught with pitfalls, regulatory inadequacies, and potential impacts on individual freedoms.

The IG said integrating AI into DEA and FBI operations holds promise for improving intelligence capabilities, but it also brings unprecedented risks to privacy and civil rights.

Both agencies’ nascent AI initiatives, as outlined in the IG audit, illustrate the tension between technological progress and safeguarding individual freedoms. As the FBI and DEA address these challenges, they must prioritize transparency, accountability, and ethical governance to ensure that AI serves the public good without compromising fundamental rights.

While the DEA and FBI have begun integrating AI and biometric identification into their intelligence collection and analysis processes, the IG report highlights that both agencies are in the early stages of this integration and face administrative, technical, and political challenges. These difficulties not only slow the integration of AI, but also exacerbate concerns about ensuring its ethical use, particularly with regard to privacy and civil liberties.

One of the main challenges is the lack of transparency associated with commercially available AI products. The IG report highlights that vendors often build AI capabilities into their software, creating a “black box” scenario in which users, including the FBI, lack visibility into how the algorithms work or make decisions. The lack of a Software Bill of Materials (SBOM) – a comprehensive list of software components – compounds the problem, raising significant privacy concerns, as sensitive data could be processed by opaque algorithms, potentially leading to misuse or unauthorized surveillance.

“FBI personnel… stated that most commercially available AI products do not provide adequate transparency about their software components,” the IG said, noting that “there is no way for the FBI to know with certainty whether such AI capabilities are present in a product unless the FBI receives an SBOM.”

The IG said that “SBOMs remain rare” and that “undisclosed embedded AI tools could lead FBI personnel to use AI capabilities unknowingly and without such tools having been subject to the FBI’s AI governance.” Additionally, an FBI official expressed concern that vendors are not required to obtain independent testing of their products to verify the accuracy of data models used in embedded AI capabilities.

The FBI’s AI Ethics Council (AIEC), which was created to ensure compliance with ethical principles and federal laws, faces a significant backlog in reviewing and approving AI use cases. This backlog, which averaged 170 days for pending reviews in 2024, highlights systemic inefficiencies that can delay measures to protect against privacy violations. Additionally, although the AIEC’s ethical framework is consistent with Office of the Director of National Intelligence (ODNI) guidelines, the changing political landscape creates uncertainty, delaying critical decisions and leaving open the risk of non-compliance with emerging regulations.

The deployment of AI in the national security context also raises serious civil rights concerns, particularly regarding the risk of racial or ethnic bias. Tools like facial recognition systems, often scrutinized for their propensity to wrongly identify individuals from marginalized communities, illustrate these risks. The FBI and DEA must manage the dual mandate of national security and law enforcement, which means that AI applications will often operate in contexts with high stakes for individual liberties.

Although the FBI has taken steps to document AI use cases and develop an overall governance policy, incomplete integration of ethical considerations into operational workflows poses risks. Without robust oversight mechanisms and transparency, AI systems could facilitate unwarranted surveillance, eroding public trust and violating constitutional protections against unreasonable searches and seizures.

The DEA’s use of AI further complicates the situation. Because its only AI tool is externally sourced, the DEA relies heavily on other elements of the U.S. intelligence community, limiting its control over the tool’s design and implementation. Such reliance not only restricts accountability, but also exposes DEA operations to risks inherent in third-party AI systems, including bias that could unfairly target specific groups.

Both agencies cited recruitment and retention challenges as significant barriers to responsible adoption of AI. The IG said the failure to attract technical talent, particularly individuals equipped to deal with the ethical and legal implications of AI, leaves gaps in agencies’ ability to mitigate risks. Additionally, “many people with adequate technical skills are unable to pass background investigations,” the IG reported.

Budgetary constraints further hinder the acquisition and independent testing of AI tools, increasing the reliance on commercially available systems with unknown biases or limitations.

The IG said FBI staff emphasized that “it can be difficult to test and deploy a new system without a research and development budget, because it is difficult to justify using limited funds to test an unproven technology when operations supporting the mission are so critical.” That contrasts with other intelligence agencies that, according to an FBI official, have research and development budgets that allow them to test and deploy new technologies. FBI personnel submitted proposals to ODNI when internal funding was not available, but these funding sources are not guaranteed.

Another major hurdle is modernizing IT infrastructure. Legacy systems hinder AI integration, and inadequate data architectures exacerbate data quality and security issues. Poorly managed data systems could inadvertently expose sensitive personal information to breaches or misuse, further endangering privacy and civil rights.

“Due to limited resources and a lack of strategic planning, federal agencies often struggle to ensure that data architecture remains modern and instead use outdated information systems, even when those systems themselves require significant resources for their maintenance,” the IG report said. “Such systems can thwart the transition to AI because they can be difficult to integrate with newer technologies, lack features essential to modern data science tasks, struggle to manage today’s large and complex data sets, and often require more time and manual effort on the part of their users.” FBI staff also noted that moving data and AI tools between classification levels is complicated and requires additional funding.

“Additionally,” the IG said, “capturing quality data is fundamental to enabling an organization to use data to make decisions by implementing processes that ensure incoming data is accurate, consistent, and relevant.”

The IG outlined a number of actions the FBI and DEA can take to address concerns raised by the audit. For one, both agencies should evaluate how AI can be ethically and effectively integrated to improve intelligence collection while protecting individual rights. Furthermore, strengthening the AIEC and similar mechanisms with sufficient resources to manage the increased adoption of AI is essential to upholding ethical standards.

Mandating SBOMs and independent testing for all AI tools would ensure that the FBI and DEA – and other agencies – can verify the security and legality of their applications. Additionally, the IG recommended implementing routine assessments of AI tools’ potential impact on civil liberties, particularly in surveillance contexts.
