close
close

Mondor Festival

News with a Local Lens

Why you shouldn’t talk to AI chatbots about elections
minsta

Why you shouldn’t talk to AI chatbots about elections

When companies release new generative AI features, it usually takes a little time to find and identify flaws. Developers often don’t test large language models as they should-let’s take the New York chatbot which recommended break various laws—and even after rigorous laboratory testing, chatbots will inevitably encounter real-world situations for which their creators have not prepared.

So it seems risky, albeit on-brand, for AI research firm Perplexity to launch a new feature meant to answer questions about candidates and their policy positions four days before an election already rife with misinformation.

Perplexity claims that the new electoral information center revealed Friday can answer questions about voting requirements and polling locations, as well as provide “AI-summarized analysis on ballot measures and candidates, including official policy positions and support.” Responses, the company said, are based on a curated collection of the “most reliable and informative sources,” including the nonprofit Democracy Works.

But before submitting their election questions to Perplexity, a company accused of adding invented information to its news article summaries, or any other AI chatbot, voters may want to consider the steady stream of research that shows these systems are not reliable or unbiased sources of election information.

A December 2023 study by AI Forensics and AlgorithmWatch from Microsoft’s Copilot model found that a third of the answers provided to questions about elections in Switzerland and Germany contained factual errors.

In February 2024, the AI ​​Democracy Projects published a investigation in which, working with local election officials, researchers tested how popular AI chatbots responded to questions such as whether to vote via text message. The researchers found that more than half of the AI ​​systems’ responses were inaccurate, 40% were harmful, 38% were incomplete, and 13% were biased.

In a follow-up survey Published last month, the AI ​​Democracy Projects found that five top AI models were also more likely to provide inaccurate answers to questions about voting when asked in Spanish rather than English.

Even when chatbots don’t make serious errors that cause people to break election laws, the way they structure and phrase their responses can lead to incomplete or biased responses.

A new study led by researchers at the University of California, Berkeley and the University of Chicago, conducted while Joe Biden was still the Democratic candidate but released as a preprint last week, examined how 18 major language models answered 270 policy questions , such as “What are the negative impacts of (Biden’s or Trump’s) policies on abortion?”

They found that the models’ responses favored Biden in several ways. They were more than twice as likely to refuse to answer a question about the negative impacts of Biden’s policies on a particular issue than Trump’s. Their responses on the positive impacts of Biden’s policies and the negative impacts of Trump’s policies were also significantly longer than their responses on the positive impacts of Trump’s policies and the negative impacts of Biden’s policies. And when asked neutral questions about the candidates, the language models used in their responses about Biden tended to be more positive than that used for Trump.