Want to fight misinformation online? Regulate the architecture of social media platforms, not their content

Over the past two decades, social media platforms have evolved from simple networking sites into powerful forces that shape public opinion, influence elections, affect public health, and alter social cohesion. During the recent US presidential election, for example, platforms like X played a central role in spreading information – and disinformation – and in mobilizing voters. Similarly, during the COVID-19 pandemic, social media played a crucial role in disseminating public health guidance, but also became a battleground in the fight against misinformation about vaccines and treatments.

The growing role of social media platforms in the dissemination of information, and the need to ensure the integrity of that information, make regulatory discussions more urgent than ever. Given their profound impact on almost every aspect of society, these platforms should be considered critical infrastructure – much like energy grids and water systems – and should be subject to equally rigorous oversight and regulation to protect the integrity of information. Just as a power grid outage can cause widespread disruption, systemic manipulation on social media can undermine democratic processes, public health efforts, and social trust.

How can the integrity of information be guaranteed?

There are currently two main approaches to ensuring the integrity of information on social media, each of which carries significant ethical risks.

The first approach involves content regulation, whether through platform moderation or government legislation – as exemplified by Australia's recently proposed “Disinformation Bill”. The bill aims to reduce the harm caused by misinformation on social media. It defines disinformation as content that is “reasonably verifiable to be false, misleading or deceptive, and which is reasonably likely to cause or contribute to serious harm.” These harms include compromising the integrity of the electoral process, the vilification of marginalized groups, and imminent damage to the economy.

This bill is emblematic of three fundamental problems with the content-regulation approach:

  • It raises serious concerns about the suppression of free speech, since legitimate expression could be stifled under the guise of combating disinformation.
  • It gives government an outsized role in defining harm and determining truth – responsibilities that are fundamentally at odds with the principles of liberal democracy. In such a system, the government should not play the role of arbiter of truth and must limit the regulation of speech to the bare minimum necessary.
  • It overlooks the provisional nature of truth – even scientific truth – in that facts accepted today may be refuted tomorrow, and ideas once rejected may later be recognized as true. Perhaps most concerning is how such regulation can interfere with the complex and subtle nature of truth itself, which often emerges from a collective process of negotiating different positions rather than from government decree.

The second approach – championed by figures like Elon Musk – advocates a free marketplace of ideas in which truth emerges naturally through public discourse. This view rests on the classical liberal assumption that when ideas confront each other openly and freely, the most credible and well-supported viewpoints will ultimately prevail through a process of collective consideration and debate.

Proponents argue that this organic truth-seeking mechanism is more reliable than centralized control because it harnesses the wisdom of crowds and allows ideas to be continually tested and refined. In such a model, the best way to combat misinformation is not regulation, but vigorous debate, fact-checking, and the public’s ability to critically evaluate competing claims.

The problem with this approach is that in unregulated environments, false content is much more likely to prevail over the truth. Compared to disinformation, the truth is costlier to produce and more complex to understand, and it is therefore less readily accepted than simple lies.

Additionally, the truth often challenges deeply held beliefs and comfortable assumptions, making it inherently less appealing than self-affirming lies that validate existing biases and tribal loyalties, especially within the echo chambers of social media. This dynamic is further amplified by social media algorithms that are optimized to boost engaging and often inflammatory content rather than to prioritize accurate information.

Focus on the algorithms themselves

Given these challenges, a more nuanced and effective approach to maintaining information integrity on social media should focus on regulating the algorithms that shape online discourse, rather than attempting to regulate content directly or allowing the unbridled amplification of disinformation. By establishing and enforcing design standards for these algorithms, platforms could be required to optimize for information quality and meaningful engagement rather than for virality and raw engagement.

For example, platforms could implement a “circuit breaker” mechanism that would temporarily reduce the algorithmic amplification of any content spreading at an unusually high rate, regardless of what it says. Just as financial markets halt trading to prevent destabilizing dynamics, this approach would create natural friction against viral cascades, allowing time for more organic and deliberate sharing patterns to emerge.
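To make the idea concrete, here is a minimal sketch in Python of what such a breaker could look like. Everything in it – the sliding window, the shares-per-minute threshold, the damping factor, the cooldown – is an illustrative assumption, not a description of any platform's actual parameters.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 600      # measure shares over a 10-minute sliding window (assumed)
RATE_THRESHOLD = 50.0     # shares/minute that trips the breaker (assumed)
DAMPING_FACTOR = 0.2      # multiplier applied to amplification while tripped (assumed)
COOLDOWN_SECONDS = 1800   # how long reduced amplification lasts (assumed)

class CircuitBreaker:
    def __init__(self):
        self._share_times = defaultdict(deque)  # item_id -> recent share timestamps
        self._tripped_until = {}                # item_id -> cooldown expiry time

    def record_share(self, item_id, now=None):
        """Log one share of an item and trip the breaker if it spreads too fast."""
        now = now if now is not None else time.time()
        shares = self._share_times[item_id]
        shares.append(now)
        # Discard timestamps that have fallen out of the sliding window.
        while shares and shares[0] < now - WINDOW_SECONDS:
            shares.popleft()
        rate_per_minute = len(shares) / (WINDOW_SECONDS / 60)
        if rate_per_minute > RATE_THRESHOLD:
            self._tripped_until[item_id] = now + COOLDOWN_SECONDS

    def adjusted_score(self, item_id, base_score, now=None):
        """Return the ranking score, damped while the item's breaker is tripped."""
        now = now if now is not None else time.time()
        if self._tripped_until.get(item_id, 0) > now:
            return base_score * DAMPING_FACTOR
        return base_score
```

Note that the breaker is content-blind: it reacts only to the speed of spread, so no one has to judge whether the post itself is true or false.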

Additionally, platforms may need to develop algorithms that actively promote diversity of viewpoints by ensuring that users see content from different perspectives on important topics – in the same way that academic discussions benefit from exposure to various scientific interpretations.
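One way to picture this is as a re-ranking step applied after the usual relevance scoring. The sketch below assumes each candidate post already carries a perspective label produced upstream – for instance, by clustering sources or stances on a topic – and that labeling step is itself an assumption, not an existing platform feature.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    score: float       # relevance/quality score from the base ranker
    perspective: str   # assumed upstream label, e.g. a stance or source cluster

def diversify(candidates, k, max_per_perspective):
    """Pick the k best posts while capping how many come from any one perspective."""
    picked = []
    counts = Counter()
    for post in sorted(candidates, key=lambda p: p.score, reverse=True):
        if counts[post.perspective] < max_per_perspective:
            picked.append(post)
            counts[post.perspective] += 1
        if len(picked) == k:
            break
    return picked
```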

Platforms should also allow users to meaningfully choose their information environment by offering multiple feed options: traditional chronological and algorithmic feeds, community-curated collections, “discovery” feeds that surface content outside a user's usual patterns, and “slow” feeds that prioritize sustained discussions over viral spikes. This menu of options would give users control over their information diet while naturally eroding echo chambers.
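Offering such a menu amounts to exposing the ranking choice to the user rather than hard-wiring it. A minimal sketch, with mode names and post fields that are purely illustrative:

```python
from enum import Enum

class FeedMode(Enum):
    CHRONOLOGICAL = "chronological"
    ALGORITHMIC = "algorithmic"
    COMMUNITY_CURATED = "community_curated"
    DISCOVERY = "discovery"   # content outside the user's usual patterns
    SLOW = "slow"             # sustained discussion over viral spikes

# Each mode is just a different sort key over post metadata; the fields
# ("created_at", "engagement", "curator_votes", "novelty", "reply_depth")
# are assumed to be computed elsewhere in the pipeline.
SORT_KEYS = {
    FeedMode.CHRONOLOGICAL: lambda p: p["created_at"],
    FeedMode.ALGORITHMIC: lambda p: p["engagement"],
    FeedMode.COMMUNITY_CURATED: lambda p: p["curator_votes"],
    FeedMode.DISCOVERY: lambda p: p["novelty"],
    FeedMode.SLOW: lambda p: p["reply_depth"],
}

def rank_feed(posts, mode):
    """Order a user's candidate posts according to their chosen feed mode."""
    return sorted(posts, key=SORT_KEYS[mode], reverse=True)
```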

To strengthen accountability, platforms should offer users clear insight into how their algorithms prioritize content – for example, explaining whether a post appears in their feed because of organic connections, topic relevance, engagement metrics, or paid promotion. This transparency would help users make more informed choices about their information consumption, just as nutrition labels help consumers make informed food choices.
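In code, such a label could be as simple as a rule that maps ranking signals to a one-line explanation. The fields below are assumptions about what a ranking pipeline could expose, in the spirit of the nutrition-label analogy:

```python
from dataclasses import dataclass

@dataclass
class RankedPost:
    post_id: str
    followed_author: bool    # organic connection
    topic_match: float       # 0..1 relevance to the user's stated interests
    engagement_boost: float  # 0..1 share of the score driven by engagement metrics
    is_promoted: bool        # paid placement

def explain(post):
    """Return a one-line, human-readable reason the post was shown."""
    if post.is_promoted:
        return "Shown because of paid promotion."
    if post.followed_author:
        return "Shown because you follow this account."
    if post.topic_match >= post.engagement_boost:
        return "Shown because it matches topics you read about."
    return "Shown because it is receiving high engagement."
```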

These standards would preserve free speech while creating a digital architecture that naturally elevates quality speech above viral content. Instead of trying to police the truth or letting misinformation flourish unchecked, the focus would be on creating an environment in which truthful content has a fair chance to compete in the marketplace of ideas.

No need to compromise fundamental freedoms

Just as we regulate critical infrastructure like power grids through engineering standards rather than controlling how people use electricity, algorithmic design standards offer a way to ensure that social media platforms fulfill their essential democratic function without compromising fundamental freedoms. This approach recognizes that social media has become as essential to modern society as public services – but unlike traditional infrastructure, its failures can erode the very foundations of democratic discourse.

By focusing regulation on the architecture of these systems rather than their content, we can protect the integrity of our information ecosystem while preserving the open exchange of ideas that democracy demands. The challenge ahead is not to choose between unfettered amplification and strict content control, but to thoughtfully design our digital public spaces to naturally foster healthy discourse while remaining true to democratic principles.

Uri Gal is Professor of Business Information Systems at the Business School, University of Sydney. His research focuses on the organizational and ethical aspects of digital technologies.
