Combating Mis/Disinformation In Uncertain Times
In an age where digital platforms allow any user to generate content, we have all encountered disinformation and misinformation. Now we are seeing a surge in toxic content across social platforms. With Elon Musk's recent takeover of Twitter, the future of content moderation has become a topic of global concern: advertisers are pausing spending on Twitter, and many businesses, brands and organizations are considering disengaging from the platform altogether.
The Digital Planet research team at the Fletcher School at Tufts University is closely watching the mis/disinformation crisis. After analyzing content posted to Twitter since Musk took over, the Digital Planet researchers concluded that "the platform is heading in the wrong direction," with increases in mis/disinformation, hate speech and toxic rhetoric.
To better understand the issue, the Digital Planet team invited influential reporters from The New York Times and the Associated Press to its conference, "Defeating Disinformation: Advancing Inclusive Growth and Democracy through Global Digital Platforms." There, the reporters shared their insights alongside thought leaders in content moderation and tech policy, as part of an effort to identify solutions for tackling the spread of false and misleading information.
Where does responsibility lie? What is the future of content moderation? How do we ensure informed societies? Below are some top takeaways from that conference:
We all have a role to play
Conor Sanchez, a content policy manager at Meta, describes existing content moderation efforts as a game of "Whack-a-Mole." The emerging solutions, ranging from private-sector initiatives to NGOs and self-regulation, hinge on widespread societal mobilization. "It requires the whole society," said Cameron Hickey, Program Director for Algorithmic Transparency. Because platform regulation and fact-checking alone won't resolve the problem, he argues, we must foster a culture of responsibility and accountability.
The battle between content moderation and free speech is just heating up
We clearly need systemic regulation, but how to regulate and where to draw the lines depends on whom you ask and where they live. A critical debate is emerging between freedom of speech and content moderation, and it will move front and center as governments contemplate digital platform legislation. Eric Goldman, a law professor at Santa Clara University, highlighted his concerns about censorship. "We're talking about the government controlling speech online…that's the end of free speech," he said. In contrast, Anne Marie Engtoft Larsen, the tech ambassador of Denmark, argued that we can balance regulation with free speech. "We put a man on the moon," she said, insisting that, however large the feat, we can find a way to provide adequate protections.
Mis/disinformation is getting harder to categorize and many organizations don’t have the resources to catch it
New York Times reporter Tiffany Hsu, who has written extensively about the work at Digital Planet, finds that misinformation is getting increasingly difficult to define. Not only are we bombarded with misleading written content, but we are also seeing a surge in altered audio, video and images that warp reality. Hsu added that, although The New York Times has a dedicated team of misinformation reporters, many organizations lack the resources to catch mis/disinformation at such a scale. This, unfortunately, lets much of it slip through the cracks.
Final Word
Mis/disinformation isn't going anywhere. To create a healthier public sphere on the internet, we must scrutinize the content we see online. Global digital platforms are powerful channels of communication, mobilization and community, and they have an important role to play. By sharing the responsibility of identifying and stopping the spread of misleading content, we can collectively fashion an online ecosystem that fosters an honest and inclusive view of the world.