Fact-checking Covid-19 posts is not working. There is a better way
Algorithms on platforms like Twitter and Facebook are structured to suppress learning and feed information that reinforces biases.
The right and left may not agree on what constitutes misinformation, but both would like to see less of it on social media. And as the world faces the third year of the Covid-19 pandemic, the threat medical misinformation poses to public health remains real. Companies like Twitter and Facebook have a stake in cleaning up their platforms — without relying on censoring or fact-checking.
Censoring can engender distrust when social media companies expunge posts or delete accounts without explanation. It can even raise the profile of those who've been "cancelled."
And fact-checking isn't a good solution for complex scientific concepts. That's because science is not a set of immutable facts, but a system of inquiry that constructs provisional theories based on imperfect data.
A recent post on PolitiFact illustrates the problem. The claim at issue: a meme circulating on Facebook that viruses evolve to be less virulent. PolitiFact deemed it false, but Purdue University virologist David Sanders disagrees. "I would say that it actually is true that viruses do tend to evolve to be less harmful to their host," he told me, though it's a process that can sometimes take decades — or even centuries — from the time a new virus jumps from an animal to a human host. Sanders said PolitiFact had conflated virulence with other things, such as resistance to drugs. When a complex issue is still a matter of scientific uncertainty and debate, rating it "true" or "false" doesn't work very well.
Another limitation of fact-checking: There's so much dubious content floating around Facebook and Twitter that human fact-checkers can only get to a minuscule fraction of it. Consumers may wrongly assume that whatever remains has been reviewed and is reliable.
"It's not a truth-seeking medium — it's meant for entertainment," says Gordon Pennycook of the University of Regina in Canada.
But he is convinced that Facebook and Twitter can be made less deceptive by harnessing the analytical power of the human brain.
One way is to harness the phenomenon known as "the wisdom of the crowds." If you ask enough independent sources a tough question — like how deep the Pacific Ocean is at its deepest point — their answers converge on the right one. But social media throws off the very conditions that make crowds wise.
Crowdsourcing only works when each person is thinking independently. On social media, users get cues that lead to mobbing and piling on, and fake accounts or automated "bots" can give the illusion that vast crowds are impressed or outraged by a news item.
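The contrast is easy to simulate. Below is a minimal Python sketch, with invented noise and anchoring parameters, of how guesses made independently average out near the truth while guesses anchored to a loud early answer get dragged toward it:

```python
from statistics import mean
import random

random.seed(42)

TRUE_DEPTH_M = 10_935  # approximate depth of the Challenger Deep, in metres

def independent_guesses(n, noise=0.4):
    """Each person estimates alone: unbiased but individually noisy."""
    return [TRUE_DEPTH_M * random.uniform(1 - noise, 1 + noise) for _ in range(n)]

def anchored_guesses(n, anchor, weight=0.8, noise=0.4):
    """Each person mostly copies a visible 'popular' answer (the anchor),
    mimicking social-media cues instead of independent judgment."""
    return [weight * anchor + (1 - weight) * g for g in independent_guesses(n, noise)]

n = 1_000
crowd = mean(independent_guesses(n))
mob = mean(anchored_guesses(n, anchor=2_000))  # a loud, wrong early answer

print(f"true depth:        {TRUE_DEPTH_M:,} m")
print(f"independent crowd: {crowd:,.0f} m")  # lands near the truth
print(f"anchored crowd:    {mob:,.0f} m")    # dragged toward the anchor
```

Run it and the independent average lands within a percent or so of the true depth, while the anchored average ends up far closer to the anchor than to reality.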
"It's not necessarily that [users] don't care about accuracy. But instead, it's that the social media context just distracts them and they forget to think about whether it's accurate or not before they decide to share it," said his research partner David Rand, a professor of management science and cognitive sciences at MIT.
Rand admits he fell into that trap himself, sharing a made-up tidbit attributed to Ted Cruz — a statement that he'd believe in climate change when Texas freezes over. "It was the time when there were all those snowstorms in Texas. And I was like, 'Oh my God, that's so good.'"
What Rand and Pennycook found in a recent study, published in the journal Nature, was that people improved the accuracy of their sharing when first asked to rate the accuracy of a headline. The idea was that this would shift people's attention toward accuracy, which people say they believe is important even as they share things based on how popular they're likely to be.
Rand and Pennycook found that combining the judgments of enough social media users generated a wisdom-of-the-crowds effect: the system yielded answers that matched multiple fact-checkers about as well as the fact-checkers matched each other.
"About 10 or 15 lay people, that's equivalent to about one fact-checker," said Pennycook.
Facebook and Twitter could harness crowdsourcing to elevate the stories most likely to be true. "You could use that to inform your ranking to correspond to the actual accuracy," Pennycook said. "In a certain sense that's taking it out of the hands of the third parties and giving it back to the people."
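The mechanics of such a crowd-rating signal are straightforward to sketch. In the toy Python example below, the headlines, ratings, and rater threshold are all invented for illustration; the point is simply how averaged lay ratings could feed a ranking:

```python
from statistics import mean

# Hypothetical crowd ratings: each headline gets 1-5 accuracy scores
# from a sample of lay users (all numbers invented for illustration).
ratings = {
    "Virus found to grow milder over decades of circulation": [4, 5, 3, 4, 4, 5, 3, 4, 4, 5, 4, 3],
    "Celebrity claims vaccine contains tracking chips":       [1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1],
    "Trial shows modest benefit from repurposed drug":        [4, 3, 4, 5, 3, 4, 4, 3, 4, 4, 5, 4],
}

MIN_RATERS = 10  # roughly the 10-15 lay raters Pennycook equates to one fact-checker

def crowd_score(scores, min_raters=MIN_RATERS):
    """Average the crowd's accuracy ratings once enough raters have weighed in."""
    return mean(scores) if len(scores) >= min_raters else None

# Rank the feed by estimated accuracy rather than by engagement.
for headline in sorted(ratings, key=lambda h: crowd_score(ratings[h]) or 0, reverse=True):
    print(f"{crowd_score(ratings[headline]):.2f}  {headline}")
```

The catch, as the earlier simulation shows, is that the ratings are only useful if the raters are independent, so any real system would have to collect them before social cues can contaminate people's judgment.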
Instead, algorithms on platforms like Twitter, YouTube, Instagram and Facebook are structured to suppress learning and feed people an informational junk food diet that reinforces existing beliefs and biases, according to a series of models and experiments led by Filippo Menczer, a professor at the Centre for Complex Networks and Systems Research at Indiana University.
"What we are exposed to on social media is strongly affected by our own pre-existing opinions," he told me on my podcast about medical misinformation. And that's one reason seemingly apolitical medical topics become politicised. "Political entities have an interest in using whatever people are paying attention to — for example, a health crisis — to manipulate people."
The "people are getting dumber" myth has been embraced on both the political right and left. We're not getting dumber. We are all struggling to understand what's going on in a complex, fractured world. Censorship and even fact-checking social media won't solve that problem. To do that, platforms can change the system, giving users more power over what they see.
Faye Flam is a Bloomberg Opinion columnist and host of the podcast "Follow the Science."
Disclaimer: This article first appeared on Bloomberg, and is published by special syndication arrangement.