Is Facebook capable of handling disinformation on its platform?
As disinformation in Bangladesh keeps rising, Facebook's efforts are barely making a dent, hampered by inadequate moderation, a language barrier and algorithmic limitations
The surge in disinformation about Bangladesh has reached such an alarming level that Chief Adviser Dr Muhammad Yunus had to meet with Meta, Facebook's parent company, earlier this month and ask the tech giant to help the country thwart such campaigns.
At the meeting, Miranda Sissons, director of Human Rights Policy at Meta, assured Dr Yunus that the social media company is vigilant against the use of its platform to spread disinformation.
But whether this "vigilance" is working, or if it even has the potential to work, is a difficult question to answer.
The role of independent fact-checkers and Facebook
Misinformation on Facebook is nothing new, but in today's volatile political landscape it is increasingly being used to undermine elections and shape public perception worldwide through disinformation campaigns, ie the deliberate use of misleading information to achieve certain goals.
"The way misinformation spreads through Facebook has seen a lot of changes recently. Initially in Bangladesh, we would usually see an interaction of false religious and political claims, along with pseudoscience and conspiracy theories as major drivers of misinformation through social media. Things like claims of miracles, etc were dominant," said Sumon Rahman, founder of FactWatch.
As Facebook cannot monitor and moderate misinformation in every country, they work with local fact-checkers. In Bangladesh, Sumon Rahman's FactWatch was the first third-party fact-checking organisation to enter a partnership with Facebook.
"Facebook utilises third-party fact-checkers to detect misinformation. They are rigorous in the vetting process. They screen all agencies and provide guidelines. Facebook only works with organisations that are accredited by the International Fact-Checking Network (IFCN)," Sumon explained.
The way it works is that accredited independent fact-checkers, proficient in the local language and culture, are given the mandate to flag comments or posts they deem misinformation.
"Facebook does not manually oversee fact-checkers. The power to label something as misinformation is given to them. If content is labelled as misinformation, then the circulation of that post drops, making it less visible," Sumon added.
He clarified that if a fact-checker's claim is contested, Facebook may get involved, and if a fact-checker makes repeated mistakes, Facebook can stop working with them.
However, while fact-checking is left to local experts, problems arise from Facebook's moderation policy. Because Facebook relies on AI tools that cannot properly understand Bangla, nuances of the language get lost.
"Context-specific presentations of misinformation often escape detection. I think we need more frequent moderation policy updates coupled with greater human intervention to improve moderation," claimed Minhaj Aman, lead researcher at Dismislab.
"Another major hurdle is Facebook's difficulty in recognising repeat offenders. If a post that was previously identified as misinformation resurfaces, Facebook does not recognise it properly," Minhaj further added.
How can Facebook improve?
According to Sumon, despite Chief Adviser Yunus' request, Meta, as a corporate entity, will likely not go beyond its algorithm to handle an individual country's concerns.
However, Sumon noted that Dr Yunus spoke to Meta's Human Rights and Public Policy team, which is separate from its fact-checking team, "In some extreme cases, Facebook can and does remove incredibly harmful disinformation. It is rare, but Facebook keeps statistics of how many requests they have honoured for different governments. Perhaps they may come to some terms."
Apart from this, there are a number of ways in which Facebook could tackle disinformation better. Minhaj believes that improving their algorithm could be one such way, "When people interact with fake news, Facebook often suggests similar content, which makes the issue worse. Fixing the algorithm would solve this."
He is also critical of Facebook allowing deepfake videos on its platform, noting that they violate its own misinformation policies even as Facebook runs them as sponsored content for revenue. He believes this needs to be stopped immediately.
Lastly, taking a harsher stance against repeat offenders could also mitigate the problem, "Keeping a list of habitual disinformation spreaders and issuing alerts about them could be a good option, but they have to do it in a way that avoids censorship."
Disinformation today
In August 2024, Dismislab published a report stating that they had discovered a network of over 1,300 Facebook bot accounts being used in a coordinated manner during the Awami League regime to influence public opinion in favour of the party.
The bot network, according to Dismislab, made over 21,000 comments on many Facebook posts over the years, becoming most active right before the elections. The bots would primarily criticise opposition parties and be triggered to make comments by the use of certain keywords such as EC (Election Commission).
This kind of bot behaviour falls in line with what Meta calls Coordinated Inauthentic Behaviour (CIB). As such, Meta has helped remove these accounts multiple times.
However, since the fall of the Awami League government and Hasina's subsequent escape to India, disinformation campaigns have reached new heights, becoming increasingly difficult to counter.
Reports of exaggerated communal violence and attacks, along with fabricated stories, have reached a fever pitch, with Indian media playing a big role in fanning the flames. "Since 5 August, political and religious misinformation has seen an alarming rise, with Indian social media platforms and media outlets playing a critical role," Minhaj said.
According to Minhaj, these news outlets are trying to advance their country's foreign policy narratives at the expense of diplomatic relations with Bangladesh.
An analysis of various fact-checking reports covering Bangladesh revealed that 50% of the total misinformation over the past 11 months occurred from July onwards.
Moreover, before the July Uprising the dominant themes of misinformation were religious glorification and conversion; these have since receded, giving way to a rise in misinformation about communal violence and hatred.
According to the analysis, the major theme has now shifted to portraying Bangladesh as an unsafe country for minorities, especially Hindus.