New AI world, disinfo dilemma: More crises, more lies
It was almost instantaneous.
Hundreds of thousands of shares, equal amounts of comments and numerous duplicates.
The photo was the same: a child, barely three, keeping himself afloat amid the flood in Bangladesh.
Then the question arose: Was this image even real?
By this time a few images of the flood had already been debunked as being fake.
When the never-ending July finally came to a close, it was as if the world we had known for almost two decades had changed forever.
Now, as we stand on the backs and blood of those who martyred themselves for a righteous cause, the words of Chief Adviser to the interim government Dr Yunus resonate with us all: we have been left with a huge mess to clean up.
Throughout the month of July, and into the start of August, as the student revolution was turned violent by a regime seemingly driven by sheer bloodlust, we were gripped not only by panic and worry but also by a growing sense of distrust.
Nights would turn into days as we lay awake, our minds trapped in the throes of the latest rumour. And as rumours do, they spread like wildfire, a fire you can't simply put out.
Amid rampant sources of disinformation and misinformation – both legitimised by a government desperate to cling to the power it had held for 15 years – we saw apophenia take root in its truest form.
Apophenia is the tendency to perceive a connection or meaningful pattern between unrelated or random things. Once you go down the rabbit hole, you begin to see connecting patterns everywhere, whether they exist or not.
We have seen this on talk shows, when former top government officials kept repeating the same lies, regurgitating talking points which were far removed from reality.
We have seen this on social media, where an Indian invasion was supposedly just a matter of time and robberies were happening every minute.
Everyone was busy connecting dots that didn't align, helped along by a healthy dose of propaganda that flowed unbridled.
Amid this chaos, we were bombarded with graphic videos and pictures.
Within this cacophony rose, almost like a phoenix, a signal above the white noise: people's turn towards media literacy.

This was a media literacy in which people would not only access media but also analyse its messaging.
Conversations began around which outlet reported what and, more importantly, why.
In a time when the media world is saturated by sub-par, self-proclaimed news providers with no licence or expertise, the sudden shift towards only relying on media of substance was a breath of fresh air.
The need to substantiate what people were sharing wasn't reserved for media sites alone; it extended to individual posts as well. Any update on any situation would inevitably be followed by a comment asking for the source.
News was coming. But people wanted to know where it was coming from.
Even at The Business Standard, our readers began to reach us through calls, emails and social media messages, asking us to verify bits of information they had come across.
Their zeal for the truth drove our journalists to produce more reports than ever before.
Despite a social media and then internet blackout, the pipeline of reports continued to flourish. The message was clear: the truth could not be silenced.
But as the dust settles on one turmoil, another has sparked: Bangladesh is currently facing one of its worst floods in years.
Even now, you can see the proliferation of videos and photos, often unverified, that seek to politicise the issue.
The problem with verification, however, is further complicated by the use of AI.
The embedded racism problem
Some generative AI models have made such strides that it is difficult to tell their outputs are fake, even with pixel-level scrutiny by trained eyes.
In fact, some AI image generation models aren't even able to spot their own generated fakes.
But going forward, AI image generation will not be the only cesspool of false or biased information we need to be wary of.
Dr Sanjana Hattotuwa, research director at the Disinformation Project, at a webinar on 22 August pointed out a more nefarious aspect of the AI revolution: embedded racism.
"The AI training data has a white male, cis gendered bias. It's an algorithmic problem," he said.
Simply put, AI-generated visuals and information reflect the data the models were trained on. Much of this data comes from the Western world, carrying its inherent biases about how the Global South is viewed and which narratives it wishes to push.
Although this remains an algorithmic problem, as Sanjana postulated, he said one way to overcome it would be to have more people of colour involved in open-source LLMs, where data scientists could participate, instead of the current reliance on closed, proprietary models.
"We have to believe the promises they are making to address structural discrimination around data, but we don't have any way of knowing that. The research should be more academic, women-centred and gender-centred. We have to take this conversation forward, as our contexts are the ones being narrated," he said.
The years ahead, with the increased reliance on AI and its biases, are set to be hard.
Soon, people will turn to AI for news-related queries, and they will be served answers drawn from a handful of Western media outlets.
The misinformation battle will become a highly nuanced and covert one, focused more on establishing pro-Western narratives instead of promoting outright lies.
The battles are far from over. Hopefully, greater media literacy will set us in the right direction.
Yashab Osama Rahman is the Web Editor, The Business Standard.