AI colonialism: The quiet invasion of our minds
Western monopoly on Artificial Intelligence might entrench institutional racism. If the global south is not accounted for when constructing new AI, we, the southerners, will be on the receiving end of an off-kilter advancement in technology.
When Blake Lemoine published a transcript of his conversation with Google's artificially intelligent (AI) chatbot, LaMDA, claiming it had gained sentience, the claim immediately sparked a wide array of discourse. The senior Google engineer, along with a collaborator, had presented his evidence to Google, but his claims were promptly dismissed and he was placed on administrative leave.
Whether his claims are true or not, there is both a sense of terror and excitement in the prospect of a sentient AI. The rogue or sentient AI is a recurring trope in science fiction, so our eagerness is quite natural. But while our attention has been caught up in this one question, far more pervasive aspects of AI have gone largely under our radar, namely the themes of colonialism that surround AI technology.
The concept of colonialism in AI may seem far-fetched. The premise of AI technology, as pushed by big, predominantly western tech companies, is that it will benefit everyone equally and carry humanity to the frontiers of modernity we have long dreamed of. In practice, however, the picture is not so rosy: AI seems rooted in racial imbalances right down to its algorithms, governance and practical usage.
Modern society is still largely governed by institutional racism. AI is predominantly developed by western companies that operate within that same institutional racism, and their perception of data and problem-solving carries racial biases. This data and behaviour are fed to the AI to improve its responses, and the algorithms built on that data in turn end up reinforcing and replicating the biases of their western engineers.
When AI trained on this kind of data loop is used to automate processes and decision-making, the results are far from just. Yet society perceives AI as rarely making mistakes; it is, after all, high-end technology, so whatever result an AI produces is accepted without question.
Karen Hao, senior AI editor at MIT Technology Review, details the far-reaching effects of this in her recent series on AI colonialism. One of her case studies took her to a few wealthy, predominantly white suburbs in South Africa, where security comes in the form of mass AI surveillance. Vumacam, the security company behind this CCTV network, takes an aggressive approach to using AI for security and crime prevention.
Those who purchase Vumacam's services can even upload the licence plates of suspicious vehicles to Vumacam's shared database, bypassing the police entirely. Users can then track cars through the footage Vumacam's cameras are constantly capturing. Combine all this with shoddy facial recognition technology, and those who suffer false identification end up being overwhelmingly black.
South Africa is not the only place where surveillance inevitably falls into the same old racist patterns.
A study conducted by ABC on racial disparity in the United States criminal justice system shows that black Americans are arrested at a rate five times higher than white Americans. Arrest databases reflect this: they hold a disproportionate number of black people, whose faces and data are then used for future surveillance.
Timnit Gebru, former co-lead of Google's Ethical AI team, co-authored research showing that facial recognition software is less accurate at identifying people of colour, which increases their chances of being discriminated against. Returning to LaMDA, Google's chatbot is a large language model (LLM) whose algorithms are constantly trained on data, including faulty, inaccurate information.
Those who collect and label data for tech companies, on the other hand, are mostly from the global south. Labour exploitation is yet another frontier of AI colonialism. Researcher Phil Jones, author of the book 'Work Without the Worker', exposes how tech companies exploit cheap labour to improve their technologies.
AI relies on collecting large amounts of data, but that data has to be processed and labelled first, and this is where the need for a human labour force comes in. Jones calls this microwork: underpaid workers in poor countries are recruited into gruelling hours of such tasks. As he puts it simply in his book, "In the hour it takes Jeff Bezos to make $13 million, a refugee earns mere cents teaching his algorithms to spot a car."
AI colonialism has long been bleeding into our everyday lives. If the global south is not reflected in AI technology, then what we consume is a wholly western point of view. The only difference between the colonial powers of the past and today's tech companies is that there is no physical invasion of our lands, just the hijacking of our minds.