AI: The future of crime prevention?
With the increasing integration of artificial intelligence into the criminal justice system, ethical concerns about its implementation have begun to emerge and need to be resolved through proper regulatory action.
Although the notion of Artificial Intelligence (AI) was first developed in dystopian fiction, it has since become very much a reality, though not yet to the extent typically depicted in science fiction stories.
AI is an umbrella term encompassing subfields such as machine learning (ML) and its further subset, deep learning, which together emulate human-like decision-making.
From healthcare to autonomous cars, AI has begun to penetrate every sphere of our lives, including formal functions of the state such as taxation, border security, and even public order.
Although the use of AI in criminal justice is meant to maintain law and order in society, it can also induce negative externalities, amplify pre-existing prejudices and errors, and consequently, undermine the efficiency of justice and law enforcement.
Crime prevention, for its part, may be characterised as actions and interventions aimed at reducing crime, the likelihood of crime, and the negative consequences of crime.
In the context of crime prediction and prevention, AI can be understood as the growing use of technologies that apply algorithms to large datasets either to assist human police work or to replace it, according to a Chinese study published by the International Society for Photogrammetry and Remote Sensing (ISPRS).
In the policing and security arena, law enforcement agencies currently use three types of AI systems: biological data recognition, predictive risk assessment, and predictive policing. To run these systems, AI software must be fed enormous amounts of data, commonly called 'big data'.
To power these three systems, machine learning uses a method known as data mining to extract usable information from large datasets and detect patterns within a specific data collection.
Deep learning, by contrast, allows decisions to be made with little or no human involvement, using an autonomous information-extraction process once specific criteria have been established.
Here, the large dataset includes variables on crime and criminal history (time, weather, location, age, literacy rate, education, annual income, etc.), and biological data (fingerprints, photos, iris scans, DNA, body structure, and many more).
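To make the pattern-detection idea above concrete, here is a minimal sketch, using entirely made-up incident coordinates and a grid size chosen for illustration (not any agency's actual system), of how a data-mining pass might surface crime 'hotspots' from location records:

```python
from collections import Counter

# Made-up incident coordinates on a city grid -- purely illustrative.
incidents = [
    (1, 1), (1, 2), (1, 1), (5, 5),
    (1, 1), (1, 2), (9, 9),
]

def hotspots(points, cell_size=2, top=2):
    """Bucket incidents into square grid cells and return the `top`
    cells with the most incidents -- the kind of spatial pattern a
    data-mining pass over crime records might surface."""
    counts = Counter((x // cell_size, y // cell_size) for x, y in points)
    return counts.most_common(top)

print(hotspots(incidents))  # → [((0, 0), 3), ((0, 1), 2)]
```

Real predictive policing systems work on the same principle at scale, which is also where bias can enter: if historical records over-represent certain neighbourhoods, the 'hotspots' simply reflect past patrol patterns.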
Researchers from the University of Manchester and the University of Madrid collaborated in 2018 to build an AI system based on behavioural biometrics that can identify people by analysing their walking patterns, with an error rate of just 0.7 percent.
Similar technologies are now widely used by Chinese authorities for mass surveillance of their population.
In modern-day policing, especially in crime detection, 'descriptive artificial intelligence', built primarily on descriptive analytics, is expanding its sphere of influence because it can weigh several separate variables at once and deliver results quickly.
Police in New Orleans, USA, in collaboration with Motorola Solutions, have adopted this technology to help track suspects, vehicles, and places. Law enforcement in Russia and the Chinese authorities in Hong Kong appear to be using similar programmes to identify anti-government protesters.
While 'PredPol' is the most widely used AI-based predictive policing algorithm in the United States, several police departments also use 'Dextro', an AI software that analyses scenes to detect objects and activities. The technology helps identify crimes in progress for live observation and intervention, and supports investigations later on.
In Brazil, the artificial intelligence tool "Rosie" is being used to uncover instances of corruption and unusual expenditure by the country's elected lawmakers.
Automated licence plate readers (ALPR) and facial recognition technology (FRT) are already being used in law enforcement.
Courts in the United States have been using an AI-based predictive analytics tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), to estimate the likelihood that an offender will re-offend if released on bail.
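A minimal sketch of how such a risk-assessment score might work is shown below. The weights, features, and cut-off are invented purely for illustration; COMPAS itself is proprietary and draws on far more factors.

```python
import math

# Invented weights over invented features -- real tools such as COMPAS
# use proprietary models trained on many more factors.
WEIGHTS = {"prior_arrests": 0.8, "age_under_25": 0.6, "employed": -0.7}
BIAS = -1.0

def risk_score(features):
    """Logistic-style score in (0, 1); higher means the toy model
    predicts a greater likelihood of re-offending."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

low = risk_score({"prior_arrests": 0, "age_under_25": 0, "employed": 1})
high = risk_score({"prior_arrests": 4, "age_under_25": 1, "employed": 0})
```

Because the weights are learned from historical records, any prejudice embedded in past arrests and sentencing flows straight into the score, which is precisely the concern raised about such tools in the judicial process.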
Furthermore, researchers at the University of Cambridge pushed AI facial recognition to a new level by developing the Camouflage Facial Recognition System, which can identify faces even when hidden by a mask, veil, or glasses.
Even though nations and organisations are spending billions on building and enhancing AI, there are very few initiatives to regulate it.
According to 2018-19 data, China, France, the United Kingdom, Japan, and India spent $2 billion, $1.8 billion, $1.2 billion, $700 million, and $408 million, respectively, on creating AI technologies for their own use.
Over 2018-2019, the European Commission, the European Union's executive body, allocated $24 billion in public and commercial funding for artificial intelligence research and development to remain competitive with other parts of the world and 'avoid brain drain'.
Even as artificial intelligence gains traction in law enforcement and crime prevention, there are relatively few rules and procedures in place to manage and regulate this cutting-edge technology so that privacy can be safeguarded and prejudice can be mitigated in the judicial process.
Few countries, however, have comprehensive federal data protection legislation on par with the European Union's General Data Protection Regulation (GDPR) or the United Kingdom's Data Protection Act (DPA).
In March of this year, China passed a new law requiring tech firms to file details of their algorithms with the country's cyberspace regulator.
The EU published a draft AI law in April 2021, which, according to the Commission, focuses on "the specific utilisation of AI systems and associated risks". But it is still years away from adoption.
As we progress towards Artificial General Intelligence, the stage beyond today's Artificial Narrow Intelligence, governments around the globe must lay down policies and regulations before it is too late, because the boundary between its use and abuse is dangerously thin.
Md Mahmud Hasan is a student at the Department of Criminology, University of Dhaka.