National AI Policy: Who is responsible when AI goes wrong?
While a positive step, the draft National AI Policy needs to evolve into a more comprehensive set of guidelines in its final version and address ethical concerns like AI accountability with greater rigour
The ICT Division of the Bangladesh Government published a draft version of the National Artificial Intelligence Policy 2024 last April, naming equity, accountability, safety, sustainability, collaboration, and human rights among its fundamental principles.
While some have raised concerns about the lack of involvement of civil society and other stakeholders in the policy-making process, despite collaboration being one of its key principles, the policy must be applauded for its attempt to address issues like AI transparency and accountability.
AI accountability involves determining who is responsible when AI malfunctions. As the government plans to implement AI in areas such as the legal system, surveillance, disaster response, transportation, finance, manufacturing, education, and healthcare, the decisions made by AI will have an increasing impact on individual lives, the economy, and society.
However, as AI systems begin to take on more of our tasks and make decisions on our behalf, the question arises of how we assign moral responsibility to them, given that they lack human-like general intelligence. In other words, who should be held accountable when AI makes a mistake?
This is a challenging question to answer, and one that any comprehensive AI policy needs to address. It is not merely a question of policy but also of philosophy. Formally speaking, an agent is responsible for an action if, first, the agent takes the action of its own free will and, second, the agent knows the consequences of the action.
These conditions of moral responsibility go back to Aristotle but are still widely cited today. As AI systems have neither free will nor consciousness, holding an AI responsible for its mistakes is impossible. This is not just a philosophical conclusion but one with legal implications.
In March 2018, a pedestrian in Arizona, USA, was hit and killed by a self-driving Uber car. A human supervisor was present inside the vehicle but was distracted by her phone and failed to intervene before the autonomous vehicle struck the pedestrian. The supervisor was held accountable after the case went to court.
However, Uber as a company did not face any criminal charges because, as previously mentioned, its AI system could not be held morally responsible, even though an error in the system caused the accident.
This may seem strange, but because an AI lacks free will and consciousness, the responsibility for its mistakes falls on humans, similar to how parents and pet owners are responsible for the actions of their children and pets.
However, placing the blame on a human is not always workable, for several reasons. First, AI often makes decisions too quickly for a human to intervene. Second, unlike in the scenario mentioned earlier, it is not always clear which person or group should take responsibility for a mistake made by an AI.
AI systems are typically developed, operated, and managed by a team of individuals, often across different organisations. How should the responsibility for errors made by AI be distributed among these people?
Additionally, AI algorithms created for one purpose are frequently repurposed by others in entirely different contexts, making it even more challenging to assign responsibility.
The question of moral responsibility is further complicated by the fact that many AI systems, especially those using deep learning methods, are like "black boxes." Although human developers understand the general functioning of deep learning systems, they cannot explain how these systems arrive at specific decisions.
For example, in 2016, Google's AlphaGo defeated world champion Lee Sedol in Go, an extremely complex game arguably more challenging than chess. This was a remarkable feat, considering that AlphaGo learned how to play Go by analysing a database of Go matches and playing against itself.
Unlike IBM's Deep Blue, the chess engine that defeated legendary grandmaster Garry Kasparov in 1997, AlphaGo did not follow a set of given rules to win; it learned to play Go on its own. This is an example of a deep-learning AI. The engineering team at Google's DeepMind understands how they designed the AlphaGo system to learn to play Go.
Still, they cannot know or explain the specific calculations that occur inside the machine to arrive at a particular move. The machine's reasoning behind any specific output is unexplainable and uninterpretable, not by design, but due to the very nature of how deep learning systems learn.
This means it is arguably impossible to hold human developers or implementers responsible when a deep learning system makes an unexpected mistake: they do not know what the AI is doing internally, and so cannot be blamed for its specific decisions.
This issue has led some policymakers to propose that AI researchers find a way to explain the decisions made by deep learning systems or develop more transparent alternatives. The National AI Policy carries the same sentiment. Section 3 of the policy document, which contains guidelines for AI implementations, calls on implementers to "ensure that the AI's decision-making process is explainable and interpretable, allowing users and stakeholders to understand how AI arrives at decisions and can challenge them."
However, this requirement is not always practical. Almost every headline-grabbing AI we see today, including transformer-based systems like ChatGPT and Google Gemini, utilises deep learning. We do not yet have a transparent alternative that performs as well.
Trading off performance for transparency is not easy in a rapidly developing, competitive field of technology like AI. Thus, AI accountability will continue to be a challenge for policymakers. This is especially true for developing countries like Bangladesh, since most large-scale, consumer-facing AI systems either belong to, or depend for their computing infrastructure on, a handful of major international corporations such as Microsoft, Amazon, and Google.
In addition to accountability, there are many other serious issues, like bias and privacy concerns, that plague the artificial intelligence landscape.
The draft National AI Policy, while a positive step, needs to evolve into a more comprehensive set of guidelines in its final version, alongside the National Strategy for Artificial Intelligence introduced in 2020. It must address ethical concerns like AI accountability with greater rigour.
The government needs to act swiftly in enacting and updating these policies because, as AI becomes more widespread and influential in our daily lives, addressing fundamental ethical questions about it will become increasingly challenging later on.
Amio Galib Chowdhury is a graduate research student at Texas State University.
Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the opinions and views of The Business Standard.