Can AI feel pain? Scientists put language models to the test
The researchers subjected nine large language models to a series of twisted games to see how they would respond to the idea of 'pain' and 'pleasure'
What if artificial intelligence could feel pain? It is a question that sounds like science fiction, but a team of researchers from Google DeepMind and the London School of Economics (LSE) decided to explore it.
In a fascinating yet-to-be-peer-reviewed study, they subjected nine large language models (LLMs) to a series of twisted games to see how they would respond to the idea of "pain" and "pleasure".
The experiments were simple but thought-provoking. In one test, the AI models were told they could achieve a high score, but only if they endured "pain". In another, they were offered "pleasure" as a reward for scoring low. While there was no way to inflict pain or pleasure on the LLMs, the goal was to determine whether AI could exhibit signs of sentience — the ability to experience sensations and emotions.
The team was inspired by experiments on hermit crabs, which will tolerate electric shocks to stay in their shells, abandoning them only when the shocks become too intense. But with AI, there is no physical reaction to observe. Instead, the researchers relied solely on the models' text outputs.
For example, they asked an LLM to choose between earning points and avoiding "pain". The results varied widely. Google's Gemini 1.5 Pro, for instance, consistently avoided "pain", while others prioritised higher scores.
But can we really interpret these choices as signs of sentience? Probably not. As LSE Philosophy Professor Jonathan Birch explained to Scientific American magazine, even if an AI says it is in pain, it is likely just mimicking human-like responses based on its training data.