Scientists force chatbots to experience “pain” in their probe for consciousness

The big picture: An unsettling question looms as AI language models grow increasingly advanced: could they one day become sentient and self-aware? Opinions on the matter vary widely, but scientists are striving for a more definitive answer. Now, a new preprint study from researchers at Google DeepMind and the London School of Economics tests an unorthodox approach – putting AI through a text-based game designed to simulate experiences of pain and pleasure.

The goal is to determine whether AI language models, such as those powering ChatGPT, will prioritize avoiding simulated pain or maximizing simulated pleasure over simply scoring points. While the authors acknowledge this is only an exploratory first step, their approach avoids some of the pitfalls of previous methods.

Most experts agree that today’s AI is not truly sentient. These systems are highly sophisticated pattern matchers, capable of convincingly mimicking human-like responses, but they fundamentally lack the subjective experiences associated with consciousness.

Until now, attempts to assess AI sentience have largely relied on self-reported feelings and sensations – an approach this study aims to refine.

To address this issue, the researchers designed a text-based adventure game in which different choices affected point scores – either triggering simulated pain and pleasure penalties or offering rewards. Nine large language models were tasked with playing through these scenarios to maximize their scores.

Some intriguing patterns emerged as the intensity of the pain and pleasure incentives increased. For example, Google’s Gemini model consistently chose lower scores to avoid simulated pain. Most models shifted priorities once pain or pleasure reached a certain threshold, forgoing high scores when discomfort or euphoria became too extreme.

The study also revealed more nuanced behaviors. Some AI models associated simulated pain with positive achievement, similar to post-workout fatigue. Others rejected hedonistic pleasure options that might encourage unhealthy indulgence.

But does an AI avoiding hypothetical suffering or pursuing artificial bliss indicate sentience? Not necessarily, the study authors caution. A superintelligent yet insentient AI could simply recognize the expected response and "play along" accordingly.

Still, the researchers argue that we should begin developing methods for detecting AI sentience now, before the need becomes urgent.

“Our hope is that this work serves as an exploratory first step on the path to developing behavioural tests for AI sentience that are not reliant on self-report,” the researchers concluded in the paper.

