AI Chatbots Perpetuate Racial Biases in Pain Assessment

A new study reveals that artificial intelligence (AI) chatbots, once hoped to eliminate human biases in medicine, may actually reinforce racial disparities in pain assessment. The research, led by Dr. Adam Rodman from Beth Israel Deaconess Medical Center, exposes the flawed beliefs about race encoded in large language models (LLMs) used in healthcare settings.
 
The study, published in JAMA Network Open, replicated a 2016 experiment that examined racial biases among medical trainees. Researchers applied a similar setup to two AI models, Google’s Gemini Pro and OpenAI’s GPT-4, comparing their pain assessments of Black and white patients to those made by human medical trainees.
 
Results showed that both the AI models and the human trainees consistently rated pain lower for Black patients than for white patients. Alarmingly, the Gemini Pro model exhibited the highest rate of false beliefs about racial biology (24%), surpassing even the human trainees (12%).
 
Dr. Rodman warns that as hospitals increasingly adopt AI for clinical decision support, these biases could deepen existing healthcare inequities. The study highlights a concerning trend: when AI systems confirm pre-existing human biases, clinicians are more likely to agree with them, potentially entrenching disparities even further.
 
This research underscores the urgent need for careful consideration of AI implementation in healthcare. As Dr. Rodman notes, “These models are very good at reflecting human biases… If the system is biased the same way humans are, it’s going to serve to magnify our biases or make humans more confident in their biases.”