AI Chatbots Perpetuate Racial Bias in Pain Assessment

A new study reveals that artificial intelligence (AI) chatbots, once hoped to eliminate human biases in medicine, may actually reinforce racial disparities in pain assessment. Researchers from Beth Israel Deaconess Medical Center and Georgetown University found that AI models, like their human counterparts, consistently underestimate the pain levels of Black patients compared to white patients.
 
The study, led by Adam Rodman, replicated a 2016 experiment that examined racial biases among medical trainees, applying a similar setup to test two popular AI models, Gemini Pro and GPT-4. The results showed that both the AI models and the human assessors assigned lower pain ratings to Black patients, highlighting a persistent racial disparity in pain evaluation.
 
“These models are very good at reflecting human biases—and not just racial biases—which is problematic if you’re going to use them to make any sort of medical decision,” Rodman explained. The study also found that AI models, particularly Gemini Pro, exhibited a higher percentage of false beliefs about racial biology compared to human trainees.
 
As hospitals increasingly adopt AI for clinical decision support, these findings raise concerns about the potential for chatbots to exacerbate inequalities in health care. Rodman warns that the interaction between humans and AI systems in clinical settings could lead to confirmation bias, further entrenching existing disparities.
 
The research underscores the need for careful consideration and further study as AI becomes more integrated into medical practice. It serves as a reminder that technology alone may not be the solution to deeply rooted biases in health care.