
AI Healthcare Models Harbor Dangerous Racial Biases

Artificial intelligence systems increasingly used in healthcare settings are perpetuating dangerous racial biases that could worsen existing disparities in medical treatment. New research from Northeastern University reveals how large language models incorporate stereotypes about Black patients into their medical decision-making processes.

Hiba Ahsan, a Ph.D. student and the study's lead researcher, noted that prior work has found Black patients are less likely to be prescribed pain medication even when reporting levels of pain similar to those of white patients. AI models, she warned, could just as easily make the same biased decisions.

Using a tool called a sparse autoencoder to examine how AI systems process information internally, the researchers uncovered troubling patterns. The tool revealed a high incidence of references to Black individuals alongside stigmatizing concepts such as incarceration, gunshot wounds, and cocaine use. These racial biases were embedded within the model's internal decision-making processes, not just its visible outputs.
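For readers unfamiliar with the technique: a sparse autoencoder re-encodes a model's internal activations into a much larger set of features, most of which stay inactive for any given input, so individual features tend to align with recognizable concepts that researchers can then inspect. The sketch below illustrates the core idea in PyTorch; the dimensions, sparsity coefficient, and random stand-in activations are illustrative assumptions, not the study's actual configuration.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Maps dense model activations into a wider, sparse feature space.

    Hypothetical dimensions: d_model is the LLM's hidden size, d_features
    is the larger dictionary of candidate interpretable features.
    """
    def __init__(self, d_model: int = 768, d_features: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps feature values non-negative; the L1 penalty in the
        # loss below pushes most of them to zero, so each input lights up
        # only a handful of features.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

# One training step: reconstruct the activations while penalizing feature
# density (L1), which is what makes the learned features sparse.
sae = SparseAutoencoder()
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_weight = 1e-3  # assumed sparsity coefficient

# Stand-in for activations captured from an LLM's hidden layer.
batch = torch.randn(64, 768)

optimizer.zero_grad()
features, reconstruction = sae(batch)
loss = nn.functional.mse_loss(reconstruction, batch) \
       + l1_weight * features.abs().mean()
loss.backward()
optimizer.step()
```

In interpretability work of this kind, researchers then examine which inputs activate each learned feature; a feature that fires on both references to Black patients and on stigmatizing concepts is the sort of pattern the study describes.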

A substantial body of research shows that large language models exhibit racial bias when used in healthcare settings, outputting different answers depending on patient race, often in cases when race is not clinically relevant. Byron Wallace, Ahsan’s advisor and an interdisciplinary associate professor, emphasized the urgency of improving interpretation methods if these models are to be used safely in healthcare.

The research underscores that AI systems operate as black boxes, making it extremely difficult to understand the factors behind their decisions. This lack of transparency becomes particularly dangerous when physicians rely on these tools for treatment recommendations, potentially amplifying existing racial disparities rather than reducing them.

See: “New research decodes hidden bias in health care LLMs” (January 21, 2026)
