Medical AI Algorithms Can Perpetuate Racial Health Disparities

Artificial intelligence algorithms, increasingly used in healthcare, risk perpetuating racial disparities on an unprecedented scale, according to research by UC Berkeley computer scientist Emma Pierson.

Pierson, an assistant professor developing AI and machine learning methods for medicine, warns that algorithms trained on biased data will make biased decisions. Many algorithms used in healthcare are proprietary, making it difficult for the public to scrutinize how they were designed and trained.

“If an algorithm is unfair, it can also reproduce unfairness on a much vaster scale than any single human decision maker,” Pierson said.

Her research reveals complex challenges in addressing these disparities. Race is frequently used as a variable in medical algorithms, often as a proxy for important factors, such as genetic history or pollution exposure, that go unmeasured. However, this approach can entrench existing inequalities.

“Race is a socially constructed variable, not biological,” Pierson said. “And historically, race has been included in medical decision-making and in algorithms in racist ways.”

Yet simply removing race from algorithms creates new problems. In a 2024 paper, Pierson and her team found that removing race from a cancer risk prediction algorithm caused it to under-predict risk for Black patients, potentially reducing their access to colorectal cancer screenings.
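
The statistical mechanism behind that finding is straightforward to illustrate. The sketch below is a toy simulation, not Pierson’s model or data: every variable, coefficient, and rate is invented for illustration. It shows how omitting a group indicator from a logistic regression pulls each group’s predictions toward the pooled average, so the higher-risk group is systematically under-predicted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical cohort: one observed clinical feature plus a group
# indicator that (in this toy setup) carries real risk signal.
n = 20_000
group = rng.binomial(1, 0.3, n)               # 1 = higher-risk group
clinical = rng.normal(0.0, 1.0, n)            # observed clinical feature
logit = -2.0 + 1.0 * clinical + 1.2 * group   # true risk depends on group
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Model A keeps the group variable; Model B drops it.
X_with = np.column_stack([clinical, group])
X_without = clinical.reshape(-1, 1)
model_with = LogisticRegression().fit(X_with, y)
model_without = LogisticRegression().fit(X_without, y)

# Compare observed vs. predicted risk inside the higher-risk group.
mask = group == 1
print("observed rate:           ", y[mask].mean().round(3))
print("predicted, with group:   ", model_with.predict_proba(X_with)[mask, 1].mean().round(3))
print("predicted, group removed:", model_without.predict_proba(X_without)[mask, 1].mean().round(3))
```

On this invented data, the model without the group variable reports a noticeably lower average risk for the higher-risk group than is actually observed, the same direction of error Pierson’s team found when race was removed from the colorectal cancer risk tool.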

Pierson’s personal connection to the BRCA1 gene mutation illustrates the complexity. Her mother was tested for the mutation because she is Ashkenazi Jewish, and people of that ethnicity are ten times more likely than the general population to carry cancer-causing mutations. Without ethnicity-based testing, her mother might not have survived.

“My experience has really crystallized for me that these risk tools are not just abstractions or interesting objects of study, but things that concretely affect people’s fundamental health and life decisions,” Pierson said. “They are vitally important to get right.”

See: “AI has a bias problem. Can we build something smarter?” (January 20, 2026)
