
Study Reveals Racial Bias in Medical Testing Skews AI Models

A study by University of Michigan researchers has uncovered a significant racial disparity in medical testing rates, one that artificial intelligence (AI) models trained on such data could amplify into wider health inequities. The research, published in PLOS Global Public Health, reveals that white patients are up to 4.5% more likely than Black patients with similar medical profiles to receive diagnostic tests.

This testing bias, partially attributed to higher hospital admission rates for white patients, has far-reaching implications for AI in healthcare. When AI models are trained on such biased data, they risk underestimating illness severity in Black patients, perpetuating and potentially exacerbating existing health disparities.

Jenna Wiens, associate professor of computer science and engineering at the University of Michigan, emphasizes the critical need to acknowledge and address these data flaws when developing AI models. “If there are subgroups of patients who are systematically undertested, then you are baking this bias into your model,” Wiens explains.

To combat this issue, the research team developed an innovative algorithm that identifies likely ill patients based on race and vital signs, even when diagnostic test results are missing. This approach, presented at the International Conference on Machine Learning in Vienna, enables more equitable AI predictions even when the training data are biased.
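The article does not describe the algorithm's internals, but the general idea of correcting labels for undertested subgroups can be sketched. The toy example below is an assumption-laden illustration, not the team's method: it uses synthetic data, a single severity score standing in for vital signs, scikit-learn's LogisticRegression, and an arbitrary relabeling threshold. It trains one model on the raw recorded labels (where untested illness silently counts as "healthy") and another on labels adjusted so that untested, clearly abnormal patients in the undertested group are flagged as likely ill.

```python
# Illustrative sketch only; the U-M team's actual algorithm is not reproduced here.
# Assumed details (not from the article): synthetic cohort, a single "severity"
# score in place of vital signs, two groups with unequal testing rates, and a
# simple threshold rule for relabeling untested patients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

severity = rng.normal(size=n)                       # stand-in for vital signs
truly_ill = severity + rng.normal(scale=0.5, size=n) > 1.0

group = rng.integers(0, 2, size=n)                  # 0 = well-tested group, 1 = undertested group
test_rate = np.where(group == 0, 0.8, 0.6)          # unequal testing rates
tested = rng.random(n) < test_rate

# Recorded labels only capture illness that was actually tested for,
# so untested illness silently becomes a "negative" example.
recorded_label = truly_ill & tested
y_naive = recorded_label.astype(int)

# Bias-aware labels: untested patients in the undertested group whose vitals
# look clearly abnormal are flagged as likely ill instead of counted as healthy.
likely_ill = (~tested) & (group == 1) & (severity > 1.5)
y_adjusted = (recorded_label | likely_ill).astype(int)

X = np.column_stack([severity, group])
model_naive = LogisticRegression().fit(X, y_naive)
model_adjusted = LogisticRegression().fit(X, y_adjusted)

# Compare how each model scores a high-severity patient from the undertested group.
probe = np.array([[2.0, 1]])
print("naive risk:   ", model_naive.predict_proba(probe)[0, 1])
print("adjusted risk:", model_adjusted.predict_proba(probe)[0, 1])
```

In this sketch, the adjustment keeps systematically undertested patients from being learned as "healthy by default," which is the failure mode Wiens describes above.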

The study’s findings underscore the urgent need for healthcare systems to recognize and rectify racial biases in medical testing and decision-making. As AI continues to play an increasingly significant role in healthcare, addressing these underlying disparities is crucial for ensuring equitable and accurate medical care for all patients, regardless of race.

See: “Accounting for bias in medical testing could prevent AI from amplifying health disparities” (October 16, 2024)
