
Biased medical testing can result in AI models underestimating illnesses in Black patients

Health disparities continue to pose a significant challenge in the United States, particularly affecting minority communities. Recent research from the University of Michigan has highlighted how inequitable medical testing practices can result in AI models underestimating the severity of illnesses in Black patients. Some individuals from these communities are being misclassified as “healthy,” primarily because they have not received critical medical tests that their white counterparts are more likely to undergo.

Studies indicate that medical testing rates for Black patients can be up to 4.5% lower than those for white patients with similar characteristics and medical needs. Part of the discrepancy arises at hospital admission, where Black patients are less likely to be judged ill enough to warrant further testing. According to Jenna Wiens, an associate professor at U-M, this systematic under-testing embeds bias into AI models trained on the resulting records.

Fortunately, the research team has developed an algorithm that identifies likely illnesses among untested patients based on vital signs, thereby adjusting for this bias. With this approach, models can achieve accuracy comparable to that of models trained on an idealized, unbiased dataset. As AI technology becomes more prevalent in healthcare, addressing these biases is imperative for ensuring equitable health outcomes, particularly for communities that have historically faced neglect in medical testing and treatment.
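To make the idea concrete, here is a minimal sketch of the general technique, not the U-M team's actual algorithm: before training a diagnostic model, relabel untested patients whose vital signs look abnormal as "unknown" rather than "healthy", so that a missing test is not treated as evidence of health. The field names and vital-sign thresholds below are illustrative assumptions.

```python
# Sketch: adjust training labels so untested-but-abnormal patients are not
# counted as healthy. Thresholds (38.0 C, respiratory rate 24) and record
# fields are hypothetical, chosen only for illustration.

def adjust_labels(patients):
    """Return labels where untested patients with abnormal vitals become 'unknown'."""
    adjusted = []
    for p in patients:
        if p["tested"]:
            # A label backed by an actual test is kept as-is.
            adjusted.append(p["label"])
        elif p["temp_c"] >= 38.0 or p["resp_rate"] >= 24:
            # Abnormal vitals but never tested: don't assume healthy.
            adjusted.append("unknown")
        else:
            # Untested with normal vitals: keep the default healthy label.
            adjusted.append("healthy")
    return adjusted

patients = [
    {"tested": True,  "label": "ill",     "temp_c": 39.1, "resp_rate": 26},
    {"tested": False, "label": "healthy", "temp_c": 38.4, "resp_rate": 22},
    {"tested": False, "label": "healthy", "temp_c": 36.8, "resp_rate": 14},
]

print(adjust_labels(patients))  # ['ill', 'unknown', 'healthy']
```

A model trained on the adjusted labels can then treat "unknown" patients with semi-supervised methods instead of learning that untested means healthy.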

See “Accounting for bias in medical data helps prevent AI from amplifying racial disparity” (October 30, 2024)
