Artificial intelligence is reshaping healthcare, but its algorithms may be deepening racial disparities. According to research co-authored by Fay Cobb Payton of Rutgers-Newark, AI systems often rely on data that overlooks the lived realities of Black and Latinx patients. “It doesn’t account for the cost of fresh produce,” Payton said. “It may not account for the fact that someone does not have access to transportation but is working two jobs.”
These algorithms, built on "big data" such as medical records and imaging, often ignore "small data" such as social determinants of health. That omission can produce treatment plans that are unrealistic for patients juggling multiple jobs or lacking access to healthy food and reliable transit. When those patients then deviate from a plan, "It's assumed that they're not adhering because no one talked to them about the why," Payton explained.
A lack of diversity among developers and physicians compounds the problem. In 2018, only 5% of active physicians identified as Black and 6% as Hispanic or Latinx. Without those perspectives, algorithms can encode stereotypes and underestimate illness severity in minority patients. "Black females experience more severity in breast cancer," Payton said, yet biased algorithms may still delay their treatment.
Most of the patient data used to train these systems comes from just three states—California, Massachusetts, and New York—leaving rural communities underrepresented. Payton urges human oversight and inclusive innovation to ensure AI serves all populations equitably. "Some form of human intervention is needed throughout," she said.
See: “AI Algorithms Used in Healthcare Can Perpetuate Bias” (November 14, 2024)


