Many hospitals are using artificial intelligence (AI) tools to enhance patient care, but a concerning gap remains in bias testing for these technologies. New research indicates that while two-thirds of U.S. hospitals employ AI, just 44% proactively test these systems for bias. That gap could have detrimental effects on marginalized communities: minorities have historically faced inequities in healthcare access and quality, leading to worse health outcomes.
Paige Nong, a researcher at the University of Minnesota, expressed alarm over the potential harms caused by biased AI systems. Many of these tools are trained on incomplete or non-representative data, which can worsen existing disparities. For example, an algorithm can unintentionally classify patients of color as lower risk than they actually are, steering them away from care they need.
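As a rough illustration of what proactive bias testing can look like in practice, the sketch below compares a risk model's false-negative rate across patient groups. It is a hypothetical example, not drawn from the study; the data, column names, group labels, and the 0.5 threshold are all invented for illustration.

```python
# Hypothetical sketch of a simple bias audit: compare how often a risk model
# misses truly high-need patients (false negatives) in each demographic group.
# All data, column names, and the 0.5 threshold are invented for illustration.
import pandas as pd

def false_negative_rate_by_group(df, group_col, label_col, score_col, threshold=0.5):
    """For each group, the share of truly high-need patients the model scores as low risk."""
    rates = {}
    for group, rows in df.groupby(group_col):
        high_need = rows[rows[label_col] == 1]             # patients who actually needed care
        if len(high_need) == 0:
            continue
        missed = (high_need[score_col] < threshold).sum()  # flagged as low risk by the model
        rates[group] = missed / len(high_need)
    return rates

# Toy, entirely fabricated data to show the shape of the check.
audit = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "high_need":  [1,   1,   0,   1,   1,   0],
    "risk_score": [0.8, 0.6, 0.2, 0.4, 0.3, 0.1],
})

print(false_negative_rate_by_group(audit, "group", "high_need", "risk_score"))
# {'A': 0.0, 'B': 1.0}
```

A large gap between groups (here 0.0 for group A versus 1.0 for group B) would be a red flag that the model systematically under-identifies need in one population, which is the kind of check the 44% of hospitals that test for bias might run before deployment.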
Despite the clear risks, many hospitals lack robust governance structures to evaluate these AI tools effectively. Only four of the 13 academic medical centers studied considered racial equity in their governance processes. As AI use grows in healthcare settings, unregulated deployment could embed historical biases more deeply into clinical practice.
Addressing these issues requires a concerted effort. Hospitals need policy-driven frameworks to ensure that AI tools assist rather than harm vulnerable populations. Without these measures, patients will continue to bear the brunt of systemic inequality in healthcare delivery.
See: “Lots of Hospitals Are Using AI. Few Are Testing For Bias” (February 27, 2025)