Artificial intelligence is rapidly transforming neurological care, but a new report warns that it could intensify racial and ethnic health inequities if deployed without strong protections. The authors describe AI as a “double-edged” tool—one that can speed diagnoses yet also magnify the disadvantages faced by communities already underdiagnosed and underrepresented.
Researchers note that AI systems depend on large datasets, many of which fail to reflect the diversity of the U.S. population. That imbalance raises the risk that stroke assessments, seizure-detection algorithms, or tumor-classification tools may perform less accurately for Black, Latino, American Indian/Alaska Native, and other marginalized groups. These same patients, the report emphasizes, are already more likely to face delayed diagnoses and barriers to specialty neurological care.
Yet the report also highlights AI’s potential to narrow gaps if intentionally designed for equity. It describes how clinics in resource-limited areas could use AI to recognize early signs of neurologic disease, generate medication instructions in patients’ primary languages, or track whether certain groups are being left out of clinical trials. “The technology exists,” said Dr. Adys Mendizabal, the study’s senior author. “We just need to build it with equity as the foundation.”
Mendizabal warns that the moment is pivotal: “The decisions we make now on how to develop and deploy AI in healthcare will determine whether this technology becomes a force for equity or another barrier to care.”
The report’s call for diverse community input, clinician education, and strong oversight reflects a central message: without deliberate action, AI could widen the very disparities it is meant to help close.
See: “AI in neurological care could widen health inequities, new report warns” (Nov. 21, 2025)