An algorithm used by major hospitals and healthcare providers misjudged how sick black patients were compared with their white counterparts, allowing healthier white patients to receive additional medical support more often.
That’s according to the findings of a study published Thursday in the journal Science that looked at patient care based on medical practitioners’ use of the algorithm produced by Optum, an arm of UnitedHealth Group, the Wall Street Journal reports. The algorithm is used by more than 50 healthcare organizations around the country to help analyze patients’ healthcare needs.
“What the algorithm is doing is letting healthier white patients cut in line ahead of sicker black patients,” Dr. Ziad Obermeyer, the study’s lead author and an acting associate professor of health policy at the University of California, Berkeley, told the Journal.
How? Researchers said the algorithm was not overtly racist, but its use of a key, seemingly race-blind metric resulted in a racially disparate outcome.
The metric used was how much a patient was likely to cost the healthcare system in the future. But, as the Washington Post reports, wealthy white patients tend to utilize the healthcare system more often and thus incur larger bills, not necessarily because they are sicker, but because they enjoy the privilege of greater access to healthcare.
Indeed, when researchers devised a different algorithm, they found that 47 percent of the black patients studied were in need of extra care, as opposed to the 18 percent identified by the Optum algorithm, according to the Journal.
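The mechanism is easy to see with a toy simulation. The sketch below uses purely synthetic data and an invented access gap (it is not the study’s model, data, or actual figures): two groups have identical underlying illness, but one group spends less per unit of illness because of reduced access to care. Ranking patients by spending then under-selects that group for extra care, even though the ranking rule never looks at group membership.

```python
# Illustrative sketch with synthetic data: when predicted cost stands in
# for health need, a group with less access to care (and therefore lower
# spending at the same illness level) is under-selected for extra care.
import random

random.seed(0)

def make_patient(group):
    illness = random.uniform(0, 10)        # true health need: same distribution in both groups
    access = 1.0 if group == "A" else 0.5  # assumed access gap: group B spends less per unit illness
    cost = illness * access + random.uniform(0, 1)  # observed spending, the proxy being ranked on
    return {"group": group, "illness": illness, "cost": cost}

patients = [make_patient("A") for _ in range(1000)] + [make_patient("B") for _ in range(1000)]

def share_of_b(key):
    # Select the top 20 percent of patients by the given score, as a
    # care-management program might, and report group B's share.
    top = sorted(patients, key=lambda p: p[key], reverse=True)[:400]
    return sum(p["group"] == "B" for p in top) / len(top)

print(f"group B share when ranked by cost proxy:   {share_of_b('cost'):.2f}")
print(f"group B share when ranked by true illness: {share_of_b('illness'):.2f}")
```

Ranking by true illness selects both groups roughly equally, while ranking by the cost proxy selects far fewer group B patients, despite the proxy containing no explicit group variable.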
The use of algorithms as a technological diagnostic tool was meant to help lower the nation’s healthcare costs by helping medical providers keep people well.
However, as the Post notes, if a system is already historically biased, it’s easy for a new technological tool to inherit those biases.
“I am struck by how many people still think that racism always has to be intentional and fueled by malice,” Ruha Benjamin, an associate professor of African American studies at Princeton University, told the Post. “They don’t want to admit the racist effects of technology unless they can pinpoint the bigoted boogeyman behind the screen.”
Benjamin, according to the Post:
drew a parallel to the way Henrietta Lacks, a young African American mother with cervical cancer, was treated by the medical system. Lacks is well known now because her cancer cells, taken without her consent, are used throughout modern biomedical research. She was treated in the Negro wing of Johns Hopkins Hospital in an era when hospitals were segregated. Imagine if today, Benjamin wrote in an accompanying article, Lacks were “digitally triaged” with an algorithm that didn’t explicitly take into account her race but underestimated her sickness because it was using data that reflected historical bias to project her future needs. Such racism, though not driven by a hateful ideology, could have the same result as earlier segregation and substandard care.
Indeed, researchers in the Science study said they didn’t think the Optum algorithm was intentionally racist; rather, its design produced a racially disparate impact. They also said they doubted the Optum algorithm was the only one with such a disparate impact.
Optum told the Journal that its diagnostic tool isn’t meant to be used alone:
Optum advises its customers that its predictive algorithms shouldn’t replace physician judgment, a company spokesman said. Efforts to use analytics in health care have only scratched the surface of their potential and should be continually reviewed and refined, he said.