On October 25, 2019, the New York State Department of Financial Services (DFS) and Department of Health (DOH) jointly sent a letter to UnitedHealth Group, Inc. (UnitedHealth) calling on the company to address its use of an algorithm in making health care decisions, after a recent study showed the algorithm may have a racially discriminatory impact.
Specifically, researchers Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan published an article in the journal Science concerning “Impact Pro,” an algorithm UnitedHealth has used to identify patients who should receive “high risk care management,” a service for patients with complex health care needs. According to one source, UnitedHealth licenses this algorithm to hospitals. The Science article describes how one metric the algorithm uses to determine eligibility for the program is the cost of patients’ previous health care. Yet, as the article explains, black patients typically spend less money on health care, in part because of historic barriers to access due to poverty and in part because of historic distrust of doctors. The article concludes that, because of these systemic problems with reliance on historic cost expenditures as an eligibility metric, two patients with the same illness and complexity of care, one white and one black, could be treated differently when being considered for enrollment in the high risk care management program.
In the wake of this article, DFS and DOH sent a letter to UnitedHealth, calling on the company to act. The letter stated that the New York Insurance Law, the New York Human Rights Law, the New York General Business Law, and the federal Civil Rights Act all prohibit discrimination against protected classes of individuals. As described in the letter, that prohibition on discrimination by insurers holds true “irrespective of whether they themselves are collecting data and directly underwriting consumers, or using and developing algorithms or predictive models that are intended to be partial or full substitutes for direct underwriting.” Therefore, it stated, neither UnitedHealth nor any other insurance company may “produce, rely on, or promote an algorithm that has a discriminatory effect.” The letter went on to say that the bias against black patients in the health care system makes such discrimination particularly troubling, such that the algorithm “effectively codif[ies] racial discrimination.” Such an outcome, the letter stated, “has no place in New York or elsewhere.” Therefore, the state agencies called on UnitedHealth to immediately investigate the racial impact of the algorithm, and to cease using it (or any other algorithm) “if [the company] cannot demonstrate that it does not rely on racial biases or perpetuate racially disparate impacts.”
In a statement to Forbes.com in connection with a story about the algorithm, UnitedHealth said that the algorithm “was highly predictive of cost, which is what it was designed to do” and that gaps in the algorithm, “often caused by social determinants of care and other socio-economic factors, can then be addressed by the health systems and doctor to ensure people, especially in underserved populations, get effective, individualized care.”
As this letter demonstrates, regulators continue to focus their attention on the use of algorithms in making consumer-facing decisions, and may expect companies to affirmatively demonstrate that the algorithms they use are non-discriminatory.