Less Discriminatory Algorithms
In discussions about algorithms and discrimination, it is often assumed that machine learning techniques will identify a unique solution to any given prediction problem, such that any attempt to develop less discriminatory models will inevitably entail a tradeoff with accuracy. Contrary to this conventional wisdom, however, computer science research has established that, for a given prediction problem, many distinct models can achieve equivalent performance. This phenomenon, termed model multiplicity, suggests that when an algorithmic system displays a disparate impact, there almost always exists a less discriminatory algorithm (LDA) that performs equally well. But without a dedicated search, developers are unlikely to discover potential LDAs. These observations have profound ramifications for the legal and policy response to discriminatory algorithms. Because the overarching purpose of our civil rights laws is to remove arbitrary barriers to full participation by marginalized groups in the nation’s economic life, the law should place a duty to search for LDAs on entities that develop and deploy predictive models in domains covered by civil rights laws, such as housing, employment, and credit. The law should recognize this duty in at least two specific ways. First, under disparate impact doctrine, a defendant’s burden of justifying a model with discriminatory effects should include showing that it made a reasonable search for LDAs before implementing the model. Second, new regulatory frameworks for the governance of algorithms should include a requirement that entities search for and implement LDAs as part of the model-building process.
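To make the core claim concrete, the following is a minimal sketch of what an LDA search under model multiplicity could look like. It assumes a scikit-learn workflow; the synthetic data, the seed-based candidate set, the 0.5-point accuracy tolerance, and the selection-rate-ratio metric are all illustrative assumptions, not the paper's method.

```python
# Hypothetical LDA search sketch: train many near-equivalent models, keep
# those within a small accuracy tolerance of the best, and prefer the one
# with the least disparate impact. Data, metric, and tolerance are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, and a binary protected attribute g.
n = 5000
X = rng.normal(size=(n, 8))
g = rng.integers(0, 2, size=n)  # protected group membership (0 or 1)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.3, random_state=0
)

def selection_rate_ratio(y_pred, group):
    """Disparate impact proxy: ratio of positive-prediction rates by group
    (closer to 1.0 means less disparity)."""
    r0 = y_pred[group == 0].mean()
    r1 = y_pred[group == 1].mean()
    lo, hi = min(r0, r1), max(r0, r1)
    return lo / hi if hi > 0 else 1.0

# Model multiplicity in miniature: vary only the random seed to obtain many
# models with near-identical accuracy but potentially different disparities.
candidates = []
for seed in range(25):
    clf = RandomForestClassifier(n_estimators=50, random_state=seed)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    acc = (pred == y_te).mean()
    ratio = selection_rate_ratio(pred, g_te)
    candidates.append((acc, ratio, seed))

best_acc = max(acc for acc, _, _ in candidates)
tolerance = 0.005  # accept models within 0.5 points of the best accuracy

# Among equally performing candidates, choose the least discriminatory one.
lda = max(
    (c for c in candidates if c[0] >= best_acc - tolerance),
    key=lambda c: c[1],
)
print(f"best accuracy: {best_acc:.3f}")
print(f"chosen model: acc={lda[0]:.3f}, "
      f"selection-rate ratio={lda[1]:.3f}, seed={lda[2]}")
```

A real search would explore a richer candidate space (model classes, hyperparameters, feature sets) rather than random seeds alone, and the selection-rate ratio used here is only one of many possible disparity measures; the sketch is meant only to illustrate that, among models of equivalent accuracy, a deliberate search can surface a less discriminatory one.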