Barriers to Adopting Predictive Algorithms: A Criminal Justice Field Experiment
Artificial intelligence, machine learning, and algorithmic prediction tools have advanced rapidly in recent years. Some argue that we are on the precipice of a major revolution in our economy and society, in which eager adoption of these new technologies will transform how work is done.
This Article argues that change might come more slowly to the legal sphere than is commonly thought. We present the results of a criminal justice field experiment in which we provided novel sentence prediction software to public defenders. In some respects, the experiment was a failure: usage of the prediction software was so low that we were unable to evaluate its impact on sentencing. This is despite strong a priori expressions of interest and tests showing that our algorithm predicted sentences more accurately than the public defenders did.
However, this failure produced valuable insights about why predictive AI might face headwinds in the legal profession. Extensive interviews, a prediction “quiz,” and our empirical results yielded the following takeaways. First, attorneys set a high bar for adopting new technology, owing both to workflow inertia and to skepticism about its benefits. Second, some attorneys distrusted an algorithm that lacked information they possessed, even when the algorithm nonetheless produced more accurate predictions than their own intuition. Third, algorithm design entails challenging ethical questions that can erode users’ trust and willingness to rely on the tool. We discuss these issues in detail and suggest some possible paths forward.