AI is not a battle of humans against machines
Should the findings of an algorithm have the force of law? And if so, what sort of algorithm?
That was the topic of discussion at the Berkman-Klein Center’s event on Programming the Future of AI, held in conjunction with HUBWeek. It was a panel discussion with two major axes: one was Professor Christopher Griffin’s evaluation of the Public Safety Assessment. The PSA is a scoring algorithm that evaluates a defendant’s flight risk and risk of committing a new crime; it is now being used in two states and 29 counties, including some of the nation’s largest cities.
The other axis was Professor Margo Seltzer’s research into what she calls “transparent systems.” (For what it’s worth, I believe Prof. Seltzer was my undergrad advisor, if memory serves.) I hadn’t heard the term before, but as Prof. Seltzer explained it, a transparent system is one whose decisions are readily comprehensible to a layperson. As explicated by fellow panelist Professor Cynthia Dwork, these sound a lot like heuristic algorithms, which are, you might say, the ancient ancestors of today’s machine learning algorithms. There is probably much more to be said about transparent systems, since heuristic algorithms are generally thought to be much less reliable than well-developed machine learning models - but the panelists spent most of their time discussing heuristic algorithms.
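To make the idea concrete, here’s roughly what a transparent system’s output can look like: an ordered rule list that anyone can read top to bottom. This little Python sketch is invented for illustration - the rules, the factor names, and the data are mine, not drawn from Prof. Seltzer’s actual work.

```python
# An invented example of a "transparent" classifier: an ordered rule
# list. Each rule pairs a human-readable description with a condition
# and a prediction; the first rule that matches decides the outcome.
# None of these rules come from any real instrument.

RULES = [
    ("3 or more prior arrests",        lambda d: d["prior_arrests"] >= 3,               "high risk"),
    ("under 21 with a pending charge", lambda d: d["age"] < 21 and d["pending_charge"], "high risk"),
    ("otherwise",                      lambda d: True,                                  "low risk"),
]

def classify(defendant: dict) -> tuple[str, str]:
    """Return (prediction, rule that fired) - the explanation IS the model."""
    for description, condition, prediction in RULES:
        if condition(defendant):
            return prediction, description

print(classify({"prior_arrests": 1, "age": 19, "pending_charge": True}))
# -> ('high risk', 'under 21 with a pending charge')
```

Whether or not the prediction is right, the reason for it sits right there in the rule that fired - which is precisely what a black-box model can’t offer.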
In any case, the discussion largely centered on the question of how we are to embed AI in the law. What sorts of algorithms should be allowed, and how should they be used? Prof. Seltzer, naturally, argued in favor of transparent systems, and all the panelists enthusiastically agreed that such systems should be open source and readily auditable by the government or the public. Interestingly, while the data set and code that generated the PSA model don’t seem to be available for inspection, the model itself does seem to be rather transparent.
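To see what that transparency amounts to in practice: the PSA is, as I understand it, a points-based instrument - a handful of factual questions about the defendant’s record, each worth some number of points, summed and bucketed onto a small scale. The Python sketch below captures that shape; the factor names are loosely based on public descriptions of the PSA, but the weights and cut points are invented purely for illustration and are not the instrument’s actual values.

```python
# Illustrative sketch of a points-based pretrial risk score in the
# style of the PSA. Factor names are loosely based on public
# descriptions of the instrument; the weights and the 1-6 bucketing
# are INVENTED for illustration, not the PSA's actual values.

ILLUSTRATIVE_WEIGHTS = {
    "age_under_23": 1,
    "pending_charge_at_arrest": 1,
    "prior_misdemeanor_conviction": 1,
    "prior_felony_conviction": 1,
    "prior_failure_to_appear": 2,
    "prior_sentence_to_incarceration": 2,
}

def raw_score(factors: dict) -> int:
    """Sum the weights of whichever factors are present (truthy)."""
    return sum(w for name, w in ILLUSTRATIVE_WEIGHTS.items() if factors.get(name))

def scaled_score(raw: int) -> int:
    """Map the raw point total onto a 1-6 scale (illustrative cut points)."""
    cut_points = [0, 1, 2, 3, 5, 7]  # raw totals at which the scale steps up
    return sum(1 for c in cut_points if raw >= c)

defendant = {"age_under_23": True, "prior_failure_to_appear": True}
print(scaled_score(raw_score(defendant)))  # -> 4, a number a judge reads off a chart
```

Everything here fits on an index card, which is what makes such a model auditable in a way a deep neural network is not.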
Probably the most interesting question that went unanswered was Prof. Griffin’s opener: what is the counterfactual? If we don’t use algorithms like PSA, then do we just go back to error-prone humans making decisions by themselves? Shouldn’t we compare the outcomes of these algorithms to the outcomes we get from “raw” judicial assessments?
It’s a reasonable question, to be sure, although I think it rests on a common and flawed assumption. AI is not a technology that pits humans against machines, much though it may be conceived and portrayed that way. It’s a technology that pits one group of people against another - much like every other technology that’s preceded it.
The question posed by the PSA, and other algorithms like it, is not really “is this algorithm better than a person at evaluating a defendant”, because judicial decisions are never really made by a single person in the first place. The judicial system is a complex institution, and a whole host of procedures precede any pretrial hearing. The judge may be the person who makes the pivotal decision, but that decision is only the most visible, crystallizing moment in a very complex series of events.
The same could be true of any other situation in which you care to consider the involvement of AI. Consider face recognition technology: there’s a raging debate about whether it should be used to find suspected criminals in a crowd. When we think of this debate we tend to think of the pivotal moment - a security guard behind a desk sees a flashing red light on a monitor, indicating a suspected criminal somewhere in the area; the guard stands up, apprehends the suspect, and makes an arrest. Maybe you think of this moment as a wonderful triumph for law-abiding citizens, or maybe you are horrified at the prospect of law enforcement arresting someone who has done nothing wrong. Either way, you must admit: this hypothetical drama is preceded by a whole host of other events, not the least of which is the decision, by some law enforcement agency or another, to focus on the problem of suspected criminals who may or may not be passing by.
To return to the counterfactual, the best answer is probably this one: “the judicial system would evolve in a slightly different way to meet the challenge of pretrial assessments, and that way might be better than the current path, or it might be worse.” Without the PSA, other measures to address the pretrial assessment problem would surely arise; some would be technological in nature, some would involve institutional reform, and so forth - and it’s nearly impossible to assess whether they’d be better or worse than the PSA. The PSA is a tool within the judicial system, not some pseudo-sentient being.
The larger point is that AI is a technology deployed by one set of people to control or regulate their interactions with another set of people. That is true whether we’re talking about the PSA, the security guard wielding facial recognition software, or the postal clerk using OCR software to turn illegible handwriting into zip codes.
At the end of the day, AI is no more inherently good or bad than radar or lasers. What makes technology good or bad is the way it’s used, and the way in which it regulates interaction between one group of people and another - even lasers have some potentially frightening applications. Despite decades of robots-taking-over fantasies in the movies, there is nothing special about AI.
The panelists universally agreed that the PSA’s scores should not be determinative - that judges should retain the leeway to find for or against the defendant, even in contravention of the PSA’s scores. Perhaps, they suggested, a judge who has a fuller picture than the narrow lens of the PSA would be more likely to show mercy.
That’s all to the good, but of course there are plenty of vindictive judges as well, and in their hands the PSA may well be an instrument for condemning more and more suspects to involuntary confinement. In short, the use of AI, or really any technology, requires regulation - norms and rules that limit how that technology may be used. More than that, we can’t simply put technology in the hands of powerful people and hope for the best - we also have to make sure that those people have the values necessary to use that technology well.
Image courtesy of Joseph Chan