Slow Down the Machine Police

Suppose an intelligent machine deems you guilty of a crime. Suppose the police were to treat the machine’s judgment as evidence of your guilt. Would it matter that you are actually innocent?

This hypothetical was once a plot device of dystopian novels and films. As law enforcement agencies increasingly rely on traffic cameras, cell phone data, and other information technologies, we should take care that fiction does not become reality.

We should deliberate especially about the use of artificial intelligence, or “AI,” a category of machine functions that mimic human intelligence to solve information-rich problems. Technologies that have emerged under the name “AI” enable machines to read license plates and facial features, match persons and items to profiles, and correlate actions with likely outcomes.

The time to deliberate about uses of AI has arrived. A bill filed in the Alabama Senate for the 2021 session would authorize law enforcement agencies to employ some applications of information technologies along public highways. Although the bill places some limits on the use and storage of data, it does not prohibit officers from using output data—the results of machine computations—as evidence to establish probable cause for searches and seizures or to prove guilt. Machines extrapolate from data that do not establish guilt or innocence. So the specter of profiling hangs over the measure.

According to an Alabama Daily News report, the bill’s sponsor has promised to file another bill that would forbid law enforcement agencies to use “the results of artificial intelligence or a facial recognition service as the sole basis to establish probable cause in a criminal investigation or to make an arrest.” The logic of that promised bill would as readily justify a general ban on the use of AI output data as evidence (while allowing police to use those data in investigations).

AI certainly has potential uses in law enforcement. And the benefits of particular applications may outweigh the risks of abuse. But anyone proposing to authorize new uses should bear the burden of persuading their fellow citizens that the gains for potential victims will be great and the threats to civil liberties benign.

Above all, we must preserve the presumption of innocence. To be American is to believe the best about each other, not the worst, and to remain at liberty so long as we are not proven to have committed criminally culpable acts.

The presumption of innocence is one of the great achievements of Anglo-American jurisprudence. It is arguably the most fundamental safeguard of liberty and legal justice. But it remains vulnerable. Those who want to engineer a perfect society are often impatient with the slow pace and limited reach of our criminal justice institutions. It took Western jurists centuries to achieve the presumption of innocence. Recent history shows that it can be lost in a matter of months, not only under totalitarian regimes but also in republics and democracies.

The presumption has three parts. First, no accused person may be punished unless he committed an act that the law declared culpable. Second, he must have acted with a culpable intention, not by mistake or for an innocent purpose. Third, the government must prove beyond a reasonable doubt both the wrong act and the culpable intention in a legal proceeding that satisfies all the requirements of due process.

Machine-based policing threatens all three elements. AI algorithms are used to predict likely outcomes, as if actions and outcomes were inevitable. But unlike machines, human beings have free choice. We should not be presumed to have acted wrongly on the basis of a prediction.

Because machines process only verifiable data, they cannot judge human intentions. Unlike machines, humans can tell the difference between intending to cause some bad outcome and accepting that bad outcomes happen in an imperfect world, even when people act with pure intentions. Machines cannot judge the culpability of human action because machines are not moral agents, as we are.

Finally, we cannot always examine machine computations for constitutional violations as we do the decisions of human officers. AI algorithms are generally trade secrets. Unlike patented technologies, which are described in public filings, trade secrets are not open to public inspection. To the contrary, companies that own commercially valuable algorithms go to great lengths to cloak them from view.

In the 2002 movie Minority Report, law enforcement officers employed mutant psychics to predict future crimes, then arrested the would-be criminals before they could act. The movie raised a troubling question: Should we be willing to give up the presumption of innocence, due process, and other fundamental rights if, by sacrificing those rights, we could create a crime-free society?

To a free people who love the rule of law more than utopian ideals, the answer to that question is obvious. The difficult question is whether we are still that kind of people.

Adam J. MacLeod is Professorial Fellow of the Alabama Policy Institute and Professor of Law at Faulkner University, Jones School of Law. He is a prolific writer and his latest book, The Age of Selfies: Reasoning About Rights When the Stakes Are Personal, is available on Amazon.