Artificial Intelligence for Federal Prisons?
Artificial intelligence (AI) could play a major role, positive or perhaps dystopian, in federal prison reform. Consider Phys.org's report, Tool to predict recidivism in federal inmates could make more prisoners eligible for early release (April 30, 2021):
Using PATTERN, roughly half the population sampled was identified as being immediately eligible for early-release time credits. A substantial portion of the other half could become eligible if they participated in re-entry programs, did not incur infractions, and exhibited positive behavior change when reassessed, the study found. In all, almost 72 percent of men and about 96 percent of women could be eligible for early-release credits during the course of their incarceration, according to the study’s findings.
A more critical view of current (and expanded) A.I. use in the criminal justice system comes from AI is sending people to jail—and getting it wrong (MIT Technology Review, January 21, 2019):
Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the US have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible. This is where the AI part of our story begins. …
But the most controversial tool by far comes after police have made an arrest. Say hello to criminal risk assessment algorithms.
Risk assessment tools are designed to do one thing: take in the details of a defendant’s profile and spit out a recidivism score—a single number estimating the likelihood that he or she will reoffend.
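Stripped of vendor branding, the mechanics described above are simple. Here is a minimal, purely illustrative sketch of such a scoring function, assuming a logistic model over a handful of profile features; the feature names, weights, and cutoffs are hypothetical and are not PATTERN's or any real tool's model:

```python
import math

# Hypothetical weights for illustration only -- not PATTERN or any vendor's model.
WEIGHTS = {
    "prior_convictions": 0.35,
    "age_at_release": -0.04,
    "infractions_last_year": 0.25,
    "completed_programs": -0.30,
}
INTERCEPT = -1.0

def recidivism_score(profile: dict) -> float:
    """Return a probability-like score in [0, 1] from a defendant profile."""
    z = INTERCEPT + sum(WEIGHTS[k] * profile.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

def risk_band(score: float) -> str:
    """Map the single number onto the coarse bands decision-makers actually see."""
    if score < 0.3:
        return "low"
    if score < 0.6:
        return "medium"
    return "high"

profile = {
    "prior_convictions": 2,
    "age_at_release": 29,
    "infractions_last_year": 1,
    "completed_programs": 3,
}
score = recidivism_score(profile)
print(f"score={score:.2f}, band={risk_band(score)}")
```

The point of the sketch is how much weight the cutoffs carry: move one threshold and the same person lands in a different band, with very different consequences.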
WILL ARTIFICIAL INTELLIGENCE HELP IMPROVE PRISONS? (Pacific Standard, May 3, 2019) looks at the rollout of A.I. at prisons in Hong Kong and China.
In mainland China, the government is finishing up construction on a new “smart” surveillance system in Yancheng Prison that aims to monitor every one of their high-profile inmates in real time via networked hidden cameras and sensors placed in every cell. According to a report in the South China Morning Post, the network will stream the data it collects to “a fast, AI-powered computer that is able to recognize, track, and monitor every inmate around the clock” and, “At the end of each day, generate a comprehensive report, including behavioral analysis, on each prisoner using different AI functions such as facial identification and movement analysis.” Like in Hong Kong, these systems are also designed to flag suspicious behavior and alert human guards when it finds any activity it registers as abnormal. An employee at Tiandy Technologies, the surveillance tech company that helped develop the surveillance system, claimed that with the new technology, “prison breaks will be history,” and suggested that unethical behavior from guards, such as taking bribes, might become a thing of the past.
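The "flag suspicious behavior and alert human guards" loop those reports describe can be pictured with a toy sketch like the one below, which scores per-inmate movement events against a per-inmate baseline. Every field name, threshold, and number here is invented for illustration; it is not how the Yancheng system or Tiandy's software actually works:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Toy event stream: (inmate_id, hour_of_day, movement_level). All values hypothetical.
events = [
    ("A101", h, level)
    for h, level in [(0, 1), (1, 1), (2, 0), (3, 1), (4, 9), (5, 1)]
]

def build_baseline(history):
    """Per-inmate mean/stdev of movement: a crude model of 'normal behavior'."""
    by_inmate = defaultdict(list)
    for inmate, _, level in history:
        by_inmate[inmate].append(level)
    return {i: (mean(v), pstdev(v) or 1.0) for i, v in by_inmate.items()}

def flag_abnormal(event, baseline, z_threshold=2.0):
    """Alert a human guard when an event deviates strongly from the baseline."""
    inmate, hour, level = event
    mu, sigma = baseline.get(inmate, (0.0, 1.0))
    z = (level - mu) / sigma
    if abs(z) > z_threshold:
        print(f"ALERT: inmate {inmate}, hour {hour}: movement {level} (z={z:.1f})")

baseline = build_baseline(events[:4])   # "learn" normal behavior from earlier hours
for ev in events[4:]:                   # score the rest in (pretend) real time
    flag_abnormal(ev, baseline)
```

Even in this toy form, the design question is visible: whoever defines "normal" and sets the threshold decides what the guards are told to see, and what they never see.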
U.S. and Chinese governments plan wider deployment of A.I. tools
THE PANOPTICON IS ALREADY HERE
Xi Jinping is using artificial intelligence to enhance his government’s totalitarian control—and he’s exporting this technology to regimes around the globe. (The Atlantic, September 2020):
The [Institute of Automation] is a basic research facility. Its computer scientists inquire into artificial intelligence’s fundamental mysteries. Their more practical innovations—iris recognition, cloud-based speech synthesis—are spun off to Chinese tech giants, AI start-ups, and, in some cases, the People’s Liberation Army.
Back in the U.S., the National Institute of Justice (NIJ) outlines its A.I. programs and plans in Tapping Into Artificial Intelligence: Advanced Technology to Prevent Crime and Support Reentry (NIJ, August 6, 2020). Although A.I. tools for criminal justice are now deployed mostly to “help community supervision staff triage their limited resources to focus on offenders most in need,” NIJ is looking to expand A.I. deployment:
The National Institute of Justice (NIJ), the research and development agency within the U.S. Department of Justice, is seeking to expand the use of AI beyond structured risk assessments. The applications NIJ plans could use machine-learning algorithms to provide real-time guidance to community supervision officers and to intervene with offenders in periods of crisis. The precision of machine learning, coupled with the latest mobile communications and wearable technology, can give community supervision officers the ability to identify those most at risk and tailor timely interventions, thus preventing recidivism in real time. …
In fiscal year 2019, NIJ requested proposals from researchers to develop AI tools to assist community supervision officers and prevent recidivism.
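What "triaging limited resources" looks like computationally is mundane: rank a caseload by model score, surface the top of the list, and bump anyone whose wearable or mobile data trips a crisis flag. The sketch below is only a guess at that workflow; the scores, IDs, and the crisis flag are invented for illustration and are not NIJ's actual system:

```python
# Hypothetical caseload: (person_id, model_risk_score, latest_wearable_crisis_flag).
caseload = [
    ("P-103", 0.22, False),
    ("P-117", 0.81, False),
    ("P-142", 0.47, True),   # wearable signal suggests a possible crisis
    ("P-150", 0.64, False),
]

def triage(cases, top_n=2):
    """Surface the highest-priority cases: crisis flags first, then by risk score."""
    ranked = sorted(cases, key=lambda c: (not c[2], -c[1]))
    return ranked[:top_n]

for person_id, score, crisis in triage(caseload):
    reason = "crisis signal" if crisis else f"risk score {score:.2f}"
    print(f"check in with {person_id}: {reason}")
```

The hard questions are not in code like this; they are in where the scores come from and who audits the decisions the ranking quietly makes.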
Is there a Minority Report for the National Institute of Justice’s Artificial Intelligence…Prevent Crime and Support Reentry?
In the Tom Cruise movie Minority Report, the program director (Max von Sydow as Precrime founder Lamar Burgess) suppressed evidence of an early murder. Precrime did stop crimes, but it wasn’t perfect. Thinking he was protecting “the greater good” and his program, Lamar Burgess would do anything to cover up its mistakes. As with the mounting deaths under early communism, the justification was, “in order to make an omelette, you first have to break a few eggs.”
Minority Report was based on a short story by Philip K. Dick, though the story was changed for the movie. See Minority Report: How the Movie’s Story is Different From the Book (ScreenRant, April 3, 2020):
The third and final minority report is that John will learn about the second minority report (in which he does not kill Kaplan) and change his mind again, choosing to kill Kaplan in order to save Precrime. This is ultimately what happens…. Witwer doesn’t die in Philip K. Dick’s story. Instead he gets his wish and takes over John’s job, after John is sent into exile on a colony planet as punishment for murdering Kaplan. Witwer’s new job comes with a concerning caveat, though; he inherits a system that he knows to be flawed and vulnerable to manipulation, and John’s parting words are to warn Witwer that he could easily be set up in just the same way that John was.
Artificial intelligence systems could better protect federal prisoners from each other and from prison guards, and better protect prison guards from prisoners. Pervasive surveillance would generate far more footage than human guards can monitor, and that is where A.I. tools could help. But A.I. tools could also be coded to suppress footage of bribes and violence. Without transparent A.I. tools and independent third-party organizations to audit them, prison officials and private contractors will have strong incentives to hide mistakes and create real-world Minority Reports.