Using “Risk-assessment” algorithms to help determine sentencing




The Pennsylvania judicial system is one of many state systems considering statistically driven tools to help determine how much prison time individuals found guilty of crimes should receive. The state spends $2 billion a year on its corrections system, more than 7 percent of the total state budget, up from less than 2 percent 30 years ago. Further, recidivism rates (the tendency of a convicted criminal to re-offend) remain high: 1 in 3 inmates is arrested again or re-incarcerated within a year of release. By properly distinguishing high-, medium-, and low-risk offenders, the system has an opportunity to calibrate sentencing and optimize the operations of its correctional facilities. In theory, risk assessment tools could lead to both less incarceration and less crime.




The available risk assessment tools assign points to variables (such as age, gender, income, drug use, and previous convictions) that have been shown to be strong indicators of criminal behavior in historical data. Social scientists developed these indicators by following former prisoners for a number of years, examining the facts of their lives to understand their propensity for repeat criminal activity. Many court systems already use the tools to guide decisions about which prisoners to release on parole, and risk assessments are becoming increasingly popular as a way to help set bail for inmates awaiting trial. Better data-driven judgments in criminal sentencing should ultimately help these systems reduce costs.
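To make the mechanism concrete, here is a minimal sketch of how such a point-based instrument might work. The variables, weights, and band thresholds are entirely hypothetical, invented for illustration; they are not drawn from COMPAS or any real tool.

```python
# Hypothetical point-based risk instrument: each factor that historical data
# flags as a recidivism indicator contributes weighted points to a total score.
RISK_WEIGHTS = {
    "prior_convictions": 3,  # points per prior conviction
    "age_under_25": 4,       # flat addition for young offenders
    "drug_history": 2,
    "unemployed": 1,
}

def risk_score(offender: dict) -> int:
    """Sum weighted points for each risk factor present in the record."""
    score = offender.get("prior_convictions", 0) * RISK_WEIGHTS["prior_convictions"]
    if offender.get("age", 99) < 25:
        score += RISK_WEIGHTS["age_under_25"]
    if offender.get("drug_history"):
        score += RISK_WEIGHTS["drug_history"]
    if offender.get("unemployed"):
        score += RISK_WEIGHTS["unemployed"]
    return score

def risk_band(score: int) -> str:
    """Map a raw score to the high / medium / low bands used in sentencing."""
    if score >= 10:
        return "high"
    if score >= 5:
        return "medium"
    return "low"
```

A record with two prior convictions, age 22, and a drug history would score 6 + 4 + 2 = 12 and land in the "high" band; the real debate is over which variables are legitimate inputs at all.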


Commercial promise and challenges:



The main value proposition is that adding an algorithm-based component to the judicial decision-making process helps many stakeholders across the value chain:


  • Reduced risk of individual bias affecting judgments
  • Increased efficiency, reducing trial and bail time, which benefits both judges and defendants
  • Reduced costs, which will enable better allocation of taxpayer money


While humans inherently rely on biased personal experience to guide their judgments, empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates or reduce jail populations by up to 42 percent with no increase in crime rates. Importantly, these gains can be made across the board, including for underrepresented groups like Hispanics and African-Americans.


On the other side, this approach faces challenges at the individual level: the prediction is a probability derived from similar offenders in the past, and it influences the offender's sentence even though his or her future could differ, making that person an outlier to the statistics. To minimize these errors, we need to know whether the system will have enough variables to assess each individual as accurately as possible, and whether these tools will supplement the judge's decision rather than replace it.



Even though a sizable share of the agencies and organizations using AI systems in criminal justice reform are governmental bodies, the algorithms and software they use are privately owned. Due to the nascent nature of this industry, competition may either be among private companies trying to develop more efficient and fair algorithms, or there may be competition from an altogether different process, such as a community-based open-source AI project. A report from the Brookings Institution highlights the success of programs such as Google's TensorFlow and Microsoft's DMTK as proof.


Proposed alterations:


The risk-assessment tools should integrate easily with the existing software and processes of the justice system. While companies will want to use a 'black box' model that keeps their algorithms confidential, that opacity can invite legal challenges such as the 'Loomis v. Wisconsin' case (Wired). Thus, we would emphasize an open-source solution with data security prioritized.


Another difficult question in building the model is how to tease out factors that are strong predictors without reintroducing the biases based on race and socioeconomic status (SES) that are prevalent in the current judicial system and are socially deemed unfair.
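One common way to audit a model for this kind of bias is a disparate-impact check such as the "four-fifths rule": compare the rate at which each group receives a favorable outcome (here, a low-risk classification) against a reference group, and flag ratios below 0.8. The sketch below assumes toy data and is only one of several fairness criteria, not a complete audit.

```python
# Hypothetical disparate-impact audit of risk-band outputs. A group whose
# low-risk rate is less than 80% of the reference group's rate is flagged.
from collections import defaultdict

def low_risk_rates(assessments):
    """assessments: list of (group, band) pairs; returns per-group low-risk rate."""
    totals, lows = defaultdict(int), defaultdict(int)
    for group, band in assessments:
        totals[group] += 1
        if band == "low":
            lows[group] += 1
    return {g: lows[g] / totals[g] for g in totals}

def disparate_impact(assessments, reference_group):
    """Ratio of each group's low-risk rate to the reference group's rate."""
    rates = low_risk_rates(assessments)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}
```

If group B is classified low-risk half as often as group A, its ratio is 0.5, well under the 0.8 threshold, signaling that some input variable may be acting as a proxy for a protected attribute.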


Lastly, these tools could be enhanced by factoring an inmate's behavior in jail into subsequent assessments, mitigating mistakes when they happen. If the data show that an inmate who demonstrates good behavior during a sentence is unlikely to re-offend, then incorporating that signal yields a more efficient system that is as fair as possible while still meeting our goal of reducing the costs of the corrections system.



Team Members:

Mohammed Alrabiah

Tuneer De

Mikhail Uvarov

Colin Ambler

Lindsay Hanson
