Should the justice system, broadly understood (courts, police, the penitentiary system, and related government agencies), be banned by law from collecting Big Data and using Artificial Intelligence?

Back in 2013, I was part of a data analytics project for the State Police. On April Fools' Day we received a hilarious hoax: an obviously fake internal announcement that police analytics could now predict crimes before they actually happen, just like in Philip K. Dick's story The Minority Report. Five years ago, the joke was still funny.

A few facts (2018). West Midlands Police (UK), in this project, are going to flag individuals likely to commit crime and schedule them for preemptive interventions. Chinese authorities already detain people in Xinjiang today thanks to predictive big data analysis, according to this Human Rights Watch story. In the United States, risk assessment algorithms such as COMPAS are commonly used to assist judges and to inform preemptive detention decisions, as discussed by Wired earlier this year. Many more examples could be cited worldwide.

There is an ongoing public debate on these issues. Besides the articles quoted above (all of which express some concern), there is discussion about possible bias in crime-predicting software, as expressed in this Smithsonian article. Substantial research is being done on fairness in machine learning (lately summarized in this excellent TWiML podcast interview with Richard Zemel). Others promote the auditability and transparency of any such software, an example being Cathy O'Neil's book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. And regarding the UK police project mentioned above, here is a follow-up ethical discussion, commissioned, by the way, by the police themselves.

Shouldn't police and courts simply be banned by law from using AI? Here I do not mean data protection in the sense of the General Data Protection Regulation, but automated reasoning about citizens in the sphere of justice in general. Respecting the fact that many others have done substantial work on the subject, here are a few additional thoughts that come to my mind.

  1. The idea of a government (known as Big Brother) mass-controlling individuals through the use of data is not revolutionary. It was here long before computers and Big Data came into being, most notably in totalitarian regimes such as the former Soviet Union, North Korea and the former East Germany, a country of 16 million that, by some estimates, employed 2 million informers for the Stasi secret police. In contrast, the concept has never been well received in Western democratic societies, where human rights and personal freedom are considered central values (lately embodied in the European GDPR).
  2. In many countries, the public sector (including central government agencies, police, prisons and the courts) happens to be ineffective, prone to corruption, and suffering from a lack of adequate oversight, a lack of transparency, and fuzzy personal responsibility of officials or even judges. Hence the international voices advocating the reduction of central bureaucracy: since we are unable to form efficient, just, transparent, auditable and accountable governments, the least we could do is limit their resources (such as the head count and funding of ministries, agencies and various initiatives) to a minimum. Following this logic, governments should not fund Artificial Intelligence projects out of our tax money.
  3. The particular problem I see with crime-prevention algorithms is that the people employed to define or influence the rules of those algorithms (middle-class, state-employed officials) are not likely to belong to the groups the algorithm identifies as high-risk (low income, poor neighborhoods, immigrants, dysfunctional families). If a policy maker is allowed to discriminate against a group he or she does not belong to, this can lead to all sorts of abuses.
  4. The idea of a judge using an AI assistant to assess defendants is even more frightening. In countries where the judicial system suffers from inadequate funding, often employing mediocre, badly paid individuals permanently pressed for time, this is a scary concept. For instance, I could easily see a judge fully relying on an automatic recommendation, for lack of a better idea. The judge might not even be aware of the need to validate the algorithm, check the quality of the training data, and assess the certainty of a recommendation.
  5. In contrast, the idea of using AI to identify individuals who need social support or counselling, and passing this on to nonprofits and local community support groups, might seem interesting. Someone could do this. However, the government or the police just do not seem the right entities for the job, or for access to the information in question.
  6. This could lead to the quick conclusion that Artificial Intelligence should be banned from the justice system. However, I could put forward a counter-argument. If the courts, police and intelligence services are stripped of the right to use advanced Machine Learning, they will become handicapped in information gathering when confronting other entities. The police will start losing the battle against organized crime groups, which will certainly be equipped with the most advanced technology, and will be unable to fight cybercrime. Governments will not stand up in court against multinational corporations. Counterintelligence will not prevent foreign powers from meddling in elections.
  7. To strengthen this argument even further, one could say that Big Data in the hands of a large multinational corporation is more dangerous than in the hands of a government.
  8. If we cannot stop the inevitable, we can certainly aim to control it. Artificial Intelligence software used by the justice system should be transparent, auditable and perhaps open source. Here I was inspired by Cathy O'Neil, lately interviewed by DataCamp's Hugo Bowne-Anderson, where she explores these ideas in much greater depth.
  9. Taking yet another angle: decisions made today by judges, police officers and state officials are biased already, not because of technology but because of human nature. It is entertaining to imagine that future AI systems could actually be less biased, and thus fairer, than today's human-driven decisions. Some work here is already in progress. For instance, as explained by Richard Zemel, one can build alternative representations of the data consumed by an AI, obfuscating the source data in such a way that bias against a given categorical variable (such as age, gender or ethnicity) becomes impossible (a minimal sketch of this idea follows the list).
  10. The growing debate on fairness in Machine Learning also brings some hope of reaching interesting perspectives which, perhaps, could not be reached by unassisted humans. There are several definitions of fairness in the ML community; one of them tells the algorithm to treat different groups equally (the second sketch below illustrates this). Unfortunately, human judges do not do very well in this respect. For example, the outcome of a divorce case may correlate with the gender of the judge. Could AI improve on this in some distant future? Richard Zemel thinks it could, proposing a workflow in which a machine could outrun, or preempt, the ruling of a downstream human decision maker known to be discriminatory against a particular group.
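As a side note to point 9, here is a minimal sketch of the intuition behind fair representations. It is not Zemel's actual algorithm: the "representation" below is a toy (I simply drop the one leaky feature), and all data are synthetic, invented for illustration. What it does show is the audit behind the idea: if a probe classifier cannot recover the protected attribute from a representation any better than chance, then downstream models trained on that representation cannot discriminate on that attribute.

```python
# Toy illustration of the fair-representation idea (not Zemel et al.'s
# actual method): a representation is "fair" with respect to a protected
# attribute if the attribute cannot be recovered from it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n, d = 2000, 8
protected = rng.integers(0, 2, size=n)   # hypothetical binary group label
raw = rng.normal(size=(n, d))
raw[:, 0] += protected                   # the raw features leak the group

# Stand-in for a learned fair representation: here we simply drop the
# leaky feature; in practice the mapping would be learned from data.
obfuscated = raw[:, 1:]

for name, X in [("raw features", raw), ("obfuscated representation", obfuscated)]:
    probe = LogisticRegression(max_iter=1000)
    acc = cross_val_score(probe, X, protected, cv=5).mean()
    print(f"{name}: protected attribute recovered with accuracy {acc:.2f}")

# Chance level is 0.50: the raw features score clearly above it, while the
# obfuscated representation should land close to it.
```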
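And a second sketch, for point 10: demographic parity, one common group-fairness definition, simply asks whether the rate of positive decisions is the same across groups. The decisions and group labels below are invented toy numbers.

```python
# Demographic parity: the rate of positive decisions (e.g. "high risk")
# should be (approximately) equal across groups. Toy numbers only.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0,  0, 1, 0, 1, 0])  # 1 = "high risk"
group     = np.array([0, 0, 0, 0, 0,  1, 1, 1, 1, 1])  # two groups of five

rates = {g: decisions[group == g].mean() for g in (0, 1)}
for g, r in rates.items():
    print(f"group {g}: positive decision rate {r:.2f}")  # 0.60 vs. 0.40

gap = abs(rates[0] - rates[1])
print(f"demographic parity gap: {gap:.2f}")              # 0.20
```

A judge with a persistent gap like this would be exactly the kind of downstream decision maker Zemel's proposed workflow aims to preempt.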

Thinking of the current and future use of AI in justice, I am far from holding any strong opinion. My thoughts expressed above revolve around a number of strong short-term concerns and some cautious long-term hopes. In general, it looks to me that a stronger push for commonplace data literacy would be a good thing. We all need better education and insight here. Lawmakers, judges, police officers and jury members will do a better job of using (or not using) AI assistants if they grasp the threats and challenges associated with data-based judgements.

