At first, lawyers seemed to be the ones benefiting from machine intelligence, using the technology to strengthen their clients’ cases and identify the weak points of the opposing side. But more recently, artificial intelligence has found its way right into the courtroom, sitting next to the judge as a key adviser.
In fact, in the near future it’s possible that technologies like recently developed mind-reading tools will be used to detect lies in courtrooms. But the big question is this: now that AI is slowly becoming a primary tool that even judges rely on to decide the fate of a defendant, is the technology really that trustworthy?
AI Under Scrutiny in Court
A while ago, it was assumed that artificial intelligence could never be taken to court because the technology has no morals. But it’s now clear that AI has had its day in court, not as a judge but, in a sense, as the accused. How? Toward the end of last year, the ACLU filed an amicus brief that sought to dissect a controversy over the use of algorithms and AI in criminal law.
This arose from a case in which Billy Ray Johnson was tried on allegations of sexual assault and burglary. The defendant denied committing the crimes, but using an algorithm called TrueAllele, the prosecutors secured a sentence of life imprisonment without parole.
Of course, most people would deny such charges even when guilty. Nonetheless, it is unethical to presume a defendant guilty without a thorough trial and validation of the evidence. There should be tangible, verifiable proof before declaring a person guilty – because this is about a citizen’s freedom, not just punishing a crime.
But in Johnson’s case, the results TrueAllele returned became the major point of reference in deciding his fate – and his attorneys claim they were denied access to the algorithm’s source code, which would have let them examine it for biased inputs or flawed logic. The prosecutors managed to convince the judges that the algorithm remains protected as a trade secret.
No Technology Is Foolproof
While machine intelligence was once seen as a great tool to help unclog the justice system from allegations and complaints of unfairness, experts have lately voiced concerns about why AI needs slow and careful implementation in the justice system.
The decision to sentence Johnson to life based on the machine’s output raised a serious red flag for experts. “No technology is foolproof,” said Jessica Gabel Cino, an associate professor of law and dean of academic affairs.
In simple terms, these systems are developed by humans, who make errors and sometimes carry biased intentions. So if a technology is to be deployed with the potential to decide something as serious as a person’s freedom, it makes no sense to deny access to its source code for validation. In fact, doing so plainly dilutes the integrity of the technology.
How to Make People Trust AI in the Courtroom
Okay, there is no denying that human mistakes occur in the justice system. A judge might report to work in a bad mood after a quarrel at home, or while sick, and that can affect the verdict.
But when it comes to artificial intelligence, there is room to ensure the technology adopted in legal practice is clean and beyond accusations of bias: both the defendant and the plaintiff should have access to the source code of the deployed algorithms, so they can be confident their case was handled fairly.