Artificial Intelligence Sentences Man to 6 Years, Part II

This is blog 2, part 2 of AI Sentences Man to 6 Years in Prison…

Last time I left you, we were talking about State v. Loomis, the Wisconsin case in which artificial intelligence sentenced a man to 6 years in prison. Yes, artificial intelligence sentenced a man to 6 years in prison… I said that in this blog we'd take a deeper look into bias: not the bias you have probably come to know, but the bias in computer code and AI. Remember what the term "black box" means in relation to AI (or, if you are not familiar, go back to blog 1 to familiarize yourself); I likened it to your cell phone. Yes, you know how to turn it on, you know how to turn it off, you know how to make a phone call, but you don't know the inner workings of the algorithms that connect everything under the hood to simplify it for your use.
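
To make the black-box idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the real tool's interface and internals are not public); the point is simply that the caller sees inputs and outputs, never the reasoning in between.

```python
# Hypothetical illustration of a "black box": callers see inputs and
# outputs, but the decision logic is sealed away (here a stand-in body;
# in real life, compiled, licensed, trade-secret code).

def proprietary_risk_tool(answers: dict) -> int:
    """Stand-in for closed-source scoring logic we are not allowed
    to inspect. Returns an opaque risk score."""
    # ... trade-secret computation would live here ...
    return 9  # invented output on an invented 1-10 scale

score = proprietary_risk_tool({"age": 34, "prior_offenses": 2})
print(score)  # we can read the 9, but never the "why" behind it
```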

The same thing can be said for the judge in the State v. Loomis case when she looked at the COMPAS application she used to sentence Mr. Loomis to 6 years in prison. She only had charts, summaries, and graphs to refer to, according to what I read, and three categories under which to sentence Mr. Loomis. (I'm simplifying here, but I have a copy of the case on this site if you want to dig deeper.) The three categories the AI's sentencing recommendation could fall under were, in effect: bad, really bad, and really really bad.
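
As a back-of-the-envelope illustration of those buckets (the real scale and cutoffs are trade secrets, so every number below is invented), the categorization might look something like this:

```python
# Invented thresholds only -- the actual scale and cutoffs are a trade
# secret. This just shows how a numeric score collapses into the three
# coarse buckets described above.

def risk_category(score: int) -> str:
    """Map a hypothetical 1-10 risk score to one of three buckets."""
    if score <= 4:
        return "bad"             # i.e., low risk
    if score <= 7:
        return "really bad"      # i.e., medium risk
    return "really really bad"   # i.e., high risk

print(risk_category(3))  # bad
print(risk_category(9))  # really really bad
```

Notice how much detail a bucket throws away: in this toy version, an 8 and a 10 read identically on the judge's chart.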

Mr. Loomis came back under the really really bad category. This took him out of eligibility for probation and forced the judge to impose a 6-year prison sentence. The PROBLEM is: when the judge did this, did she violate Mr. Loomis' due process rights? Mr. Loomis appealed on that very question. The Court of Appeals certified the question, which means they said, "Hey, Supreme Court of Wisconsin, take a look at this and let us know if the judge violated Mr. Loomis' rights by using AI and not telling Mr. Loomis what the AI used to determine the sentencing recommendation" (because the private company that owns the trade secrets to this AI won't let the court or the defendant know what's in the algorithm that made the judgment call on sentencing him to 6 years).

Ultimately, the Supreme Court of Wisconsin said no, there was no violation of Mr. Loomis' due process rights. But, just in case there may have been some wrongdoing, going forward, courts that use AI in sentencing will have to warn the defendant that the court is using this AI to sentence them.

This flies in the face of precedent. What is precedent? Precedent is used by courts; it basically guides them by directing them to follow prior decisions. Typically, courts follow this case law. That's basic and fundamental to our legal system, until such a time that something of major significance needs to be overturned. The PRECEDENT in Wisconsin for sentencing and for pre-sentence investigation reports (the reports a judge uses, among other things, to determine an appropriate sentence) says that the defendant has a RIGHT to review and verify the information used to calculate and justify his or her recommended sentence. BUT in the current case, with the AI used, the Supreme Court decided that precedent doesn't apply. What? It went against its own precedent!

There are a few questions and issues that come to mind after the decision in this case. First, the company that actually created the AI was denied an amicus brief. What does that mean? An amicus brief is a filing by a "friend of the court" (literally what amicus curiae means in Latin), and it allows a company or someone outside the case [an expert] to provide information that can help the judge(s) look at facts and research they may not be aware of and make a better decision.

Ironically, there was an amicus brief that was allowed, and it was completely one-sided: totally in favor of the use of artificial intelligence in sentencing. That appears biased to me. What do you think?

Why is it biased? Let's change lanes and put our civil law hat on. In 2018 (after the 2016 ruling in Loomis), news broke that Amazon had developed an employment-screening algorithm that taught itself, YES, IT TAUGHT ITSELF, bias: bias against women. The input information the human coders fed into, let's say, a metaphorical black box taught it how to evaluate candidates, and the computations it spit out determined that male candidates were more qualified than female candidates. Amazon, wisely, ultimately pulled that software… Now back to State v. Loomis. If there's no way to know what was put in the 'black box' that determined Mr. Loomis' sentence, then how, based on Amazon's results, can anyone say there was no bias?
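
To see how a model can "teach itself" bias without anyone coding it in, here's a toy sketch in Python. None of this is Amazon's actual system (those details were never made public); it only shows the mechanism: skewed historical data in, skewed scoring out.

```python
# Toy sketch (not Amazon's actual system): a naive word-scoring "model"
# trained on skewed historical hiring data. Because past hires skew one
# way, the model learns to penalize words correlated with the
# under-hired group -- no one coded the bias in directly.

from collections import Counter

# Invented training data: (resume words, was the candidate hired?)
history = [
    ("captain chess club", True),
    ("captain debate team", True),
    ("women's chess club captain", False),
    ("women's debate team", False),
]

hired, rejected = Counter(), Counter()
for words, was_hired in history:
    (hired if was_hired else rejected).update(words.split())

def score(resume: str) -> int:
    """Higher = 'more qualified' per the learned word statistics."""
    return sum(hired[w] - rejected[w] for w in resume.split())

print(score("chess club captain"))          # positive score
print(score("women's chess club captain"))  # lower: "women's" is penalized
```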

                Let's take that further – prior to the Court of Appeals certifying the case, there was a motion for new trial, where Mr. Loomis’ appeal for this case began – at the Circuit Court. An expert testified at the Motion for New Trial hearing and made some interesting observations. First, this type of AI should not be used in sentencing or in the evaluation in the severity of sentencing - which is exactly how it was used in this case. Second, even it is used, how do you update it? And how do you know how that it was actually updated if it’s all kept secret? Also, what were the standards used to update this AI? Moreover, this AI doesn't look at the individual specifically – it just looks at groups and then it categorizes that individual into a group, not knowing if that group was affiliated with citizens of say California citizens, or Ohio citizens, or Texas and that has a huge factor on how it should develop its algorithm.

Another interesting thing about this case is that Mr. Loomis' attorneys argued neither the Confrontation Clause nor the Equal Protection Clause (and I'm not saying this would've made a difference, because I don't think it would've, but for the sake of argument, I'll expand on it a little…). The Equal Protection Clause means that we all have the same rights, and by denying Mr. Loomis access to the information used against him, it's my belief that a violation was created. What about a Confrontation Clause argument? The Confrontation Clause gives you the right, under the Constitution, to confront your accuser, and I think that any time you're going to be sentenced to 6 years, you should be able to look at the information the AI used (at least that's what the precedent would have you believe) before being sentenced.

In closing, I'll leave the above to debate, research, and Q&A; I look forward to your comments. Before I go, my quote for the day is from Muhammad Ali: "Me, we." Simple but effective. The next blog post will be discussing ROBOT BROTHELS. Yes, they exist. Coming to a town near you (I hope not). If Skynet doesn't take over before then, I look forward to the next blog. Take care.

 