Who would have thought that big businesses care about the ethical implications of their actions? It is refreshing to see companies like Axon take ethical conduct so seriously. I faced an ethical dilemma of my own recently: a payment glitch gave me a month of free Spectrum packages, and I am embarrassed to admit that I never informed them when I found out. So learning that Axon had declined substantial police contracts on ethical grounds was remarkable.
So What Exactly Happened with Axon?
Earlier this year, in April, Axon formed a special internal board called the AI & Policing Technology Ethics Board. Its purpose was to analyze the ethical implications of selling early versions of developing technologies. Things like AI and facial recognition are still considered developing technologies; they have a long way to go before they mature.
In June this year, the board released its first report. It stated that it would be unethical for Axon to continue selling facial recognition to the police, because the technology was still in its early stages and had real-world issues. In particular, the report noted that the technology is considerably less accurate on people of colour than on white faces.
How did Axon React?
On 27th June 2019, Axon released a statement: “Current face-matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board’s recommendation, Axon will not be commercializing face matching products on our body cameras at this time.”
Axon further committed to developing technologies responsibly and ethically, and said it would team up with top independent researchers to “de-bias training data and algorithms”. Axon’s CEO, Rick Smith, said in an interview with Quartz earlier this year that, at the time, other companies were continuing to trade with law enforcement agencies. Companies like Motorola were still selling facial recognition software.
You might be wondering why there is so much publicity about the ethical implications of this technology, especially where law enforcement agencies are concerned. That is largely because there have been recorded instances where the police have abused it.
Recently there was a publicized case where the police used facial recognition to apprehend a shoplifter. It was later revealed that the police had abused the software in many instances. So this raises the question: what is so wrong about the way the police use facial recognition software?
How Do the Police Abuse Facial Recognition Software?
The way facial recognition software works is that you upload a picture and the software analyses it, measuring facial features such as the distance between the eyes, the size of the nose, and even the jawline and cheekbones.
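The idea of matching faces by measurements can be sketched in a few lines of Python. This is purely illustrative: the feature values, names, and threshold below are made up, not taken from any real facial recognition product, which would extract such measurements from images with far more sophisticated models.

```python
import math

# Hypothetical feature vectors: each value stands for a normalized facial
# measurement (eye distance, nose size, jawline length, cheekbone width).
# The numbers are invented for illustration only.
suspect = [0.42, 0.31, 0.77, 0.55]
candidate_a = [0.41, 0.30, 0.78, 0.54]  # very similar measurements
candidate_b = [0.60, 0.45, 0.50, 0.70]  # clearly different measurements

def face_distance(a, b):
    """Euclidean distance between two feature vectors:
    smaller means the faces measure more alike."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(a, b, threshold=0.1):
    """Declare a match when the distance falls below the threshold
    (the 0.1 cutoff here is an arbitrary assumption)."""
    return face_distance(a, b) < threshold

print(is_match(suspect, candidate_a))  # near-identical vectors
print(is_match(suspect, candidate_b))  # dissimilar vectors
```

The key design point is that a "match" is never a yes/no fact about two faces; it is a distance compared against a tunable cutoff, which is exactly what makes the abuse described below possible.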
So let’s look at how the police caught the shoplifter mentioned above. Eyewitnesses to the incident reported that the shoplifter looked like the actor Woody Harrelson. The CCTV footage of the accused was too poor for facial recognition to work on, so the police tried something else: they uploaded a picture of the actor instead and lowered the accuracy filter in the facial recognition software. That led them to a suspect who was later arrested for petty larceny.
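Lowering the accuracy filter has a predictable effect that a small sketch makes concrete. The dictionary of similarity scores below is entirely hypothetical (not any vendor's API or real data): at a strict setting the probe photo matches nobody, but relaxing the filter manufactures "matches" out of mediocre scores.

```python
# Hypothetical similarity scores between an uploaded probe photo and
# people in a database (1.0 would mean identical). Invented numbers.
gallery_scores = {
    "person_a": 0.58,
    "person_b": 0.71,
    "person_c": 0.64,
}

def candidates(scores, min_similarity):
    """Return everyone whose score clears the similarity filter,
    sorted for stable output."""
    return sorted(name for name, s in scores.items() if s >= min_similarity)

# Strict filter: no one in the database resembles the probe enough.
print(candidates(gallery_scores, 0.90))  # []
# Lowered filter: weak resemblances now count as "matches".
print(candidates(gallery_scores, 0.60))  # ['person_b', 'person_c']
```

This is why lowering the filter is considered abuse: the software will happily return names either way, and nothing in the output distinguishes a strong match from one conjured up by a permissive threshold.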
Many companies are still selling facial recognition software to law enforcement agencies. Amazon is one of them. In fact, Amazon’s ‘Rekognition‘ software is cloud-based, as is Microsoft’s ‘Face API’. So with current Spectrum internet speeds, law enforcement agencies can easily use these services. For this, both Amazon and Microsoft are receiving criticism from AI researchers as well as from their own shareholders.