Quantum Crime

There are two types of crime. A classic crime happens regardless of whether the police were there to see it, say a burglary or fraud. And then there is quantum crime, which wouldn’t happen if there were no police to see it. I mean, the crime itself would happen, but it would never get into police data, so it’s almost as if it never happened in the first place. Examples of this are speeding, biking without a helmet, that sort of thing.

With Predictive Policing there is an AI system telling police officers where to go. The system’s objective is to send the officers to the places where crime is most likely to occur. You can see how this can easily turn out really badly: PredPol sends the officers to a particular neighborhood, the officers respond to all sorts of crimes, the data is collected, and when the system is retrained, it doubles down on sending the officers to the same place, because that’s where the crime happens.
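To make the feedback loop concrete, here is a toy simulation (my own sketch, with made-up numbers, nothing to do with PredPol’s actual model): five neighborhoods with identical underlying crime rates, a patrol that always goes wherever the most crime has been recorded, and crime that only enters the data if an officer was there to see it.

```python
# Toy sketch of the predictive-policing feedback loop (hypothetical numbers,
# not PredPol's real model): crime is only recorded where officers are sent,
# so retraining on recorded crime keeps sending them to the same place.
import random

random.seed(0)

NEIGHBORHOODS = 5
TRUE_CRIME_RATE = [0.2] * NEIGHBORHOODS   # identical underlying crime everywhere
recorded = [1] * NEIGHBORHOODS            # uniform prior before any patrols

for day in range(1000):
    # "Predict" by sending the patrol where the most crime has been recorded so far
    patrolled = recorded.index(max(recorded))
    for n in range(NEIGHBORHOODS):
        crime_happened = random.random() < TRUE_CRIME_RATE[n]
        # Quantum crime: it only gets into the data if an officer was there to see it
        if crime_happened and n == patrolled:
            recorded[n] += 1

print(recorded)
```

Run it and one neighborhood ends up with essentially all of the recorded crime, even though every neighborhood is identical by construction.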

For some reason this mechanism is called racial bias reinforcement and people are worried.

Though it is sold as “crime prediction” and officer allocation tools for under-resourced agencies, one of the reasons predictive policing tools are so controversial is the contention that they are built on and reinforce racial bias. Using data generated by unfair or questionable policing strategies to train a computer system can simply result in automation of that bias.

On one hand, this is good for people like me who break the law every day but are not the type likely to be bothered by the police. On the other hand, it’s not particularly fair to the people living in the places where the system predicts crime will happen, and you would think that with such a clear problem built into these systems, officials would put some kind of checks and evaluations in place, but no:

MuckRock has submitted requests to more than 50 agencies known to have used the technology, asking for information on their predictive policing training and use. Of those that have responded, a few have been able to send their contracts, input data, scientific papers written by PredPol’s makers, annual mayoral presentations, and even some training materials, but none have been able to provide validation studies.

There are two ways to interpret this. One is that the police never validate how the AI is working for them and their communities, and prefer to just keep paying and hoping for the best, and that’s why when you ask for the studies, they have nothing. The other interpretation is that any agency that has validated such a system has abandoned the program, like Oakland, so the only ones left to ask for studies are those who haven’t done one. I’m not sure which is the case, but it looks like there may be a bias right here in the study about bias.

Pray to Siri

Meanwhile, artificial intelligence spurs religious debate. Where else but in Seattle, the scene of Fall; or, Dodge in Hell, a recent fantasy epic about dead Seattleites being brain-scanned and simulated in a cyber world, where they reenact the Biblical book of Genesis.

Some of these ideas are quite remarkable:

It doesn’t surprise James Wellman, a University of Washington professor and chair of the Comparative Religion Program, that people of faith are interested in AI. Religious observers place their faith in an invisible agent known as God, whom they perceive as benevolent and helpful in their lives. The use of technology evokes a similar phenomenon, such as Apple’s voice assistant Siri, who listens and responds to them.

Apparently, for some people, God is like, a personal assistant? I can kind of understand that a prayer not being granted may feel like Alexa always playing a different song than the one I asked for, but the basic relationship has always been the opposite: people serving God and listening to his word, not the other way round. I guess the lesson here is to never underestimate the diversity of religion.

And then, this:

Last April, The Ethics & Religious Liberty Commission — the public-policy section of the Southern Baptist Convention — published a set of guidelines on AI adoption that affirms the dominion of humans and encourages the minimization of human biases in technology. It discourages the creation of machines that take over jobs, relegating humans to “a life of leisure” devoid of work, wrote the authors.

Machines that take over jobs, that’s all machines, isn’t it? But I don’t know, maybe in these surveys about what people think are the biggest threats to humanity, the authors should add humans being relegated to “a life of leisure” as an option, just to be sure we are not missing this.