Why we should (or should not) fear AI / by Chris Shaffer

Imagine a society with no lawyers and no formal criminal code. When a citizen is accused of wrongdoing, they are brought before the King. The King acts as judge and jury: the King decides whether they are guilty, and the King alone decides the punishment.

Doesn’t sound like a great system of government, does it?

The King is wise and fair.

Feel better now? Maybe if you’re reading a children’s story, but not if this is real life.

The King got a 1450 on his SAT.

What about now? What if it were a 1600?

The King is literally the smartest person ever to live.

Still no good…

The King has superhuman intelligence. He is not only undefeated against all of the chess grandmasters; he checkmates them using only pawns. That is how incomprehensibly smart he is.

The King’s IQ doesn’t matter… that’s missing the point.


I’m going to talk specifically about the fear of “creepy” AI being intentionally given control over aspects of our lives. There are other fears, of course — the age-old economic fear of displaced workers, the fear of weaponized AI becoming the rogue nuclear state of the next generation — but they’re off the menu for this article.

A concrete example of how AI in a position of power can go wrong: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

We fear (with evidence) that learning machines may learn (or “learn”) from the worst of human behaviors, including racism. When we can’t put our finger on precisely what the computer is getting wrong, we think of Minority Report. The patterns “smart” computers pick up on may reinforce generations of human wrongdoing, or may be wrong in completely new ways. We fear opaque life-altering decisions with no means for appeal or questioning of the decision-making process.
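To make that fear concrete, here is a deliberately tiny sketch of how a learner can absorb bias from its training data. Everything in it is invented for illustration (the neighborhoods, the records, the majority-vote “model”); it is not drawn from any real system.

```python
# Toy illustration: a "learner" reproducing the bias in its data.
# All data here is invented. Suppose neighborhood "A" was historically
# over-policed, so its residents show up as re-arrested more often.

from collections import Counter

# (neighborhood, was_rearrested) pairs from biased historical records
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 20 + [("B", False)] * 80
)

def predict_reoffense(neighborhood: str) -> bool:
    """The simplest possible learner: a majority vote over history.
    It has no concept of fairness; it just mirrors its data."""
    outcomes = Counter(again for hood, again in history if hood == neighborhood)
    return outcomes[True] > outcomes[False]

print(predict_reoffense("A"))  # True:  flagged because of where they live
print(predict_reoffense("B"))  # False: the data's bias has become the rule
```

A real system has millions of parameters instead of one lookup table, which makes the same failure harder to see, not less likely.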

These “smart” computers trouble us most when they’re given authority and powers we wouldn’t completely trust other people with in the first place. Is what’s truly terrifying the fact that the algorithm is “in a computer,” or the fact that it can’t be cross-examined? Wouldn’t we be just as uncomfortable with a human psychologist whose claim that someone is a likely re-offender was taken as gospel? We don’t so much fear the machine itself as the erosion of rights that comes with it.


As a counterexample, take bail reform in New Jersey: https://www.nytimes.com/2017/02/06/nyregion/new-jersey-bail-system.html

A computer-generated risk score is central to the reform, yet the debate focuses on the typical law-and-order fodder; the risk score itself is uncontroversial.

What’s the difference?

One answer may be that NJ’s new system punishes (non-convicted) people less, but if that were the case, you’d expect at least a few columns denouncing the reform as a Trojan horse. The real reason is this:

Our team at the Laura and John Arnold Foundation partnered with leading criminal justice researchers to create the tool, which was developed using the largest dataset of pretrial records ever assembled. The researchers identified nine factors that best predict risk. They are all related to the defendant’s age, criminal history and current charge.¹

It doesn’t creep people out because it’s just, at heart, a scorecard. A computer might be spitting the number out, but the algorithm is based on known inputs that have been agreed upon. There is no machine learning and no neural network. It can be audited. Mistakes and incorrect inputs can be found and demonstrated.

It might not work perfectly all the time, but our legal system rarely does; what’s important is that there are paths to challenge results and improve the system.

The risk score is a “dumb” computer; it does something you could do with paper and pencil, only faster. It’s like a cash register: we don’t worry about the machine “deciding” what price we should pay, and a GPS isn’t “deciding” where you go.
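If you wanted to sketch that kind of scorecard in code, it could be as simple as the following. To be clear, the factor names and weights here are invented for illustration; they are not the actual nine factors or weights used in New Jersey’s tool.

```python
# A minimal sketch of a transparent, auditable risk scorecard.
# Factor names and weights are invented; they are NOT the actual
# nine factors or weights from the New Jersey tool.

WEIGHTS = {
    "age_under_23": 2,
    "pending_charge_at_arrest": 3,
    "prior_failure_to_appear": 4,
    "prior_violent_conviction": 5,
}

def risk_score(defendant: dict) -> int:
    """Sum the published weight of every factor that applies."""
    return sum(w for factor, w in WEIGHTS.items() if defendant.get(factor))

# Because every input and weight is fixed and visible, anyone can
# recompute the score by hand and contest a specific input in court.
defendant = {"age_under_23": True, "prior_failure_to_appear": True}
print(risk_score(defendant))  # 6, and you can point to exactly why: 2 + 4
```

Every weight is agreed upon in advance and every input can be checked against the record; that is what makes it a cash register rather than an oracle.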


AI in positions of power is scary if and when it represents a breakdown in the rule of law. The rule of law is central to free, democratic societies; it’s central to most of human civilization’s progress.

Societies are wealthier and freer today than in the Middle Ages because we’ve developed separation of powers, elections, written laws, trial by jury… not because we bred smarter monarchs.

We decide which medicines to use through double-blind safety and efficacy trials, not based on the beliefs of the wisest sages. Advances in medicine and technology came from applying the scientific method, not from being gifted with successively smarter generations of scientists.

The salient difference between the “good” and the “bad” computer algorithms in the examples above is that one is subject to the established rules of a court of law, and the other is an attempt to evade them. There’s no IQ score that exempts a human witness from cross-examination; the suggestion that there would be for a supercomputer is what’s problematic.


Humans have had access to the most powerful computing device ever to exist, the human brain, since the dawn of our species. Since then, however, our most valued progress has come not from making that device “better,” but from addressing its weaknesses; not from finding and promoting smarter leaders, but from building more trustworthy institutions. We may soon develop a computing device more powerful than any human brain… but we’ve done that dozens or maybe hundreds of times already. Between population growth, improved nutrition and education, and the Flynn effect, a new “smartest brain on Earth” has plenty of precedent. To suddenly reverse the fundamental decisions we’ve made about giving a black box power over society, simply because the next one plugs into an outlet, is a non sequitur.

In the coming decades, an artificial neural net will likely emerge that is better than a human soldier at distinguishing enemy combatants from civilians in a war zone. Facial recognition technology will replace eyewitness testimony. Medical software may design and iterate on molecules that attack a particular pathogen faster than humans can.

Such systems, under proper supervision, can save lives. But we should never let opaque programs decide who the enemy should be, whether and with whom we are at war, who the criminals are, or what drugs we should all be taking.