

Feelings win: why AI will never send you to prison

In his latest for B&T, DDB Australia Managing Director of Strategy and Growth, Leif Stromnes, explains that although humans are every bit as fallible as machines, we are willing to forgive humans for their mistakes, and we even prefer to stick to a set of “morally right” rules when doing so could have bad consequences.

In his seminal book Thinking, Fast and Slow, Daniel Kahneman illustrates the fallibility of human decision-making by studying the results of parole decisions made by Israeli judges just before and after lunch. He found that when judges were hungry, that is, right before their lunch break, their parole approvals dropped to virtually zero. Once they had eaten, approvals jumped back up to 65 percent.

This is truly alarming. If you apply for parole and yours is the last case before lunch, you are 65 percent less likely to be released than the lucky applicant whose case is heard first after lunch. Your greatest crime might turn out to be your spectacular lack of timing.

Unlike humans, AI-based computers do not feel hungry or tired. In fact, they don’t even need a lunch break. An ethical AI could, in principle, be programmed to reflect the values and ideals of an impartial agent. Freed from human limitations and biases, such machines could even be said to make better decisions than we do. So what about an AI judge? Unfortunately for pre-lunch parole seekers, that won’t happen anytime soon. The problem is not with the machines, but with our own psychology.

Leif Stromnes

Artificial or machine decision-making is based on an algorithm of costs and benefits: the decision with the best overall consequences (an approach known as consequentialism) is the one the machine will always make. But humans are different. By default, we follow a set of moral rules in which certain actions are “just wrong”, even if they produce good consequences.
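The article contains no code, but a minimal sketch can make the contrast concrete. The Python below (all function names and numbers are hypothetical, chosen purely for illustration) shows a consequentialist chooser that simply maximises net benefit, next to a rule-based chooser that first discards any action deemed “just wrong”, whatever the arithmetic says, using the footbridge dilemma described in the next paragraph as the toy scenario.

```python
# Illustrative sketch only: a toy contrast between consequentialist and
# rule-based decision-making. All names and numbers are hypothetical.

def consequentialist_choice(options):
    # Pick the option with the highest net benefit (benefits minus costs).
    return max(options, key=lambda o: o["benefit"] - o["cost"])

def rule_based_choice(options, forbidden):
    # Discard options that break a moral rule, then pick the best of what remains.
    allowed = [o for o in options if o["action"] not in forbidden]
    return max(allowed, key=lambda o: o["benefit"] - o["cost"]) if allowed else None

# The footbridge dilemma reduced to toy numbers (lives saved vs lives lost).
options = [
    {"action": "push the man", "benefit": 5, "cost": 1},  # five saved, one killed
    {"action": "do nothing",   "benefit": 0, "cost": 5},  # five killed
]

print(consequentialist_choice(options)["action"])                         # -> push the man
print(rule_based_choice(options, forbidden={"push the man"})["action"])   # -> do nothing
```

On this toy arithmetic the consequentialist machine always pushes; the rule-based human refuses, which is the gap the following example illustrates.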

Our distaste for consequentialism has been demonstrated in several psychological studies in which participants are presented with hypothetical dilemmas pitting consequentialism against more rule-based morality. In the “footbridge dilemma”, for example, participants learn that a runaway train is about to kill five innocent people trapped on the tracks. Its progress can be stopped by pushing a very large man, who happens to be standing on a small footbridge overlooking the tracks, to his death below (where his body will stop the train before it kills the five others). The vast majority of people think it is wrong to push the man to his death in this way, despite the good consequences.

But that’s only half the story. The minority of participants who were willing to coolly sacrifice one life for the greater good were judged untrustworthy by the rest of the participants. This finding was replicated in nine further experiments involving more than 2,400 subjects. It would seem that humans have a fundamental distrust of machines when it comes to morality, because artificial machines do not possess the very characteristics we use to infer the trustworthiness of others. We prefer an irrational commitment to certain rules, regardless of the consequences, and we prefer people whose moral decisions are guided by social emotions like guilt and empathy. Being a stickler for morality says a lot about your character.

Another quirk of human psychology, compared with machines, is our readiness to forgive human errors and our almost complete lack of tolerance for the same mistakes made by a machine.

A Cruise robo-taxi sinks into wet cement.

Empathy is extremely powerful in defusing anger in a human-to-human interaction, but completely useless when a self-service technology breaks down. This is why we get angrier at robots and automated self-service systems when they fail us than we do at humans, and why we are outraged when an autonomous vehicle kills an innocent pedestrian, even though this is a daily reality with human drivers.

This has profound implications for brands and marketing. The inexorable rise of AI and the automation of most customer service tasks in the name of efficiency and cost control means that the default interaction is human-machine. But as we’ve learned, we don’t like the way machines make decisions, and we’re much less forgiving of the mistakes they make.

Although the use of machines will almost certainly increase efficiency and reduce errors, the result could be a lack of confidence in the integrity of the decision and, ironically, lower customer satisfaction. Even if machines could perfectly imitate human moral judgments, we would know that the computer does not arrive at its judgments for the same reasons we do.

This idea played out with the launch of a robotic barista café in Melbourne in 2017. From a rational point of view, it made sense. The robot brewed perfect cup after perfect cup, didn’t call in sick, and didn’t require overtime on weekends. But every little imperfection was unforgivingly magnified, and after a year the café closed its doors. As one customer elegantly put it: “I just didn’t trust the robot barista to know how much I really liked my coffee.”

Although AI and automated decision-making, with their efficiency and low error rates, will undoubtedly win out, an emotionally satisfying customer approach might involve prioritizing human-to-human contact for high-value social interactions. And automating everything else.

After all, to err is human, to forgive divine.