We want those around us to be good—but not too good.
A version of this article was published by Nautilus.
At the beginning of the final debate of the 1988 presidential election, Democratic candidate Michael Dukakis, a lifelong opponent of capital punishment, was asked whether he would support the death penalty if his wife, Kitty, were raped and murdered. He quickly and coolly said no.
It was a surprising, deeply personal, and arguably inappropriate question, but in demonstrating an unwavering commitment to his principles, Dukakis had handled it well. Or so he thought.
“The reporters sensed it instantly,” wrote Roger Simon about the scene at the debate immediately after Dukakis gave his response. “Even though the 90-minute debate was only seconds old, they felt it was already over for Dukakis.”
The former governor of Massachusetts saw his poll numbers plummet after the debate. His campaign never recovered, and George H. W. Bush became the 41st President of the United States.
Why were voters so put off by his response to the question? His stance on capital punishment was well-publicized at the time, so it shouldn’t have come as a surprise. And it may well have been worse had he gone the other way and advocated a friends-and-family exception to his death penalty opposition.
The reason his response did him in, it seems, is a quirk of human psychology: We don’t really want to be friends with people who have exceptionally high moral standards. And in politics, it’s well-documented that seeming like a friend is at least as important as seeming competent.
So why doesn’t exceptional morality make someone a more appealing friend? After all, it would seem like an evolutionary advantage to pick friends who have a strict sense of morality—friends who feel obligations to you, to the community you share, and to upholding certain standards of behavior. Dukakis, by refusing to carve out a selfish exception to his stance, met those requirements. By declining, in the debate’s hypothetical scenario, to endorse death for a man who raped and murdered his wife, he appeared capable of consciously suppressing vindictive gut instincts and adhering to higher principles.
The problem for Dukakis, however, is that according to a recent study, people who make instinct-based moral judgments are perceived by their peers to be more moral and more trustworthy than those who rely on reasoning. In other words, we want friends who go with their gut when faced with a moral dilemma. The reverse is true as well: We tend to be wary of people who react to moral dilemmas by calculating costs and benefits—it’s a large part of why we’re so reluctant to trust robots.
Most modern politicians have learned from Dukakis’ mistake and try their best to project warm, personable images. Yet no matter how much politicians cater to it, our tendency to judge people based on whether they act according to their intuition is flawed. The unfortunate fact is that moral intuition often gets it wrong. Consider, for example, how psychopaths can feign emotional expression to manipulate their peers, or how empathy—an instinctive, emotion-laden process—can distort our morals.
Moral psychologists and philosophers generally agree that our capacity for moral intuition didn’t evolve to properly handle complex contemporary issues like, say, geopolitical conflict or, in Dukakis’ case, the criminal justice system. Rather, it probably evolved to help us cooperate with each other on a smaller, local scale—to steer us toward or away from certain people in our vicinity, for example. That’s why emotional expression matters so much to us, even at the expense of virtuousness; it’s a quick and easy way to decide whether to trust somebody.
The humble evolutionary origin of our moral intuition arguably makes many of society’s problems harder to solve. Dukakis’ case illustrates that even in the face of sound reasoning, instinct can prevail. It’s a powerful force, and politicians often cater to it.
Even the practice of philanthropy has suffered from a conflict of reason and emotion. Effective altruism, for example, is a philosophical movement that seeks to coordinate charitable giving “based on reason and evidence.” It operates according to the utilitarian premise of weighing the costs and benefits of choices in order to maximize overall well-being. Despite its noble intentions, however, the movement has some trouble with its image. Peter Singer, one of the figureheads of effective altruism, has aroused controversy for his calculating views on topics like infanticide and disability. Despite his contributions to philanthropy, Singer’s views are, for many people, an example of reason prevailing over emotion to an unsettling extent. Although he and his colleagues may well have outlined a good strategy for minimizing suffering worldwide, their lack of emotional appeal seems to be dissuading people from supporting their cause.
How, then, can effective altruists—and other well-meaning people who tend not to let their emotions take control—get their message across?
One solution is simply for them to pretend to be more influenced by emotion than they actually are. “Fake it,” said David Pizarro, a professor of psychology at Cornell University and an author of the aforementioned study on intuition and trustworthiness. “People want to see that you’ve thought a lot about a tough moral decision. They want to see that you’ve experienced some conflict between reason and emotion and deliberated through it.”
Pizarro and his colleagues argue that an outward display of emotion functions as a signal to others that you’ve incorporated emotional information into your moral decision. Without that signal, an audience might get the impression that you haven’t experienced any emotion at all—a possibility most people find pretty disturbing. In Dukakis’ case, the right response may have been to mull over the emotional weight of the debate question for longer than he did. Even if he’d reasoned his way to a solid, unconditional opposition to the death penalty long before that debate, simply appearing to experience some conflict between emotion and reason before giving his answer probably would have helped his image.
But will we ever get to a point at which it’s okay to be impartial and calculating during moral decisions? Will our receptiveness to unfiltered reason someday grow such that logical arguments don’t have to be infused with emotional rhetoric?
There is emerging evidence that the average person is increasingly likely to calculate costs and benefits during moral dilemmas. So there’s a chance that we’ll eventually be able to leave unnecessary emotional appeals—the “faking,” the boastful name-dropping, and the gratuitous dabs—in the rearview mirror, and focus on the facts.
Still, Pizarro is skeptical. “I think there will always be a tension there (between reason and emotion),” he said. “It’s part of human nature.” So, until further notice, it may be a good idea to practice some convincing facial expressions in the mirror to complement your argument.