Why don’t we like virtuous people?

 We want those around us to be good—but not too good.


A version of this article has been published by Nautilus

At the beginning of the final debate of the 1988 presidential election, Democratic candidate Michael Dukakis, a lifelong opponent of capital punishment, was asked whether he would support the death penalty if his wife, Kitty, were raped and murdered. He quickly and coolly said no.

It was a surprising, deeply personal, and arguably inappropriate question, but in demonstrating an unwavering commitment to his principles, Dukakis had handled it well. Or so he thought.

“The reporters sensed it instantly,” wrote Roger Simon about the scene at the debate immediately after Dukakis gave his response. “Even though the 90-minute debate was only seconds old, they felt it was already over for Dukakis.”

The former governor of Massachusetts saw his poll numbers plummet after the debate. His campaign never recovered, and George H. W. Bush became the 41st President of the United States.

Why were voters so put off by his response to the question? His stance on capital punishment was well-publicized at the time, so it shouldn’t have come as a surprise. And it may well have been worse had he gone the other way and advocated a friends-and-family exception to his death penalty opposition.

The reason his response did him in, it seems, is a quirk of human psychology: We don’t really want to be friends with people who have exceptionally high moral standards. And in politics, it’s well-documented that seeming like a friend is at least as important as seeming competent.

[Image: Michael Dukakis at a 1988 campaign rally]

So why isn’t exceptional morality tightly correlated with friendliness? After all, it would seem like an evolutionary advantage to pick friends who have a strict sense of morality—friends who feel obligations to you, to the community you share, to upholding certain standards of behavior. Dukakis, by refusing to add a selfish exception to his stance, met those requirements. By deciding, in that hypothetical scenario from the debate, not to kill a person who raped and murdered his wife, he appeared capable of consciously suppressing vindictive gut instincts and adhering to higher principles.

The problem for Dukakis, however, is that according to a recent study, people who make instinct-based moral judgments are perceived by their peers to be more moral and more trustworthy than those who rely on reasoning. In other words, we want friends who go with their gut when faced with a moral dilemma. The reverse is true as well: We tend to be wary of people who react to moral dilemmas by calculating costs and benefits—it’s a large part of why we’re so reluctant to trust robots.

Most modern politicians have learned from Dukakis’ mistake and try their best to project warm, personable images. Yet no matter how much politicians cater to it, our tendency to judge people based on whether they act according to their intuition is flawed. The unfortunate fact is that moral intuition often gets it wrong. Consider, for example, how psychopaths can feign emotional expression to manipulate their peers, or how empathy—an instinctive, emotion-laden process—can distort our morals.

Moral psychologists and philosophers generally agree that our capacity for moral intuition didn’t evolve to properly handle complex contemporary issues like, say, geopolitical conflict or, in Dukakis’ case, the criminal justice system. Rather, it probably evolved to help us cooperate with each other on a smaller, local scale—to steer us toward or away from certain people in our vicinity, for example. That’s why emotional expression matters so much to us, even at the expense of virtuousness; it’s a quick and easy way to decide whether to trust somebody.

The humble evolutionary origin of our moral intuition arguably makes many of society’s problems harder to solve. Dukakis’ case illustrates that even in the face of sound reasoning, instinct can prevail. It’s a powerful force, and politicians often cater to it.

Even the practice of philanthropy has suffered from a conflict of reason and emotion. Effective altruism, for example, is a philosophical movement that seeks to coordinate charitable giving “based on reason and evidence.” It operates according to the utilitarian premise of weighing the costs and benefits of choices in order to maximize overall well-being. Despite its noble intentions, however, the movement has some trouble with its image. Peter Singer, one of the figureheads of effective altruism, has aroused controversy for his calculating views on topics like infanticide and disability. Despite his contributions to philanthropy, Singer’s views are, for many people, an example of reason prevailing over emotion to an unsettling extent. Although he and his colleagues may well have outlined a good strategy for minimizing suffering worldwide, their lack of emotional appeal seems to be dissuading people from supporting their cause.

How, then, can effective altruists—and other well-meaning people who tend not to let their emotions take control—get their message across?

One solution is simply for them to pretend to be more influenced by emotion than they actually are. “Fake it,” said David Pizarro, a professor of psychology at Cornell University and an author of the aforementioned study on intuition and trustworthiness. “People want to see that you’ve thought a lot about a tough moral decision. They want to see that you’ve experienced some conflict between reason and emotion and deliberated through it.”

Pizarro and his colleagues argue that an outward display of emotion functions as a signal to others that you’ve incorporated emotional information into your moral decision. Without that signal, an audience might get the impression that you haven’t experienced any emotion at all—a possibility most people find pretty disturbing. In Dukakis’ case, the right response may have been to mull over the emotional weight of the debate question for longer than he did. Even if he’d reasoned his way to a solid, unconditional opposition to the death penalty long before that debate, simply appearing to experience some conflict between emotion and reason before giving his answer probably would have helped his image.

But will we ever get to a point at which it’s okay to be impartial and calculating during moral decisions? Will our receptiveness to unfiltered reason someday grow such that logical arguments don’t have to be infused with emotional rhetoric?

There is emerging evidence that the average person is becoming more and more likely to calculate costs and benefits during moral dilemmas. So there’s a chance that we’ll eventually be able to leave unnecessary emotional appeals—the “faking,” the boastful name-dropping, and the gratuitous dabs—in the rearview mirror, and focus on the facts.

Still, Pizarro is skeptical. “I think there will always be a tension there (between reason and emotion),” he said. “It’s part of human nature.” So, until further notice, it may be a good idea to practice some convincing facial expressions in the mirror to complement your argument.

4 Replies to “Why don’t we like virtuous people?”

  1. I’d be curious to consider the wrinkle of whether these principles hold up when we communicate online in text form only. In that arena (where so much communicating is done these days), it would be more difficult to convey the more subtle hints of emotion. So would logic- and reasoning-based thinking be more easily accepted in that setting? Or alternatively, would people still want to see you weighing your emotional considerations? I’m picturing the question to Dukakis being asked in an online forum and him having to respond.

    1) No.
    2) It makes me sick to think about that happening to my wife, but I think I’d still not support it.
    3) Capital punishment is not shown to have a deterrent effect on crime in 77% of rape and murder cases (stats made up, obvi).

    I can’t help but think my mind would respond most to the stat-based argument, but you could probably chalk that up to me wanting to think the best of myself (or maybe just me being a small sample size).

    1. That’s a good question. I think the ubiquity of text interaction has conditioned us to accept lower levels of emotional expression online. The visceral aspect of morality and trustworthiness–e.g., our internal recoil when we hear a monotonous voice or see a deadpan face–is less relevant in text-based interactions, so I’d guess emotional appeal during an argument would be less effective. But a side effect of a reduced reliance on emotional information is that we seem more likely to be insensitive jerks online. Sometimes emotional information is useful, like when the pain on someone’s face tells you that what you just said or did was hurtful and wrong.

      I, like you, like to think I respond well to unvarnished facts. I also like to think that I don’t need emotional feedback to know how not to be a jerk online. But it can be hard to tell how much of our tact and kindness, and how much of our behavior in general, depends on that feedback.

  2. I don’t agree with the premise here that sticking to your beliefs in itself makes you a good person. The definition of “good” is not so relative. You could call sticking to one’s belief “integrity”, but you can have integrity to a value set that is incorrect. A lot of Nazis seemed to believe firmly in their actions. That didn’t make their beliefs “good”. Am I misunderstanding your intent?

    1. I completely agree with you that being consistent isn’t in itself a sign of being “good.” The type of “goodness” or “high moral standards” I was referring to in the post (and perhaps I didn’t make this very clear) was partly about adherence to principles but also, more importantly, about seeking to maximize the net positive effects of one’s actions. I deemed Dukakis’ stance on capital punishment “good” because abolishing capital punishment appears to be the best policy choice from a utilitarian perspective. That is, capital punishment ends lives but does not produce the desired outcome of deterring crime, so we shouldn’t rely on it as a form of criminal justice.

      That said, there are plenty of ways you could argue against utilitarianism, both regarding the death penalty and in general. I took the lazy way out in the post and basically just implied “utilitarianism = morally right” without bothering to defend that position 🙂
