Why do people make immoral decisions? Do they recognize the immorality of a choice but decide for pragmatic reasons to do it anyway? Do they lack sufficient self-control to resist powerful but morally problematic temptations? Do they reason poorly and apply moral principles incorrectly? Or do they fail to realize the moral import of their decisions in the first place?
It is likely that people experience moral failures for all of these reasons. Popular intuitionist models of morality (see Jonathan Haidt's work, among others), which posit that most moral judgments arise from automatic, emotional, and often uncontrollable reactions, can readily account for the first two. Research inspired by these models has further shown that deliberative reasoning about moral principles frequently occurs post hoc - justifying moral judgments and decisions after they have been made rather than causing them. However, these models have difficulty accounting for the last type of moral failure. If moral evaluations are automatic, there should be little difficulty recognizing that the decision one faces has moral ramifications. If intuitions are inevitable, the morality or immorality of an act should be self-evident.
And yet, very often, the moral implications of our choices are not so salient or so obvious. Driving solo to work today? An easy pragmatic move. But have you considered its moral worth relative to carpooling with your (annoyingly talkative) neighbor? Didn't wash your hands in the restroom? A bit disgusting, but no harm done. Oh really?
As philosopher Kwame Anthony Appiah (2008) has put it, "In the real world, the act of framing - the act of describing a situation and thus of determining that there's a decision to be made - is itself a moral task. It's often the moral task."
In a new paper published in PLOS ONE, Jay Van Bavel, Ingrid Haas, Wil Cunningham and I examine how construing actions in moral vs. non-moral terms affects their evaluation. We asked participants to rate the same set of ~100 actions in different ways, so any differences in the resulting evaluations must stem from the mode of evaluation rather than from reactions triggered by the actions themselves. Across a series of three studies, we found that actions processed in moral terms were evaluated more quickly and more extremely than when they were evaluated in pragmatic or hedonic terms. We also found that rating the morality of actions facilitated subsequent judgments of universality - whether everybody or nobody should engage in these behaviors. These differences emerged even when people rapidly switched between different forms of evaluation on a trial-by-trial basis.
These data demonstrate that it matters how actions are construed. Just because people are in situations where they should consider the moral implications of their actions does not mean that they will actually do so. And the questions one asks oneself - will this feel good? is this pragmatic? is this moral? - result, perhaps unsurprisingly, in different answers.
Read our open-access paper here: 'The Importance of Moral Construal: Moral vs. Non-Moral Construal Elicits Faster, More Extreme, and More Universal Evaluations of the Same Actions'
For those dwelling on these issues, Roger Forsgren's fascinating article 'The Architecture of Evil', about Nazi architect Albert Speer, is also well worth reading. Speer's writings after the war provide a remarkable articulation of how morality can become dangerously compartmentalized and separated from other forms of judgment.
"...it seems to me that the desperate race with time, my obsessional fixation on production and output statistics, blurred all considerations and feelings of humanity. An American historian has said of me that I loved machines more than people. He is not wrong. I realize that the sight of suffering people influenced only my emotions, but not my conduct.... I continued to be ruled by the principles of utility."
Speer's case further suggests that failures to consider the moral implications of decisions can be motivated, and that moral blindness can be willful. See Reicher and Haslam's recent article in PLOS Biology for related thoughts.