
Do moral judgments form a psychological natural kind? Lately, Stephen Stich and his colleagues have been arguing on the basis of empirical evidence that the features psychologists have identified as key to moral judgment do not, as a matter of fact, cluster together in a lawlike fashion. In particular, they argue that harm attributions do not always evoke the signature moral response pattern of authority-independence and generality, and conclude that since the purported nomological cluster breaks down, moral judgments do not form a natural kind. Their argument, of course, leaves open the possibility that there is some other cluster to be found. I am not a big believer in nomological clusters, but I will propose an alternative content feature that does seem to pair with the signature moral pattern in a lawlike fashion. Namely, it seems that whenever people take a piece of behaviour to express, in context, any of a set of attitudes that ranges from disrespect to debasement, the signature moral pattern is evoked. (As usual, I’ll just focus on wrongness judgments.) In short, people are intuitive deontologists, and for all that Stich says, there may be a psychological natural kind of moral judgment. My alternative model involves commitment to a commonsense cultural relativism, but one of an entirely innocuous kind that poses no threat to moral objectivism. To distinguish it from standard or deference relativism, I’ll call it significance relativism.

The psychological state we're in when we judge that something is morally wrong is, on the face of it, different from the state we're in when we judge that something violates a conventional rule. The essential features of the moral stance have long been the object of philosophical inquiry, but they have also attracted the attention of psychologists working on children's moral development. The best known tradition on the psychological side was initiated by Elliot Turiel and his colleagues thirty years ago. Drawing in part on philosophical work, Turiel proposed that moral judgment has four distinctive marks. The formulations differ, but the starting point is that we judge a rule violation to be morally wrong if certain counterfactuals hold: we would judge it to be wrong independent of what any de facto authorities say, and we would judge it to be wrong in any other time and place as well. These two features, authority-independence and generality, are key to what Stich labels the signature moral pattern. It contrasts with responses to conventional rule violations, like not wearing a tie to a job interview, which we may actually judge to be wrong, but would not if, say, the university one was interviewing at had a policy of not wearing ties. Nor would we judge it to be wrong in all other times and places, like ancient China. (Turiel's third content-independent criterion is seriousness, but it clearly plays a secondary role.)

The fourth distinctive feature of moral judgment for Turiel was justification in terms of harm, injustice, or rights violation. This relates to the content of moral judgment: unless justifications are merely ex post facto rationalizations (which is a real possibility), presumably people judge (what they take to be) harmful, unjust, and rights-violating actions to exhibit the three distinctive formal features, if Turiel's view is correct. This, at any rate, is how Stich and his colleagues read him. (It is quite possible that this is a misreading: Turiel need not be committed to claiming that people judge what they take to be harmful things to be morally wrong, only that people justify their judgments with reference to harm. I won't press this point here.) Leave aside beliefs about injustice and rights violation, as they plainly already constitute moral judgments. The question that Stich is interested in is whether harm attributions are necessary and sufficient to evoke the signature moral response pattern. If they were, he believes, we would have a nomological cluster sufficient to underwrite the claim that there is a psychological natural kind here. So he cites and produces empirical data suggesting that harm attributions are neither necessary nor sufficient for the signature moral pattern.

The Non-Existence of the Harm Cluster

Let us begin with the data. Why aren't harm attributions necessary for evoking the signature moral pattern? The clearest experimental studies concern the role of affect. In the well-known Haidt, Koller, and Dias (1993) study, it was found that particularly low-SES subjects both in the US and Brazil judged certain harmless but disgusting behaviours to be wrong authority-independently and generally. For example, many people judge masturbating with a dead chicken or cleaning the toilet with an old and disused national flag to be morally wrong. (Here it is important that 'harm' is understood fairly concretely in these studies, and does not include any kind of symbolic harm.) Thus, thinking that something is harmful is not necessary for evoking a signature moral response.

But are thoughts about harmfulness sufficient? This is what Kelly, Stich, Haley, Eng, and Fessler (2007) set out to investigate. Here are two of their best scenarios (best for their case, that is). The first examines the generality of harm-based judgments:

Whipping/Time

Three hundred years ago, whipping was a common practice in most navies and on cargo ships. There were no laws against it, and almost everyone thought that whipping was an appropriate way to discipline sailors who disobeyed orders or were drunk on duty.

Mr. Williams was an officer on a cargo ship 300 years ago. One night, while at sea, he found a sailor drunk at a time when the sailor should have been on watch. After the sailor sobered up, Williams punished the sailor by giving him 5 lashes with a whip.

The questions that subjects were asked to answer were "Is it OK for Mr. Williams to whip the sailor?" and "On a scale from 0 to 9, how would you rate Mr. Williams' behavior?". In the comparison case, the time was changed to the present:

Mr. Adams is an officer on a large modern American cargo ship in 2004. One night, while at sea, he finds a sailor drunk at a time when the sailor should have been monitoring the radar screen. After the sailor sobers up, Adams punishes the sailor by giving him 5 lashes with a whip.

Parallel questions were asked. The results were clear: half the subjects thought whipping was OK 300 years ago, while only 10% thought it was OK in the present. Present-day whipping was also rated much worse (7 vs. 4) on the Likert scale. (In other scenarios, such as the contrast between ancient and modern slavery, the differences in people's judgments were negligible, though statistically significant.)

The second set of questions in the Kelly et al. study tested whether harm attributions are sufficient for authority-independence.

Military/Authority

For many years, the military training of elite American commandos included a simulated interrogation by enemy forces in which the trainees were threatened and physically abused. Most people in the military believe that these simulated interrogations were helpful in preparing trainees for situations they might face later in their military careers. Though no one was ever killed or permanently disabled by the physical abuse they received during these simulated interrogations, the trainees often ended up with bruises or injuries that lasted for a week or more.

Recently, the Pentagon issued orders prohibiting physical abuse in military training. Sergeant Anderson is a soldier who trains elite American commandos. He knows about the orders prohibiting physical abuse and his immediate superiors have ordered him not to do it. Nonetheless, he regularly threatens and physically abuses trainees during the simulated interrogations that he conducts.

The second condition in this within-subjects study was the following:

Now suppose that the Pentagon had never issued orders prohibiting physical abuse in military training, and that Sergeant Anderson's superiors had told him that the use of physical abuse was acceptable in simulated interrogations.

The results were that when abusing trainees was described as prohibited, less than 10% said it was acceptable, while if it wasn’t prohibited, almost 60% said it was acceptable. Thus, most subjects were willing to change their judgment about a clearly harmful transgression depending on what local de facto authorities said. Stich’s conclusion is that harm attributions do not cluster together with authority-independence any more than with generality; hence, the features identified by the Turiel School don’t form a homeostatic property cluster, and moral judgment isn’t a psychological natural kind. (This latter conclusion is emphasized by Stich’s more recent work on the definition of morality.)

The Intuitive Deontologist Hypothesis

Turiel's harm condition is, I believe, derived from the utilitarian tradition that was prominent in English-speaking moral philosophy when he began to work on children's moral development. His claim, of course, isn't normative, but he, as it were, projects onto people a tendency to make moral judgments on a kind of consequentialist basis. (In fairness, he also includes the categories of injustice and rights violation, but this introduces circularity that's better avoided, as already noted.) But the studies by Stich and colleagues suggest this is false. So it's worth considering whether another tradition in normative ethics might come closer to capturing how people actually think. I'm thinking of a broad church going back to Smith, Kant, and Hegel and continuing today in original work by Elizabeth Anderson, Stephen Darwall, and Axel Honneth, among others.

In this broadly deontological recognition-theoretic tradition, what makes something wrong isn’t the harm it causes, but the attitude it expresses. The harm, in the concrete sense of the Turiel tradition, is the same if I accidentally elbow you in the stomach as it is if I deliberately do so. But morally speaking, there is all the difference in the world between these actions. Depending on the context, deliberately elbowing you in the stomach can express disrespect or disesteem or disregard, failure to recognize you as an equal, and so on. Of course, if you ask for it to test your stomach muscles, or recognize it as a fit punishment for what you did, it may express respect or benevolence toward you. Importantly, this is not just a matter of the agent’s intentions and attitudes: whatever my actual attitude, if I have seen too many hip-hop videos and innocently use the n-word to greet a black friend, I may thereby express an offensive attitude in a particular cultural setting. The moral status of a piece of behaviour depends on the attitude it expresses (its significance or meaning), and the attitude it expresses depends on the context, of which the agent’s intentions are only a part.

So suppose that ordinary people are intuitive deontologists rather than consequentialists. (This happens to be a part of my version of moral psychological sentimentalism, but it could be accommodated by very different kinds of account.) In that case, their judgments will manifest the signature moral pattern if and only if they take behaviour to express one or another of the offensive attitudes. They judge throwing sand in another’s eyes to be wrong not because of the harm it causes, but because deliberately causing such harm means treating the other as a mere plaything. They judge lying to be wrong even if no one is harmed by it, because it expresses a willingness to undermine the victim’s ability to make informed choices about the matter – an attitude that any impartial spectator would resent in the shoes of the victim. And so on.

Why Disrespect Attribution Is Sufficient for the Signature Moral Pattern

What does the intuitive deontologist hypothesis predict about Stich’s cases? Start with whipping sailors and generality. According to the hypothesis, people are sensitive to the significance or meaning of the action, so before they form a judgment, they have to ask themselves: what does whipping someone say in its context? Unlike a straightforward harming action like throwing sand in someone’s eyes, the significance of whipping may change from context to context. (Think of whipping an ecstatic masochist.) If I think about someone being whipped on a ship today, I can pretty straightforwardly ask myself how I would feel were I in such a person’s position, and why. Given the expectations that contemporary sailors have, it’s hardly much different from my own boss whipping me against my will in terms of significance, so I can confidently take it to express a demeaning attitude, and consequently judge it to be wrong independent of authority and regardless of time and place (I will qualify the latter below).

But what about 300 years ago? On the intuitive deontologist hypothesis, to judge this behaviour, I must first form a view about its meaning in context. The stereotype I personally have, and I suspect many people will share, is that those were different times and different men, coarse and used to harsh treatment, and moreover enlisted in full expectation of such treatment. In the world of rum, sodomy, and the lash (as Churchill supposedly described the British naval tradition), whipping must have had a different significance than it does in today's rule-focused world in which even soldiers are mollycoddled. If there was any doubt about this, Kelly et al. explicitly state that almost everyone thought it was appropriate (I'll return to the significance of this stipulation). Hence, it wasn't much of a shame to be whipped, nor did whipping express contempt or moral devaluation. So I predictably hesitate to judge it to have been wrong, and so will many others, as my take on the meaning of the action is hardly anything special. The intuitive deontologist hypothesis thus predicts the observed responses.

So far, I've suggested that the signature moral pattern clusters with the attribution of expressing something like disrespect in the original cultural context. When I briefly discussed this hypothesis with Stich last week, he claimed I was flat out contradicting myself – how could one both hold that judgments are sensitive to cultural context and that they involve commitment to place- and time-transcendence (generality)? The way out is to go fine-grained about the object of judgment. The truth in cultural relativism means that it is imprecise to say that people take whipping, for example, to be wrong regardless of time and place. Rather, the object of the judgment is whipping-in-cultural-context-type-C, such as whipping in a cultural context relevantly like ours. (For relevant similarity assessments, the attitudes expressed by the practice will surely be crucial.) Thus, holding x's wrongness to depend on its cultural context and commitment to culture-transcendent validity are consistent with each other: the commitment is to the generality of judgments about x-in-context-type-C. This isn't trivial, as there may well be several relevantly similar cultural contexts. For example, the view predicts that whipping sailors-in-contexts-like-ours would be judged wrong regardless of time and place – say, in a science fiction setting, provided the action would be taken to have the same significance in the context.

The kind of cultural relativism that falls out of the intuitive deontologist hypothesis has nothing to do with what we might call deference relativism, according to which what is right or wrong to do in a cultural context depends on the de facto accepted moral standards of that context (or, according to a subjectivist variant, the standards embraced by the agent herself). Significance relativism is a form of moral objectivism: if a behaviour expresses an objectionable attitude in its context, it is wrong regardless of what anyone anyplace thinks of its moral status. The intuitive deontologist hypothesis says that ordinary people are significance relativists but not deference relativists.

At this point someone might object that responses to the Kelly et al. scenarios track (to an extent) what is described as morally acceptable in the culture in question, both when it comes to generality and when it comes to authority-independence. Surely this suggests that people are deference relativists?

My response is that this is actually a confound. There are non-accidental connections between the attitudes expressed by behaviours and the local norms concerning them. The mere fact that an action is forbidden by a social norm changes its significance. Smoking inside a bar now says something very different than it did when it wasn't prohibited, not to mention what it meant when people were unaware of the connection between smoking and cancer. In part this is because we reasonably assume that norms don't change without reason. Take physical abuse in military training. When the Pentagon forbids it, we presume it's because it's no longer considered effective, or perhaps not worth the psychological toll it takes, or inconsistent with the dignity of the trainees. Whatever it is, once the change is made, continuing with the practice now suggests insubordination and possibly sadism. Its wrongness looks authority-dependent, but this is only because significance can be, in part, authority-dependent. (Hypothesis: if it were made clear in the story that the Pentagon only forbids rough treatment out of effete sissiness and Sgt. Anderson is a battle-hardened veteran with nothing but the best interests of the trainees in mind and a proven track record of excellence, his methods would be approved by many more.)

Influence also goes the other way: socially accepted moral norms change when the significance of behaviour changes. Take norms concerning neighbours. In a rural society, it was offensive not to greet a neighbour on the road, as it would have suggested an attitude of superiority or, depending on the situation, perhaps shame for one's own class. With urbanization, the meaning of being a neighbour has dramatically changed. Failing to greet a neighbour need not carry any particular attitude. Hence, unsurprisingly, our norms concerning it have changed, and many behaviours are no longer morally disapproved of.

Since significance and local norms are linked in both directions, it is not trivial to decouple them. But in principle, we can surely come up with a scenario in which a behaviour or practice is locally morally accepted, but nevertheless (from our point of view anyway) expresses an objectionable attitude. Take another of the Kelly et al. scenarios, spanking in school. They found that many people judged it to be acceptable in a cultural context in which it was locally accepted (as a result of regulations by a de facto authority), but not when authorities forbade it. (Local moral acceptance can obviously vary independently of what authorities say, but the two may also go together.) So it looks like (many) subjects defer to locals. But they may also be making assumptions about the significance of spanking in this context. To tease these apart, all we need to do is have people read a Dickens novel involving corporal punishment of schoolchildren. Here we have a society in which the practice enjoys de facto local moral acceptance, but nevertheless plausibly manifests cruelty, smugness, superiority, and other objectionable attitudes towards the children. If subjects, appreciating the significance of the practice, still go with local norms, they are deference-relativists; if they go with the significance, they are objectivists. Note that I’m not joking when I talk about reading an entire book (or maybe watching a movie): conveying the significance of an action in a different cultural context is not something you can do in a vignette – even if the significance is the same as it would be in our culture! (This spells trouble for experimental testing, but fortunately that’s not my worry.)

Why Disrespect Attribution May Be Necessary for the Signature Moral Pattern

So far, I’ve argued that the evidence is consistent with the thesis that if people consider a violation disrespectful, they judge it to be wrong in an authority-independent, serious, and locale-transcendent manner. This analogue of the weaker Turiel thesis is thus empirically supported. But is there an analogue of the stronger thesis – that is, whenever a subject’s judgment manifests the signature moral pattern, she also takes the agent to express an inappropriate attitude? I’ll take a quick look at the Haidt et al. disgust cases. Start with cleaning the toilet with a disused and tattered national flag. Think about this from the perspective of the low socioeconomic status conservatives who respond to it with the signature moral pattern. Are they not precisely the people who find such usage disrespectful toward those who gave their lives to protect the flag and what it stands for? Or take the lonesome dead chicken-seducer. Don’t the conservatives take his behaviour to manifest a perverted attitude toward sexuality, animals, and cooking? This seems extremely plausible. Further, the liberals who think these behaviours are OK might well think that while they are disgusting, they are not disrespectful or offensive.

To be sure, the fact that the disgust cases fit nicely with the intuitive deontologist hypothesis doesn't suffice to show that offensive attitude attribution is necessary for manifesting the signature moral pattern. The point is just that the cases that are counterexamples to the necessity of harm attribution (and thereby the Turiel paradigm) are not counterexamples to the intuitive deontologist hypothesis. That's why I say that (broadly speaking) disrespect attribution may be necessary for the signature moral pattern, for all present studies show.

Conclusion

So it looks like there might after all be a cluster of features that moral judgments share, at least for all that present empirical research shows. Does this mean that moral judgment is a psychological natural kind? I suppose so, if natural kinds come so cheap. In my view, however, we do not need a content criterion for moral judgment, whether it is harm or disrespect; I'm perfectly happy to say that people are moralizing even if they pay no attention to attitudes or consequences of actions, as long as the judgment plays the right kind of functional role. Conceptual analysis, I claim, tells us that the key elements of such a functional role are subjective normative authority (which includes not only authority-independence but also desire- and goal-independence), emotional resonance (internal connection to guilt, moral shame, and indignation, among others), felt intersubjective validity (expectation and demand that others, suitably informed, share the moral verdict), and commitment to practical consistency (judging like cases alike). But even if you don't believe in conceptual analysis, the intuitive deontologist hypothesis suggests that you can still believe that moral judgment forms a natural kind.


Comments

  1. Posted by Angel Pinillos | April 29, 2010 6:58 pm

    Interesting, Antti. Thanks. I had similar thoughts when I read that paper. The issues are a little tricky though.

    You say: "Hence, it wasn't much of a shame to be whipped, nor did whipping express contempt or moral devaluation. So I predictably hesitate to judge it to have been wrong, and so will many others"

    I guess, to be convinced, I would need to see some evidence that people today think that back then there was no shame in being whipped or that they think that whipping didn't express devaluation. Just because people think that it was normal back then for this sort of thing to happen, it doesn't mean that people today will reason that there was no shame in being whipped or that it didn't devalue. I assume that people are aware that there are plenty of societies in which "normal" practices devalue people. I assume that people are aware that many societies sanction and endorse slavery (or sexism/racism).

    It would be interesting to do a follow-up experiment to test your idea.

    BTW, perhaps a utilitarian folk morality could help explain the data here. Maybe people think that whipping was an effective method for curbing bad sailor behavior back then, but it isn’t now. No need to appeal to deontology.

  2. Posted by Antti Kauppinen | April 30, 2010 11:17 am

    Thanks, Angel! I completely agree with the first point. My view isn’t that significance (from our perspective) is determined by what’s normal in the local culture, any more than it is by what’s considered morally acceptable in local culture. (See what I say about the spanking case, for example.) But I believe such beliefs do have an indirect influence on our judgments. This is consistent with it being the case that were it to be made vivid enough to people that whipping back then expressed much the same sort of attitudes as it would today (even if it was normal), people’s judgments would flip. Of course, half of them already think it was wrong then, too.

    The utilitarian hypothesis is neat! I don't think it would explain all the cases, though. It would also be harder to extend it to the non-harmful cases – though I can see how a rule-utilitarian story would go: allowing getting intimate with chickens would have harmful consequences down the line, according to the judges. But you're right, there's room to develop such an alternative. Which is really not that surprising, considering that consequentialists in normative ethics have a variety of strategies for accounting for what are, on the face of it, deontological intuitions.

  3. Posted by Scott Harris | May 29, 2010 2:04 pm

    Shame and Guilt
    People feel shame when they commit an act that they feel society considers inappropriate. On the other hand, guilt is felt when a person feels that they have committed what they believe to be a malicious act upon another member of their community. One cannot consider anyone a moral agent except for oneself, for we cannot be that relativistic about our values and morals. For more on moral agents, email me at harri304@gmail.com
