
Everybody’s heard about Joshua Greene’s fMRI studies of moral judgement. Many have also heard about the study by Koenigs, Young, Adolphs, Tranel, Cushman, Hauser and Damasio of patients with prefrontal damage. In a communication I co-authored with Nick Shackel, which has just come out in Nature, we criticise the methodology used in these studies.

In his now famous fMRI studies of moral judgement, Joshua Greene reported a striking neural difference between utilitarian and non-utilitarian responses. Put simply, the findings were that non-utilitarian responses issue from areas of the brain associated with emotion, whereas the rarer utilitarian responses issue from areas associated with cognition. Greene’s work received very wide coverage in the media. It has been much discussed within and without philosophy—it is of course prominently featured in Appiah’s Experiments in Ethics.

In a recent Nature study by Koenigs et al, patients with damage to the ventromedial prefrontal cortex (VMPC) were given the set of moral dilemmas Greene had used. Koenigs et al claimed to show that such patients respond in a distinctly utilitarian manner. Greene’s work can only support claims about a correlation between emotion and certain kinds of moral judgement. Since VMPC patients are known to have deficiencies in social emotion, Koenigs et al’s findings might supply a missing piece by establishing causation.

I remember attending a conference a couple of years ago and hearing a devout utilitarian announce that neuroscience had finally refuted deontology. Not everyone puts it so bluntly, but both Greene and Peter Singer have reached similar, if more nuanced, conclusions. Much, then, might be at stake here.

There is a lot to be said about how one might get from neuroscientific reports about a dozen or so subjects to such dramatic claims in normative ethics. That, however, is a topic for another occasion. In our communication we focus on some methodological issues. We make several points, but the main one is really simple: the battery of ‘personal’ moral dilemmas used by Greene, and subsequently adopted by Koenigs et al and others, is simply ill-suited for testing claims about utilitarian or deontological judgement. Greene started out with familiar and very appropriate examples from the philosophy literature, such as the trolley and footbridge problems. These are the examples people usually mention when they cite his studies, and he added some further dilemmas that are merely minor variants of them. But he also added plenty of other scenarios that are by no means good tests of utilitarian vs. non-utilitarian choice—for example, the choice whether to throw an obnoxious architect to his death.

In a subsequent study by Greene, and in Koenigs et al’s study, some attempt was made to correct for this by considering only a subset of the ‘personal’ dilemmas. In the Koenigs study, a subset of ‘high conflict’ dilemmas was singled out on the basis of high disagreement and long response times in normal subjects. In Greene’s Neuron study, a subset was selected in a similar way, but on a subject-by-subject basis. As we point out in our communication, this really doesn’t address the problem. Instead of relying on such purely psychological measures, we asked several moral philosophers to label each of the two choices in each of the dilemmas in Greene’s battery. It turned out that even the distinction between ‘high’ and ‘low conflict’ dilemmas didn’t distinguish between dilemmas involving a clear utilitarian/non-utilitarian choice and those that don’t. We therefore argued that the reported findings provide no grounds for grand claims about utilitarian and non-utilitarian judgement—and this applies with equal force to Greene’s findings.

In their reply, Koenigs et al claim that both their classification and ours are ‘defensible’. Needless to say, we strongly disagree. Neuroscientists can’t define ‘utilitarian’ to mean just whatever they want. More importantly, they report that when they run the statistics using our classification, they still get the same basic results, though we remain sceptical about their explanation for these results (and note that VMPC patients don’t, for example, take the utilitarian choice in the transplant case). We’d be very eager to find out whether the same holds for Greene’s findings. Until then, we suggest that philosophers be a bit more cautious about the claims they make on the basis of these findings.

Greene rightly deserves much credit for playing a major part in bringing about the recent explosion of work on the neuroscience of morality—we’re very far from being opposed to such fascinating ‘experiments in ethics’. But precisely because of the great significance of such research, philosophers shouldn’t just take reports in Nature or Science at face value. Difficult issues about the methodology of the neuroscience of morality remain far from clear—and they won’t get any clearer without input from philosophers.


Comments

  1. Posted by David Morrow | March 25, 2008 2:05 pm

    Interesting exchange. Is it possible for you to make available (at least aggregate) data from the classification of scenarios that you carried out?

  2. Posted by Guy Kahane | March 28, 2008 4:33 pm

    Dear David

    Sorry for the delay. You can find our classification here.

  3. Posted by Tony Danza | May 21, 2008 6:45 pm

    My question: even if we suppose that ‘deontological’ judgments are typically associated with emotion and ‘utilitarian’ judgments are typically not, why should we be the slightest bit moved to claim that deontology has somehow been refuted? Anybody looking at recent work on the emotions in philosophy or cognitive science knows that it’s a rather silly mistake to talk about emotion and cognition as mutually exclusive. Even if we don’t take a strictly cognitive view of the emotions, they still involve beliefs and judgments, and the truth of those judgments can’t be settled by their association with emotion. Furthermore, there seems to be a fairly easy moral-psychological explanation for why ‘deontological’ judgments would be more often associated with emotions than utilitarian ones. Deontological judgments very frequently involve the belief that people ought to be respected, and people who hold that belief very often also find it appropriate to identify with and feel compassion for the suffering of others. Strict utilitarians, on the other hand, typically believe that no individual has any right to unconditional or near-unconditional respect, and are accordingly very unlikely to allow emotions like compassion to enter into their judgments (and rightly so, since genuine compassion would conflict with the serious harm that utilitarians allow us to inflict on others).

    Moreover, I would be willing to bet a hefty sum of money that further studies of this variety will show that plenty of deontological judgments are in fact *not* associated with much emotion and that plenty of utilitarian ones are. I have witnessed too many arguments between stone-cold deontologists and emotionally disturbed consequentialists to believe otherwise. It would be especially interesting to study the brains of professional philosophers. My bet is that the leading defenders of ‘deontological’ theories will be a whole lot less emotional than your average guy on the street, and your average ‘utilitarian’ will be a whole lot more emotional than that approach’s leading defenders.

    Am I the only person who finds that philosophers lose their critical thinking faculties when faced with some claim that has been stamped with the allegedly authoritative label of ‘science’? I am reminded of Harman’s embarrassing invocation of ‘situationism’ in social psychology, and as with that claim, I’m inclined to say that we’d be on the verge of learning something interesting here if philosophers could manage to apply the same critical lens to the conceptual arsenal of ‘scientists’ as they so often do to their philosophical opponents.
