
In this post, all too long and speculative, I will examine how a sentimentalist theory of moral thinking could exploit and improve recently popular theories of universal moral grammar, developed by John Mikhail, Susan Dwyer, Marc Hauser’s group, Gilbert Harman and Erica Roedder, and others. I’ll be drawing mostly on Mikhail’s 2009 ‘Moral Grammar and Intuitive Jurisprudence’, in Psychology of Learning and Motivation 50, 27–100 for moral grammar. The sentimentalist theory I sketch is my own, though heavily inspired by Adam Smith. It is independently motivated, but I believe it does a better job of explaining our intuitions than other views that highlight the role of emotions.

An important (though far from the only) test for accounts of moral thinking is descriptive adequacy: the account must explain our observations of people’s reactions to cases. In recent years, a wealth of such observations has been generated by The Moral Sense Test and other surveys of ordinary people’s judgments. A large number of these observations concern trolley cases, which are formulated in minimal pairs in order to isolate the influence of factors like intentionality or physical contact. I presume that anyone who reads this will be familiar at least with the two basic cases, Bystander or Switch (in which the agent must choose between letting the trolley kill five or hitting a switch that redirects it to a track on which one person will die) and Footbridge (in which the agent must choose between letting the trolley kill five or pushing a fat man off a bridge to stop the trolley, killing him). Many more complex cases have been developed to distinguish between competing theories. Here are two well-known ones, as formulated by Mikhail (2009):

Loop Track
Ned is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Ned sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Ned is standing next to a switch, which he can throw, that will temporarily turn the train onto a side track. There is a heavy object on the side track. If the train hits the object, the object will slow the train down, giving the men time to escape. The heavy object is a man, standing on the side track with his back turned. Ned can throw the switch, preventing the train from killing the men, but killing the man. Or he can refrain from doing this, letting the five die. Is it morally permissible for Ned to throw the switch?

Man-In-Front
Oscar is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Oscar sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Oscar is standing next to a switch, which he can throw, that will temporarily turn the train onto a side track. There is a heavy object on the side track. If the train hits the object, the object will slow the train down, giving the men time to escape. There is a man standing on the side track in front of the heavy object with his back turned. Oscar can throw the switch, preventing the train from killing the men, but killing the man; or he can refrain from doing this, letting the five die. Is it morally permissible for Oscar to throw the switch?

Moral grammarians argue that people’s intuitions about these and other trolley cases cannot be explained by appeal to affective reactions to the stimulus. Rather, to account for people’s permissibility judgments, we need to construct a computational theory parallel to Marr’s theory of vision and Chomsky’s theory of grammar, which has been a particular inspiration. Such a theory specifies a set of conversion rules that take us from the stimulus to the judgment. The process is assumed to be subpersonal: non-conscious, quick, and automatic. Consequently, it predicts that people may be unable to articulate the principles underlying their judgments, which matches the observations.

The moral grammar view comes in a variety of strengths, as Ron Mallon (2008) points out. The weakest view would be purely descriptive: we can construct a formal system that takes as inputs the stimuli and after computation, yields as output the observed verdicts. This would correspond to what is often the first stage of normative theorizing, finding patterns among intuitions and principles that explain them. (In normative theorizing, we typically go further and reject some of the intuitions or principles.) A stronger version says that this set of rules is actually internalized by people so that it plays a causal role in the production of judgments. This middle position, however, doesn’t say anything about the origin of the rules or how they are realized in the brain. As such, it is neither particularly new nor controversial: a lot of people have thought that our moral judgments are guided by rules that it takes effort and philosophical skill to articulate. (I think this is probably what Rawls’s actual position was.) The novelty would be just in formatting the rules by close analogy to linguistic rules.

The strong version that I will focus on is the Universal Moral Grammar (UMG) hypothesis. It says that there is a dedicated, innate moral module (or faculty) that computes situation-representations according to innate principles that may be parametrized by culture. As far as I can tell, this is Mikhail and Hauser’s view (e.g. Hauser 2006, 53-54). It is certainly a bold and newsworthy hypothesis. It is also an expensive hypothesis, in evolutionary terms: even if we accept massive modularity, evolution doesn’t throw up dedicated modules when the same job can be done by existing means. Hence UMG has the burden of proof against accounts that make do with fewer resources.

The alternative account that I’m particularly interested in is moral sentimentalism. Sentimentalism, as I define it, differs from what I call affectivist views like Jonathan Haidt’s ‘social intuitionism’, according to which moral judgments result from immediate affective reactions like anger and disgust. Sentimentalism, by contrast, has it that at least in canonical cases, we don’t judge something to be morally wrong unless we invest the affective reaction with authority. In these core cases, it is specifically moral emotions like indignation that are driving the judgment. Further, sentimentalism emphasizes the hypothetical reactions of the people affected by actions – in short, it is patient-focused. By contrast, affectivist views are typically agent-focused. For example, according to Joshua Greene’s well-known affectivist explanation of the trolley cases, when considering the dilemmas, we imagine ourselves in the position of the agent hitting the switch or pushing the fat man down, and “the thought of pushing someone to his death in an ‘up close and personal’ manner (as in the footbridge dilemma) is more emotionally salient than the thought of bringing about similar consequences in a more impersonal way” (Greene 2008, 43). Though Greene has lately expressed some reservations about his original view, he still thinks that it is the personal/impersonal distinction that fundamentally explains the difference between utilitarian and deontological responses.

So sentimentalism of the sort I want to explore differs from simple affectivism in at least these two ways: not all emotional reactions are created equal in moral judging, and a lot of weight is put on sympathy with (hypothetical) reactions in the position of the patient of the action. To be sure, in particular cases, our judgments may be influenced by immediate affective reactions, often leading to performance error, or result from dispassionate rules that are inductive generalizations from unbiased sympathetic reactions. I argue elsewhere that such a view of moral judging has a variety of advantages. But can it handle trolley cases as well as (or indeed, better than) UMG – that is, can it be descriptively adequate? Note that sentimentalism as I’ve formulated it does not claim that emotions need be involved in each and every process of judgment – it could be that we’ve acquired the relevant dispositions in other, real-life cases with the same structure (such as harming-as-side-effect-of-helping), and they are simply activated by the prompts. (This is one reason why the view doesn’t put much stock in brain scans – even if some cases activate emotion-relevant areas of the brain, those particular feelings may play no role in judgment, or play an inessential role.) Nevertheless, the view owes us an explanation about how cases with this sort of structure give rise to the observed verdicts, and for that purpose we may just as well treat trolley cases as if they were the ones giving rise to the sentiments.

The first step for a sentimentalist response to the UMG challenge is to notice that there is much in UMG that can simply be shamelessly stolen. A key part of the argument for UMG is that the trolley cases show that the moral stimulus is too poor to account for the difference in our reactions. We first need to analyze the action and its effects in terms of temporal and causal structure, the benefits and harms it involves for the individuals involved (which Mikhail misleadingly labels ‘moral structure’), and intentional structure (sorting out the end, means, and side effects among the consequences the basic action generates). Only then do we get to apply deontic rules like the prima facie prohibition of homicide and the Doctrine of Double Effect (DDE). But as critics like Jesse Prinz (in the Sinnott-Armstrong volume) have pointed out, there is nothing about the structural analysis of action that would be specific to morality. It is rather part of garden variety mindreading. As such, it doesn’t support even the weakest version of the moral grammar hypothesis, much less UMG. So regardless of one’s view of the genesis of moral judgment, anyone is free to make use of Mikhail’s beautiful and precise tree diagrams in her account.

Any view that accords reactive emotions a pride of place in moral judging should thus focus not on emotional reactions to bare cases of throwing the switch or pushing the man, but rather on reactions to, for example, intentionally touching one and thereby intentionally committing battery and intentionally throwing one and thereby intentionally committing battery and intentionally causing train to hit one and thereby intentionally committing battery and thereby both knowingly killing one and intentionally saving five (Footbridge). Our own emotional reactions, which are the immediate cause of (or constitutive of) moral judgment, result from considering (perhaps by way of simulating) the hypothetical reactions of someone being intentionally thrown off a bridge to save five, for example.
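The chain of act descriptions above can be made explicit. Here is a toy encoding of the Footbridge act tree in Python (entirely my own illustration, not Mikhail’s formalism; the `node` fields and the `counts_of` helper are invented for the sketch):

```python
# Toy act tree: each node pairs an act description with its intentional mode
# and lists the acts it thereby generates. All field names are invented here,
# not Mikhail's notation.

def node(act, mode, children=()):
    return {"act": act, "mode": mode, "children": list(children)}

# Footbridge, following the chain of 'thereby's in the text:
footbridge = node("touching the one", "intentional", [
    node("committing battery", "intentional"),
    node("throwing the one", "intentional", [
        node("committing battery", "intentional"),
        node("causing the train to hit the one", "intentional", [
            node("committing battery", "intentional", [
                node("killing the one", "knowing"),
                node("saving the five", "intentional"),
            ]),
        ]),
    ]),
])

def counts_of(tree, act):
    """Count how many times an act type occurs anywhere in the tree."""
    return int(tree["act"] == act) + sum(counts_of(c, act) for c in tree["children"])

print(counts_of(footbridge, "committing battery"))  # 3
```

On this encoding one can, for instance, compare cases by how many counts of battery they involve, which is one dimension along which the trolley variants differ.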

What is the nature of such hypothetical reactions? My starting point is Adam Smith’s account in The Theory of Moral Sentiments. Here’s one of his formulations:

He, therefore, appears to deserve reward, who, to some person or persons, is the natural object of a gratitude which every human heart is disposed to beat time to, and thereby applaud: and he, on the other hand, appears to deserve punishment, who in the same manner is to some person or persons the natural object of a resentment which the breast of every reasonable man is ready to adopt and sympathize with. (The Theory of Moral Sentiments, I.ii.1.2)

(Let’s ignore the gap between deserving punishment and doing something morally impermissible.) The Smithian sentimentalist explanation of our judgments in Switch and Footbridge, in brief, is then that the agent in Switch who turns the trolley is not, to the one person consequently killed, “the natural object of a resentment which the breast of every reasonable man is ready to adopt and sympathize with”. By contrast, the agent who pushes the fat man in Footbridge would be resented by any reasonable and informed person.

A more detailed story must begin with non-moral reactive attitudes, for simplicity just non-moral resentment or anger (anger that doesn’t presuppose a moral judgment about the action or agent). What do the things that make us non-morally angry have in common? We get angry when we’re deliberately hurt, ignored, ridiculed, or played for a fool, and when unwanted burdens are imposed upon us. All these things, to be sure, can sometimes be excused. What unifies them seems to me to be that we perceive that our plans and interests don’t count for as much as we expect in the agent’s decision-making. Consequently, we are treated in a way that doesn’t match our sense of agential self-worth. This analysis is supported by the fact that people who are so beaten down that they expect to be ignored don’t get angry when they are. Further, excuses typically take the form of showing that our plans and interests were, after all, taken into account by the agent. To keep the story short, I will just say that the natural object of anger is action that fails to take one’s own agency into account as expected. Paradigm examples of this sort of action include battery (unwanted harmful touching) and (attempted) homicide.

The sentimentalist says that in (canonical) moral judging, we consider whether we would react with gratitude or resentment (anger) to the action performed. But the question isn’t whether I, as I am, would resent being hit by a trolley, say. Nor is it about the actual or likely reactions of a real or fictional character. Rather, as Smith notes, it is about the natural reactions of “every reasonable man” in the position. I will cash out ‘reasonable’ here in a Scanlonian spirit. (Given that Scanlon’s contractualism is a patient-focused view that establishes the acceptability of moral principles by way of considering whether they could be reasonably rejected by the individuals affected, and balancing the strength of their objections, there are many parallels between it and Smithian sentimentalism.) An agent is reasonable when she modifies her expectations of how she is treated in the light of the benefits and burdens that alternative actions would impose on others. Exactly how much one must modify one’s expectations to count as reasonable is hard, if not impossible, to define precisely. (This is a case of constructive ambiguity, to borrow Kissinger’s phrase.) But we have a sufficient understanding of what it amounts to. Everyone can agree that a reasonable agent does not expect to receive another piece of cake when giving it to her would impose the burden of starvation on another. However, being reasonable does not mean being utterly neutral – it is not unreasonable to expect others not to cut off one’s hand, even if it’s in order to save five other people’s hands from being cut off.

In addition, the sentimentalist says that when we consider the reasonable person’s reactions, we make use of what we consider to be the facts of the situation, including those concerning the agent’s intentions. We do not abstract from facts that we as judges know, even if the patients of the action in the target situation are unaware of them. In real life situations, this aspect of moral judging disposes us to find out how things are before passing moral judgment, and discount judgments that are made in ignorance.

For clarity, here’s a boxological version of the sentimentalist view I propose (if I can figure out how to do it, I’ll add an actual picture later):

Stimulus -> Action analysis -> Simulating the reactive attitudes of a reasonable and informed person -> Moral sentiment -> Moral judgment
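The boxology can also be rendered as composed functions. The following Python sketch is purely illustrative (every function name and return value is my own invention, and the ‘simulation’ box is a canned two-case lookup rather than anything like a real simulation):

```python
# Each box in the pipeline becomes a function; moral judgment is their
# composition. All names and return values are invented for illustration.

def action_analysis(stimulus):
    """Parse the stimulus into causal and intentional structure (stub)."""
    return {"case": stimulus, "analyzed": True}

def simulate_reasonable_person(analysis):
    """Simulate the reactive attitudes of a reasonable, informed person
    in the patients' positions (stub: a canned lookup for two cases)."""
    return {"Switch": "no resentment", "Footbridge": "resentment"}[analysis["case"]]

def moral_sentiment(reaction):
    """Turn the simulated reaction into a sentiment of (dis)approval."""
    return "disapproval" if reaction == "resentment" else "approval"

def moral_judgment(sentiment):
    """The sentiment amounts to, or causes, a permissibility judgment."""
    return "permissible" if sentiment == "approval" else "impermissible"

def judge(stimulus):
    return moral_judgment(
        moral_sentiment(simulate_reasonable_person(action_analysis(stimulus)))
    )

print(judge("Switch"))      # permissible
print(judge("Footbridge"))  # impermissible
```

The point of the sketch is only the ordering of the boxes: simulation of the reasonable person’s reactive attitudes comes between action analysis and sentiment, which is where this view differs from simple affectivism.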

So here’s how the story goes in trolley cases if sentimentalism along the lines I’ve sketched (and elaborated elsewhere) is correct, and the subject isn’t relying on previously internalized rules. Suppose I am the naïve subject presented with the Switch case. I automatically analyze the action of the agent into what is caused by what, what are the intended means and aim, and what are the known side effects. I then focus on those affected by the action, the One and the Five, and consider what the natural reaction of a reasonable person, aware of the agent’s intentions etc. and the circumstances, would be in their place to the proposed action and its salient alternatives (here not hitting the switch). (On my favourite version, this is a matter of simulating the patients’ reaction in the guise of a reasonable person, a process that involves the off-line use of one’s own emotional response system.) I find that were the trolley redirected to the side track, a reasonable person aware of what is at stake for the Five and the fact that his death is not intended by the agent would not naturally resent the agent (because he wouldn’t expect his plans and interests to be given more weight than they are in the situation), whereas a reasonable person aware of the facts in the shoes of the Five would naturally feel gratitude for the action. So overall, a reasonable and informed person would sympathize with this choice.

By contrast, (as a result of such simulation) I find that were the trolley not redirected, a reasonable and informed person in the place of the Five would naturally feel resentment, and in the place of the One would not dance for joy (since he would be aware of the burden his survival places on the Five). So such a person would not, overall, sympathize with failing to hit the switch. Putting my take on how a reasonable person would react to the alternative actions together, I emerge with the moral sentiment of approval of hitting the switch. This either amounts to or causes a moral permissibility judgment, depending on the metaethics that we adopt (there are both cognitivist and non-cognitivist varieties of sentimentalism).

In Footbridge, the story is different. Being intentionally pushed off the bridge against one’s will to stop the trolley from hitting others would naturally arouse the resentment of a reasonable person aware of the facts. Being used as mere means, even for an end that benefits others, is a fundamental violation of our expectation of being treated as an agent possessing a will and interests, and as such a natural object of resentment even for a reasonable person. (Perhaps this becomes unreasonable when the number of people who would otherwise be saved grows to a million, say – but at this point our verdict about permissibility may change, too.) And so is, of course, simply being pushed against our will. So, in short, the typical subject will experience a sentiment of moral disapproval against pushing the fat man in Footbridge. Some people might think, to be sure, that a reasonable person would not resist being pushed in these circumstances. They regard such preference for self as a bias that should be eliminated in aspiring for a moral perspective, and make the utilitarian judgment.

(Thus the psychological difference between deontological and utilitarian views comes down to how much self-sacrifice is considered reasonable to expect of one. Note that, contrary to Greene’s allegations, neither view is more rational than the other (though numerical reasoning will naturally play more of a role for someone who discounts agency-related objections), nor are any of the emotions involved in the canonical process fickle flashes.)

How about Mikhail’s further cases, which pose challenges to agent-focused affectivist views like Greene’s? In Loop Track, the man on the side track is being used as a means to save the Five, but no personal touching is involved. The same considerations about the natural object of resentment apply as in Footbridge. Still, since no pushing or falling is involved – in Mikhail’s terms, there are fewer counts of battery in the case – a reasonable person wouldn’t be as strongly resentful as in Footbridge, so we would expect more people to switch to the side that sympathizes more with the demand of the Five. The symmetrical position of the One and the Five is also more salient. After all, if the switch isn’t hit, it will be the death of the Five that will presumably save the One (that is the difference that an extra bit of track will make), so it wouldn’t be outrageous for the Five to complain that inaction leads to them being used as means. The case is thus far less clear-cut in Smithian sentimentalist terms. Philosophers’ intuitions are split – Thomson originally presented it as a counterexample to DDE, while others disagree (Kamm, for example, introduces what she calls the ‘Doctrine of the Triple Effect’ to handle the case). And the folk are similarly divided: according to Hauser 2006, 128, half the subjects say flipping the switch is permissible, and Mikhail’s diagram (2009, 47) has slightly less than half permitting it.

In Man-In-Front, the man’s death is a side effect of diverting the trolley onto a loop track with a heavy object on it. The reasonable person in his shoes can’t strongly complain that his plans and interests are ignored by the choice to divert the trolley. Nevertheless, while his death is not an intended means of saving the Five, his being hit by the train is located in the act tree as a step in the chain of events that leads to the event that constitutes the saving, namely the trolley hitting the heavy object on the loop track. It is thus not quite as pure a side effect as in the original Switch case. Insofar as subjects recognize this in their automatic analysis of the structure of the action, they may take it to be reasonable to resent the agent slightly more in Man-In-Front than in Switch. And this is indeed borne out by the surveys: 75% say it is permissible (Hauser 2006, 128).
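One way to see that the sentimentalist factors discussed so far suffice to order the four cases is a toy scoring model. This is entirely my own invention (neither Mikhail nor the surveys commit us to these factors or weights): resentment on a reasonable person’s behalf grows when the One’s being hit lies on the causal chain to the saving, when it is an intended means, and when pushing is involved.

```python
# Invented toy model: score the strength of a reasonable person's objection
# in the One's position from three factors discussed in the text. The weights
# are arbitrary; only the resulting ordering matters.

def resentment(on_causal_chain, as_means, pushed):
    return 1 * on_causal_chain + 2 * as_means + 3 * pushed

scores = {
    "Switch":       resentment(False, False, False),
    "Man-In-Front": resentment(True,  False, False),  # hit is a step toward the saving
    "Loop Track":   resentment(True,  True,  False),
    "Footbridge":   resentment(True,  True,  True),
}

# Rank from least to most objectionable; this matches the reported pattern:
# Switch most widely permitted, Footbridge least.
ranked = sorted(scores, key=scores.get)
print(ranked)  # ['Switch', 'Man-In-Front', 'Loop Track', 'Footbridge']
```

Nothing hangs on the particular numbers; the sketch only shows that patient-focused objection-strength, fed by the structural pre-analysis, can reproduce the observed ordering without innate deontic rules.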

So, in short, a patient-focused sentimentalism that highlights the strength of a reasonable person’s objections (cashed out as negative emotional reactions) to alternative actions, pre-analyzed in terms of intentional structure and burdens and benefits, seems to be able to handle the crucial cases, and should thus be a serious contender in terms of descriptive adequacy. As a psychological account, it assumes that we’re capable of doing a lot of tacit emotional processing. But it does seem we’re well capable of such a thing – consider our ability to keep track of a large host of fictional characters in TV shows like The Wire and even predict how they would feel in various contingencies. Further, some of the emotional processing may be conscious, as we recognize when we instruct each other to answer moral problems by saying things like “Think how you’d feel in her shoes!”.

A clear advantage this account has is that it has no need to assume that DDE or harm-prohibiting and help-favouring principles are innate. Rather, they are explained by the fact that sympathizing with the reactions of an informed and reasonable participant to the potential actions in the relevant circumstances gives rise to moral approval or disapproval. These attitudes to individual cases, in turn, give rise to standing dispositions to react to similar situations similarly – dispositions whose content can be captured, to an extent at least, in terms of deontic rules of the sort that Mikhail introduces. (Sentimentalism need not be committed to our being able to capture ordinary morality in computable rules, however, insofar as our emotional reactions aren’t. This gives it an advantage when it comes to accounting for thick concepts and other messy elements of morality, as I hope to argue elsewhere.)

What is more, the sentimentalist account provides a recipe for generating new rules on a non-ad hoc basis. Consider the following variation, for example:

Comeuppance
Neil is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Neil sees what has happened: there’s a bullet hole in the front window of the train, and the driver is dead from a wound in the head. A large man stands on a footbridge that Neil is about to cross, holding a smoking gun and sporting a grin. The train is now rushing toward five men walking across the tracks. It is moving so fast that they will not be able to get off the track in time. However, as Neil has now arrived at the footbridge, he realizes that the large man who shot the driver and who is leaning over the guardrail to witness the death of his five victims could be pushed onto the track, thereby preventing the train from killing the men. Neil can throw the man, killing him; or he can refrain from doing this, letting the five die. Is it morally permissible for Neil to throw the man?

In Comeuppance, I believe, the intuitive reaction is to judge that it is morally permissible to push the would-be killer on the tracks. The sentimentalist explains this by saying that a reasonable and informed person would not sympathize with the objection of the fat man, in the circumstances – it’s hard to sympathize with someone hoist by his own petard. I can’t see that this would be predicted by Mikhail’s rules, complex as they are. Of course, another rule could be added to cover this case, and others like it. We could then further modify the case – perhaps the killer is the only survivor of a large Roma family, and the driver and the five men are the Nazis responsible for the extermination – and again, perhaps, reverse the intuition. Again, the rules could be modified and multiplied. But what rationale would there be for that, other than the sentimentalist one?

The general lesson is that whenever we explain reactions to particular cases by appeal to rules, we face the further explanatory question of why the rules are the way they are. And it will not do to just appeal to evolution. That simply raises the further question of why such rules would have evolved, and also faces the burden of explaining why the rules would be hardwired, if we get the same reproductive benefits as a result of extending natural emotional reactions by natural sympathy.


Comments

  1. Posted by James Beebe | October 6, 2009 9:02 pm

    I’m curious to know what sources you are citing in this paper.

  2. Posted by Bryce Huebner | October 6, 2009 10:39 pm

    Hey Antti:

    Nice to see you dabbling in this stuff!

    You say of UMG that it is “an expensive hypothesis, in evolutionary terms: even if we accept massive modularity, evolution doesn’t throw up dedicated modules when the same job can be done by existing means.”

    This claim sounds a bit odd to my ears, especially since the architectural model suggested by Hauser is a minimalist model that requires nothing more than the evolution of an interface between existing systems (that could have been selected for other means). Perhaps something is lost across the translation from the cognitive science to philosophy vis-a-vis the concept of modularity; but it strikes me that the intuitively plausible version of your claim suggests that it would be evolutionarily costly to construct a new system from scratch (on this point, Herbert Simon nails it brilliantly in his discussion of the architecture of complex systems–it’s in The Sciences of the Artificial). But, I see no reason why the moral grammarian should, or needs to be committed to anything that is nearly this strong. What must be the case, from this perspective, is that moral cognition is relatively encapsulated and relatively domain specific–that is, there must be a dedicated architecture that is triggered by ‘morally significant’ inputs, reflexively executes computations on the basis of some sort of computational rules, and then spits out relatively stable patterns of judgments. Of course, modularity (like moral grammar) comes in varying strengths; but, if I were you I would have a look at the recent Hauser paper (Nature, 2009) where he discusses this issue. (If I’m remembering correctly, I think that there is a paper with Liane Young on interfaces and what not…maybe a year or two ago but I don’t remember where off the top of my head)

    Two other quick things: 1) I think that once one sees what the architecture is supposed to look like, Prinz’s argument against UMG starts to look much less troubling as well; and 2) I have a hard time seeing how the boxology that you suggest is much different from a moral grammarian’s view–it seems to me that the question might be where to draw the boundaries around the moral system, but I’m not sure.

    Oh yeah, and I think I have data on a case like your comeuppance case, but I have to think about whether they really are as structurally similar as they now seem to me…

    Nice read…

  3. Posted by Tim Dean | October 10, 2009 3:36 am

    Thought provoking post. Although I’m not convinced your version of sentimentalism achieves its goal of being more parsimonious than UMG.

    The way in which you appeal to reason within sentimentalism seems to me to be calling on some fairly sophisticated and ‘costly’ cognitive mechanisms that may be unnecessary to explain the moral judgement. Such as:

    and consider what the natural reaction of a reasonable person, aware of the agent’s intentions etc. and the circumstances, would be in their place to the proposed action and its salient alternatives

    That sounds like it could escalate into an incredibly taxing process depending on the circumstances and contingencies at play. Perhaps it can be done unconsciously, but is there evidence that this is the process we use rather than the UMG model?

    I thought one of the great strengths of sentimentalism is that the emotions are heuristics that arise as a result of quick and dirty rules in the UMG and which motivate behaviour, all without calling on our lumbering, carbohydrate-hungry reasoning centres unless there’s a conflict of sentiments to be resolved.

    So, to revise your boxology:

    Stimulus -> Action analysis -> Moral sentiment -> Moral reasoning -> Moral judgment

    With the moral reasoning step optional depending on the clarity of the moral sentiment. Thus, moral sentiments are still central, but the heavy lifting is done by the quick and dirty rules in the UMG rather than more complex unconscious abstract reasoning about an idealised reasonable person.

  4. Posted by Antti Kauppinen | October 10, 2009 8:15 pm

    Thanks for the comments, James and Bryce! Sorry for being slow in responding – I’ve been prepping for a job interview. Anyway, the sources I mention include the following:

    Erica Roedder and Gilbert Harman, “Grammar,” to appear in Empirical Moral Psychology, edited by Doris, Nichols, and Stich, Oxford University Press.

    Mallon, R. (2008). “Reviving Rawls Inside and Out.” In W. Sinnott-Armstrong (ed.), Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 145–155.

    Hauser, Marc (2006), Moral Minds. Harper Collins, New York.

    F.M. Kamm, Intricate Ethics: Rights, Responsibilities, and Permissible Harm, Oxford University Press, 2007

    Jesse Prinz, Resisting the Linguistic Analogy: A Commentary on Hauser, Young, and Cushman (In W. Sinnott-Armstrong (Ed.) (2008). Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity. MIT Press.)

    Bryce, very interesting stuff. I’ve a lot of reading to do on the evolution of cognition before I’m properly armed to evaluate the costs. But at the very least, UMG is committed to the potentially parametrized principles being hardwired (never mind exactly how). If it isn’t, then it gets hard to see what is new about it, except Mikhail’s Chomsky-inspired way of representing the principles. And however the computation is realized in the brain, the innatist has the burden of showing that having hardwired principles is adaptive. If the sentimentalist can explain the same pattern of judgments as a byproduct of other capacities and dispositions, it does seem cheaper to me.

    So the issue is the source of the principles. An additional advantage of sentimentalism is that it provides a recipe for generating new principles for new situations, and also allows for particularist judgments. To be sure, some variation is predicted by the parameters idea, but leaning too hard on that trivializes the notion of there being universal principles (as Jesse notes).

    I’m curious to see your data on new trolley variations. If and when you have a suitable draft, email it to me at a.m.kauppinen (that’s what the Dutch call me) at uva.nl.

  5. Posted by Antti Kauppinen | October 10, 2009 8:30 pm

    Tim, thanks for your comment! I agree that the sentimentalist machinery I sketch is too heavy to be deployed in run-of-the-mill cases. It is part of the hypothesis that those will be handled by internalized rules, or learned affective reactions (in the Aristotelian rather than Haidtian spirit). The reason I introduce the complex and costly sentimentalist process is that I want it to do explanatory work elsewhere – for example, to distinguish moral from other kinds of evaluative judgments. But I also want to show that it predicts the observed pattern of judgments in trolley cases, so that someone who did engage in sentimentalist simulation would arrive at the common intuitions. This would buttress the case that whatever unconscious rules or affective dispositions are at work behind people’s actual judgments, their ultimate source is in the complex sentimental process. As I put it in a forthcoming paper, judgments resulting from these other processes are asymmetrically dependent on sentimentalist simulation. (I allow for social asymmetric dependence, which is how I suggest we can understand low-functioning autists’ moral judgments – they cotton on to rules for which others have a sentimental rationale without having one themselves, which explains why their application is often described as ‘wooden’ or rigid.)

  6. Posted by Tim Dean | October 11, 2009 12:27 am

    I think I see what you’re saying, Antti (although I’m not sure I understand what you mean by asymmetrical dependence – guess I have to read your upcoming paper!). But I still wonder whether the complex and costly sentimentalist process is necessary to explain everything you want to.

    My current opinion is that, cognitively, there isn’t much difference between regular non-moral decision making processes and moral ones. The latter adds a feeling of universalisability and non-negotiability to the judgement, but otherwise they operate in much the same way. (In fact, I think one of the jobs of a UMG is to determine which scenes are moral or non-moral before it applies things like the principle of double effect – so we might have a universal perceptual grammar and a UMG.)

    As for low-functioning autists, the boxology I proposed might also explain their behaviour. The UMG functions differently in them, for example attributing intentionality differently due to their different Theory of Mind, and their emotional responses are attenuated. As a result, the moral sentiment is often absent, aberrant or weak (psychopathy might be another related example). However, moral reasoning and the ability to reflect on learned moral principles aren’t necessarily degraded. In fact, outside of moral issues, individuals with ASD often employ effortful reasoning to compensate for diminished or absent intuitions, such as in social situations.

    Still, I am sympathetic to the sentimentalist approach; I just think we’re a long way from understanding the details of the actual cognitive processes that underlie it and its relation, if any, to a moral grammar.

    I have some more thoughts on the UMG, written about a year ago and due for an update, on my site, here.
