Draft version to appear in: Caruso, G. (ed.), Exploring the Illusion of Free
Will and Moral Responsibility, Rowman & Littlefield.
Free Will, an Illusion?
An Answer from a Pragmatic Sentimentalist Point of View
Maureen Sie
Introduction
According to some people, diverse findings in the cognitive and neurosciences suggest
that free will is an illusion: We experience ourselves as agents, but in fact our brains decide,
initiate, and judge before ‘we’ do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason
1983). Others have replied that the distinction between ‘us’ and ‘our brains’ makes no sense
(e.g., Dennett 2003)1 or that scientists misperceive the conceptual relations that hold between
free will and responsibility (Roskies 2006). Many others regard the neuro-scientific findings
as irrelevant to their views on free will. They do not believe that determinist processes are
incompatible with free will to begin with, hence, do not understand why deterministic
processes in our brain would be (see Sie and Wouters 2008, 2010). That latter response
should be understood against the background of the philosophical free will discussion. In
philosophy, free will is traditionally approached as a metaphysical problem, one that needs to
be dealt with in order to discuss the legitimacy of our practices of responsibility. The
emergence of our moral practices is seen as a result of the assumption that we possess free
will (or some capacity associated with it) and the main question discussed is whether that
assumption is compatible with determinism.2 In this chapter we want to steer clear of this
'metaphysical' discussion.
The question we are interested in in this chapter is whether the above-mentioned
scientific findings are relevant to our use of the concept of free will when that concept is
approached from a different angle. We call this different angle the 'pragmatic sentimentalist'
approach to free will (hereafter the PS-approach).3 This approach can be traced back to Peter
F. Strawson’s influential essay “Freedom and Resentment”(Strawson 1962).4 Contrary to the
metaphysical approach, the PS-approach does not understand free will as a concept that
somehow precedes our moral practices. Rather it is assumed that everyday talk of free will
naturally arises in a practice that is characterized by certain reactive attitudes that we take
towards one another. This is why it is called 'sentimentalist.' In this approach, the practical
purposes of the concept of free will take center stage. This is why it is called 'pragmatic.'
The structure of the chapter is as follows. First, we explain the social function of moral
responsibility that is at the core of the PS approach. The practice of responsibility arises
because we work together to reach our goals, depend on others to satisfy our needs, and
because it is not always clear how to behave as part of that attempt. Also,
we might differ in our expectations and ideas concerning how best to behave as part of that
attempt. The continuous exchange of reasons and the adaptation of our behavior on the basis
of that exchange are crucial to this process of coordination.
Secondly, we explain how this exchange gives rise to a so-called space of reasons. The
practice of responsibility serves both to adjust our normative expectations and our behavior to
what the space of reasons requires and to adjust the space of reasons in the light of the
difficulties we stumble upon in our social life (such as differences in normative expectations
and differences in the ability to adjust our behavior in the light of reasons). We argue that
participation in this practice requires people to be able to ‘locate themselves in the space of
reasons.’5 Given the social function of the practice, however, it only makes sense to blame
people who do not conform to our normative expectations when they are able to adjust their
behavior in the light of reasons, not when they are unable to do so. Therefore, we have to
differentiate between cases of not responding to reasons that warrant a response with the
moral reactive attitudes of blame, resentment, and indignation and cases that do not. It
is here, we argue, that the concept of free will finds its natural application.
Thirdly, we turn our attention to the scientific findings of recent decades that to us seem
relevant to free will (understood from the PS approach). We concentrate on those findings
that show that we (1) lack agential transparency, i.e. immediate and infallible introspective
access to the motivational origin of our actions and that, as a result, (2) we are sometimes
'mistaken' in our understanding of our own actions. We are mistaken in the sense that we do
not always know why we acted as we did or do not know the full story. That finding, we
subsequently argue, is relevant to free will when the reasons we exchange to explain
ourselves and understand others (a) fail to cite the causes that the sciences suggest to be
efficacious or (b) structurally cite reasons the sciences show not to be of influence. With
respect to (a), research on the influence of cognitive biases, stereotypes, and prejudices is used
as an example. With respect to (b), the situationist literature that denies the existence of
virtues is illustrative. Another illustration of (b) is a cluster of experiments that suggest that
we are primarily concerned with the wish to appear moral. If correct, so we argue, this
body of research should lead to serious reconsideration of the occasions on which we claim,
e.g., “to act on the basis of reasons and out of our own free will,” “that our own morally
wrong actions were not performed for reasons (not performed out of our own free will),” and
“that other people's morally wrong actions were freely willed and performed for reasons.”
We also explain why, given the social function of moral responsibility ascriptions and
the role of the concept of free will as understood in this chapter, the claim that it is an illusion
makes no sense.6 Every society needs means to coordinate behavior. In our Western society
we allow people to coordinate a great deal of their shared lives within limits set by the law.
That is, we delegate large parts of the regulation of our interpersonal affairs to the moral
realm: To the realm in which people come to an understanding of what should and what
should not be done by exchanging reasons, in just the way set out in the first two sections of
this chapter. This requires that we put great trust in people’s ability to figure out what is
expected from them in a variety of situations, and in their willingness and ability to do what
is expected out of their own free will. The scientific findings discussed in this chapter, though
practically relevant and fascinating, provide no reason to think this trust misplaced.
§ 1 The Social Function of Moral Responsibility Ascriptions
Whenever people work together to reach their goals they need a way to coordinate their
behavior.7 Many goals can be reached only by organized activity. Often, the people involved
in a shared project differ in their wishes and ideas about how to carry out the project and in
their capacities to contribute to it. As a consequence they need to communicate with each
other during the project as well as to agree on a division of labor. If we want to build a
house we need to meet several technical requirements (e.g., ensuring a stable construction),
and we need to reach an agreement on the design of the house and the division of duties.
Different people engaged in building the house will have different capacities (one might be
good at laying bricks, another at painting), but also different interests (one needs the money,
another will be the owner). Hence, they will also have different expectations about what is
required for a good house, how much time and energy everyone should invest, and so on.
The coordination of shared practices has two important but distinct aspects. First, we
must decide what is to be done (the plan of the house and the way to build it) and what is
expected from the different people involved. We shall call the expectations about how people
should behave ‘normative expectations’: expectations about what should be done by whom,
why, and when (and sometimes also about what should not be done). Second, it must be
ensured that people behave as expected, that they keep their end of the deal, so to speak. Let
us call these aspects the ‘design-aspect’ and the ‘process-aspect’ of our shared coordinative
practices.
Several practices help us to realize those two aspects (obedience to authority, tradition,
voting procedures). One important way to get people to do what is expected of
them is to make clear to them what that is. As we will explain in the next
section, it is an important feature of our Western society that making clear to people what is
expected of them, and explaining why, suffices to get them to behave accordingly. Other ways
to get people to behave in ways we would like them to are conditioning and
manipulation. When people fail to do what is expected we can punish them, just as we can
reward them for doing (or exceeding) what is expected. Both punishments and rewards function
as part of conditioning processes, albeit in different ways. However, typical of our Western
society is that we put great trust in people’s ability to figure out what is expected from them
in a variety of situations, and in their willingness and ability to do what is expected out of
their 'own free will.' That is, without having recourse to institutional force, violence,
manipulation, or other such measures.
This means that when people’s normative expectations differ, we try to bring them into
line by discussing which expectations are reasonable. Out of such discussions evolves what
we might call a space of reasons: Interconnected views of what we should do (and not do) in
what situations and under what conditions. The moral sentiments of blame, praise,
resentment, gratitude and moral indignation function as an intermediary between the space of
reasons, our normative expectations, and our actions. Let us consider an example to
understand how this practice, the dynamics of responsibility, works.
Some time ago my colleague, Arno, left his bicycle pump in the
corner of the café where he had lunch. He wanted to have his hands
free on his after-lunch walk and the barman of the café promised to
keep an eye on the pump. When he came back, the pump had been stolen.
He blamed the barman for being inattentive. The barman told him
that he had allowed a customer to use the pump, but she had not
brought it back and he promised to call her to account at her next
visit. A week later, a friendly young woman rang at Arno’s door,
handed over his pump and apologized for having taken it. She had
assumed that the owner of the pump had forgotten it and would not
miss it. Hence, she had taken it with her to pump up her punctured
tire a second time on her way home.8
This example illustrates the occasions when we blame, the immediate function of
blaming, and its effects. We blame people when they do not behave as we think they should.
In the resulting exchange, both parties explain how they think the blamed behavior fits into a
shared space of reasons and they might come to share a common view of what happened,
what should have been done, and the proper response. Arno expected the barman to take care
of the pump and blamed him because it seemed that the barman had not fulfilled that expectation.
The barman explained his behavior by giving reasons. In his view he did nothing wrong: He
let someone use the pump in the expectation that she would bring it back. It is the woman
who is to be blamed. Arno accepted the barman's interpretation and explanation and, subsequently,
changed his view of what the barman should have done and accepted his offer to call the
woman to account. In the exchange between the woman and the barman, it is the blamed
person who changes her interpretation of what happened, agrees that she should not have
taken the pump and offers to return it. Finally, in the exchange between Arno and the woman,
Arno changes his view of what happened again and accepts that the woman is not to blame
for theft, but for having a wrong view of what one can do with objects one has permission
to use.
What happens in this example is what we regard as the proper function of ‘ascriptions’
of responsibility. Strictly speaking ‘ascriptions’ is not the right word to use. We do not
engage in any theoretical enterprise when we are involved in such everyday interactions.
What the example illustrates is what it means to interact with others as responsible
individuals. We see these interactions as moral because the sentiments involved in them are
of a moral nature. We believe with Strawson that these reactive attitudes are a crucial and
indispensable part of our interpersonal relationships (Strawson 1962), at least in our
contemporary Western society.9 They are what constitute our everyday practices of
responsibility (cf., e.g., Wallace 1994; Scanlon 1998). Now let us turn to the relation between
these everyday practices, the space of reasons they give rise to, and the concept of free will.
§ 2 The Space of Reasons and The Concept of Free Will
The pump-example shows how we bring our normative expectations and actions into
harmony by blaming people when they do not conform to what we expect and exchanging
reasons for our interpretations of what happened and what should be done in response. The
exchange is not always as harmonious as we sketched here. We sometimes disagree on what
can be expected from whom and on which occasions.10 However, even if we fail to settle
upon a shared set of expectations we often find ways to deal with the situation. We might
agree to disagree and each go our own way. We might even take institutional measures (hand
over the case to the law authorities), and by doing so, try to force one another to accept
behavior in accordance with what we believe proper. This brings us to the second aspect of
our ascriptions of responsibility, the process-aspect: Sometimes we need to get others to do
what we believe should be done.
To prevent recourse to institutions, force, or even violence, exchanging reasons and
seeking common ground is a good way to coordinate our shared practices.11 We see this as
an important assumption of our contemporary Western society. When people do what they do
‘out of their own free will,’ because they see how it fits into a network of reasons they can
identify with (that they experience as their own reasons), coordination of shared activities
becomes much easier. Let us return to our mundane example to argue our case. The exchange
of reasons in the harmonious scenario of the pump example profits from the fact that each
participant in the exchange feels responsible for her or his actions. It is because the barman
feels responsible for the loss of the pump that he proposes to call the woman to account. It is
because the woman feels responsible for the shared practice that she brings the pump back.
Identifying with the reasons why what is expected is expected enables people to do what is
expected. When people feel responsible for a shared practice and for their contribution to it
(what they do), they are better motivated to keep an eye on what is expected and to act
accordingly. Hence, with respect to both aspects (design and process) of the coordination of
our shared activities, the space of reasons plays a crucial role. When you can explain to
others why you acted as you did, or can convince them that your reasons were adequate,
this will have an impact on future interactions with those involved. Though more difficult to
describe, the same seems true for more complex shared activities, involving more people,
hence, more interests, individual differences, and so on, as a result leading to a multifaceted
conversation.
In any case, given the social, coordinative, function of our practice of responsibility it
makes no sense to take or hold people responsible for transgressing normative expectations
that could not have been fulfilled. It is silly to resent or blame, for example, a mentally
challenged barman for failing to watch someone's pump when he does not understand such
requests. Were this barman to blame himself, we would reassure him not to worry and explain
that Arno should not have made his request to him. Unless of course the barman’s mental
capacities equip him well to understand and fulfill such requests. In that case, blame is
appropriate, and we would fail to do justice to the barman if we did not involve him in our
shared 'pump-lending practices' and related conversations.
It is crucial to the dynamics of responsibility ascriptions, as explained in the previous
section, that it allows for differences between individuals: differences in capacities, interests,
and degrees of participation. It is by way of the moral sentiments and subsequent exchange of
reasons that we fine-tune our mutual normative expectations: That we figure out what can be
expected when, from whom, and why (Sie 2005). Everyone able to position her or himself in
the space of reasons can participate in these moral practices.12 The concept of free will arises
naturally as part of this so-called positioning process, but does not precede it. It might make
sense to claim lack of free will on the part of our mentally challenged barman, for example,
when his action of lending the pump did not express 'his values' or 'his usual way of acting.'
When he lent the pump because he did not pay attention to what was being asked, telling us
that he did not act out of his own free will might express that he would have done otherwise
if he had paid attention.13 Likewise, the barman might claim the opposite when a customer
tells Arno of the barman's impaired mental capacities. He might say something like: “No, that
is not the reason why I did it! I lent the pump to that other customer out of my own free will.”
With that claim the barman might communicate that he does not believe that what he did was
wrong and does not see the need to make excuses. Or, alternatively, he might not believe that
his impaired mental capacities were the cause of his action and feel guilty. In that case he
uses 'free will' to emphasize that he did something that he now considers to be stupid and
something for which he cannot be excused. That is to say, that sometimes we summon free
will by way of lamentation, to convey that we think apologies and amends are in order, not
excuses or exemptions.
Free will in the PS approach escapes a simple and straightforward definition. It is a
concept that arises naturally when we locate ourselves and others in the space of reasons. We
sometimes use it to draw attention to the reasons we have for acting, sometimes to
communicate that we think our action to be wrong and inexcusable, i.e., that we feel the need
to apologize and make amends. We also sometimes use it to explain that our actions should
be understood against our limited range of options, as in 'I do not live in this neighborhood
out of my own free will' or to excuse ourselves completely as in 'I did not act out of my free
will.' In each and every case free will functions as part of the social function of responsibility
ascriptions. Now let us turn to the scientific findings that some have taken as a reason to
reject the existence of metaphysical free will.
§ 3 What the Sciences tell us about Free Will
§ 3.1 The Scientific Findings
Although we often think and talk about our choices in terms of individual
considerations, cognitive science has established that we often fall prey to all kinds of
cognitive biases without being aware of it. So, for example, we might explain our choice not
to use a particular kind of medicine by citing our belief that the benefits do not outweigh the
risks, or we may explain our choice of a pair of stockings by referring to their softness. In the past
decades, cognitive scientists have shown themselves able to manipulate such choices,
regardless of the individual considerations we tend to cite in everyday life (e.g., Tversky
1981; for a good overview of this literature see Kahneman 2011).14 They have also managed to
identify many situations in which these biases, which serve us well in most circumstances,
cause us to function sub-optimally. Closely related research in social psychology and behavioral
economics suggests that our choices are often influenced by features of our surroundings or
the set-up of a situation or option (for a nice overview see Thaler & Sunstein 2008). With
respect to these cases too, we do not tend to cite these features when asked to explain our
choices (Nisbett & Wilson 1977). Moreover, recent research even suggests that we can be
misled about our choices and will provide reasons even for the choices we did not make (Hall
et al. 2012). Does all of this research not indicate the limits of our ability to position ourselves in
the space of reasons, so central in the previous sections? And what about research in other,
though related, disciplines?
Research in social science also shows that other biases, prejudices and stereotypes can
be activated, triggered, without us being aware of it, and in such a way that they influence our
behavior (see Wilson 2002, Ch.9; Fine 2006, Ch. 8; Kunda and Spencer 2003). One of the
explanations of why stereotypes and prejudices remain effective even after having fallen into
disfavor might be the phenomenon of implicit bias (Kelly and Roedder 2008). Extensive
research has shown that almost all of us are prone to implicit biases15 regardless of our
explicit attitudes with respect to those biases.16 We will come back to these findings below.
Other, perhaps more surprising findings in moral psychology suggest that our moral
judgments are co-determined by features of our surroundings that trigger feelings of disgust
or their opposite, cleanliness (Schnall, Haidt, Clore and Jordan 2008; Schnall, Benton and
Harvey 2008). We know that to look fresh and clean matters in many social situations, for
example when facing trial by jury. Nevertheless, that cleanliness- and disgust-related
circumstances also impact the severity of our moral judgments in a lab setting has taken
many by surprise.17
Many people were also taken by surprise by findings in 'experimental philosophy' or
'survey-philosophy.' These suggest that our judgment on whether an action was done on
purpose or not is influenced by our moral evaluation of the outcome of certain actions: i.e.,
whether we morally like or dislike it (Nadelhoffer 2006).18 This phenomenon might be an
instantiation of the more general outcome bias from which we appear to suffer: the
inclination to use information presently available to evaluate the quality of a decision made in
the past (Baron and Hershey 1988). Neuro-ethical findings, furthermore, seem to
establish the general influence of emotions when we judge moral dilemmas, even when
abstract and highly unrealistic ones such as the trolley cases are involved (Greene 2005;
Greene, Sommerville and Nystrom 2001).19 What is interesting about these findings for the
purposes of this chapter is that they disclose something about the way in which we judge
these dilemmas that was not so clear before the fMRI results. That is, that differences in our
judgments can be explained by our different emotional responses to the dilemmas. For the
findings showed greater20 activation in areas associated with emotion in one set of
dilemmas (those that required the agent to engage in a personal, physical act of killing)
compared to another set (those that only required a distant act of killing). When we make
such judgments, though, we are not especially aware of a difference in emotional responses
to them, or of what this emotional response is exactly related to.
The above is only a small sample of the vast and diverse findings that have become
available in the past decades. Does all of that research taken together not indicate that our
everyday exchange of reasons is a very poor attempt, indeed, to explain why we act as we
do? Clearly much of our mental life is inaccessible to introspection and that should make us
wonder about our everyday success in explaining why we do what we do. To be sure, how
exactly to understand and evaluate the lack of introspective access that all these experiments
bring to the fore is controversial, as is the question whether this inaccessibility is a truly new
discovery and/or contrary to popular belief (Kozuch and Nichols 2011; Sie 2009). What
seems very hard to doubt, though, is the following twofold upshot of this research, i.e., that:
(1) our first-personal interpretation of even very mundane actions occasionally fails to
be fully adequate
(2) without us—the interpreters—realizing this to be the case
Let us refer to the first feature as the phenomenon of ‘subjective misinterpretation,’ to
the second as the phenomenon of ‘agential intransparency.’ When asked why we did
something we sometimes provide answers that are incomplete or even mistaken (subjective
misinterpretation), without hesitation and/or doubt (which suggests agential intransparency).
Hence, even on occasions where we are, or appear to be, confident about our answers to so-
called why questions, we might nevertheless miss important aspects of what made us act as
we did or fail to appreciate the reasons why. We miss important aspects of what made us act
as we did when we fail to acknowledge crucial influences on our actions. We fail to
appreciate the reasons why when our explanation or justification bears little or no relation to
the actions explained.
According to an influential view in cognitive science the phenomena of subjective
misinterpretation and agential intransparency make clear that the reasons we offer as
explanations or justifications of our actions are not introspective reports on the states that
caused these actions (Wegner 2002; Wegner and Wheatley 1999; Hassin, Uleman, Bargh,
2005; Wilson, 2002). Rather we infer or reconstruct those reasons on the basis of so-called
'a-priori'21 causal theories originating from experience and the social environment (Bargh and
Chartrand 1999; Nisbett and Wilson 1977). When asked to explain an action, we determine
which of the possible causes were present at the time of the action and cite that as the reason
for the action.22 If we cannot find a plausible reason, we confabulate one, make one up on the
basis of the available information (Gazzaniga 2005). Causes that escape our attention, causes
that are not easily remembered, and causes that are not within our known range of possible
causes will not be cited.
This view explains elegantly how subjective misinterpretations can occasionally occur
and cause distortions in the space of reasons.23 This in itself need not interfere with the social
function of our exchange of reasons as outlined in the previous sections. After all, when our
a-priori causal theories are basically correct and the reasons we tend to cite more or less
adequate, the overall relation between our reasons and our actions will remain more or less
trustworthy. But can we truly say that this is the case? Let us start with the question whether
the reasons we tend to cite are more or less adequate. We address the question whether our
a-priori theories are basically correct at the end of § 3.2.
With respect to the reasons we tend to cite when explaining our actions, judgments, and
choices, we do not talk about ourselves as prone to, for example, cognitive and other biases.
We do not typically refer to ourselves as risk-averse creatures, for example, whereas our
common preference for a 'sure thing' over a 'favorable gamble' (risk aversion) enables us to
explain many of our sub-optimal choices in all kinds of game-theoretical settings. Nor do we
consider our factual judgments prone to something like the outcome bias introduced above,
even though this bias might distort our view of what people do intentionally and what they
do not. Also, we do not speculate on the efficacy of implicit biases in our judgments with
respect to, for example, a person's suitability for a job or specific task. Chances are, as the
aforementioned Kahneman believes, that the introduction of the vocabulary of cognitive biases
into our everyday conversations would make these conversations more adequate (Kahneman
2011).24 If it did, it would also increase our overall ability to locate ourselves adequately
in the space of reasons. Since that ability, as we argued in the previous sections, is closely
related to our ability to coordinate shared practices, such an improvement would in turn
improve these practices. With respect to the design aspect of our shared practice, for
example, it would help to determine what we can and cannot expect from one another given
that we are prone to certain cognitive and other biases. With respect to the process aspect we
might consider it proper to take certain measures that minimize unwanted behavioral
outcomes and maximize desired ones, also referred to as choice architecture (Thaler &
Sunstein 2008).
Related questions can be raised with respect to other areas of research. What, for example,
should we make of the situationists’ claims of the past decades?25 They argue that our behavior
and actions are primarily the result of the particularities of the situation, not of enduring
moral traits of our individual character. Those enduring moral traits are known among
philosophers as ‘virtues.’ Clearly we often use virtues, or virtue-like language, to explain one
another’s behavior. We say of people that they acted cowardly or arrogantly, that what they
did was dishonest, cruel, mean or brave and kind. And even in more mundane cases, we often
use virtue-like language. Take the example from the first section in which the barman of a café
lends the pump that Arno left in his care to one of his other customers. In the exchange of moral
sentiments that follows the barman’s action, it is easy to imagine that Arno resents the
barman, e.g., for being ‘careless’ or ‘eager to make a good impression on young women’ (such
as the one to whom he lent Arno’s pump). We tend to resent behavior that is indicative of a
morally bad character trait. We understand the person as being guilty of the kind of behavior
one should not make oneself guilty of. According to situationism such explanations are
misguided. Situationists argue that we should explain behavior in terms of the situation, not
by citing traits of agents that would explain their behavior across situations. The reason for
this is that, according to them, empirical studies show, first of all, that it is possible to get
people to behave in ways we (they themselves included) consider morally bad by manipulating their
situation (e.g., Zimbardo 2004; Milgram 1974). Secondly, empirical studies designed to
disclose differences between individuals that could count as evidence of enduring character
traits have all failed to establish them (Harman 2000).26
Another line of research in moral psychology raises questions about our standard
practice of explaining ourselves by citing behavioral standards or moral principles such as
“honesty is important to me,” “we need to do what is just.” According to a series of
experiments run by Daniel Batson et al., we often do not act at all in accordance with such
standards when it is not to our advantage. A much more plausible explanation for the
occasions when we do act in accordance with such standards, they argue, is our wish to
‘appear to be moral’ (Batson 2008; Batson, Thompson, Seuferling, Whitney and Strongman
1999). They call this phenomenon ‘moral hypocrisy’: we act in accordance with
explicated behavioral standards only when that is required to appear to act morally. We, however,
so the researchers suggest, are not aware of that underlying motivation. Moreover, in the right
circumstances, some of us will even deceive ourselves about our motivation to act the way
we do.27
The phenomenon of moral hypocrisy might relate to two well-researched phenomena in
psychology: the asymmetric understanding of the moral nature of our own actions and those
of others (the fundamental attribution error) and the idea that our own actions and
motivations are much more moral than those of the average person (Epley and Dunning 2000). In cases of
other people acting in morally wrong ways we tend to explain those wrongdoings in terms of
the agent’s lack of virtue or morally bad character traits. We focus on those elements that
allow us to blame agents for their moral wrongdoings. On the other hand, in cases where we
ourselves act in morally reprehensible ways we tend to focus on exceptional elements of our
situation, emphasizing the lack of room to do otherwise. This, finally, brings us to the
concept of free will. After all, when we believe that we, or others, lacked the room to act
otherwise, we might typically claim that “we/they did not act out of our/their own free will.”
On the other hand, when responding to morally blameworthy behavior we tend to assume that
the behavior was freely willed. In the next section we explain in what way the scientific
research hitherto discussed is relevant to the space of reasons and our understanding of free
will.
§ 3.2 Free Will, an Illusion?
If the social and moral psychologists presented in the previous sections are right,
we tend to overestimate free will in the case of other people’s wrongdoings and
underestimate it in our own case. The wish to appear moral and our general tendency
to feel holier than thou can be expected to aggravate that error: Both will incline us to take a
favorable view of our own motives, reasons, and actions. That means that we will tend to
explain our moral wrongdoings by appealing to excusing and exonerating circumstances or
by giving an overly sympathetic reading to the reasons and motives for which we acted. We
will also tend to cast our own actions more easily as moral than those of other people. On the
other hand, we will tend to explain other people’s moral wrongdoings as the result of bad
character traits rather than circumstances. In terms of free will:
(1) in the case of wrongdoings we will tend to overestimate other people’s free will
and underestimate our own;
(2) in the case of exemplary and good actions we will tend to overestimate our own
free will and underestimate that of other people.
Keeping this in mind would surely change how we locate ourselves in
the space of reasons. Grosso modo, we claim to act out of our own free will, e.g., when we
(believe we) have good reasons to act as we did.28 Or we claim the opposite, that we did not
act out of our own free will, when we did not act for reasons and were caused by exceptional
circumstances to act as we did. In the latter case we will offer excuses and/or apologies. On
the basis of the social and moral psychology findings we need to be more modest with
respect to the role of behavioral and moral principles and virtues in the explanation of our
own morally adequate functioning. That is, whenever we explain our actions in terms of
moral reasons (citing behavioral or moral principles or virtues) we should be alert to whether
that story is all there is to it. For example, when I explain the fact that I did not lie about some
embarrassing fact about my past behavior, I should not neglect the features of the situation that
enabled me to confess. By the same token, I should be suspicious whenever I readily look for
excuses when explaining my own morally wrongful behavior. It just might be the case that I
am able to improve myself by focusing on what I could have done differently in exactly the
same situation.
Analogously, we need to tone down our explanations of others in terms of their moral
vices and failure to observe moral standards. When people lie and we resent them for it, we
should not neglect the features of the situation that made it hard for them to confess. By the
same token, when we explain away the morally praiseworthy behavior of other people, we
should keep an open mind as to whether they did not actually do something that deserves praise.
As far as the cognitive and other biases, stereotypes, and prejudices are concerned, we
might seriously investigate their role and impact on our functioning and introduce their
possible distorting effects into our everyday conversations (Sie and Van Voorst Vader-Bours,
work in progress). Critically scrutinizing our own and one another's professed reasons might enable
us to get a better grip on ourselves and improve our shared practices.
To be sure, we might also have learned the general lessons that can be abstracted from the
above—know thyself, who is without sin..., and so on—by reading the great classics in
philosophy and literature, the Bible, or by careful attention to common-sense wisdom. But the
scientific findings of the past decades have disclosed not only general lessons but also many
details of our social and cognitive functioning that we can use to improve it, as well as many
fascinating and surprising facts about what influences us and what barely does so. Why
would we not consider how to adjust and improve our self-understanding on particular
occasions and with respect to certain domains of action in light of this huge amount of
scientific work? As argued above, a good way to do so would be to reconsider how we locate
ourselves in the space of reasons.
To conclude, let us briefly address the idea that our a-priori theories about how our
reasons relate to our actions are, as such, mistaken. Is it conceivable that the whole idea that
we can locate ourselves in the space of reasons is misplaced and that, in that sense, whatever
we have to say about our free will or our reasons is misplaced? Partly on the basis of research
discussed in the previous section, Jonathan Haidt and colleagues famously argued that we
should not take our reason-giving practices at face value (Haidt 2001; Haidt and Bjorklund
2008). Although we act and talk as if our reasons and judgments are the result of deliberation,
that is not actually the case. Rather, the reasons we exchange are post-hoc
constructions devised primarily with an eye to influencing the judgments made by other
people. We seek their approval and do not aim to provide true reports on the causal
antecedents of our judgments.29 According to Haidt the actual antecedents of our moral
judgments are gut feelings (which Haidt also calls intuitions), not deliberative processes.
These gut feelings and intuitions are ‘sudden flashes of evaluation’ which are susceptible to
all kinds of influences that escape our attention. According to him, traditional rationalist models
wrongly claim that our moral judgments originate in individual rational deliberation. In
doing so they distort our view of the role and nature of moral evaluation in everyday life,
mainly its social role and non-deliberative (‘emotional’) nature.
In the Western world we leave a large part of the regulation of our interpersonal affairs
to our everyday moral practices, in much the way set out in the second section of this chapter.
Being able to position ourselves in the space of reasons constitutes an important part of this
moral practice. The space of reasons comes into existence through our shared enterprise of
bringing our normative expectations into mutual harmony. Obviously this would not work if
the reasons we exchange collectively missed all relation to the shared practices regulated by
them. Even if we discover that all reasons are constructed post-hoc, this would not matter, as
long as the reasons we exchange in general relate to our practices in trustworthy ways. In
this sense reasons are the 'currency' of our shared coordinative practices: What holds true for
real currency, money, holds true for reasons. That is, it is not obvious or easy to
determine what it represents exactly; it is sometimes counterfeited; it is often worth more or
less depending on the situation; it sometimes buys us more than it should, sometimes less; in
addition, it is used in, and possibly even crucial to, all kinds of immoral and criminal
enterprises. However, money, like reasons, would no longer be exchanged if it lost all value.
That being said, the research listed in this section and the findings on which the SIM relies leave
little room to doubt the phenomena of subjective misinterpretation and agential
intransparency. Hence there is reason to question whether the concept of free will that arises
as part of this process has the application we think it has.
Conclusion
Every society needs means to coordinate behavior, but there are several ways to
accomplish this. In our Western society the value of autonomy plays an important role: we
think it is important for people to act in accordance with what they themselves think is right.
Hence, we allow people to coordinate a great deal of their shared lives within limits set by the
law. That is, we delegate large parts of the regulation of our interpersonal affairs to the moral
realm: To the realm in which people come to an understanding of what should and what
should not be done by exchanging reasons, in just the way explained in the first two sections
of this paper. As we have seen, in this process individual agents take responsibility for their
actions and exchange reasons when those actions transgress other people’s normative
expectations. Hence, we put great trust in people’s ability to figure out what is expected from
them in a variety of situations, and in their willingness and ability to do what is expected out
of their own free will. In this paper we argued that certain scientific findings of recent
decades could throw new light on those ocassions at which we (do not) act out of our own
free will, in this sense. We also concluded that the view that we are unable to locate ourselves
in the space of reasons, makes no sense from the approach to free will endorsed in this
paper.30
REFERENCES
Annas, J. 2005. Comments on John Doris's “Lack of Character.” Philosophy and
Phenomenological Research, 71(3), 636-642.
Ariely, D. 2008. Predictably irrational: The hidden forces that shape our decisions. New
York, NY: HarperCollins.
Bargh, J.A., & Chartrand, T.L. 1999. The unbearable automaticity of being. American
Psychologist (July 54), 462-479.
Baron, J. 2007. Thinking and deciding New York, NY: Cambridge University Press.
Baron, J., Hershey, J. C. 1988. Outcome bias in decision evaluation. Journal of Personality
and Social Psychology, 54, 569-579.
Batson, C. 2008. Moral masquerades: Experimental exploration of the nature of moral
motivation. Phenomenology and the Cognitive Sciences, 7(1), 51-66. doi:
10.1007/s11097-007-9058-y
Batson, C.D., Thompson, E.R., Seuferling, G., Whitney, H., & Strongman, J.A. 1999. Moral
hypocrisy: Appearing moral to oneself without being so. Journal of Personality and
Social Psychology, 77(3), 525-537.
Dennett, D. 2003. Freedom evolves: Penguin.
Dennett, D. C. 1998. Brainchildren: Essays on designing minds. Cambridge, Mass.: MIT Press.
Doris, John M. 2002. Lack of character : Personality and moral behavior. Cambridge, UK ;
New York: Cambridge University Press.
Epley, N., Dunning, D. 2000. Feeling “holier than thou”: Are self-serving assessments
produced by errors in self or social prediction? Journal of Personality and Social
Psychology, 79, 861-875.
Fine, C. 2006. Is the emotional dog wagging its rational tail or chasing it? Philosophical
Explorations, 9(1), 83-98.
Fine, Cordelia. 2006. A mind of its own : How your brain distorts and deceives (1st ed.). New
York: W.W. Norton & Co.
Fischer, J. M. 1994. The metaphysics of free will. Oxford UK/Cambridge USA: Blackwell.
Fischer, J. M., & Ravizza, M. 1992. Responsibility, freedom, and reason. Ethics,
102(January), 368-389.
Fischer, John Martin, & Ravizza, Mark. 1999. Responsibility and control: A theory of moral
responsibility (New ed. ed.). Cambridge: Cambridge University Press.
Foot, P. 1978 [1967]. The problem of abortion and the doctrine of the double effect. In
Virtues and vices, 19-32. doi: 10.1093/0199252866.003.0002
Frankfurt, H. G. 1969. Alternative possibilities and moral responsibility. Journal of
Philosophy, LXVI, no. 23.
Frankfurt, H. G. 1976. Identification and externality. In A. Rorty (Ed.), The identities of
persons: University of California Press.
Frankfurt, H. G. 1987. Identification and wholeheartedness. In F. D. Schoeman (Ed.),
Responsibility, character, and the emotions. New York: Cambridge University Press.
Frankfurt, Harry G. 1971. Freedom of the will and the concept of a person. Journal of
Philosophy, 68(January 14), 5-20.
Frankfurt, H. G. 1973. Coercion and moral responsibility. In T. Honderich (Ed.), Essays
on freedom and action. London: Routledge & Kegan Paul. (Repr. in Frankfurt, H. G. 1988.)
Gazzaniga, Michael S. 2005. The ethical brain. New York: Dana Press.
Greene, J. 2005. Emotion and cognition in moral judgment: Evidence from neuroimaging. In
J.P. Changeux, A.R. Damasio, W. Singer & Y. Christen (Eds.), Neurobiology of
human values. Berlin: Springer-Verlag.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., et al. 2001. An fMRI investigation of emotional
engagement in moral judgment. Science, 293(14), 2105-2108.
Greenwald, A. G., McGhee, D. E., & Schwartz, J. K. L. 1998. Measuring individual
differences in implicit cognition: The implicit association test. Journal of Personality
and Social Psychology, 74, 1464-1480.
Haidt, J. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral
judgment. Psychological Review, 108, 814-834.
Haidt, Jonathan; Bjorklund, Frederik. 2008. Social intuitionists answer six questions about
moral psychology. In W. Sinnott-Armstrong (Ed.), Moral psychology (Vol. 2, pp. 181-
218): MIT.
Hall L, Johansson P, Strandberg T. 2012. Lifting the veil of morality: Choice blindness and
attitude reversals on a self-transforming survey. PLoS ONE 7(9). doi:
10.1371/journal.pone.0045457.
Harman, G. 2000. Moral philosophy meets social psychology. In Explaining value: And other
essays in moral philosophy.
Hassin, R. R., Uleman, J. S., & Bargh, J. A. (Eds.). 2005. The new unconscious. Oxford; New
York: Oxford University Press.
Honderich, T. 1993. How free are you? The determinism problem.: Oxford University Press.
Huebner, B. 2011. Critiquing moral psychology from the inside. Philosophy of the social
sciences (41), pp 50–83.
Jacobson, Daniel. 2008. Does social intuitionism flatter morality or challenge it? In W.
Sinnott-Armstrong (Ed.), Moral psychology (Vol. 2, pp. 219-232): MIT.
Kahneman, D. 2011. Thinking, fast and slow: Farrar, Straus and Giroux.
Kane, R. 1996. The significance of free will. New York: Oxford University Press.
Kelly, Daniel. 2011. Yuck!: The nature and moral significance of disgust (life and mind:
Philosophical issues in biology and psychology). Cambridge Mass. London, England:
MIT.
Kelly, D., & Roedder, E. 2008. Racial cognition and the ethics of implicit bias.
Philosophy Compass, 3(3), 522-540. doi: 10.1111/j.1747-9991.2008.00138.x
Knobe, J. 2003. Intentional action in folk psychology: An experimental investigation.
Philosophical Psychology, 16, 309-324.
Knobe, J. 2004. Intention, intentional action and moral considerations. Analysis, 64(2).
Kozuch, B., Nichols, S. 2011. Awareness of unawareness. Folk psychology and introspective
transparency. Journal of Consciousness Studies, 18(11-12), 135-160.
Libet, B., & Gleason, C.A. et al. 1983. Time of conscious intention to act in relation to onset
of cerebral activity (readiness-potential): The unconscious initiation of a freely
voluntary act. Brain, 106, 623-642.
Milgram, Stanley. 1974. Obedience to authority : An experimental view. New York ; London:
Harper & Row.
Nadelhoffer, Thomas. 2006. Bad acts, blameworthy agents, and intentional actions. Some
problems for juror impartiality. Philosophical Explorations, 9(2).
Nagel, T. 1979. Moral luck. In Mortal questions (pp. 24-38): Cambridge University Press.
Narvaez, Darcia. 2008. The social intuitionist model: Some counter-intuitions. In W. Sinnott-
Armstrong (Ed.), Moral psychology (Vol. 2, pp. 233-240): MIT.
Nisbett, R.E., & Wilson, T.D. 1977. Telling more than we can know: Verbal reports on
mental processes. Psychological Review 84, 231-259.
Pereboom, D. 2001. Living without free will: Cambridge University Press.
Pettit, P., Smith, M. 1996. Freedom in belief and desire. Journal of Philosophy, XCIII(9),
429-449.
Prinz, Jesse J. 2007. The emotional construction of morals: Oxford University Press.
Roskies, A. 2006. Neuroscientific challenges to free will and responsibility. Trends in
Cognitive Sciences, 10(9), 419-423.
Russell, P. 1992. Strawson's way of naturalizing responsibility. Ethics, 102, 287-302.
Sabini, J., & Silver, M. 2005. Lack of character? Situationism critiqued. Ethics, 115, 535-562.
Scanlon, Thomas. 1998. What we owe to each other. Cambridge, Mass.: Belknap Press of
Harvard University Press.
Schnall, Simone, Benton, Jennifer, & Harvey, Sophie. 2008. With a clean conscience:
Cleanliness reduces the severity of moral judgments. Psychological Science, 19(12),
1219-1222.
Schnall, Simone, Haidt, Jonathan, Clore, Gerald L., & Jordan, Alexander H. 2008. Disgust as
embodied moral judgment. Personality and Social Psychology Bulletin, 37(8), 1096-
1109.
Sie, M., & Van Voorst Vader-Bours, N. Work in progress. Stereotypes and prejudices, whose
responsibility? Personal responsibility vis-à-vis implicit bias.
Sie, M. M. S. K. 1998. Goodwill, determinism and justification. In J. Bransen & S. Cuypers
(Eds.), Human action, deliberation and causation (pp. 113-129). Dordrecht: Kluwer
AP.
Sie, M. M. S. K. 2000. Mad, bad, or disagreeing? On moral competence and responsibility.
Philosophical Explorations, III(3), 262-280.
Sie, M. 2005. Justifying blame. Why free will matters and why it does not. Amsterdam-New
York: Rodopi.
Sie, M., & Wouters, A. 2008. The real neuroscientific challenge to free will. Trends in
Cognitive Science, 12(1), 3-4.
Sie, M. 2009. Moral Agency, Conscious Control, and Deliberative Awareness. Inquiry, 52(5),
pp 516 – 531 (DOI: 10.1080/00201740903302642)
Sie, M., & Wouters, A. 2010. The BCN challenge to compatibilist free will and personal
responsibility, Neuroethics, 3, 121-133.
Sie, M. 2012. Moral soulfulness & moral hypocrisy. Is scientific study of moral agency
relevant to ethical reflection? In Christoph Lumer (Ed.), Morality in times of
naturalising the mind. Frankfurt; Paris; Ebikon; Lancaster; New Brunswick: Ontos.
Soon, C. S., Brass, M., Heinze, H.-J., & Haynes, J.-D. 2008. Unconscious determinants of
free decisions in the human brain. Nature Neuroscience, 11(5), 543-545. doi: 10.1038/nn.2112
Strawson, P. 1962. Freedom and resentment. In G. Watson (Ed.), Free will (1992 ed., pp. 59-
81): Oxford University Press.
Thaler, R.H., & Sunstein, C.R. 2008. Nudge: Improving decisions about health, wealth, and
happiness. New Haven, CT: Yale University Press.
Thomson, Judith Jarvis. 1976. Killing, letting die, and the trolley problem. The Monist 59,
204-217
Tversky, Amos, Kahneman, Daniel. 1981. The framing of decisions and the psychology of
choice. Science, 211(4481), 453-458.
Van Inwagen, P. 1975. The incompatibility of free will and determinism. Philosophical
Studies 27, 185-199.
Wallace, R. J. 1994. Responsibility and the moral sentiments: Harvard University Press.
Watson, G. (Ed.). 1982. Free will (7th 1992 ed.): Oxford University Press.
Wegner, D. M., & Wheatley, T. 1999. Apparent mental causation: Sources of the experience
of will. American Psychologist, 54, 480-491.
Wegner, Daniel M. 2002. The illusion of conscious will. Cambridge, Mass.: MIT Press.
Wilson, Timothy D. 2002. Strangers to ourselves: Discovering the adaptive unconscious.
Cambridge, MA: Belknap Press of Harvard University Press.
Wolf, S. 1990. Freedom within reason. New York: Oxford University Press.
Wouters, A. 2011. Vrije wil en verantwoordelijkheid in evolutionair perspectief. In M. Sie
(Ed.), Hoezo vrije wil? (pp. 190-209). Rotterdam: Lemniscaat.
Zimbardo, Philip G. 2004. A situationist perspective on the psychology of evil:
Understanding how good people are transformed into perpetrators. In Arthur Miller
(Ed.), The social psychology of good and evil: Understanding our capacity for kindness
and cruelty. New York: Guilford.
Kunda, Z., & Spencer, S. J. 2003. When do stereotypes come to mind and when do they color
judgment? A goal-based theoretical framework for stereotype activation and
application. Psychological Bulletin, 129(4), 522-544. doi: 10.1037/0033-2909.129.4.522
1. See, for example, his excellent criticism of the Libet experiments in Chapter 8.
2. The debate that subsequently unfolds is that between so-called incompatibilists and
compatibilists. The first group does not believe free will is compatible with the thesis of
determinism (Van Inwagen 1975) and investigates the possibility and actuality of
indeterminist free will (see, for example, Kane 1996). Others within that group discuss
whether free will is required for moral responsibility and investigate what the answer implies
for our current moral practices (see, for example, Honderich 1993; Pereboom 2001). For
many philosophers the kind of free will worth wanting is a free will that makes us morally
responsible (Dennett 1998). The second, by far the largest, group believes that determinism
and free will are compatible or that, at least, the kind of free will worth wanting is compatible
with determinism. Their work explains why this is the case and/or how people are misled into
thinking it is not. There are too many compatibilists of either sort to single out one as
representative of the whole group; see the excellent collection (Watson 1982) for a nice overview.
3. We will not discuss the relation between the pragmatic and the metaphysical approach, but see
(Wolf 1990), introduction, and (Nagel 1979), who both argue that the metaphysical issue of
free will derives from conditions we use to determine moral responsibility in our everyday
practices. Albeit not in these terms, I address this issue in (Sie 1998).
4. This is not to claim that Strawson himself defended a PS-approach to free will. Strawson
claims not even to understand what 'the thesis of determinism' is supposed to mean (Strawson
1962, p. 59).
5. Although the phrase 'locating oneself in the space of reasons' might not sound familiar, the
idea is much in line with views originated by Peter F. Strawson and elaborated on
by, among others, Gary Watson, Susan Wolf, Jay Wallace, Michael Smith, and Philip Pettit.
These authors all focus our attention on our ability to act for and exchange reasons. However, they
present themselves explicitly as compatibilists (Pettit and Smith 1996; Wallace 1994;
Wolf 1990), hence, as contributing to the metaphysical discussion on free will. I explain why
that is unfortunate in Sie 2005.
6. Given a different understanding of the concept of 'free will,' the claim that it is an illusion
might still make sense.
7. This first section and part of the second were originally written together with my postdoc
Arno Wouters for a workshop paper on responsibility in 2010. Unfortunately, due to personal
circumstances we failed to finish a joint version of that paper, but the ideas put forward here
bear the marks of our joint enterprise to do so.
8. This example can also be found in (Wouters 2011).
9. For the purposes of our claim in this paper we treat Strawson's view as a descriptive view.
We are fully aware that it is possible to argue over the exact nature of the claims regarding the
'indispensable nature' of the reactive attitudes made in his paper. See, for example, (Russell 1992).
10. Elsewhere I argue that the need to allow for such normative disagreements is what justifies
blame, not the compatibility of determinism and free will (Sie 2005).
11. There could be several ways in which this practice can be defended as a 'good' one: as more
efficient, most pleasant for all involved, and so on. I take it that most people involved in the
moral practices described in this paper agree with this point, although I also see room to
argue against it.
12. When philosophers claim that it is reasons-responsiveness, rather than counterfactual
freedom (the freedom to do otherwise than one actually did), that matters to moral
responsibility, they might very well have something in mind along the lines of this ability to
'locate ourselves in the space of reasons.' See the aforementioned works of Wolf, Wallace,
Pettit, and Smith. As said, the primary interest of these philosophers, however, is still the
metaphysical discussion on free will, which is why a lot of energy is devoted to working out
the details of how exactly to understand this reasons-responsiveness, see, e.g., (Fischer 1994;
Fischer and Ravizza 1992, 1999).
13. Cf. Harry Frankfurt, who argued extensively against the metaphysical interpretation of the
vocabulary of free will and in favor of an interpretation that links free will to the things we do
because we want to do them (it is our will that we do them). See (Frankfurt 1969; Frankfurt
1973). For his positive account of what we do mean when we say that we did something out
of free will, see his influential paper (Frankfurt 1971).
14. Other books often cited in this respect are: (Ariely 2008; Baron 2007).
15. Introduced in the literature by (Greenwald et al. 1998). For the test itself, see:
https://implicit.harvard.edu/implicit/demo/ (accessed 29/10/2012).
16. Cordelia Fine discusses several findings that suggest that we can correct for these implicit
biases (Fine 2006).
17. Although controversial (e.g., Huebner 2011), this cluster of research has led to serious
reflection on the ethical role and status of disgust. See, for example, (Kelly 2011).
18. It is the work of Joshua Knobe that set off the discussion on the 'moral slot' in our
judgments on intentionality. See (Knobe 2003, 2004).
19. These dilemmas were introduced in moral philosophy by Thomson and Foot (Foot 1978
[1967]; Thomson 1976).
20. One could also argue that the correct interpretation of the fMRI results is not 'greater'
activation but a 'different' one. See, e.g., (Prinz 2007), section 1.2.2.
21. The phrase ‘a-priori causal theories’ derives from Nisbett and Wilson (Nisbett and Wilson
1977). We will stick to that use although the label ‘a-priori’ is a bit confusing, since the
theories are based upon experience.
22. Philosophers, for understandable reasons, are very careful not to conflate 'reasons' with
'causes.' Note, however, that I am here explaining the view of the cognitive scientists, who do
not use these terms in their strict philosophical sense.
23. The originators of this line of thought actually believe that our a-priori causal theories
usually give us quite reliable estimates of the origin of our actions and why they were
performed. See (Nisbett and Wilson 1977; Wegner 2002).
24. Daniel Kahneman lists 'spreading information about our cognitive biases to the wider
audience' as his main reason for popularizing his work.
25. See: (Doris 2002; Harman 2000).
26. The situationist position is of course controversial, and many so-called virtue ethicists have
argued against the situationist claims, especially their normative implications (Sabini and Silver
2005; Annas 2005). Note, however, that our point here focuses not on the normative
implications of situationism but on its descriptive claims. If the situationist literature is right,
we often misdescribe the motives of our actions.
27. I argue that this label might be misleading in (Sie 2012).
28. Grosso modo, that is; as explained in the previous section, in particular situations we might
use 'free will' differently.
29. Many philosophers have objected to the Social Intuitionist Model Haidt proposed as a
replacement for rationalist models of moral judgment, e.g., (Narvaez 2008; Jacobson 2008;
Fine 2006).
30. Thanks are due to Philip Robichaud, Filippo Santoni De Sio, Katrien Schaubroeck, Marion
Smiley, Nicole van Voorst Vader-Bours, Arno Wouters, and the editor of this volume, Gregg
Caruso, for lively discussion and helpful comments. I also thank Myrthe van Nus, Michiel
Wielema, and Gregg Caruso for their editorial work on this chapter.