Do you suffer from Dogbert’s anxiety? Do you feel that, if all your actions are shown to be the predictable outcomes of brain chemistry, then you’ll be missing something deeply important?
I don’t, and that’s because I’m a compatibilist. Compatibilism is a school of thought that runs from Thomas Hobbes in the seventeenth century (and some would say even Aristotle) to Daniel Dennett in our own time. We compatibilists have a robust concept of free will, which is not threatened by biology, or chemistry, or physics, no matter how Dilbertishly deterministic any of them turn out to be. Our philosophy is science-friendly and free-will-friendly at the same time.
In this post I’ll show you how to be a compatibilist, but before I do that I should disclose the costs and benefits. First the benefits:
(a) you’ll be able to sort actions into “free”, “not free”, and “hmm, grey area”, and they’ll land in the same baskets for you as they would intuitively for most people;
(b) you’ll see why it makes sense to praise or blame (and reward or punish) free actions but not others; and
(c) you’ll be able to accept whatever science discovers about brain chemistry etc., even if it turns out to be absolutely deterministic (the Dilbert scenario).
And now the cost:
(d) you’ll have to define “free”, and some related words such as “could”, in a way that some people find strange (or so I’m told).
If you’re resisting (d) already, then please take a moment to articulate why. I’m guessing that it’s either because you think your current definition of “free” (whatever that might be) keeps you in step with ordinary usage, or because you think that actions need to be free in your current sense (whatever that might be) to explain moral responsibility. If it’s either of those, then I understand the concern, but trust me: it will go away as soon as I deliver on my promises of (a) and (b).
Or maybe you just don’t like redefining things: “It’s changing the question!” and all that. I can sympathize here too, because I feel exactly the same way, coming from the other side. I’ve been a compatibilist for all of my adult life; I’m supremely comfortable with it, I find that it fits in seamlessly with common English usage (see (a)), and it guides me in judging moral responsibility (see (b)); so I wouldn’t take kindly to anyone asking me to give up my definition of “free” for some other definition at this late stage.
So we’re at an impasse, but fortunately there is a way to resolve it: pragmatism. Rational people do redefine terms sometimes, for the sake of getting a theory that works better. (That’s why whales no longer count as fish, planets no longer count as stars, and Pluto no longer counts as a planet.) And I put it to you that compatibilism, with its combination of features (a)-(c), is a theory that works better. Now let me show you.
I assume that we are physical creatures in a physical world. It’s also true that we have thoughts, emotions, beliefs, desires and (most importantly for today) decisions, but these aren’t something extra; they are physical states of our brains. All of them have physical causes, and most of them have physical effects.
Long chains of causes and effects, going back to my birth and earlier, have set up the current state of all of my cells, including my brain cells. Meanwhile other long chains of causes and effects are going on outside me. Sometimes the chains of causation outside strike my surface, and then they join the chains of causation inside. Follow the various chains far enough, and eventually you’ll come to an event deep inside my skull, known as “a decision to wave my right hand”. This in turn causes me (via another long chain) to wave my right hand. Hello!
Now I’d like to focus on just the tail end of the story: the part where the decision to wave my right hand causes me to wave my right hand. I say that this part, and this part alone, is what makes the waving free. In general, I define “X does Y freely” as “X’s decision to do Y causes X to do Y.” I’m stipulating that freedom is all about the chain of events downstream of the decision.
That example was drawn from real life: I really did decide to wave my right hand, and then I waved it. But we can also consider hypotheticals. For example, what if I decided to wave my left hand? I won’t decide that, and perhaps my brain chemistry ensures that I won’t. Nevertheless the question “What would happen if I did?” makes sense. It’s like looking at a perfectly deterministic clock mechanism and asking “What would happen if this cog were facing that way?” Hypothetical questions of this kind are strictly forward-looking: we don’t ask how the hypothetical state could come about; we only ask what would result from it if it did.
And here’s the answer: If I were to decide to wave my left hand, then that decision would cause me to wave my left hand. I say that this, and this alone, is what makes me free to wave my left hand. In general, I define “X is free to do Y” (and also “X could do Y”) as “If X were to decide to do Y, then that decision would cause X to do Y.” Here again, I’m stipulating that freedom is all about the chain of events downstream of the (hypothetical) decision.
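The two definitions are precise enough to mechanize. Here is a minimal toy sketch (my own illustration, not from the post; the agent model and function names are invented): an agent is just a causal pipeline running downstream from a decision, and “free to do Y” is tested by hypothetically injecting the decision and checking whether the chain carries it through to the action.

```python
# Toy model of the compatibilist definition (hypothetical sketch):
# "X is free to do Y" == "if X were to decide to do Y,
#  that decision would cause X to do Y".

def would_act(agent, action):
    """Inject a hypothetical decision to do `action` and see whether
    the downstream causal chain carries it through to the action."""
    signal = action
    for link in agent["downstream_chain"]:  # nerves, muscles, etc.
        signal = link(signal)
        if signal is None:                  # chain broken: nothing happens
            return False
    return signal == action

def is_free_to(agent, action):
    # Freedom depends only on what lies downstream of the decision.
    return would_act(agent, action)

working_nerve = lambda s: s        # passes the signal along intact
numb_foot = lambda s: None         # blocks the signal ("fallen asleep")

me = {"downstream_chain": [working_nerve, working_nerve]}
me_with_numb_foot = {"downstream_chain": [working_nerve, numb_foot]}

print(is_free_to(me, "wave left hand"))              # True
print(is_free_to(me_with_numb_foot, "wiggle toes"))  # False
```

Note that nothing in the test asks where the decision came from, or whether it was determined; only the downstream chain matters, which is exactly the stipulation in the definition.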
Now I need to make good on my first promise: to sort actions into “free”, “not free”, and “grey area”, using my compatibilist definitions, and to show that the results are the same as most people would get intuitively.
We’ve seen two positive examples: one where I freely waved my right hand, and one where I was free to wave my left hand (or equivalently: I could have waved my left hand), even though I chose not to.
Unfortunately, however, while I was typing all that, I was sitting on my left foot, and now it has “fallen asleep”. I’m trying to wiggle the toes but they’re not moving. So the statement “If I were to decide to wiggle the toes on my left foot, then that decision would cause me to wiggle the toes on my left foot” is false. So there’s a negative example: I am not free to wiggle those toes.
If my left foot were set in concrete, or permanently paralysed, or if my muscles were overpowered by some external force (the stock example is some sort of evil puppeteer), then the result would be negative for essentially the same reason.
When a deer is caught in the headlights, and is too stunned to move, is it free to move? That comes down to the question: “If it were to decide to move, would that decision cause it to move?” — and now we need to distinguish cases. If the deer is momentarily paralysed, somewhere downstream of its decision-making, then it’s like the example of my toes, so the answer is no: it is not free to move. But if it is suffering some kind of brain-freeze upstream of its decision-making, then the answer is probably yes: if the deer were to decide to move, then that decision would cause it to move. Personally I don’t know which of those cases it is, so I can’t tell you whether the deer is free to move or not.
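The deer case turns entirely on where the blockage sits relative to the decision stage. A tiny sketch (my own, with hypothetical stage names) makes the asymmetry explicit: since the definition injects a hypothetical decision, only stages strictly after “deciding” can defeat freedom.

```python
# Toy sketch of the deer distinction (hypothetical stage names):
# the definition of "free" inspects only the chain *downstream* of
# the decision, so a block upstream of deciding doesn't count.

CHAIN = ["perception", "deciding", "motor_command", "muscles"]

def is_free_to_move(blocked_stage):
    """Free iff an injected decision would still propagate to movement,
    i.e. the blockage is not strictly downstream of the deciding stage."""
    decision_index = CHAIN.index("deciding")
    return CHAIN.index(blocked_stage) <= decision_index

print(is_free_to_move("perception"))  # brain-freeze upstream: True
print(is_free_to_move("muscles"))     # paralysis downstream: False
```

The post’s agnosticism about the actual deer corresponds to not knowing which value `blocked_stage` takes, not to any unclarity in the definition itself.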
If I hold my breath for five seconds, am I free to hold it for another five? (Or equivalently: Could I hold it for another five?) Yes, certainly: that’s a positive case. But if I hold it for too long, there will come a time when I am no longer free to keep holding: a time when, even if I were to decide to hold for a further five seconds, I wouldn’t do so, because neural mechanisms downstream of my decision would override it. So that’s a negative case. But mental concepts like “decide” have fuzzy boundaries; so between the positive and negative cases, there’s this case: I hold my breath for a while and then I sorta-kinda-decide to take another breath, and I sorta-kinda-decide to keep holding but get overridden. In that grey area between a positive case and a negative case, the best I can say is that I was sorta-kinda-free to keep holding.
So far all the examples have involved one person (or deer) at a time. Now I’d like to step it up to two people: one trying to influence the other.
Umm … hmm … these two-person examples are hard work. I’ll need my very best concentration for this. So what I’d really like is for you to stay out of my study for a while. How can I make that happen?
Strategy #1: I jam a chair against the door. This is crude but effective, and notice that it doesn’t depend on your state of mind. If you’re pushing on the door because you’ve made a decision to come in, the chair stops you; if you’re falling against the door because you’ve slipped on a banana peel in the hallway, the chair stops you; and if you’re shuffling into the door because you’re sleepwalking, the chair stops you just as well.
Strategy #2: Knowing that you’re a considerate person who respects my wishes, I ask you please not to come in. The genius of this plan is its efficiency: it doesn’t require a chair. Rather than stopping you by brute force at the door-pushing stage, it aims way back up the causal chain to stop you at the deciding-to-come-in stage. It’s an early intercept. It nips the causal chain in the bud.
But note well: strategy #2 doesn’t do everything that strategy #1 does. #2 doesn’t stop you from entering if you slip on a banana peel. It only prevents you from entering my study as a result of a decision to enter my study. And here we can use our new shorthand: strategy #2 only prevents you from entering my study freely.
So I have two possible strategies to keep you out of my study, each with its own advantage. In the wider world, however, there are times when neither strategy #1 nor strategy #2 would have much chance of success. Sometimes we want to influence what people do, but we don’t have the physical means to overpower their actions, and nor do we have much prospect of changing their decisions just by saying “please”. What we often do in such cases is to bolster strategy #2 with some combination of the following reinforcements: praise for those who do as we ask, blame for those who don’t, more substantial rewards, more substantial punishments, laws, and moral principles. Let’s call that whole approach “Strategy #2+”.
Strategy #2+ can be much more effective than strategy #2, but it remains quite similar to #2. It’s still an “early intercept” strategy: it influences action by aiming back up the causal chain and targeting the decision to act. Therefore strategy #2+ also only works for actions that are free, in the compatibilist sense of “free”.
And now I can explain why we don’t attach praise or blame, or rewards or punishments, to actions that aren’t free in the compatibilist sense. It’s because there’s no point. It would be applying strategy #2+ under conditions where we know it doesn’t work.