Sunday, March 11, 2007

An argument from freedom against causal closure

I want to approach the mind-body problem from a fresh angle, one which pretty well encapsulates everything I think about the nature of things.
Decision theory tells us what action to take given the reasons we have. The metaphysics involved is that we can (it is metaphysically possible to) decide to do things on the basis of our reasons. Reasons are made up of evidence for propositions, or beliefs, and subjective utilities, which are ordered preferences over outcomes. Ultimately we have the choice to act not in accordance with our reasons. Decision theory tells us what we should do given our evidence and our goals.

To make this simple, let us say that the best decision theory will be one such that you input beliefs and desires and you get an output in terms of action. The output will result in a physical event that is counterfactually dependent on the beliefs, the desires and the operation of the decision theory. Given an internalist theory of mind, the beliefs, desires and decision process used by a subject in reaching a decision will all be brain states, or at least be realised by, or supervene on, brain states. So a picture emerges of what we typically take to be mental causation: the agent has belief states B1, B2, B3 and desire states D1, D2, D3; churn these through some decision function DT and out pops action A1, which causes physical effect E1.
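As a minimal sketch of the decision function DT described above, assuming (as standard decision theory does, though the post leaves this open) that beliefs are probabilities of outcomes conditional on each action and desires are utilities over outcomes, the churn from B1…Bn and D1…Dn to A1 might look like this. All the names and numbers here are illustrative, not the post's own:

```python
def decide(beliefs, desires, actions):
    """Return the action with the highest expected utility.

    beliefs: {action: {outcome: probability}} -- the B states
    desires: {outcome: utility}               -- the D states
    This function plays the role of DT in the post.
    """
    def expected_utility(action):
        return sum(prob * desires[outcome]
                   for outcome, prob in beliefs[action].items())
    return max(actions, key=expected_utility)

# Illustrative example: deciding whether to carry an umbrella.
beliefs = {
    "umbrella":    {"stay_dry": 1.0, "get_wet": 0.0},
    "no_umbrella": {"stay_dry": 0.4, "get_wet": 0.6},
}
desires = {"stay_dry": 10, "get_wet": 0}

print(decide(beliefs, desires, ["umbrella", "no_umbrella"]))  # umbrella
```

The point of the post is that the physical event this output causes (E1) is counterfactually dependent on the inputs and on the mechanism realising the function itself.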

Physical theory tells us what will happen given certain preconditions. The preconditions are churned through a physical theory PT and out pops the consequence, the physical effect E1. So B1, B2, B3, D1, D2, D3 and the mechanism that realises DT are identical with some set of physical preconditions PP. PP and PT will jointly entail E1. (For those who believe in quantum indeterminism, or even macro indeterminism, E1 will presumably be a partition of probable effects.) Causal closure of physics means, in this light, that for every actual effect E there will be a causal explanation such that E can be entailed by the conjunction of the completed physical theory and a set of physically described preconditions.

So the hope of token identity theory is that this is fine. We can go to work on PT and work out the underlying causes of our free decisions, which will be to do with neurons and what have you.

The problem now is that our free choices are vulnerable to the endemic problem of self-reference. The problem of self-reference can also be thought of as the problem of incompleteness. It underlies the liar paradox and the sorites. It destroyed Russell's Principia Mathematica and lies at the heart of the ineffability of consciousness, which is irreducibly self-referential. The problem is related, if not identical, to Newcomb's problem, and since human beings are essentially competitive, it is a problem that our brains have been under great pressure from natural selection to overcome.

Let us suppose that an agent has the complete physical theory and also has the wherewithal to measure his own physical brain states. He is in a competitive situation, and he knows that his enemy has used PT and his brain states to predict his decision. In this case the agent may reason that his best decision is not to act in accordance with his own decision theory, but to choose an action that his own decision theory would not recommend.

Let’s call the agent “Ace”, and his enemy “Bad”. To simplify as much as possible, there are two choices before Ace: he can choose either Box 1 or Box 2. Bad has loaded the boxes. Bad has used the complete physics and a brain scan of Ace’s mental states to calculate what Ace will do. Ace knows this. Ace also has access to the complete physics and a brain scan of his own mental states. Bad is obliged by the rules of the game to put a million pounds in one box and nothing in the other. Ace will get the contents of the box he chooses and Bad will get the contents of the other box. The complete physics and Ace’s brain scan predict that Ace will choose Box 1. Question 1: which box should Bad put the million in? Question 2: which box should Ace choose?
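The instability in the game can be made vivid with a toy simulation. Assume, as a simplifying reading of the setup, that Bad always loads the million into the box the prediction says Ace will not choose (since Bad keeps the unchosen box), and that Ace, knowing both the prediction and Bad's strategy, takes whichever box Bad loaded. The strategies here are one illustrative reading, not the only possible one:

```python
def bad_loads(prediction):
    """Bad keeps the unchosen box, so he puts the million in the box
    the physics predicts Ace will NOT choose."""
    return 2 if prediction == 1 else 1

def ace_chooses(prediction):
    """Ace knows the prediction and Bad's strategy, so he takes
    whichever box Bad loaded -- i.e. the box he was predicted not to take."""
    return bad_loads(prediction)

# Whatever the physics predicts, Ace's actual choice contradicts it.
for prediction in (1, 2):
    choice = ace_chooses(prediction)
    print(f"Physics predicts Box {prediction}; "
          f"Ace takes Box {choice}; prediction correct: {prediction == choice}")
```

On these assumptions no prediction is stable: each announced prediction, once it becomes an input to Ace's reasoning, falsifies itself, which is the diagonalization the post is driving at.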

Conclusion: The causal closure of physics is incompatible with our ever being able to use it for practical decision making.

2 Comments:

Blogger LauLuna said...

Let me draw a parallel with Turing's theorem on the incomputability of the halting problem.

Take Ace's and Bad's brains to be Universal Brains, in the sense that they can compute whatever any brain would output on a given input.

Ace and Bad have a complete description of Ace's brain and the instructions the brain follows (i.e. the relevant physical laws). They must compute what Ace's brain will output when fed the following input:

INPUT: a complete description of Ace's brain together with the relevant physical laws plus a complete description of the game.

The computation is impossible for Ace (for he will change his output whenever he thinks he has reached one; the output diagonalizes out of itself, so to say), and so it is for Bad.

In the case of Turing machines the conclusion is that the Universal Turing Machine cannot compute the halting problem. Since the machine would be able to compute it if the problem were computable, the problem is not computable.
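The diagonal construction in Turing's argument that the comment alludes to can be sketched as follows, assuming (for the sake of the reductio) that a total halting oracle `halts` existed; the names are illustrative:

```python
def halts(program, data):
    """Hypothetical oracle: would return True iff program(data) halts.
    Turing's theorem says no such total computable function can exist,
    so this sketch deliberately leaves it unimplemented."""
    raise NotImplementedError("no halting oracle can exist")

def diagonal(program):
    """Do the opposite of whatever the oracle says about self-application."""
    if halts(program, program):
        while True:      # loop forever if predicted to halt
            pass
    else:
        return           # halt if predicted to loop

# diagonal(diagonal) halts if and only if halts(diagonal, diagonal) says
# it loops -- a contradiction, so `halts` cannot exist. Ace's changing his
# output whenever he thinks he has reached one is the same move: his
# decision diagonalizes out of any prediction of it.
```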

Here the conclusion to be drawn is that human decision-making cannot be examined as an objective process. But it should be examinable that way (or so it seems) if it were a purely physical event. So, it seems, it is not.

Only one last concern: are we sure that the whole situation is well-defined? The description of the game seems self-referential: it includes a mention of 'a complete description of the game'. Can we show this is harmless?

Regards

3:28 PM  
Blogger bloggin the Question said...

Thanks for the comment. I'm especially interested in the last section. Why should it be harmful if the description of the game is self-referential? Surely that is the point? Also, isn't it a feature of any game that the players understand the rules of the game?
I don't actually mention a complete description of the game, only a complete physical description of the brain states of one of the players at a time, plus the relevant laws of physics.
The self-reference comes in because Ace has to make a decision on the basis that he and Bad already know what he is going to decide. I can see this might not be harmless, but I don't understand what that means exactly. Suppose it is harmful: what is it harmful for? Physical closure? Then it is supposed to be harmful! My argument? In which case I must be assuming some contradiction. The contradiction would be that Ace can make a free choice based on a prediction of that free choice. But we can do this, and we do.

6:25 PM  
