Thursday, January 31, 2008

Is the concept a priori past its sell-by date?

The Philosophy Society managed to get Timothy Williamson to give a talk on a priori knowledge last night. The stated aim of the talk was to get people to stop using the a priori/a posteriori distinction, since it has outlived its usefulness. The strategy was to show that there are many mixed and borderline cases, and that trying to classify these cases “obscures epistemologically crucial features of the examples”. He concludes that “We should resist the temptation to assimilate new cases either to the stereotype of the a priori or to the stereotype of the a posteriori.”

Mike Gabbay made a point which struck me as right. I’m not going to get it quite right, but roughly: “a priori” knowledge is knowledge by inference. If we presuppose a body of knowledge K, then we can deduce a larger body K2 using inference. All the propositions in K2 but not in K will be a priori. There are many techniques and skills that could come under the heading “inference”, and it could be the case that we learn new skills, either collectively or individually, from experience. What can be deduced a priori will therefore be relative to experience.

For example, if I know that there are some trees planted in a square that is seven trees long and seven trees wide, I do not yet know how many trees are in the square. If I know about squares and squaring then I can deduce, without further observation, that there are 49 trees in the square. I will have made this deduction a priori. I could instead have found this out by going out and counting each tree. That would have been a posteriori. Or I may have been told by my line manager that the best way to count the trees is to multiply the length by the breadth on a calculator. Given this information and a calculator, I could discover that there were 49 trees in the square without counting them. Would this be a priori? I guess Professor Williamson’s point is that the concept of the a priori has too much philosophical baggage for this to be a useful question. What have we solved by calling this technique for counting trees planted in squares a priori? No part of the process was necessary, innate, derived purely from reason, or absolutely certain. Since these are often thought to be properties of the a priori, perhaps we should stop using the term, since it just confuses matters.
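The two routes to the answer can be put side by side in a quick Python sketch (the `orchard` list of coordinates is just my stand-in for the trees one would go out and count):

```python
# A priori route: deduce the answer from what is already known,
# namely that the square is seven trees long and seven trees wide.
length, breadth = 7, 7
deduced = length * breadth  # no further observation needed

# A posteriori route: go out and count each tree in turn.
orchard = [(row, col) for row in range(length) for col in range(breadth)]
counted = sum(1 for tree in orchard)

assert deduced == counted == 49
```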

However, I think it highly useful to make a distinction between what we can find out beforehand given a body of knowledge K, and what we just have to wait and see. Let’s play dice. I’ll throw two dice and I’ll give you £35 if it’s a double six; otherwise you give me £1. What do we know a priori? We know a priori that there is a 1/36 chance that you will win. We know a priori that the odds are fair. What we don’t know a priori is who will win. We have to actually throw the dice for that. The fact that we know these things a priori is not innate, or intuitive, or necessary, or any rubbish like that. It has been hard won by the greatest of our species, passed down through teaching and tested through experience.
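What is and isn’t known beforehand in the dice game can be sketched in Python (exact arithmetic via `fractions`, with `random` standing in for the actual throw):

```python
import random
from fractions import Fraction

# Known a priori: the exact chance of a double six with two fair dice,
# and that the bet (£35 on a double six against £1 otherwise) is fair.
p_double_six = Fraction(1, 36)
expected_winnings = p_double_six * 35 - (1 - p_double_six) * 1
assert expected_winnings == 0  # fair odds

# Not known a priori: who wins this particular throw.
# For that we have to actually throw the dice.
roll = (random.randint(1, 6), random.randint(1, 6))
you_win = roll == (6, 6)
```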


Thursday, January 24, 2008

Ramsification of Leibniz

Andrew Murray gave an interesting talk last night about Leibniz and Galileo’s paradox. Here are the main points (as far as I’m concerned).
There is a problem for Leibniz involving the distinction between necessary and contingent facts. He is in danger of all facts coming out necessary.
Concepts are constituted by parts. In my contemporary way of looking at things I take this to mean the extension. This is no doubt an offence to Leibniz scholars, but hey. The analysis of concepts terminates in primitive concepts that can’t be broken up any further.
Some concepts have infinite parts, and these are involved in contingent facts.
This is important for free will.
Galileo’s problem: let’s take a concept n whose extension is the infinite series of the integers. It seems that n^2, the squares, is only a part of the extension of n. But there is a one-to-one correspondence between the elements that form the extensions of the two concepts.

Why does this interest me? Well I’ve been thinking about the difference between two types of probability and two types of generalisation. The two types have been discussed by Popper, Ramsey, Strawson and countless others no doubt. I’ll call them deductive and inductive:
Deductive probability: the domain of applicability is defined and finite.
Inductive probability: the domain of applicability is infinite.
Numerical example:
Deductive: the probability that n is even given n is an integer between 0 and 11 is ½.
Inductive: the probability that n is even is ½.
Empirical Science example:
Deductive: There are 118 elements in the periodic table.
Inductive: Water is H2O.
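The difference in the domain of applicability shows up if we try to check the numerical examples in Python (a sketch of my own, reading “between 0 and 11” as the integers 1 to 10):

```python
from fractions import Fraction

# Deductive: the domain is defined and finite, so the probability
# can be verified by exhausting it.
finite_domain = range(1, 11)
p_even = Fraction(sum(1 for n in finite_domain if n % 2 == 0),
                  len(finite_domain))
assert p_even == Fraction(1, 2)

# Inductive: the domain is infinite, so we can only sample ever
# larger initial segments; no enumeration verifies the claim.
for size in (100, 10_000, 1_000_000):
    freq = sum(1 for n in range(1, size + 1) if n % 2 == 0) / size
    print(size, freq)  # relative frequency of evens stays at 0.5
```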

Inductive generalisations are counterfactual-supporting whereas deductive generalisations are not. Inductive generalisations can therefore never be verified but only falsified, whereas deductive ones can (in principle) be verified.
Ramsey had the view (which I think was plagiarised by Wittgenstein) that the meaning of inductive generalisations can’t be identical to their extension, because that is psychologically impossible. When I entertain the belief that all men are mortal, I cannot be applying the property of mortality to each and every man because, well, I just can’t. I don’t even know how many men there are. So an inductive generalisation is more like a rule for belief formation. Since beliefs come in degrees, these rules can be probabilistic. So “All men are mortal” means P(x is mortal given x is a man) = 1. Because the meaning is not constituted by the extension, the generalisation is not true or false, but good or bad. Since this is inductive, it is perfectly possible that this generalisation is a good one, yet some man lives forever. A rule is a good one if it generates true beliefs. Since these rules are neither true nor false, they would not be included in a complete inventory of the facts. An omniscient God wouldn’t know them; with his infinite mind he wouldn’t need to. With our finite minds, however, we certainly do need them.
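The idea of a generalisation as a rule for belief formation can be caricatured in a few lines of Python (a toy of my own, not Ramsey’s formalism; the 0.5 fallback is an assumed noncommittal prior):

```python
# "All men are mortal" as a rule: given the evidence that x is a man,
# believe "x is mortal" to degree 1; otherwise stay noncommittal.
def all_men_mortal(is_man):
    return 1.0 if is_man else 0.5

# The rule is good or bad rather than true or false: good insofar as
# the full-strength beliefs it generates come out true.
cases = [("Socrates", True, True), ("Plato", True, True)]
good = all(mortal for name, is_man, mortal in cases
           if all_men_mortal(is_man) == 1.0)
```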

So back to Leibniz. Judas was the betrayer of Christ and this is something that he is responsible for. It is therefore a contingent fact (if it were necessary, then it wouldn’t have been his fault). The problem is that it flowed from Judas’s nature that he betrayed Christ. In Andrew’s terms, it is part of the concept of Judas that he was the betrayer of Christ (BOC). Leibniz’s solution is that the concept of Judas has infinite parts. Looked at in Ramsey’s way, this means that P(JBOC given Judas, W) = 1. In words: given Judas’s character and the situation he was in, a wise man should believe to degree 1 that he would betray Christ. But because this is an inductive generalisation rather than a deductive one, Judas is still free not to. The problem of free will is solved, and Leibniz has a distinction between necessary and contingent.
So what about Galileo’s paradox?

Well, we can think of infinite proportions of infinite sets quite easily if we think in terms of rules of belief formation rather than one-to-one mappings. P(x is even given x is a number) = ½. With n and n^2 it is a little more difficult, since the relative frequency is itself a variable. P(x is a square given x is a number from 0 to n^2) = n/n^2. As n tends towards infinity this value tends towards 0, but this is no concern of ours and is not in the mind of God. What we are interested in is the shape of the curve and the area under any interval. Essentially, n^2 is a part of the concept of n because P(n given n^2) = 1 but P(n^2 given n) < 1.
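The two relative frequencies can be watched directly in Python (a sketch; `density` is my own helper for the proportion of a property within an initial segment of the numbers):

```python
import math

def density(has_property, limit):
    """Relative frequency of a property among the numbers 1..limit."""
    return sum(1 for x in range(1, limit + 1) if has_property(x)) / limit

for n in (10, 100, 1000):
    limit = n * n
    squares = density(lambda x: math.isqrt(x) ** 2 == x, limit)
    evens = density(lambda x: x % 2 == 0, limit)
    print(n, squares, evens)  # squares = n/n^2 = 1/n, tending to 0;
                              # evens stay at 1/2
```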