What justifies believing? Jack Darach.
What I'm trying to understand is what kind of norms, if any, govern the rational acquisition of belief, and what kind of responsibility we have to such norms. A basic division to start with might be between intrinsic and extrinsic norms. Intrinsic norms are those based in the concept of belief (whatever that is, I'm still not sure), while extrinsic norms for belief would be based outside of epistemology; for example, they could be practical or moral norms.

If we accept that the function of belief is to represent accurately how the world is, then we move towards an intrinsic evidentialist position. This position suggests: one would be justified in believing that p only when one has sufficient evidence for the truth of p. Obviously this can't be enough as it stands. Evidential considerations alone cannot determine what counts as 'sufficient' evidence. The evidentialist proposal needs to include non-evidential considerations (such as how much time you have to inquire as to whether p, how much of your cognitive resources you can devote to the issue, and so on) that help to determine when someone is justified in believing that p. These other considerations do not do any justifying. It is not that, by accepting the need for non-evidential considerations, one is giving space over to practical norms in the rational acquisition of belief that tell you when you can believe that p. And certainly knowing that you haven't got much time left to inquire as to whether p can't motivate you to believe that p.

But this position arises most naturally when we start with the assumption that the function of belief is to represent the world accurately. (Isn't this a way of stating the oft-used, difficult-to-explain phrase: belief aims at the truth?) And it is this I'm not sure about and what I need help on. Why is the only function of belief to represent the world accurately? Belief plays a role in our actions; mightn't it have another function connected to this, to facilitate action (or facilitate successful action)? In which case, wouldn't it be better if our beliefs were subject to practical considerations? Specifically, about what it would be desirable to believe in order to generate acts that are more likely to satisfy our intentions and desires?
26 Comments:
This is a kind of emergency post. Jack wrote this post because he has some philosophical questions about belief and justification. It struck me as exactly fitting the function of this blog. Genuine enquiry. So I've posted it ahead of the prize-winning post by Jessica Leech. Please help Jack out in his quest for understanding.
By the way, a big thanks to Gabriel, Lisa, Bart, Jessica and Rob-S for getting the thing rolling. And shame on Maggy for saying nothing.
Jack, thanks for this post, very interesting.
One point you raise is that there are evidential considerations in belief acquisition and non-evidential considerations. You point out that "Evidential considerations alone cannot determine what counts as 'sufficient' evidence". I take this to be true. The surprising implication is that there are non-evidential considerations inside epistemology. Let me explain with an example from Millikan:
Evidential considerations: An electrical plug socket has not been working for some time. A lamp and a TV that work in other plug sockets don't work in this one.
Non-evidential considerations, case 1: You want to plug in the hoover; there is another, slightly more inconvenient, plug socket that works.
Rational action: Plug the hoover into the slightly more inconvenient socket.
In case 1, you believe that the socket does not work and this is rational.
Case 2. Same evidential considerations.
Non-evidential considerations: A baby is crawling towards the plug socket and is just about to put its fingers in.
Rational action: Run over and pick up the baby to prevent it putting its fingers in.
In case 2 you believe that the socket does work, or at least that there is a strong enough probability that it works that you won't risk the baby's life.
There are three important values: A, the value of the consequences of not relying on the belief; B, the value of the consequences of relying on the belief if the belief is true; and C, the value of the consequences of relying on the belief if the belief is false.
If you are practically indifferent between relying on the belief or not, then your degree of belief, according to Ramsey, is measured as (A - C)/(B - C). In cases 1 and 2 above, the rational degree of belief in the proposition p (that the plug socket does not work) will vary due to non-evidential considerations.
In case 1 let us suppose there is no utility difference between trying out the faulty plug and finding it doesn't work and going straight to the inconvenient plug. (It is a bad example, but it is easy to construct a better one.) In this case A = B, and C is the slight inconvenience of using the working plug. This makes your degree of belief (A - C)/(B - C), which, since A and B are equal, is 1 whatever value you give to the inconvenience of hoovering from the working plug. Therefore in the first case you assign a subjective probability of 1 to the proposition that the plug doesn't work. This is the highest subjective probability that is rationally possible. Since the plug doesn't work, you know it doesn't work.
In case 2, on the other hand, by rushing over and saving the baby, you have shown that you do not rely on the belief in this situation. You are not indifferent between relying on the belief and not relying on it, even though A = B: here A is the value of the baby not dying of an electric shock, and B is also the value of the baby not dying of an electric shock, so indifference would require a degree of belief of 1. Since you are not indifferent, your degree of belief that the plug does not work is less than 1. This means you are rationally not certain, and therefore you do not know that the plug socket isn't working.
The conclusion is that the non-evidential considerations that are important epistemologically come down to the difference between values B and C. If the difference is large, if, to rephrase it, the stakes are high, then more evidence will be required for certainty. If B - C is small, then less evidence will be required for certainty. B - C can be used as a metric of certainty. Note that this is different from degree of belief, or subjective probability: degree of belief is measured by the ratio of gain to loss, degree of certainty by the magnitude of the gain and the loss.
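For what it's worth, here is a minimal sketch of that calculation (Python, with utilities I have simply invented for illustration; the low-stakes numbers construct the "better example" promised above rather than reusing the hoover case exactly):

```python
# Ramsey's indifference point: the degree of belief in p at which you would be
# indifferent between relying on the belief and not relying on it.
# A = value of not relying on the belief that the socket is dead
# B = value of relying on the belief, if the socket really is dead
# C = value of relying on the belief, if the socket is in fact live
def indifference_threshold(A, B, C):
    # Solve p*B + (1 - p)*C = A for p.
    return (A - C) / (B - C)

# Low-stakes case (hoovering): the outcomes differ only by a minor inconvenience.
print(indifference_threshold(A=-1, B=0, C=-2))    # 0.5: modest evidence suffices

# High-stakes case (the baby): not relying keeps the baby safe (A = 0), relying is
# fine if the socket is dead (B = 0) and catastrophic if it is live (C = -1000).
print(indifference_threshold(A=0, B=0, C=-1000))  # 1.0: only certainty would do
```

The larger the gap between B and C, the closer the threshold creeps towards 1, which is just the "more evidence when the stakes are high" point in numbers.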
I tend to think practical considerations ought to justify beliefs on occasion. An example comes from the (debated) theory of depressive realism, which suggests that depressed people have a more accurate understanding of the extent to which they have control over their environment (Alloy and Abramson, 1979). To put it bluntly, a depressed person knows how insignificant she is. A related idea is that of self-fulfilling prophecies -- if I believe I won't achieve anything, I probably won't (I'll stay in bed), whereas if I believe I will be prime minister, maybe I'll make it in politics even if not to the highest office. The upshot is that having all true beliefs about oneself may not be as good as having a few useful delusions. At least, this is what a psychologist who endorses depressive realism and also wants to cure their patients' depression must think.
It does seem morally dangerous, though, to break the principle that we ought to be aiming at representative beliefs all the time. Once you have gone as far as the above paragraph, what is to stop belief justification of that kind spreading? For instance, there is the threat of it extending beyond beliefs that are directly about yourself to beliefs about social/ethnic/religious groups under whose banner you'd march.
Hi, thanks for the replies. I'll try to respond to jonny's post later but for now I just want to put something to rob_s.
1) You said that practical considerations sometimes justify belief and pointed to self-fulfilling beliefs as an example. I think we need to flesh out both what I mean by practical considerations and what a self-fulfilling belief would be.
First, the practical considerations are those norms that cover action: S should do p if doing p brings about Y, or makes Y more likely to occur, where Y is something S wants. S is rational when, knowing that doing p will get him what he wants, he does p. Apply this to belief and we get: S should believe that p if believing p brings about Y, where Y is desirable for S.
Cases of self-fulfilling belief will be those in which believing that p makes it more likely that Y will occur. We would be rationally justified in believing that p, because, following the norm governing action which is meant to apply in these cases, rational action makes it more likely that one gets what one wants.
Crucially, for practical considerations to justify the belief in question, S should be able to get himself to believe that p just by thinking about the desirable consequences of having that belief, in the way that S can do some act just by reflecting on the desirable consequences of doing it.
The problem is we can't just make ourselves believe that p because of the expected benefits; and not being able to do so certainly isn't a failure of our rational capabilities. So self-fulfilling beliefs aren't subject to practical norms.
This depressive realism and these self-fulfilling beliefs are interesting. I tend to think it is a completely different use of "belief", which means something like endorse, have faith in, or intend. You can have faith in an idea without any evidence at all. A salesman, when he picks up the phone, "believes" that he will make a sale, even though he may be well aware that he only makes one sale per one hundred phone calls. I think this just means that he is going to try to make a sale in a confident manner. In this meaning of "belief" we can make ourselves believe something for expected benefits. Liars, spies, actors and faith healers will all believe in things that they know aren't true.
Hi Jack, how's it going?
Clearly, desiring outcome Y isn’t enough to bring about action p. p might be very difficult, or we might be paralysed. What it is sufficient for is the will to do p; it would be irrational to desire Y but not to have the will to do p. (Insert a bunch of caveats here.)
If this is right, we need to change:
“S is rational when, knowing that doing p will get him what he wants, he does p.”
to
“S is rational when, knowing that doing p will get him what he wants, he has the will to do p.”
Applying the same modification to the version for belief,
“S should believe that p if believing p brings about Y.”
becomes:
“S should have the will to believe that p if believing p brings about Y.”
So how does this change the example of a self-fulfilling belief? Well, a person who has the self-fulfilling belief that they will never achieve anything may (understanding the damaging nature of that belief) have the will to believe instead that they are going to be a big success. I think this is probably a pretty common predicament. The difficulty is finding a way to exert that will, but it may be possible, for instance by visiting a therapist. The upshot is that although you are right to say that the inability to believe p because of the expected benefits isn't a failure of rational capabilities, perhaps there are cases where the failure to desire to believe that p because of the expected benefits is indeed a failure of rationality.
So the depressed person wants to believe he is a good person, but is incapable of believing this since it is so blindingly obvious that he is a schmuck.
That's about it. He has seen the lucre and beautiful women that accrue to people with a high opinion of themselves and he wants a piece.
Blogginthequestion -- I see what you mean, though I think your comparisons underestimate how entrenched people's beliefs about themselves are -- it's not like an actor or spy who can snap out of it at will. In any case, while beliefs of this sort may be in a special category, the Alloy and Abramson experiments did concern authentic judgements. Subjects had to figure out the extent to which a green light was under their control -- sometimes when they pressed a switch it went on/off, other times it went on/off at random. The depressed subjects got it right -- they saw the world more clearly.
Hi Rob,
We need to make a distinction between the rationality of the actions undertaken to bring about a belief and the rationality of the belief thereby produced.
A rational course of action directed at acquiring a belief does not entail that the belief acquired is rational. So, yes, it may be perfectly rational for people to visit the hypnotist so that they can have more positive beliefs about themselves, because of the benefits of having those beliefs. But the beliefs induced won't derive their rational warrant (or lack thereof) from the person being rationally justified in visiting the hypnotist.
I think to say that the belief acquired in the above way is rational is to concede too much to the practical side of belief's function. I'm really looking for a single principle that can combine both functions I take belief to have, in a way that doesn't veer off implausibly; for myself, it is easy to be pulled towards the watered-down evidentialist position I mentioned in the post. Hmm.
Rob, thanks for being more specific about the depressive realist experiment. Seems like depressed people have a higher threshold of evidence for pattern recognition. This means in general they would miss patterns that happier people would see. Whether it is a "clearer" view of the world to have fewer false beliefs, or to have more true beliefs with a higher proportion of false beliefs is at issue. The fact we have evolved to see patterns where there are none suggests that it is better to see more patterns at the expense of a greater proportion of error.
Incidentally, this kind of false belief about control was called "superstition" by Skinner and can be induced in pigeons on a fixed-interval reward schedule.
Suppose that in the switch-and-light scenario the subject were rewarded every time she correctly judged she was in control, with no penalty for incorrect guesses; then the depressive would lose out. Depending on the ratio of reward to penalty, the depressive strategy may or may not win out over the happier one. The clearer picture of the world is the one that gets this ratio right. I think, as a rule, false beliefs are rarely punished that severely. On the other hand, it is clear that doubting everything will not get you anywhere.
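To make the ratio point concrete, here is a rough sketch with made-up hit and false-alarm rates for the two styles of pattern-spotter; only the general shape of the result matters, not the particular numbers:

```python
# Two styles of pattern-spotter: the optimist claims control readily, the depressive
# demands more evidence. Hit rate = claims control when she really has it;
# false-alarm rate = claims control when she doesn't. All numbers are invented.

def expected_payoff(hit_rate, false_alarm_rate, p_control, reward, penalty):
    hits = p_control * hit_rate
    false_alarms = (1 - p_control) * false_alarm_rate
    return hits * reward - false_alarms * penalty

optimist   = dict(hit_rate=0.95, false_alarm_rate=0.40)
depressive = dict(hit_rate=0.80, false_alarm_rate=0.05)

for penalty in (0, 1, 10):  # reward fixed at 1, so this is the penalty:reward ratio
    o = expected_payoff(**optimist,   p_control=0.5, reward=1, penalty=penalty)
    d = expected_payoff(**depressive, p_control=0.5, reward=1, penalty=penalty)
    print(f"penalty={penalty}: optimist {o:+.2f}, depressive {d:+.2f}")

# With no penalty the optimist comes out ahead; once false alarms start to cost
# something, the depressive strategy overtakes it.
```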
Who has the clearer picture of the world? Not the depressive.
I don't know if that is a fair representation Jonny. I think the experiment showed that depressed people simply were more accurate about how much control they had over the light. That does not indicate they would miss patterns that non-depressed people see. For example, every one of the patterns the non-depressed person sees but the depressed one doesn't may be non-existent. But I'd need to check this out -- will do a little squirrelling on the web later today.
Moreover, the depressive has plenty of true beliefs -- he actively believes that he does not have control over the light. So this:
"Whether it is a "clearer" view of the world to have fewer false beliefs, or to have more true beliefs with a higher proportion of false beliefs is at issue."
does not seem to get the question aright. I agree that there is no simple way to compare how clearly two people see the world, though -- what weighting do we give to their different beliefs? Their clarity is relative to our interests.
Rob, the Skinner paper makes entertaining reading: Skinner, B. F. (1948). "'Superstition' in the pigeon", Journal of Experimental Psychology 38: 168-172.
My interpretation of the light experiment you mention is mostly influenced by this marvellous book.
Panksepp, Jaak (1998). Affective Neuroscience, OUP. Chapter 8, "Seeking Systems and Anticipatory States of the Nervous System", pp. 144-163.
I am not doubting that the depressive was overall more accurate in the experiment. I'm presuming that both groups got it right when they were in control, but the non-depressives thought they were in control more often when they weren't. In this case, I agree, however you look at it the depressive has a better result: the same number of true beliefs and fewer false beliefs. (I don't count saying there is no connection between two events as a belief, although I admit this requires argument.)
My point is that this might be a specific feature of the set-up. There is usually a strong regularity between switches and lights, which doesn't apply in the experiment. If you carry the reasoning practices of both groups (D and ~D) over into real-world situations, the depressive may not do so well. For example, strip lights sometimes take a little while to come on. D and ~D go into a dark warehouse to fetch a box, flip the switch and nothing happens. D will give up sooner, presume the light switch doesn't work and fail to get the box. ~D will wait a little longer and get the box. In this case D would perform worse. If there was in fact no relation between the switch and the light coming on, ~D will have lost nothing, whereas D has failed the task.
As for who ends up with a clearer picture of the world, it is largely down to luck, whether your environment is hostile and unpredictable, or regular and rewarding.
I agree their clarity is relative to our interests, but, perhaps more fundamentally, to their own. I do not believe there is a unique clearest picture of the world simply because beliefs are ordered in a hierarchy of relevance, and relevance is intrinsically relative.
Jack, I wonder if this is any help. The kind of beliefs that it would be useful to have even though they are unevidenced are likely to be value judgements, not facts. It is hard to say what kind of evidence is required for value judgements, but a second order value judgement might be good enough. Example: I smoke, I believe that cigarettes feel pleasant to me. I want to believe that cigarettes feel unpleasant to me, in order to gain health benefits, but am unable to believe this since it is false. I go to a hypnotist. As a result I believe that cigarettes feel unpleasant to me and stop smoking. It is hard to say whether the new belief is true or false since I never smoke again. And "pleasant" is evaluative enough for it not to matter.
Compare the much more factual: I believe smoking gives the smoker genital warts. I would be irrational to believe this on the strength that I did not want to smoke and this belief would prevent me from smoking.
Perhaps the kind of beliefs that one rationally believes on purely non-evidential grounds can all be shown to be evaluative.
Jack -- right. Maybe we could say that *having* this kind of belief is rational, but that the belief itself may not be. I was really responding to your question, “Why is the only function of belief to represent the world accurately?” The suggestion was that beliefs, even our own, in fact can have other functions, for example when viewed from afar in a therapeutic context.
But what you seem to be asking is why beliefs are not subject to practical considerations, in the sense of being justified simply by virtue of some desire being present, given that they do as a matter of fact guide actions and we might get along better with different beliefs. Is that right? It is a hard question. One answer could be that we have various propositional states, and that the ones formed out of practical considerations are defined as desires rather than beliefs. It’s not that anything stops propositional states being subject to practical considerations, it’s just that when they are we call them desires – that’s what a desire is.
But I doubt this is very helpful. To backtrack a bit, could you say a bit more about the watered-down evidentialist position? I don't think I quite get what you wrote in the original post about evidential considerations v non-evidential considerations.
"This position suggests: one would be justified in believing that p only when one has sufficient evidence for the truth of p. Obviously this can't be enough as it stands."
Why is this obvious? What is it to have evidence? (Just to have access to it, or to have read/viewed it, taken it on board?)
Thanks for the interesting posts.
Jonny: I like your 'value judgement' suggestion but, it seems to me, not all beliefs that lack evidence will be value judgements; some will just be held irrationally. And not all value judgements will be held in the way of irrational beliefs.
Rob_s: I see what you're arguing now, about the possible therapeutic function of belief. I wonder, though, how belief can have this therapeutic function. It seems to me that a belief, engendered by hypnosis or whatever, can be therapeutic because the person believes that the belief is true. So the belief is therapeutic in virtue of 'playing the role of a true belief' as far as the person is concerned. So, if the person knew that not-p, then p could not function as a therapeutic belief (without self-deception...). We're getting close to Moore's paradox territory.
On what I called the 'watered-down evidentialist position': When do you know you have enough evidence to justify believing that p? How could a set of facts determine their own sufficiency? (Maybe there is a sorites for justification here...) No piece of evidence comes along saying, "I'm it guys, I'm THE sufficient piece; you can quit inquiring." So what stops an inquiry, and need the person be aware of the conditions that make the evidence sufficient? The latter depends on whether you're an internalist or externalist about justification. I'm not savvy on that debate so I won't say anything about it.
As for what indicates evidential sufficiency, won't it depend on the person conducting the inquiry? For a start, I might care/not care that much about whether p; I might have/not have much time to investigate whether p; and there are considerations about the consequences of having that belief I should take into account. This is all in an internalist vein I think - the externalist will simply say that factors in the context will determine the point of evidential sufficiency.
Hi Jack
Say the subject loses the knowledge that not-p in the course of their therapy. I am not sure that the belief’s therapeutic function would be lost just because the subject has been transformed in the course of acquiring it. Function is something that can be judged from outside – the belief can still be seen as part of a therapeutic project, even if the subject no longer sees it that way.
But maybe the knowledge need not be lost. Self-deception is very common and, under the thesis of depressive realism, desirable. I think it would be possible to view your therapeutically induced belief as justified because it is useful, at the same time as knowing that it is false. It would not be that you thought it rationally justified by warrant of its utility, but that you felt justified in having acquired it.
In behaving as if something you knew in one sense to be false was true you may be acting irrationally, but irrational is not the opposite of smart. F Scott Fitzgerald said that the test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function, and I think this is something very close to what he was talking about. Seeing that you are insignificant but acting with hopeful confidence nonetheless, for example.
Re: what you say about evidentialism. If you can be justified in a belief that p more easily if you don’t care whether p or don’t have much time to find out, then it may be thought that you are not really justified in thinking p, merely in acting as if p were true for present purposes. This looks interestingly similar to acting as if a therapeutically induced belief was true.
Hi Jack, and Hi Rob, I'm interested that you say "If you can be justified in a belief that p more easily if you don’t care whether p or don’t have much time to find out, then it may be thought that you are not really justified in thinking p, merely in acting as if p were true for present purposes."
I say there is no absolute standard of justification. Evidential sufficiency is always relative to an action. There is no more to believing p than acting as if p for present purposes. If you act as if p over a wide range of cases, then you are disposed to believe p over a wide range of cases, nothing more. If an action depends on a belief p, then the consequences if p is true will be more desirable than the consequences if p is false. If it makes no difference whether p is true or false, i.e. if the result will be as desirable either way, then you need no evidence to rationally accept p. Take the null case, where the truth makes no difference. Let's take the belief "S is loved"; call this L. To simplify things, let there be two consequences for S: happy and sad.
There are four possibilities. The right-hand side of each line gives the contingent consequences of some action based on L.
S believes L, L is T. S is happy.
S believes L, L is F. S is happy.
S believes ~L, L is T. S is sad.
S believes ~L, L is F. S is sad.
In this case, given that S wants to be happy, ideally rational S should believe L without evidence. Of course life is rarely this simple, and there are many situations where it is better to believe truly that you are not loved than to believe falsely that you are. As Fitzgerald says, a multitasking intelligent human may occasionally profit from holding two opposing beliefs concurrently, if she is engaged in two tasks which depend on opposing beliefs.
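To spell the null case out, here is a tiny sketch (with the stipulation happy = 1, sad = 0) showing that the expected utility of believing L doesn't depend on how probable L is:

```python
# The null case: the payoff depends only on what S believes, not on whether
# L ("S is loved") is actually true. Utilities stipulated: happy = 1, sad = 0.

PAYOFF = {
    (True,  True):  1,  # S believes L, L is true:   happy
    (True,  False): 1,  # S believes L, L is false:  happy
    (False, True):  0,  # S believes ~L, L is true:  sad
    (False, False): 0,  # S believes ~L, L is false: sad
}

def expected_utility(believes_L, prob_L):
    return prob_L * PAYOFF[(believes_L, True)] + (1 - prob_L) * PAYOFF[(believes_L, False)]

for p in (0.0, 0.3, 0.9):
    print(p, expected_utility(True, p), expected_utility(False, p))

# Believing L is worth 1 and disbelieving is worth 0 at every probability, so on
# purely practical grounds S "should" believe L however weak the evidence.
```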
Hi Jonny, actually I mostly concur with this.
In your comment above (posting as bloggin the question – that’s you, right?), you suggested that when we talk about certain self-fulfilling beliefs, we employ “a completely different use of ‘belief’ which means something like endorse, have faith in, or intend”.
Of a salesman who "believes" he is going to make a sale (and thereby might), you wrote: "I think this just means that he is going to try to make a sale in a confident manner. In this meaning of 'belief' we can make ourselves believe something for expected benefits."
I wasn't intuitively keen on this. What worried me was the distinction between doing something as if you believed p, and believing p. That is a legitimate distinction in a range of cases – like the case of an actor or a spy. But what about in cases where the agent is not prepared to explicitly affirm not-p? Well-balanced actors are ready to behave as if not-p when the director says "Cut". But the salesman may not be ready to say he may not get the sale. Or there may be a range of in-between cases. Someone might have to be in a self-analytical mood on the therapist's couch to admit not-p, if it is acting as if p that guides their friendships and love affairs for better or worse. Who is to say in that case whether they believe p or not-p, or neither, or both? Thoughts on these lines suggest to me that endorsing p is on a continuum with believing p – not a completely different kind of thing, as per your suggestion.
So when I wrote…
“If you can be justified in a belief that p more easily if you don’t care whether p or don’t have much time to find out, then it may be thought that you are not really justified in thinking p, merely in acting as if p were true for present purposes.”
…I was not expressing my own view – I agree that justification is context and purpose relative. Instead, I was (highly obliquely) raising a hypothetical criticism, analogous to the one in your spies/actors post. The object was to show that both criticisms are of a piece. In my view neither is convincing.
Just to pick on the same quote,
"If you can be justified in a belief that p more easily if you don’t care whether p or don’t have much time to find out, then it may be thought that you are not really justified in thinking p, merely in acting as if p were true for present purposes."
Are you only justified in believing that p if you care about p? No, you needn't care at all to establish your belief. What counts as sufficient evidence for p? Well, you have to take yourself to have sufficient reasons for belief that p. What closes off an inquiry into whether p? Generally non-epistemic factors like: one hour left at work, and I can only spend half of that time on Wikipedia researching housing in Japan.
So it's not that the cunning person can purposefully limit their inquiry so that they acquire the belief quicker; those non-evidential factors don't figure explicitly in deliberations about whether p is true or not. No one says, 'only 5 minutes left on Wikipedia, I'd better believe that tatami mats are smelly because of the time...'; rather, it would be, 'only 5 minutes left? Hmm, says here tatami mats are smelly, maybe they are or maybe Wikipedia's not reliable. Probably can't tell on this alone'.
Time determines sufficiency by limiting inquiry; inquiry generates evidence to justify belief.
haha,
http://www.artsci.wustl.edu/~grussell/dorisrussell.pdf
Thanks, Jack, for the article. Just read it; it makes me furious. I think the bank example is Keith DeRose's. The only way I can respond to their bogus counterexamples to Stanley's view is to point out that if one were in the car with the jackpot-winning bank-goers and were to say "So now do you doubt that the bank is open on Saturday?" one would be greeted with laughter. No one cares any more, so the question of whether the bank is open or not is irrelevant, and thus the evidence is irrelevant. The people who wrote this article seem to be cutting through this laughter and repeating "but do you know that the bank is open on Saturday? Surely you agree you do not, on your evidence?" The lottery winners will not be motivated to answer such a question.
Also, they are missing out factivity. By emphasising that the characters are smoking hashish, they invite us to conclude that they do not know whether or not the bank is open. But surely the fact they don't care whether the bank is open means their evidence was sufficient for the purpose at hand. Notice that if they don't care AT ALL, then they have sufficient evidence for the fact the bank is shut on Saturday too. The fact that it seems odd to attribute knowledge to these cases is not because the justification is lacking, it is because the belief is lacking.
Rob, yes, I agree about acting as if p being slightly different from believing that p. I wonder if this difference is down to pragmatics. "Acting as if x" seems to imply "~x". So in a fire drill you would act as if there was a fire, but in case of a real fire you would act on the belief that there was a fire. I'm kind of hoping this is a distraction. An occurrent belief that p is constituted by an action on the belief that p. A dispositional belief that p is a disposition to act on the belief that p over a wide range of actions. Acting as if p is not the same thing; it is something like acting on the counterfactual assumption that p.
B.T.W. yes, I've been posting as Blogginthequestion, but I'm trying to give it up.
Cripes! This is a gold mine. I was scared at first, but I think now that Stanley's book hasn't completely pre-empted my PhD. Thanks again Jack.
I agree, this article is not very convincing. To put it very briefly, it attempts to extract this absurdity:
“Ded knows more than Hannah (simpliciter)”
from:
(1) “Ded knows P (relative to his interests)”
and (2) “Hannah does not know P (relative to her interests)”
But if you think knowledge claims are interest-relative, you are unlikely to think that different agents’ knowledge can be directly compared, independently of some purposeful project, in a way that’s meaningful.
Yes Rob, you put it so well. That was what I was trying to say with the article writers in the car with the jackpot winners.