From: Sincerity and Truth: Essays on Arnauld, Bayle and Toleration (Oxford: Clarendon Press, 1988)
One of the arguments used by the Academic sceptics of ancient times, to force general suspension of judgment upon the Stoics, ran as follows: (1) Any proposition, however certain it may seem, may in fact be false; (2) the wise man (according to the Stoics) will not assert dogmatically anything that may be false;[Note 1] therefore (3) we should not affirm anything. Premiss 1 is fallibilism, which to me seems true, and 2 is a proposition of ethics which to me seems false but harmless, if I understand it correctly. If "assert dogmatically" means assert in a way that implicitly denies the possibility of being mistaken then perhaps 2 is true. But if it means something like "say is true, and ask others to believe", then it seems false, since there seems nothing wrong with asserting, in that sense, something that seems true even if there is some possibility of mistake. Still, in that sense 2 is harmless, since it would allow us to say that something seems true, or seems probable, and would allow us to act on such probabilities.
But some sceptics, ancient and modern, have gone on to a further conclusion which seems false and pernicious, namely that we might as well give up serious thinking and do nothing, or act on whatever we happen to believe on instinct or by custom or by accident. Many people seem to find it tempting to adopt that attitude as soon as they become convinced that certainty is unattainable. It might be a reasonable attitude if (as the Pyrrhonian sceptics claimed) what seems true were just as likely actually to be false whether it is the result of serious thinking or not, or if probability were not enough for action---if before acting we had to be absolutely and infallibly certain, as we never can be. But although it may be true for some people and for some subjects, it does not seem true generally that serious thinking cannot improve our chances of being actually right; and while a high degree of probability (or even in some sense certainty) may be required for some very drastic actions, in most cases a lower degree of probability seems sufficient.
Many members of the academic profession espouse a mitigated scepticism which I think is also false and pernicious. They assert that nothing is to be asserted unless it can be backed by "good and sufficient reasons"; or, if they distinguish between knowledge and mere opinion in terms of the reasons one must be able to produce to justify a claim to knowledge, they assert that nothing is to be asserted (at least, as being known) unless it can be backed by good and sufficient reasons. If they are not fallibilists these mitigated sceptics may say that the reasons have to be good enough to make the proposition certain. But usually they are not as strict as that. In fact they are usually vague about what counts as a sufficient reason; perhaps they think the standard of sufficiency cannot be formulated in the abstract but has to be learnt by association with those who perceive it. But they suppose that there is some standard, the same for everyone, which one's reasons have to meet before one is justified in making any positive assertion, or at least in claiming to know something. As for action, some say that any departure from what they regard as ordinary behaviour has to be justified by good and sufficient reasons, which puts an onus of proof on the innovator; others say that whatever one does has to be justified by such reasons; and others seem to allow action (as the ancient Academics did) on appearances and probabilities, confining their demand for sufficient reasons to an academic context.
My objection to the demand for good and sufficient reasons is that there is no standard of sufficiency. To formulate such standards is supposed to be the business of epistemology or of an "ethics of belief", but it has not actually been done. Ordinary deductive logic can say that if the premisses are true the conclusion must be true, but cannot say whether they are true. Inductive logic says what makes evidence stronger or weaker (e.g. a generalisation based on many diverse cases is better grounded than one based on a few similar cases), but cannot say whether the evidence is strong enough to justify belief. Epistemic logic is supposed to give rules for determining whether the evidence for a proposition is strong enough to justify affirming it, as knowledge or as reasonable opinion, but I have never seen any specimen rules: it remains an unrealized project. Similarly with the "ethics of belief": there is no way of saying how readily we ought to believe.
In the following essay I will argue for Bayle's position, that it is always right to assert and do what conscience suggests, to act upon whatever has the "taste" of truth (see above, Essay II, sect. 3.3), and that in some cases there may be no fault in asserting and acting on beliefs we have never examined and for which we can produce no reasons. Instead of an ethics of belief we need an ethics of inquiry. Believing does not need to be justified. The conduct that led to the belief might need justifying, the failure to do anything to test the belief might need justifying, but not the believing itself: just as doing what leads to catching cold, or not doing anything to treat it, may need justifying, but not having the cold.
Some Christians have held that faith is a duty, one with different implications for believers and non-believers: non-believers ought to investigate until they discover the vital truths, believers ought to protect their belief, if necessary by refusing to investigate.[Note 2] Against this is the view, developed first also by Christians[Note 3] but later taken over by Rationalists as an important part of their critique of Christianity,[Note 4] that the duty is not to believe any particular set of propositions, but to inquire seriously and to believe only what the evidence warrants. The Rationalist ethics of thinking, or parts of it, are summed up by various terms: open-mindedness, a critical outlook, objectivity, detachment, intellectual honesty, fair-mindedness, rationality, and so forth. These are ideals which many philosophers and other academics try to advance, sometimes in a crusading spirit.
Since concern for truth is a duty, or something morally good, acts and dispositions are subject to moral evaluation as demonstrating concern for truth, or lack of concern. The part of morality to which such evaluations belong I will call the ethics of thinking, stretching the term "thinking" somewhat. It has at least three possible branches, dealing with areas in which it might be thought possible to show concern for truth: the ethics of belief, concerned with "acts" of believing, assenting, suspending judgement etc., and with certain dispositions related to those "acts"; the ethics of assertion, concerned with the expression of belief, either mentally to oneself or verbally to others; and the ethics of inquiry, concerned with gathering and considering evidence and argument. I will argue that the ethics of belief and the ethics of assertion should be eliminated, or at least relegated to a minor place, and that the ethics of inquiry should take more account of circumstances which are often ignored. This essay is, to put it crudely, a plea for a liberty of dogmatising, a defence of the closed mind. More exactly, it defends ways of thinking which may seem to show lack of concern for truth if circumstances which I believe are morally relevant are overlooked, as they often are. Whether we are justified in believing or asserting something without inquiring further depends not on what evidence we have, but on what we could or should be doing besides inquiring.
The most obvious objection against an ethics of belief is that belief and the other "acts" in question are not voluntary, and there are duties only where there is voluntary control. Whether belief is voluntary is a question with a long history. The Stoics held that assent is voluntary, although we cannot withhold it when the evidence is clear.[Note 5] This was also the opinion of Thomas Aquinas, William of Ockham, and some other medieval writers.[Note 6] Duns Scotus, however, denied that the will can control belief except indirectly, by diverting attention from one matter to another: otherwise it would be possible, just by willing it, to believe that the number of stars is even---which is not possible.[Note 7] Scotists and Thomists debated the question at the Council of Trent in the sixteenth century.[Note 8] Descartes held that assent is an act of the will, voluntary but necessary when there is intuitive evidence, optional where there is not---but then it ought to be withheld.[Note 9] Most other modern philosophers have adopted the position of Duns Scotus. In Thomas More's Utopia they did not punish false belief "because they be persuaded that it is in no man's power to believe what he list".[Note 10] Hobbes also argued in this way for freedom of thought: "the inward thought and belief of men. . . are not voluntary. . . and consequently fall not under obligation."[Note 11] Locke in his writings on toleration also used the same argument.[Note 12] In the Essay Concerning Human Understanding Locke concedes that voluntary suspense of judgement may be possible when there is little or no evidence,[Note 13] but argues that otherwise the will's influence is indirect, through the control of inquiry.[Note 14] Many other philosophers since the sixteenth century,[Note 15] including most of those who have written on the subject recently,[Note 16] have argued that belief is involuntary.
This is also my opinion; belief is not directly voluntary, it can be controlled only indirectly. Some acts or changes we can bring about in ourselves simply at will; for example, we can open or shut our eyes, speak or be silent. Others we cannot bring about directly, but we may be able to act at will in a way which puts us under the influence of causes from which some change may result. Such indirect control is usually imperfect, since the outcome of the procedure is somewhat unpredictable. Belief is not directly voluntary; I cannot by a naked act of will believe that the number of the stars is even. But it is indirectly voluntary; I can open my eyes, or speak to someone, and what I see or the answer I hear may change my belief, though I cannot predict the change exactly. The most important of the indirect procedures is inquiry, the gathering and consideration of argument and evidence. Other causes besides evidence seem to influence belief,[Note 17] so there are other indirect procedures, for example to live as if a proposition were true or to associate with believers. Some beliefs seem to arise in childhood and throughout life from unknown causes which cannot be manipulated. Some means by which we may affect other people's beliefs, such as reward and punishment, may not be available for modifying our own beliefs. Whether, and how, and to what extent, we can control our own beliefs seem to me questions of fact[Note 18] about which some difference of opinion is to be expected, especially since people may try to control beliefs in different ways and with different degrees of success.[Note 19]
A moral code regulates voluntary behaviour; it regulates directly voluntary behaviour in the first place, and it may thereby regulate certain things which are indirectly voluntary. Precepts about what is indirectly voluntary must be reducible to precepts about what is voluntary directly, otherwise they cannot be carried out. Belief results, without possibility of direct control, from the action of various causes upon a person with certain dispositions; the dispositions include prior beliefs, and other dispositions, some temporary (such as moods or resolutions) and others more or less permanent. Belief can be indirectly controlled by controlling experience and dispositions. These cannot be controlled directly by a simple act of will any more than belief itself can be, but there may be indirect methods of control. If we do certain things which are directly in our power the effects may make a difference to our experience or dispositions, and in turn this may make a difference to our beliefs. The indirect control of experience is the concern of the ethics of inquiry; control of the dispositions would be the concern of an ethics of belief.
The traditional ethics of belief includes exhortations to suspend judgement, to be "indifferent", and to proportion assent to the strength of evidence.[Note 20] These things are not directly voluntary: we cannot stop believing just by deciding not to believe, or stop wanting something to be true or false just by deciding not to want it, or vary the degree of assent just by deciding to do so. These exhortations will have to be dropped unless they can be applied to indirect methods of control. Now there do seem to be indirect ways of putting oneself into a suitable frame of mind: by resolution, by self-exhortation, by reminding oneself of the risk of mistake, by interacting in certain ways with other people, by carrying out certain rituals, and so on. But the judicial frame of mind seems to pertain not to the ethics of belief but to the ethics of inquiry. It is a disposition, perhaps temporary and limited to certain subjects, to attend seriously to evidence on both sides. It does not seem to involve suspending existing beliefs or not wanting them to be true or false. A man or woman who holds a belief and hopes it is true may be sincerely concerned to know whether it really is true, and may seek out and attend to evidence against it. The members of a jury form a tentative judgement early in the case and test it and modify it as the evidence unfolds; without some tentative judgement they might miss the significance of some of the evidence.
Can we do anything to proportion assent to the strength of evidence? Assent can be changed by collecting more evidence, but this is not what is meant. Can we increase or decrease assent given a body of evidence? Turning the evidence over in the mind may modify assent, but this is by bringing about an altered perception of the evidence. Can we control our response to a given body of evidence once we have it all in view? Teachers try to teach students to be less precipitate in judgement, and perhaps we can educate ourselves in the same way; this may suggest that it is possible to develop a power to control assent. However I do not believe that teaching or self-education can give the power deliberately to control response to a given body of evidence---to size up the evidence and then allow oneself to feel convinced by it to just the right degree; response on a particular occasion seems to be spontaneous, and not subject to direct control. Response to a particular body of evidence may reflect a general responsiveness, and perhaps this disposition can be modified deliberately; education might make us habitually less ready to assent. However the effect of education can be accounted for, partly and perhaps altogether, in other ways not pertaining to the ethics of belief. Education may leave us just as ready to assent and our feelings of conviction may be as strong as ever, but it may make us cautious in certain other ways---more disposed to continue with inquiry (since we know from experience how much difference it may make), more tentative in speech and action. If we are properly cautious in these ways the spontaneous impulses to believe may not matter much.
Even if belief is not itself directly voluntary, and even if we cannot (to any great extent) deliberately modify the dispositions which determine readiness to believe given a body of evidence, still it may be that some degree of slowness to assent is a moral virtue, or at least a valuable quality of some sort. But I do not think we know what degree of slowness or readiness to value. It seems quite possible that those who form beliefs readily and hold them strongly inquire more effectively: they may be more responsive to evidence, more likely to notice and be bothered by conflict of evidence, more fertile in hypotheses. It seems possible that as long as readiness to believe is accompanied by sensitivity to conflict among beliefs, a concern for truth, and a readiness to inquire (which may not be unconnected with one another), quickness to assent may favour, rather than impede, the growth of knowledge. Until we know what to say to such suggestions we do not know which disposition to prefer, readiness to assent or slowness.
I conclude that there is little scope for an ethics of belief.
Since the act of asserting a proposition is directly voluntary, some of the traditional ethics of belief might be translated into an ethics of assertion. For example, the injunction not to believe without sufficient evidence[Note 21] could be reinterpreted as an injunction not to assert without sufficient evidence. Even if we cannot at the moment help believing whatever we believe with whatever degree of assent we happen to feel, perhaps we should refuse to assert or to act on a belief unless it is supported by good and sufficient reasons.
But there is another objection which applies to an ethics of assertion as much as to the ethics of belief, that there is no satisfactory way of specifying how much evidence is sufficient to warrant assertion or belief. It might be said that the evidence is sufficient when it is as much as a reasonable man or woman would require. Perhaps there is some way of saying in the abstract just how much evidence a rational person would require, and then the reference to the rational person is eliminable; or perhaps the standard of sufficiency can only be grasped intuitively through discussing a range of cases with a good judge. In the present section I will argue against various ways of specifying in the abstract how much evidence is sufficient. In section 4 I will suggest that what a good judge is good at is deciding when to break off inquiry to do something else, which is a matter for the ethics of inquiry.
Is it possible to say in the abstract how much evidence is sufficient? "Sufficient" invites the question "For what?" If the aim of thinking is to know or believe the truth, then we might begin by saying that the evidence must be sufficient to guarantee that the conclusion is true, and that an assertion is permissible only if it is warranted by such evidence.[Note 22] To guarantee that the conclusion is true, the propositions offered as evidence must themselves be true without possibility of mistake. If these propositions must also be guaranteed by evidence, then an infinite series of justifications would be needed before any assertion was warranted, and nothing could ever be asserted. So some propositions must be evident in themselves, or perhaps some pieces of evidence are not propositions; at any rate, justified belief must ultimately rest on something superior to the demand for justification.[Note 23] Unless there is infallible basic evidence there can be no knowledge, and no beliefs or assertions will be warranted.
But there are no infallible propositions, and no conclusions for which there is enough evidence to exclude all possibility of mistake. Experience seems to support fallibilism, the doctrine that an apparently true statement of any kind may be false. Perceptual judgements and assertions of logical truth are the kinds that seem least likely to be fallible. But perceptual judgements sometimes have to be revised---we have to concede that we must not have seen what we thought we saw or what we think we remember seeing. Circumstances can be imagined in which we might feel certain about what we think we perceive and yet be wrong. As for truths of logic, propositions which seemed such have sometimes had to be abandoned or revised, for example in the face of antinomies. A necessary truth must be true, but we may be mistaken in classifying something as a necessary truth, or in drawing out its implications, or in applying it to the world of experience. So error is possible with judgements of every distinguishable kind. There are no ultimate premisses permanently beyond question, for which supporting argument is impossible and unnecessary---though it may happen that for the moment we cannot think of any objection or supporting argument.
Further, it seems possible that we may make mistakes which we can never correct, because they never show up in conflicts among our beliefs, or in disagreements with other people, or in any other way. Some beliefs may be "incorrigible" and yet false. If we are in the power of Descartes's evil demon we will never know it unless he lets us know, and much of what we think will be incorrigibly mistaken. It cannot be proved that we do make undetectable mistakes; neither can it be proved that we do not. The suggestion is not proposed as a fact, only as a possibility; and it does seem possible. If it is, then we may be mistaken even about things which seem certain, and there is no knowledge, and no permissible assertion, according to this first formulation of the standard of sufficiency.
Perhaps knowledge can be defined according to some lower standard, or perhaps we can say that some assertions are warranted if they meet the lower standard even though what they assert is not knowledge. One possible lower standard is this: a proposition can rightly be asserted if it is free from all actual doubt. This is, in effect, the standard put forward by C. S. Peirce.[Note 24] His position is as follows. Any belief may be mistaken, but this possibility does not make any belief actually doubtful; suspicion on general grounds (for example, because it is always possible that what seems true is actually false) is not actual doubt. Actual doubt cannot be summoned up at will.[Note 25] Doubt arises when there is a specific objection, some conflict with another belief, for example with a perceptual judgement, or with the belief of another person.[Note 26] If there is no specific objection and no doubt, then the belief can be asserted without supporting evidence. If there is doubt it may be removed by inquiry, and inquiry is pointless unless there is doubt.[Note 27] Inquiry should continue until sufficient evidence is collected to remove actual doubt. To achieve this it is not necessary to argue from basic premisses permanently beyond doubt---in fact there are none, since in the course of life anything may become doubtful (though not everything at once); it is enough if the premisses are not doubted at the moment.[Note 28] A belief is as good as a belief ever can be while it is free from actual doubt.
Peirce hopes that experience and discussion will take us toward the truth. "There is but one state of mind from which you can 'set out', namely the very state of mind in which you actually find yourself at the time when you do 'set out'---in a state in which you are laden with an immense mass of cognition already formed."[Note 29] This original stock of beliefs (some of which may be instinctive) is continually enlarged by perception, abduction and testimony.[Note 30] It is also pruned. Conflict among one's beliefs, or with the beliefs of other people, leads to doubt, and this leads to inquiry, which eliminates, modifies or adds beliefs. The logician's hope is that these changes of belief take us closer to the truth. "Different minds may set out with the most antagonistic views, but the progress of investigation carries them by a force outside of themselves to one and the same conclusion."[Note 31] "Reasoning tends to correct itself. . . it not only corrects its conclusions, it even corrects its premisses."[Note 32] It even corrects its own method and motivation: "No matter how erroneous your ideas of method may be at first, you will be forced at length to correct them so long as your activity is moved by that sincere desire", namely, to learn what is true. "Nay, no matter if you only half desire it, at first, that desire would at length conquer all others, could experience continue long enough."[Note 33] Such is the logician's hope and faith: finite experience is not enough to demonstrate that it is true. A sceptic is one whose experience of inquiry has destroyed this hope: but perhaps the experience was not continued long enough, perhaps the despair is premature.
Peirce's approach is attractive. But on one important point I disagree with it. To remove actual doubt is not the sole purpose of inquiry; another purpose is to reduce the chance of mistake. The suspicion that a belief may be mistaken arises from (a) conflict with other beliefs---i.e. there is an argument against this belief from premisses which also seem true; (b) disagreement with other persons---if they give reasons using premisses which seem true then there is also conflict with one's other beliefs, but even if they give no reasons the mere fact that they disagree is disturbing; (c) the possibility that further experience or discussion may reveal conflicts or disagreement---a belief not much tested or discussed is under suspicion even if no actual conflicts or disagreements are known; (d) the possibility that some false beliefs may be incorrigible---for example, because our minds are controlled by an evil demon. Inquiry cannot remove (d), and this possibility therefore cannot affect the order of priorities among possible inquiries, and it cannot affect action since the error will never show up; it is a bare possibility about which nothing can be done. It is in this sense an "idle" hypothesis, but it might nevertheless be true. Inquiry can reduce suspicion (c) by testing and discussion, and (a) and (b) by the construction of arguments. If argument sometimes resolves conflict or disagreement, instead of extending it, some beliefs must "outrank" others, in the sense that conflict leaves the former unshaken and eliminates the latter. When that happens we do not just coolly "reallocate truth values" but undergo a change of belief. Perceptual judgements seem to be high-ranking. A proposition which conflicts with a perceptual judgement will probably cease to seem true---or, perhaps we should say, if a perceptual judgement is one of a set of seemingly true but apparently inconsistent propositions, at least one of the others will cease to seem true.
Perceptual judgements may not be the only high-ranking beliefs.[Note 34] A high-ranking belief is not immune from suspicion, and may itself in the end be eliminated if it ceases to seem true. But argument is useful even when its premisses are not all high-ranking; it may reduce or shift suspicion, or at least it may extend the conflict and articulate the problem. To a great extent inquiry is the construction of arguments from premisses which are low-ranking and themselves under suspicion.[Note 35]
"Actual" doubt is suspicion under heading (a), or perhaps (b); but there is at least one other heading, namely (c), to which inquiry may be relevant. Since even what is not actually doubted may be suspected of being mistaken, it is not pointless to investigate what is not actually doubted. A belief which is free from doubt after investigation is better than one which is free from doubt but has never been investigated.[Note 36] But since it is impossible and uneconomic to question everything, the agenda of inquiry must be selective. An obvious criterion of selection is the importance for theory or practice of being right. Some actually doubtful matters may not get onto the agenda because they are not important, and some important matters may get onto it although they are not actually doubtful. If it is important to be right, evidence which removes actual doubt may not be sufficient.
More recent authors have put forward theories which resemble Peirce's in allowing a proposition to be asserted with no, or little, supporting evidence provided there is no serious specific objection. For example, if I believe (or conjecture), without evidence, that there are Martians, I can assert (or entertain) this proposition unless there is evidence that there are no Martians. Thus Popper allows testable conjectures to be entertained pending refutation, and to be believed if they resist refutation. Chisholm regards a proposition as "acceptable" provided its contradictory is not "adequately evident". Lehrer says that belief in a proposition is completely justified provided we believe that the proposition has less chance of being false than any objection against it. Pollock says that we are justified in believing some propositions without reasons, and others for which the former constitute prima facie reasons, in the absence of good reasons for disbelieving.[Note 37] From some points of view the differences between these theories may be important, but they all alike set an easy standard of justified assertion. This might be (and by some of these authors is) accompanied by a more demanding ethics of inquiry. It is the ethics of inquiry which does the real work.
How important it is to make sure that an assertion is true depends on how it connects with other thought and action. To assert to oneself is to formulate and acknowledge one's belief, which is the first step toward reflecting on and criticising it, and acting on it. To assert to others is also a way of testing one's belief and may be part of acting on it (which may involve persuading other people). Apart from some such connections with further thought or action assertion is trivial, and would hardly need to be regulated by moral or other standards. How much supporting evidence or inquiry is sufficient depends on what the assertion will lead to, in theory or in practice. In lawcourts, for example, the standard of proof is higher in criminal cases than in civil because the judgements have different practical implications. So let us adjourn the attempt to decide how much evidence is sufficient to justify assertion and consider some of the connections between belief and action.
If a belief may be mistaken, as any belief may be, then it is appropriate to act on it with caution: with the more caution the greater the likelihood of being wrong and the greater the importance of being right. Caution can take several forms; for example, we may delay while we make more inquiry, or we may keep open lines of retreat. Caution costs something: to delay or to keep open a line of retreat is profitable only if the cost is likely to be exceeded by the benefit of reduced risk; there is an optimal degree of caution which may be less than the greatest possible. The forms of caution may supplement one another, or one may substitute for another: for example, the less it costs to provide a good line of retreat the less need there is to delay.
Since there are various kinds and degrees of caution, which can be adjusted, roughly, to the chance of mistake, it seems unnecessary to refuse to act on a belief which falls short of some single fixed standard of certainty. Such a refusal would be a kind of caution, but a simple-minded and rigid kind. A more graduated and discriminating caution is actually more cautious, since there is some risk in disregarding any proposition that seems true and relevant---it might, after all, be true. This is what is wrong with scepticism. It is often said that universal suspense of judgement would mean inaction, or action at random, to which the sceptic can reasonably reply that if any proposition we can act on is as likely to be false as true then we might as well do nothing or act at random. Another objection is that universal suspense is unnatural and impossible to maintain in practice; but that does not show that it is ever reasonable to judge and act on our judgements, only that sometimes we cannot help it. The objection I am making is that to me it does not seem true that every proposition that seems true is just as likely to be false (those to whom this does seem true will properly be sceptics). It seems safer, more prudent, more reasonable, to take into account in thought and action everything that seems true, even if it is uncertain---taking into account also the apparent likelihood of error. What seems true may be true---indeed we believe that it is (unless experience has led us to total mistrust of seeming truth); if it seems true, and if it may be true, then it is risky to disregard it simply because we cannot show that it is true with absolute certainty, or to some other standard of sufficiency. It is perhaps strange to say "it seems true, and I believe that it may indeed be true", but it is reasonable to say this when we adopt a critical attitude toward what seems. 
The sceptic says "it seems, but just as likely it is not"; I say, "it seems, and---sometimes at least---it seems probable that it really is as it seems, though it is always possible that it is not"; and if I am right, then it is incautious to think and act as though what seems true (though it may not be) is just as likely to be false.
When we act on a belief cautiously we are acting on other beliefs as well: about what else might be done instead, and the advantages and disadvantages of each option, about what can be done to salvage the situation if some belief turns out to be wrong, about the chances of being wrong, about the costs and benefits of further inquiry, and so on. Among the beliefs involved two kinds are noteworthy. First, there are higher-order beliefs about how to apply other beliefs to practical decisions. They may require us to act as if we do not believe certain things we do believe, or as if we believe certain things we do not---for example, as if we trust someone whom in fact we distrust.[Note 38] Beliefs about the appropriateness of caution in some situations, and about the form it should take, are among these higher-order beliefs. Second, there are beliefs about the likelihood of mistake. An estimate of the chance of being wrong may be based on the strength of the feeling of assurance, or on the experience of testing similar beliefs, but usually it is a spontaneous belief for which little or no support can be offered. Usually the estimate is too rough to represent as a numerical probability. Sometimes no estimate is possible; but to be cautious in action we must be able to pair at least some of the beliefs involved with an estimate of the chance of being wrong.[Note 39] In view of the role of these two kinds of beliefs, and others, it might be better not to speak of acting on a belief as if in isolation, but of acting on the ensemble of one's beliefs, or at least on the relevant subset.
What I propose, then, is that we should reject the view that lack of justifying evidence or argument is itself an objection against a belief (for some people this would be a mental revolution---it is very common to assume that if we find that we cannot justify a belief we should therefore abandon it), and that we should acknowledge (assert) and act on all our beliefs, including those for which we can find no credentials ("spontaneous" or "intuitive" beliefs---I call them such without implying that there is any special infallible faculty of intuition). Spontaneous beliefs arise perhaps from instinct, or from (other) occult causes. Abduction and deliberation[Note 40] are important thought processes which lead to spontaneous belief. In most departments of thought little of substance would be left if intuitive beliefs really were excluded, and without guesses about the chance of being mistaken and about the likely yield of various possible inquiries cautious action and planned inquiry would be impossible. The point, however, is not that we cannot get on without them, but that it is imprudent to ignore anything that seems true and relevant, if it might indeed be true.
The possibility of mistake calls for caution. In a particular case considered by itself it might seem worthwhile to postpone action for further inquiry; the improvement in decision and the reduction in the risks of action might seem likely to make up for what is lost by delay. But the decision whether to make further inquiry cannot be made in isolation. The time and resources needed for more inquiry might be more urgently needed elsewhere. We must decide by balancing not two things, the benefit in this case of inquiry against the cost of delay, but many competing possibilities; and we must decide not project by project but across many projects at once. To adjust the competing demands of many projects is a typical problem of economics, so as a first step toward an "ethics of inquiry" it may be useful to sketch out an "economics of inquiry". To bring the two together I will assume (for this section) that we have only one moral duty, to do as much good as possible. I will also assume that there are other goods besides truth, that we therefore engage in other activities besides inquiry, and that sometimes inquiry may help one of these other activities. The problem is to decide when we have carried an inquiry far enough for the time being.[Note 41]
We will do our duty if we allocate our time and resources in such a way that any reallocation which would have a better outcome in some respects would have a worse outcome in other respects, and the losses would equal or exceed the gains.[Note 42] Now it is important to notice that allocation includes timetabling. The right quantities of time and resources must be allocated in the right order, at the right times. An inquiry or other activity which generally goes well may run into trouble, or a generally difficult one may open out, and then attention should switch from the one to the other. Progress along one line may presuppose progress in another. Some necessary resource may not be available for the moment; some practical matter may for a time take priority over speculation. In practical matters opportunities and deadlines are set by outside causes, and may be ordered and spaced out in various ways. To do as much good as possible we must judge well when to switch from one project to another.
The timetabled allocation can be thought of as a plan of action. What we are doing now should be the first stage of a plan, or of a number of possible plans, than which there is none better. The plan should allocate some time to reconsidering ends and means and to revising the plan, but not too much, since the return on revising such plans is especially uncertain. The plan is based on various beliefs which could themselves be investigated; the decision whether to investigate them is also based on beliefs which could be investigated; and so on, indefinitely. But we are not obliged to investigate every belief on which we act. We investigate only when it seems likely that investigation will yield a good return. Since no plan will be carried right through, the later stages can and should be left vague. The plan should make sense starting from scratch at this moment. Perhaps something was not done that should have been done, but this does not mean that it should be done now, or ever, since opportunities pass and new needs appear. Perhaps something was done in the past to prepare for something to be done now, but that does not mean that it should now be done, since by now it may be possible to do something better.
An inquiry has been carried far enough for the time being when a properly drawn-up plan directs a switch to something else; it will do so when the likely gain from more inquiry now is exceeded by the likely loss from not doing something else instead. This provides an answer of a sort to the question earlier adjourned, of how to decide when the evidence is sufficient to warrant assertion or other action. If we want to continue to talk about evidence being sufficient, let us say that we have to make do with the evidence we have, we must regard it as enough for the time being, when the moment comes to adjourn the inquiry. The evidence is then sufficient not forever, and not by some logical or epistemological standard which is the same for everyone,[Note 43] but in relation to the inquirer's particular circumstances at the time---his or her purpose in inquiring and other purposes, the time and resources available, the sequence of opportunities.
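The stopping rule just stated can be compressed into a rough inequality. Here $G_I$ stands for the gain expected from continuing this inquiry now, and $G_A$ for the gain expected from the best alternative use of the same time and resources; the symbols are mine, not the author's, and the expectations are the rough intuitive estimates the text describes, not calculable quantities.

```latex
% Adjourn the inquiry, and treat the evidence as sufficient for the
% time being, as soon as:
\mathbb{E}[G_I] < \mathbb{E}[G_A]
% i.e. the likely gain from more inquiry now is exceeded by the likely
% loss (the forgone gain) from not doing something else instead.
```

Since $G_A$ depends on the inquirer's particular circumstances, purposes, and opportunities, the same body of evidence may satisfy this condition for one person and not for another, which is the relativity of "sufficiency" the paragraph asserts.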
Justifying believing something is not the same as justifying the proposition believed. A proposition is justified, to some extent and provisionally, by testing and argument.[Note 44] But what about the believing? If belief is not directly voluntary, then the "act" of believing does not need to be justified morally, whatever the belief. If belief is the indirect result of some reprehensible directly voluntary act or omission in the past (for example, neglect to inquire) then that act or omission, but not the belief, is reprehensible, and would be even if this belief had not resulted.[Note 45] It seems to me, then, that believing is not itself the sort of thing that requires or can be given justification. What about allowing a belief to stand, without subjecting it to (further) inquiry---when is that justified? That depends on our "plan"; if the plan does not direct us to investigate the belief now, then whatever evidence we have for it (if any) must be deemed sufficient for the time being for whatever assertion and other action our plan envisages, and we are justified in allowing the belief to stand. Perhaps we ought to have investigated it in the past and did not, but this does not mean that we should investigate it now or in the future. Different people should have different plans, because of their different starting-points, talents, deadlines and opportunities. Two conclusions follow: (1) a body of evidence which suffices for one person, given his or her circumstances, may not be sufficient for another; and (2) in some circumstances we may be justified in letting a belief stand for which we have no evidence at all---even if we have never examined it, even if it conflicts with our other beliefs, even if other people reject it. I do not mean merely that this sort of behaviour may be excusable---that if they have acted by their lights in a difficult situation those who violate the morality of thinking in its rationalist version should not be blamed. 
I mean that by the standards of a more reasonable version they may have done no wrong, objectively.[Note 46]
The argument of the last section assumed that we have only one moral duty, to do as much good as possible. But this view of duty is not correct. It is in some ways too lax, in others too demanding. I want to leave the content of the moral code as open as possible (without implying that one code is as good as another), but it seems that a correct code will probably not include a general duty to do as much good as possible, and probably will include special duties some of which are of strict obligation, to be carried out whatever the effect on all or some other legitimate projects. What is left after the strict duties are done need not be devoted entirely to the service of a set of compulsory goals; some goals will be optional, some good acts will be supererogatory. The duty to further a goal seems generally to be of imperfect obligation; that is, it requires a reasonable effort over time, but no particular act at any particular time.[Note 47] Within the field of optional activities there are differences of better and worse. The maximising plan described in the last section, modified to allow for the demands of special duties, would show the best way of allocating time and resources, and this would provide a reference point for judgements of better and worse. But to do what is better is not an obligation, and not essential to being rational.[Note 48]
Perhaps there are duties of inquiry (either strict duties, or duties of imperfect obligation), some of which may go with certain roles (as member of a jury or of a commission of inquiry, examiner, etc.). In optional inquiry we do better or worse depending on how closely we conform to a version of the maximising plan, modified to allow for the demands of duty. We can say that in optional inquiry sufficiency is judged in relation to the plan, except that "sufficiency" is too strong a word---any amount of inquiry is morally sufficient when there is no duty. Perhaps the right word is zeal. Due concern for truth does not require the greatest possible zeal. How zealous someone is for truth can be judged by reference to the maximising plan. A given degree of zeal for truth will not require the same inquiries from everyone, since their plans ought to reflect their different circumstances, and it is compatible with not examining even uncertain beliefs which one holds and acts upon. In fact the conclusions drawn at the end of the last section still hold: it may be that of two persons having the same belief and the same evidence, one may reasonably let the belief stand while the other should subject it to inquiry, and it may in some cases be rational to let a belief stand entirely without supporting evidence and without ever examining it. To exclude these conclusions it would be necessary to adopt a code of duties so comprehensive, exacting and undiscriminating as to leave no room for optional inquiry or for differences of duty corresponding to the individual's particular circumstances and projects. I do not imagine it would ever be reasonable to adopt such a code.
So whether the truth lies with some maximising ethic, for example some form of utilitarianism, or with some kind of deontology, in either case it will not be easy to decide whether someone is properly zealous for truth because of the difficulty of knowing enough about the relevant circumstances. ". . . you will perhaps think this is a case reserved to the great day, when the secrets of all hearts shall be laid open; for I imagine it is beyond the power or judgment of man, in that variety of circumstances, in respect of parts, tempers, opportunities, helps etc. men are in, in this world, to determine what is everyone's duty in this great business of search, inquiry, examination; or to know when anyone has done it."[Note 49]
Methodology is like a morality of scientific thinking, a combination of an ethics of assertion and an ethics of inquiry.[Note 50] According to Popper, belief is irrelevant to science, which is an object, an artefact, constructed by scientists but existing independently of their minds.[Note 51] A methodology is a set of rules controlling changes to this artefact. The attempt to control construction from the foundations up has been abandoned, since there are no foundations. We begin with theories already provided by instinct or tradition (the source does not matter), test them, and change them when they fail. We need creative imagination to devise tests and revisions, but some possible changes are excluded by methodological rules which, ideally, it should be possible to apply mechanically, without exercising intuitive judgement. The ideal set of rules would exclude all possible changes but one, so that competent inquirers would all agree. A theory is not objective unless it can be tested by public, "intersubjective", repeatable observations;[Note 52] otherwise agreement could not be attained. But the ideal rules have not yet been devised, and agreement must sometimes rest on convention.[Note 53]
I disagree with this view of science at several points: the elimination of belief, the claim that concern for agreement is essential to objectivity, and the elimination of intuitive judgement from methodological decisions.
(1) A theory can be regarded as an artefact independent of its constructors' beliefs; a fundamentalist Christian might make useful contributions to the theory of evolution, an atheist might be a theologian. But the point of work on these artefacts (at least the ostensible point---the real point may be to advance someone's career) is to provide means of knowing and understanding[Note 54] and guidance for action. Explicit knowledge and understanding involve belief, and a theory can reasonably be acted on only if it is thought to be true---that is, believed. There is therefore no point in elaborating a theory unless it is at least "potentially credible", that is, unless someone (not necessarily the constructor) might believe it.
(2) Thinking is objective if it has certain characteristics which make it likely to lead to truth. I will not try to specify the characteristics, or to analyse the notion further; the essential point is that objectivity is connected with truth, not with agreement.[Note 55] Disagreement is a reason for suspecting falsity,[Note 56] but it is not a conclusive reason, nor the only reason; it would not be absurd to suppose that some people can see truths to which others are congenitally blind, or, on the other hand, that the human race may unanimously and forever agree upon something false. Thus there is no necessary connection between truth and agreement, nor, therefore, between concern for agreement and objectivity. So there seems no reason to believe, and I do not believe, that science cannot be objective unless it is "intersubjectively testable". Much of physics is not intersubjectively testable, if this means actually testable by everyone, because some people are deaf and blind; if the race evolved so that human beings normally were blind then optics might come to seem mystical except to a sighted minority, but it would not for that reason lose its objectivity. On the other hand, if the principle means testable by those capable of appropriate experiences then it is vacuous.[Note 57]
Nothing in the nature of science, therefore, forbids work on a theory by those who think they understand it and regard it as potentially credible, even if most other people regard it as mystical or quite incredible. It may be that questions about the use of common resources or the conduct of educational institutions will require collective decisions, and this may lead to political activity some of which may be quite legitimate; for example, there might be academic political activity aimed at taking away resources or room in the curriculum from some theory or subject which most members of the scientific community regard as mystical or incredible. But nothing in the notion of scientific inquiry itself requires consensus, spontaneous or imposed.
Perhaps the notion of a scientific community engaged in certain kinds of co-operation requires concern for agreement. Two kinds of co-operation can be distinguished: (a) discussion, and (b) management of common resources and common projects other than inquiry (teaching, for example). The first kind is part of inquiry. Discussion does not presuppose agreement, or even aim at it (the aim is knowledge of the truth---of course if all the participants attain that aim they will, incidentally, agree). The second kind may presuppose some agreement. If we decide to call people scientists only if they are members of a community based (in part) on the second sort of co-operation, then agreement may be an essential concern of science.[Note 58] But I do not think use of the word "science" should be restricted in this way, and if it is, that will merely determine what certain activities should be called, not what people should do or how they should do it. Agreement is neither a presupposition nor an aim of the search for truth.
(3) No plausible methodology yet formulated can do without intuitions, hunches, guesses and the like. For example, there are no rules which say that in specified circumstances a rational inquirer must definitively abandon one paradigm or research programme for another.[Note 59] The multiplication of research programmes alarms some, delights others. "We must find a way to eliminate some theories. If we do not succeed, the growth of science will be nothing but growing chaos."[Note 60] "Knowledge. . . is. . . an ever increasing ocean of mutually incompatible (and perhaps even incommensurable) alternatives."[Note 61] Economic considerations make some sense of both attitudes, and suggest that each may be appropriate in different circumstances: when one programme is clearly more promising the others should be eliminated or adjourned, but when no programme seems very promising many programmes should be tried. A programme abandoned now may be revived later, if the rival programme which now seems more promising runs into difficulties. Since difficulties cannot always be foreseen, it may be sensible to keep several programmes going simultaneously---but not too many, only those whose chance of yielding credible theories seems good enough to justify the costs of carrying them on.[Note 62] Just as a firm occasionally reviews its projects and eliminates or reduces those which seem least promising, so an inquirer should occasionally abandon or shelve the programmes which seem least likely to yield credible theories.[Note 63] This is an investment decision made under uncertainty, and for such decisions there are no rules sufficient to eliminate intuition and guesswork. The inquirers need not all make the same investment decision,[Note 64] although some collective decisions may be needed.
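The investment comparison sketched in this paragraph can be stated schematically. Writing $p_i$ for the estimated chance that programme $i$ will yield credible theories, $V_i$ for the value of such a yield, and $C_i$ for the cost of carrying the programme on, all symbols mine, and the probabilities, as the text stresses, guesses rather than calculable quantities:

```latex
% Keep programme i going only while its expected yield covers its cost:
p_i \, V_i \;>\; C_i
% Because the p_i are rough intuitive estimates, not outputs of any rule,
% different inquirers may reasonably keep different portfolios of programmes.
```

The point of the sketch is negative: the inequality names the quantities the decision turns on, but supplies no rule for estimating them, which is why intuition and guesswork cannot be eliminated.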
If these three points are right, then from the standpoint of the morality of thinking there is no difference between scientific and other inquiries, and what was said earlier about inquiry in general applies also to scientific inquiry. There is no reason in the nature of science why scientists should not conduct their inquiries according to their own beliefs, including intuitive beliefs for which they have little or no evidence, even if other scientists disagree.
The norms of intellectual honesty in discussion and teaching are part of the morality of thinking, or closely allied to it. We can show concern for truth not only in the interior dialogue of our own thinking but also in dialogue with others. I will indicate briefly how the line of thought I have so far followed extends into these areas.
One of the premisses of J. S. Mill's argument for freedom of discussion is that our best means of coming to know the truth is to listen to all that can be said against our beliefs by persons of every variety of opinion.[Note 65] The obvious objection is that life is not long enough.[Note 66] Since time is scarce the inquirer must choose some inquiries and some ways of inquiring instead of others, and must judge when to switch from one to another. To listen to other people is only one way of inquiring, and there are many people who might be listened to. It would be inefficient to discuss everything with everyone through to the end; we must decide when to stop listening and switch attention elsewhere. Special duties attached to certain roles may require us to go on listening even when it might seem a waste of time, but it is unlikely that the list of such duties will add up in practice to anything equivalent to Mill's recommendation.
In deciding whom to listen to, or what to read, we are guided by beliefs about the likely returns on time spent in this and other possible ways. Such beliefs may be based on advice, for example from a book reviewer. We may be unlucky in our advisers, but if time is very short we may still do better to act under fallible guidance than to leave the allocation of time to chance. Though book reviewers are fallible, it is not sensible to read books through to the end in just the order in which they happen to come to hand.
Teachers are advisers in the planning of inquiry, who advise students on what to read and whom to listen to. A reading list may be more useful if it is selective. If the teachers' advice is followed the effect may be much the same as censorship. There is obviously a risk in this, a risk which is increased when the same advisers help select books and articles for publication and help select people for jobs, and in particular when they select and train and appoint their own successors. To give all these tasks to the same people saves time and resources, but it is risky. On the other hand, even if these advisers are only moderately reliable, not to accept their guidance may be wasteful and therefore harmful. There is risk either way.
In relation to teaching, the main point of the academic ethics is to reduce the risk that the teacher may censor and repress. This is the point, for example, of the rule that good marks should be given for well-argued papers whatever the opinion expressed. But examiners must rely on their own opinions about which propositions have a fair chance of being true. It is not arguing well merely to avoid self-contradiction and validly derive some improbabilities from others. Arguing well means knowing what is unlikely and needs support and what can be taken for granted, which objections are strong and which are too improbable to need an answer. If the argument is that an hypothesis must be true because it is the only good explanation of the data, then the arguer must be able to distinguish a good explanation from others which are possible but unlikely. In judging whether someone argues well we must therefore rely tacitly on our own sense of what is likely.[Note 67] We can think that a proposition has a fair chance of being true without believing that it is true, so it is possible to judge that someone argues well while disagreeing at many points. Still, a student whose opinions about what is likely to be true are too much at variance with the teacher's cannot reasonably be given good marks. The problem would arise even in Utopia. If jobs were more equal the teacher's power would be less; but as long as people would rather be selected for one social role than another, for whatever reason (selfish or not), and as long as selection is based (to save time and resources used in selecting) partly on performance during education, there will be a risk that teachers may censor and repress, perhaps unwittingly. The economies may justify the risk.
Students and members of the public also assess the reliability of teachers, experts and other leaders, and their assessments also reflect what they think likely to be true. Which leaders people choose or willingly accept therefore depends on the quality of existing public opinion. Since discussion and other forms of inquiry begin from and are guided by the inquirer's existing beliefs and by advice given by leaders whose influence depends on the existing state of opinion, discussion in a community in which ignorance and error are widespread may merely strengthen the authority of pseudo-experts and confirm and disseminate false beliefs. People much superior to the rest might serve the cause of truth best by forcibly displacing the existing leaders, defending their position by means of censorship, and using their influence to spread true opinions. If this produced a more enlightened state of public opinion free discussion might then become the best means of further improvement; but until then authoritarian methods might be better.[Note 68] Thus Mill's argument for freedom of discussion assumes some degree of general enlightenment, as he acknowledged;[Note 69] it will not, and should not, convince those who believe that very many people are seriously mistaken on matters of importance.
I have argued not merely that we cannot (directly) help believing whatever we believe, or that it may be reasonable to have beliefs which may be mistaken, but that it may be reasonable in some cases to assert and act upon possibly mistaken beliefs without further investigation; and I have suggested how we might decide in a given case whether that is reasonable. At the beginning I said that, crudely, this essay is a defence of the closed mind. This is how it may seem to those who hold the position I have attacked, namely that it is immoral to hold a belief and act on it without sufficient evidence. But more exactly I am proposing to reinterpret parts of the ethics of belief as an ethics of inquiry and of action under uncertainty. Under the proposed reinterpretation, whether we are justified in acting on our beliefs without further inquiry does not depend upon whether our evidence is sufficient by some standard unrelated to our projects, talents, opportunities, etc. but on how else we could use our time, given our circumstances. It does not seem possible to say exactly how it depends on circumstances, so as to reduce the decision to rule. The relevant circumstances will usually be known in detail only to the person him- or herself. We should therefore be pretty slow to say that someone else has a closed mind; on the other hand, we cannot be too sure that we ourselves have not. Sometimes (for example, in helping to choose someone for public office or for a job) we need to judge other people's devotion to truth, but in most disagreements it is enough simply to urge on them what we may think they would have seen if they had looked further, without suggesting that they should have done so. Instead of passing judgement it is better to argue. 
But of course none of this means that in intellectual matters there are no duties and no moral differences of better and worse, or that one opinion is as good as another, or that it is wrong to criticise---indeed, it is part of respecting others to take their opinions seriously enough to criticise them, and to credit them with willingness to listen to criticism frankly expressed.
The common academic morality dismisses "unfounded" opinions (especially in religion and politics), and discourages people from asserting and acting on beliefs they cannot justify. Against this I claim that our assertions and actions may and should take account of all of our beliefs, including opinions, intuitions, and other beliefs for which no credentials can be shown. Philosophers since Plato have disparaged mere opinion, holding that while opinions may be unavoidable and in some ways useful they are no part of philosophy. Modern philosophy has tried to base thought and action on principles which are certain, and to devise criteria and rules of thinking which can be applied mechanically to give authoritative conclusions. These attempts have not succeeded. The principles are merely spontaneous beliefs under another name; criteria are revised if they give intuitively unacceptable decisions; the proposition that thought and action should be based only on certain principles cannot be justified in its own terms (since it is not self-evident or provable); the theoretical structure grows too slowly for the needs of action, and a gap opens between theory and practice---everyday life is based on opinions which are not part of the justified system, and the opinion that it is all right to live this way is not part of the system either.[Note 70] The academic ethics ends up as an armoury of weapons to be used selectively against unpopular creeds.
The policy which I advocate is at least self-consistent and practicable: to assert and (with due caution) act upon all the propositions which seem true and relevant, whatever their source and credentials, examining them if and when this is opportune, not claiming for any of them---even after thorough examination---any infallibility or authoritative status.[Note 71] This policy does not guarantee that a false belief accepted with no credentials except that it seems true will eventually be eliminated. No policy for the conduct of the understanding can guarantee this, since it seems possible that some false beliefs may be incorrigible. But as long as it does not seem true that what seems true has an equal chance of actually being false, there seems to be nothing more rational we can do than to act on what we believe even when we cannot prove it.
Epistemology and the ethics of belief are two parts[Note 72] of a wider enterprise without a name, the response to scepticism. The term "epistemology" refers to the Platonic contrast between epistēmē and doxa, knowledge (in a strong sense) and mere opinion.[Note 73] Academic sceptics deny that we have knowledge; epistemology defines knowledge and tries to answer the question whether we have it. The Academic denial has its place in an argument like that which was presented above at the beginning of the introduction to this essay: (1) Any proposition, however certain it may seem, may in fact be false; (2) we should not affirm anything that may be false; therefore (3) we should not affirm anything. Instead of (2) the Academic may say: "(2a) We should not affirm anything we do not know." If we cannot, in the strong sense, know anything unless it is true, then, once more, conclusion (3) seems to follow. My response to such arguments is to concede (1), but deny (2) and its analogues, and deny (3). But others avoid (3) not by denying (2) outright but by substituting: "(2b) We should affirm nothing of which we are not at least reasonably or justifiably sure" (although it may in fact be false). Thus epistemology comes to concern itself not only with knowledge in the strongest sense, but also with the broader notion of justifiable belief.[Note 74]
The response to scepticism---the attempt to work out what really does follow, for theory and for practice, from the considerations which move the sceptic, and in particular from the possibility that whatever seems true may actually be false---is a necessary undertaking. But in my opinion epistemology, like the ethics of belief, is a mistaken project and should be scrapped---though much of what is at present classified as epistemology might still be useful in responding to scepticism. There is, I believe, no philosophical importance in the distinction between knowledge and mere opinion, and no proper notion of epistemological justification. Much work in epistemology manages to avoid the question of what it means to say that a belief is epistemologically justified. For example, the classic definition of knowledge in epistemology is that we know a proposition if (a) it is true, (b) we believe that it is, and (c) our belief is justified. Is some fourth condition needed, to rule out a claim to know in a case in which we have a justified but false belief which implies another belief that happens to be true?[Note 75] To discuss this question it is not necessary to say what "justified" means, although the term occurs on every page. In the same way many other questions of epistemology can be discussed without giving any account of this central concept. Now I challenge epistemologists to say, before they pursue those other questions any further, how we are to judge whether an affirmation is justified in the epistemological sense, and what that sense is.
Some will say that we are epistemologically justified in affirming whatever seems true. I agree that we are justified in affirming whatever seems true, but what is the point of "epistemologically"? I say we are justified not epistemologically but morally, against those who say that we have a moral duty to affirm only what we know or are reasonably sure of: we are justified simply because there is no such duty. To say for this reason that it is justifiable to affirm whatever seems true is to reject the pretensions of epistemology, not to provide an account of "epistemological justification".
The epistemologist may in turn challenge me to say what I mean by knowledge, as distinct from mere opinion ("I don't just think it, I know it"), and what I mean by the justification of a proposition (since I do refer to such justification---see above, sect. 4.1). I will begin with justification. A proposition is justified to someone's satisfaction, and for the time being. If I believe that a certain proposition is true, test it, look for evidence and argument for and against it, and at the end of the time available for this investigation I still believe it---or if you assert something, and in response to my demand for justification produce evidence, arguments, results of testing, which in the end cause me to share your belief---then it has been justified to my satisfaction, not necessarily to anyone else's; and the end of the process is always in principle an adjournment because it is time to do something else, not an absolutely final conclusion. There is no question of measuring the evidence against epistemological canons, which in any case do not exist.
As for what I mean by knowledge, my account is as follows. In formulating the conditions for correct application of the term "know" we need to distinguish two kinds of cases, namely those in which I claim that I have knowledge, and those in which I ascribe knowledge to someone else, for example to you.[Note 76] If I claim that I know something, then (i) I must believe it, and (ii) I must estimate as pretty low the likelihood that it is actually false. If I say that you know some particular proposition, then (i) I must believe that you believe it, (ii) I must also believe it, (iii) I must estimate as pretty low the likelihood that it is actually false, and (iv) I must think that it is not just by chance that you got it right.[Note 77] Two questions arise. First, when I say you know something, must I think that you estimate as pretty low the likelihood that it is actually false? I am not sure, but I think not: it seems possible to say that you know something you do not know you know---or even think you do not know---as long as you believe it. Suppose I believe that you have second sight, but you believe there is no such thing and try to ignore these fearful expectations that unaccountably come upon you, though you cannot help believing that your fears will come true: then I may regard as knowledge beliefs of yours which you quite reasonably regard as highly suspect. "You did know, after all," I may say when things turn out as you could not help fearing they would. Second, if I claim to know, must I believe that it is not just by chance that I am right? Again I am not sure, but I think not: it seems to be enough if I am confident that what I claim to know is not actually false. If I thought my belief was an accident I might have no confidence in it, but I may be confident without knowing why, without my confidence being based on anything. 
Condition (iv) must be satisfied if I say that you know, because otherwise I could ascribe knowledge to you simply because you believe something I believe I know. The point of claiming that I know seems to be to give others my assurance (for whatever they think it is worth) that the thing is true,[Note 78] whereas the main point of saying that others know seems to be to say something about them.
None of this involves any reference, even tacit, to epistemological canons, or to justification under the ethics of thinking. If I claim knowledge I do not imply that my belief is justified under the ethics of thinking, and I do not imply that yours is if I ascribe knowledge to you. What I recognize as knowledge depends in the first place on what seems true to me. I cannot claim to know p, or ascribe knowledge of p to you, unless to me p seems true. It may not be true. I must estimate as pretty low the possibility that it is actually false. "Pretty low" is vague and cannot be made precise; it cannot be measured against canons valid always, everywhere and for everyone. When does a house count as a "big house"? That depends on the social circles you move in, and on many other things. Similarly, in some circles, and in some contexts, people claim or ascribe knowledge more readily than in others. To make the best use of others' advice we need to develop a sense of how readily they claim to know; to communicate our views properly we need to develop a sense of how they will take our expressions of doubt and claims to know. In the expressions that convey these nuances there is no ideally correct standard of usage. Philosophers careful never to claim to know or to be certain will often seem more uncertain than they really are---without upholding any philosophical standard, because there is none.
When someone asserts or claims to know something, I do not think we should conduct any inquest to see whether he or she has enough evidence to justify making such a claim. We should simply ask, is it true? To answer that question we may want to know what evidence the asserter has, but we should not then try to decide whether it is enough to satisfy the canons of epistemology, or whether the asserter was justified in making an assertion on the basis of such evidence: instead we should simply look for more evidence, for and against, for as long as it seems reasonable to continue the inquiry. Sometimes we need to assess the reliability of other people. That does not depend on how readily they claim knowledge (though we may need some sense of that to interpret what they say), but on how likely they are to be right---which, in turn, does not depend on their epistemic standards, but on their general background knowledge of relevant matters, their intelligence, and how much study they have made of the matter. It has nothing to do with reluctance to believe.
Note 1. Cicero, Academica, II.xx.66-7. Perhaps the origin of this thesis was simply the thought that no one is ideally wise who ever makes a mistake. But it does not follow that no one trying to realize that ideal must ever risk making a mistake.
Note 2. The first Vatican council denied that Catholics and non-Catholics have the same duty in matters of belief. Theologians explained that those in the true faith are obliged to be constant and forbidden to doubt, whereas those not yet in the true faith are obliged to seek it, and to doubt their present sect and leave it after inquiry. See Harent, "Foi", in DTC, vol. 6.1, cols. 287-9. See also Thomas Aquinas, Summa, 1-2, q.6 a.8, q.19 a.6, q.76 a.2. Contrast Bayle's claim that we must be ready to listen to missionaries even from Australia: above, Essay II, n. 92.
Note 3. See Hooker, vol. 1, p. 269 ("[I]t is not required nor can it be exacted at our hands, that we should yield unto any thing other assent, than such as doth answer to evidence which is to be had of that we assent unto"); Chillingworth, pp. 293-4 ("God desires that we believe the conclusion as much as the premises deserve, that the strength of our faith be equal or proportionable to the credibility of the motives to it"); Taylor, Liberty of Prophesying, pp. 495, 497; Locke, Conduct of the Understanding, passim (e.g.: "What one of a hundred of the zealous bigots of all parties ever examined the tenets he is so stiff in, or even thought it his business or duty to do so? It is suspected of lukewarmness to suppose it necessary, and a tendency to apostacy to go about it" (p. 381)); Bayle, CP, pp. 337, 428, 437, 438.
Note 4. E.g. Collins, p. 33, Clifford, p. 186, James Mill, pp. 20-1, Kant, Religion, p. 124n. ("The shepherds of souls instil into their flock a pious terror. . . of investigation. . . a terror so great that they do not trust themselves to allow a doubt concerning the doctrines forced upon them to arise, even in their thoughts, for this would be tantamount to lending an ear to the evil spirit.").
Note 5. Cicero, Academica, I.xi.40, II.xii.37-8, Zeller, pp. 88-9, Stough, p. 121. The Sceptics held that the evidence never is absolutely clear, and that assent should always be withheld. (It is possible that the Stoics and Sceptics held that assent and non-assent are always both voluntary and necessary; see Burnyeat, p. 42, n. 38, and see below, Appendix.)
Note 6. See Thomas Aquinas, De veritate, q.14 a.1. According to Ockham, when there is evidence, assent and the apprehension of evidence are the same thing, but when evidence is absent, belief can be caused by the memory of seeing the evidence or by the will (2 Sent. q.25, L, X-Z).
Note 7. Harris, vol. 2, pp. 285-90. That the stars are even (or odd) was a commonplace example for ancient and medieval writers of a proposition which is inevident and neither probable nor improbable; see e.g. Cicero, Academica, II.x.32, xxxiv.110, or Sextus Empiricus, Against the Logicians, I.243, II.147, 317.
Note 9. "Reason. . . persuades me that I ought no less carefully to withhold my assent from matters which are not entirely certain and indubitable than from those which appear to me manifestly to be false" (Descartes, vol. 1, p. 144). See also pp. 175-6, 233. Cf. Cicero, Academica, I.xii.45, II.xx.66-8.
Note 11. Hobbes, pp. 500-1; cf. pp. 410, 526, 527, 576, 700.
Note 12. Locke, Toleration, pp. 11, 12, 40, Two Tracts, pp. 127, 129, Bourne, vol. 1, p. 176. Hobbes realised, but Locke apparently did not, that this argument is not enough to establish religious toleration, since outward acts, including utterances, are voluntary, and may remain under the sovereign's control. Further, as Locke himself admits (see next two notes), beliefs may be voluntary at least indirectly.
Note 13. Probability lacks "that intuitive evidence which infallibly determines the understanding" (Locke, Essay, IV.xv.5, p. 656). "Where. . . there are sufficient grounds to suspect that there is either fallacy. . . or certain proofs. . . to be produced on the contrary side, there assent, suspense, or dissent are often voluntary actions" (p. 716; see also pp. 717-18).
Note 14. After thorough inquiry we cannot help assenting to the side on which probability is greater (Essay, pp. 718(7-14), 716(10-11), 716(21-3), 717(1-2), 717(7-17)). "We can hinder both knowledge and assent, by stopping our inquiry. . . if it were not so, ignorance, error or infidelity could not in any case be a fault" (p. 717); "all that is voluntary in our knowledge, is the employing, or withholding any of our faculties from this or that sort of objects, and a more or less accurate survey of them: but they being employed, our will hath no power to determine the knowledge of the mind one way or other; that is done only by the objects themselves, as far as they are clearly discovered" (pp. 650-1).
Note 15. Hooker, vol. 1. p. 268, Taylor, pp. 522-3, Whichcote in Tullock, vol. 2, p. 102, Stillingfleet, vol. 4, p. 134, Spinoza, vol. 2, p. 124, Bayle, CP, pp. 385-6, Hume, Enquiries, p. 48, James Mill, passim, J. S. Mill, Logic, vol. 2, pp. 737-8, Peirce, Writings, pp. 256, 292, 299. For other references see Levy, pp. 313-20.
Note 16. See Ammerman, Blanshard, ch. 11, Chisholm, "Lewis", pp. 223-7, Classen, Curley, Evans, Fohr, Govier, "Belief", B. Grant, Hampshire, pp. 155-8, J. Harrison, Johnson, Kauber and Hare, O'Hear, Penelhum, pp. 43ff., Pojman, Price, Suckiel, Williams, "Deciding".
Note 17. James Mill argues that belief is altered only by evidence: "The proof is indisputable, because the view which the mind takes of evidence, and its belief, are but two names for the same thing" (pp. 1-2; cf. Ockham, above, n. 6). This is not what "belief" means, and Mill's thesis seems false in fact.
Note 18. Williams, "Deciding", p. 148, argues that it is a matter of logic that belief is not voluntary. For criticism see Govier, "Belief", pp. 645-7, Winters, pp. 243-56.
Note 19. For an account of some indirect methods of controlling belief see Wolterstorff, pp. 148ff.
Note 20. An inquirer should "put himself wholly into this state of ignorance in reference to that question; and throwing wholly by all his former notions, and the opinions of others, examine, with a perfect indifferency, the question in its source" (Locke, Conduct, pp. 383-4). "The surest and safest way is to have no opinion at all until he has examined" (p. 383). "We should keep a perfect indifferency for all opinions, nor wish any of them true" (p. 380). "In the whole conduct of the understanding, there is nothing of more moment than to know when and where, and how far to give assent; and possibly there is nothing harder" (p. 378). On the proportioning of assent to evidence see Cicero, De natura deorum, I.i.1 (quoted by Locke at the beginning of Conduct), Academica, I.xii.45, Hooker, vol. 1, p. 269, Chillingworth, p. 27, Locke, Essay, IV.xix.1, Hume, Enquiries, p. 110. Hooker, Locke, Hume, James Mill and others believed that assent is involuntary, but inconsistently made rules about the giving of assent.
Note 21. "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence" (Clifford, p. 186). Perhaps there are degrees of sufficiency and degrees of assertion, but those who treat this topic often write as if a proposition is either asserted or not asserted and as if the evidence is either sufficient or not sufficient. I will follow this practice. The degree of assurance can be regarded as part of what is asserted: the proposition may indicate the probability of an event, or it may be accompanied by another which attributes to the first a certain likelihood of error. The question is then whether the evidence is sufficient to justify the assertion of such a proposition or pair of propositions.
Note 22. The wise man assents only to presentations which are such as no false presentation could be; see Cicero, Academica, II.xviii.57, II.xxxi.101, II.xxxv.113, Sextus Empiricus, Against the Logicians, I.151-3.
Note 23. Cf. Aristotle, Anal. post., 72 b5-25, 100 b3-18, Metaph., 1006 a5-12, 1011 a5-15.
Note 24. For an exposition of the relevant parts of Peirce's philosophy see Chisholm, "Fallibilism".
Note 25. Doubt comes from surprise: "It is as impossible for a man to create in himself a genuine doubt. . . as it would be for him to give himself a genuine surprise by a simple act of the will" (Peirce, Writings, p. 292). "The breaking of a belief can only be due to some novel experience. . . . Now experience which could be summoned up at pleasure would not be experience" (ibid., p. 299).
Note 26. Ibid., p. 229, Papers, 2.160, 5.509.
Note 28. Ibid., pp. 11, 57-8, Papers, 5.213f. Peirce rejects the doctrine that anything is "basic, ultimate. . . because there is nothing beneath it to know" (Writings, p. 55). Cf. Popper, Logic, p. 111 ("Science does not rest upon rock-bottom. . . . It is like a building erected upon piles. . . . When we cease our attempts to drive our piles into a deeper layer, it is not because we have reached firm ground. We simply stop when we are satisfied that they are firm enough to carry the structure, at least for the time being.").
Note 30. "Abduction" is Peirce's term for the inference whereby one supposes tentatively that a proposition is true because it seems a natural way of explaining something that is a fact, though other explanations are possible (cf. Popper's "conjecture"); see Writings, pp. 151-3. A perceptual judgement is a kind of abductive inference, an interpretation spontaneously imposed upon sensation by a process which is not conscious, which one cannot control or criticise because one cannot go behind the perceptual judgement to compare it with the originating sensations; "sensations emerge into consciousness in the form of beliefs". See Writings, pp. 36, 302-5, Papers, 2.140-3, 5.216-37, 5.263. On instinctive commonsense beliefs see Writings, p. 293. Another apparently original source of beliefs is the testimony of other people; Papers, 7.226. See above, Essay II, n. 112.
Note 33. Ibid., 5. 582. For a treatment of moral reasoning similar to Peirce's of empirical reasoning, see Wellman. Peirce explains the convergence of empirical belief by postulating real objects constraining agreement, at least among those capable of perceiving them. If discussion leads to convergence of ethical judgements, should we postulate moral objects constraining ethical intuitions? See Quinn.
Note 34. On the "hierarchies of forcefulness" see Wolterstorff, pp.173-5. In moral thinking the counterpart of perceptual judgement is intuitive judgement of the moral character of a particular action. But such judgements do not clearly outrank ethical generalizations: sometimes consideration of some ethical principle leads to abandonment of the particular judgement.
Note 35. I use "inquiry" also to cover the collection of new evidence, and the consideration and reconsideration of evidence and argument already collected. Moral inquiry includes collecting and considering cases.
Note 36. Peirce sometimes says this himself; see Papers, 5.451. Cf. the distinction made by Carneades between the "probable" and the "probable and tested" (Sextus Empiricus, Outlines, I.xxxiii.227, and Bury's introduction to his edition, p. xxxvi).
Note 37. Popper, Conjectures and Refutations, p. 228, Chisholm, Perceiving, pp. 8-9, Lehrer, pp. 189-92, Pollock, pp. 30-1, 40-1, 44.
Note 38. To say that a higher-order belief may require us to act as if we believe p although we do not is to say that when we act on the ensemble of our beliefs a person ignorant of certain of them (namely the higher-order belief) may mistakenly infer from our action that we believe p. For example, I may act towards a person in such a way that he observes nothing from which he can infer that I do not trust him, and will naturally suppose that I do. So strictly speaking we act only on the set of propositions which we do believe.
Note 39. As I indicated above, n. 21, I take a statement of partial belief as a conjunction of two statements, "p" and "There is such-and-such a chance that p is false".
Note 40. When we deliberate we consider and reconsider the pros and cons until a conviction forms. Unless a conviction forms spontaneously we cannot "make up our mind"; we can decide to do the contemplated act, but not to believe that it is the right thing to do. The only way to check the outcome of deliberation is to reconsider, to do it again. The pros and cons are not deductive arguments: to reject the arguments on one side is not to imply any doubt about their premisses, or about the validity of arguments of that type. Deliberation is common in moral and other practical thinking, and also in choice of theories.
Note 41. My sketch of an ethics of inquiry will concentrate on this question because it will focus attention on the matters in which I believe the rationalist morality of thinking is most mistaken. But the ethics of inquiry is also concerned with other things; e.g. it might lay down a duty (of imperfect obligation) to cultivate a judicial frame of mind. Other branches of morality may also include rules about thinking, e.g. "In thinking about whom to appoint to a university lectureship pay no attention to the candidates' religious beliefs"; such rules are not part of the morality of thinking in my sense since they are concerned not with truth but with other values.
Note 42. In saying that our duty is to do as much good as possible I do not mean that we should try to maximise our subjective satisfaction; I mean we ought to do what really is good, whatever that may be. To decide whether reallocation leads to more loss than gain, we must be able to compare outcomes and decide which is really preferable. If there are several kinds of intrinsic goods (things good in themselves, not merely as means), the decision will be an intuition arrived at through deliberation (see above, n. 40). I believe that there are several kinds of intrinsic goods, and that we ought to seek several kinds; we cannot concentrate exclusively on what we do best. I believe, with Aristotle, that the intrinsic goods include some kinds of knowledge, and also some kinds of action; the outcomes to be compared are not only the consequences of action, they are the outcomes of allocating time and resources to various activities some of which are valued for themselves. However, these points are not assumed by the argument of the text.
Note 43. C. I. Lewis's "critique of cogency" (as described by Chisholm, "Lewis", p. 228) is an attempt to formulate canons which will tell us what we have a right to accept, and these canons refer to the character of the evidence, not to the circumstances of the inquirer. Chisholm views the matter in the same way: "If you ask me to defend some conclusion of mine which you may think unreasonable, I will present evidence which I take to be such that, for anyone having that evidence (and no additional relevant evidence), the conclusion is a reasonable one to accept. Here, too, my justification may be formulated in a "practical syllogism"; the major premise will say that anyone having just the evidence in question is warranted in accepting the conclusion; the minor premise will say that I am in the position of having just that evidence; and these premises will imply that I am justified in accepting the conclusion" (ibid., p. 226, emphasis added). Cf. Kornblith, "Beyond Foundationalism", pp. 599-602, on what he calls the "arguments-on-paper" thesis. (Kornblith argues that the circumstances relevant to the justification of a belief include what other beliefs the person holds: I say, and also the person's purposes, opportunities, resources.) I agree that if one person is justified another person similarly placed will also be justified. The disagreement is over what has to be justified and which similarities and differences are relevant. I maintain that what needs justifying is not belief, but inquiring or (especially) not inquiring, and that the relevant circumstances include the likely returns from other possible uses of our time and resources.
Note 45. See above, Essay II, n. 109.
Note 46. We may objectively meet the standards of a revised morality of thinking, and yet action based on our beliefs may violate the standards of some other department of morality. And other people might possibly have the right to deter this act and similar acts by threatening and inflicting penalties.
Note 47. See above, Essay III, n. 3.
Note 48. It is sometimes said that action is "practically" rational only if the agent has conclusive reason for doing just that act, or only if he believes that there is no better way of furthering his ends. For a discussion of such conceptions of rationality see Benn and Mortimore, p. 4. I suggest instead something like this: an action is rational if it does not conflict with the agent's beliefs. To work this out it would be necessary to decide what is to count as conflict; perhaps there are degrees of conflict and degrees of rationality. There is clear conflict if we believe that it would be wrong not to do a certain act on a certain occasion and then do not do it. There is clear conflict if we believe that it would be wrong ever to do less than our best, and then on some occasion do less than our best. But if we do not believe in such an exacting duty then we do not need conclusive reasons, and do not need to suppose that no better act is possible. Unless our code of duties is comprehensive and exacting, or unless "conflict" is taken very widely, our beliefs will often leave open a range of possible acts none of which would be irrational (though they might be open to criticism in other ways).
Note 49. Locke, Second Letter, pp. 103-4.
Note 50. Different methodologies set different "standards for intellectual honesty", according to Lakatos, p. 122.
Note 51. "I am not a belief philosopher: I am primarily interested in ideas, in theories, and I find it comparatively unimportant whether or not anybody "believes" in them" (Popper, Objective Knowledge, p. 25). "I wish to distinguish sharply between objective science on the one hand, and "our knowledge" on the other" (Popper, Logic, p. 98). See also Objective Knowledge, pp. 73-4, 106-12, 121-2; and see Haack. A reason given for the elimination of belief is that "Our subjective experiences or our feelings of conviction. . . can never justify any statement" (Popper, Logic, p. 44). But "justify any statement" is ambiguous. The statement, i.e. the proposition stated, is not justified by the fact that I believe it, but my stating it is justified by my believing it. See above, sect. 4.1.
Note 52. See Popper, Logic, p. 56.
Note 53. Cf. Lakatos, pp. 106-12, 125-31.
Note 54. "We use objective knowledge in the formation of our personal subjective beliefs" (Popper, Objective Knowledge, p. 80). "Subjective" presumably means here not "arbitrary" or "unfounded", but "in a subject"---cf. Essay II above, n. 47. I would add that it is only because we use it to form subjective beliefs that objective knowledge, i.e. the artefact, can be called "knowledge", the primary reference of which is to something someone knows.
Note 55. Peirce defines the truth as the opinion upon which all inquirers would agree if inquiry were carried indefinitely far (Writings, pp. 38-9, 240, 247-8, 257-8). This is not acceptable. If the external world does not exist then the belief that it does is false even if every inquirer (or the only inquirer) holds it and will always hold it; some false beliefs may not be corrigible.
Note 56. See above, p. 000, point (b).
Note 57. "Some mystics imagine that they have such a method [whereby beliefs are determined by reality, by an "external permanency"] in a private inspiration from on high. But that is only a form of the method of tenacity, in which the conception of truth as something public is not yet developed. Our external permanency would not be external, in our sense, if it was restricted in its influence to one individual. It must be something which affects, or might affect, every man" (Peirce, Writings, p. 18). "Or might affect" destroys the force of this passage. Mystics do not say that their revelations are essentially private, restricted necessarily to one individual; they might affect everyone except that some people lack the necessary receptive capacity, just as some people are blind or deaf. (Two incidental comments: Truth is neither public nor private, but something independent of what any person, or all persons, may think. And it does not follow that something restricted in its influence to one person is not (in the relevant sense) external.)
Note 58. Kuhn sometimes uses the word this way, for example at p. 159(19).
Note 59. This seems to be the outcome of the discussion in Lakatos, pp. 154-177; cf. Feyerabend, pp. 185-7. "There is no neutral algorithm for theory-choice, no systematic decision procedure which, properly applied, must lead each individual in the group to the same decision" (Kuhn, p. 200). The choice is made by deliberation. See Kuhn, pp. 199(37) and 204(1-2); cf. above, n. 40. The failure of attempts to substitute the mechanical application of clear criteria for intuitive judgement is also clear in Hempel, pp. 27(34-40), 30(7), 32(1-3), 36(23-5), 41(31), 57(41-3), 60(41-2), 65(20), 75(21-2).
Note 62. According to Kuhn, an approach which has some striking success, so that it seems much more promising than its rivals, may become paradigmatic for almost all work in the subject; but if later on a crisis develops and it no longer seems so promising, competing schools of thought appear again until another paradigm is found. This makes good economic sense. To concentrate investment on the most promising approach is especially sensible when work in the field is costly, requiring expensive equipment and materials and highly trained personnel, and, on the other hand, costly research is justified only when the programme seems promising. For both reasons some correlation is likely between what Kuhn calls "maturity" and costliness; the "immaturity" in this sense of a discipline in a field like philosophy, in which inquiry is relatively cheap, does not mean that nothing worthwhile is being accomplished.
Note 63. "In developing their research programs they act on the basis of guesses about what is and what is not fruitful, and what line of research promises further results in the third world of objective knowledge. In other words, scientists act on the basis of a guess, or, if you like, of a subjective belief (for we may so call the subjective basis of an action) concerning what is promising of impending growth in the third world of objective knowledge" (Popper, Objective Knowledge, p. 111). Cf. Kuhn, pp. 157(26)-158(7).
Note 64. It may be good that some people are willing to make investments which others consider too risky (Kuhn, p. 186(24-29)). (Kuhn's account of science is reminiscent of Adam Smith, in that an "invisible hand" guides individually narrow-minded scientists in ways which bring the scientific community closer to Popper's ideal of the self-critical scientist; see e.g. pp. 24(31), 64(30)-65(20).)
Note 65. Mill, On Liberty, p. 232.
Note 66. Mill acknowledged this himself; see "The Spirit of the Age", pp. 40-4.
Note 67. Something similar can be said about clarity of thought and expression. Members of a circle of like-minded people will congratulate one another on being the only clear thinkers. A line of thought seems unclear if it is hard to tell how to continue it to answer other relevant questions, or if it seems to suggest unlikely answers. What seems relevant depends on what theories seem possible and likely; new possibilities raise new questions, and what formerly seemed clear may come to seem obscure. In judging clarity of thought we therefore rely on our sense of likelihood; in judging clarity of expression we assume that some words and ideas are familiar and clear and others in need of explanation. A line of thought can never be made absolutely clear; the most that can reasonably be asked is that it be made clear enough to those concerned for present purposes. Deciding who is concerned, what purposes should be envisaged, and how clear is clear enough, is part of the planning of inquiry, and can be covered by what was said in sects. 4 and 5 above.
Note 68. There could be some freedom of discussion within the élite group, and some "right of petitioning" for people outside the group (that is, a right to make representations to the élite); compare Plato, Laws, 634e. Mill's arguments in On Liberty, ch. 2, will not seem so strong if the possibility of a graduated and stratified repression is taken into account.
Note 69. "The early difficulties in the way of spontaneous progress are so great, that there is seldom any choice of means for overcoming them. . . . Liberty, as a principle, has no application to any state of things anterior to the time when mankind have become capable of being improved by fair and equal discussion" (Mill, On Liberty, p. 224). See also Representative Government, pp. 418-20. (However, a reciprocity argument may support freedom of discussion even when discussion may disseminate and confirm error; other things count besides truth, such as peace. It seems to me that some reciprocity argument is the best basis for freedom of discussion, not the dubious claim that it furthers truth.)
Note 70. Cf. Descartes's provisional code of morals, adopted "in order that I should not remain irresolute in my actions while reason obliged me to be so in my judgements" (vol. 1, p. 95). In ancient times Carneades set standards for knowledge so strict that no belief could meet them, and then allowed action to be based on probability; see Cicero, Academica, II.xxxi.99-100, xxxii.104.
Note 71. This is equivalent to Bayle's policy of following conscience. See above, Essay II, sect. 3.3.
Note 72. On their relationship see Firth.
Note 73. See Plato, Meno, 97-8.
Note 74. Justifiable, that is, epistemologically, as being an approximation to knowledge. (Some epistemologists, however, give the term an ethical interpretation---see Kornblith, "Justified Belief".)
Note 75. For an account of the discussion arising from Gettier's article see Dancy, pp. 25-40.
Note 76. More attention to this distinction might clarify the disagreement between "internalism" and "externalism" (see Bonjour): to be justified in claiming knowledge for myself I must believe that I know, but I can ascribe knowledge to another person who does not know that he knows, and even thinks (even for good reasons) that he does not know, and even to one who has not done his duty under the "ethics of thinking".
Note 77. See Unger. To be justified in ascribing knowledge to you I need not know why I think, or whether I have any reasons at all for thinking, that you are not right by chance. "Causal" and "reliabilist" theories sketch kinds of reasons I may have when I have reasons, but are not part of the analysis of what knowledge is.
Note 78. See Austin, pp. 67ff.