Epistemic Closure
Most of us think we can safely enlarge our knowledge base by accepting things that are entailed by (or logically implied by) things we know. Roughly speaking, the set of things we know is closed under entailment (or under deduction or logical implication), so we know that a given claim is true upon recognizing, and accepting thereby, that it follows from what we know. This is not to say that our usual way of adding to our knowledge is simply to recognize and accept what follows from what we already know. Obviously much more is involved. For instance, we gather data and construct explanations of those data, and under suitable circumstances we learn from others. More to the point at hand, when we claim that we know, of some proposition, that it is true, that claim is itself subject to error; often, seeing what follows from a knowledge claim prompts us to reassess and even withdraw our claim, instead of concluding, of the things that follow from it, that we know that they are true. Still, it seems reasonable to think that if we do know that some proposition is true then we are in a position to know, of the things that follow from it, that they, too, are true. However, some theorists have denied that knowledge is closed under entailment. The arguments against closure include the following:
The argument from the analysis of knowledge: given the correct analysis, knowledge is not closed, so it isn’t. For example, if the correct analysis includes a tracking condition, then closure fails.
The argument from nonclosure of knowledge modes: since the modes of gaining, preserving or extending knowledge, such as perception, testimony, proof, memory, indication, and information are not individually closed, neither is knowledge.
- The argument from unknowable (or not easily knowable) propositions: certain sorts of propositions cannot be known (without special measures); given closure, they could be known (without special measures), by deducing them from mundane claims we know, so knowledge is not closed.
The argument from skepticism: skepticism is false but it would be true if knowledge were closed, so knowledge is not closed.
While proponents of closure have responses to these arguments, they also argue, somewhat in the style of G. E. Moore (1959), that closure itself is a firm datum—it is obvious enough to rule out any understanding of knowledge or related notions that undermines closure.
A closely related idea is that it is rational (justifiable) for us to believe anything that follows from what it is rational for us to believe. This idea is intimately related to the thesis that knowledge is closed, since, according to some theorists, knowing p entails justifiably believing p. If knowledge entails justification, closure failure of the latter might lead to closure failure of the former.
- 1. The Closure Principle
- 2. The Argument From the Analysis of Knowledge
- 3. The Argument From Nonclosure of Knowledge Modes
- 4. The Argument From Not (Easily) Knowable Propositions
- 5. The Argument From Skepticism
- 6. Closure of Rational Belief
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. The Closure Principle
Precisely what is meant by the claim that knowledge is closed under entailment? One response is that the following straight principle of closure of knowledge under entailment is true:
- (SP)
- If person S knows p, and p entails q, then S knows q.
The conditional involved in the straight principle might be the material conditional, the subjunctive conditional, or entailment, yielding three possibilities, each stronger than the one before:
- (SP1)
- S knows p and p entails q only if S knows q.
- (SP2)
- If S were to know something, p, that entailed q, S would know q.
- (SP3)
- It is necessarily the case that: S knows p and p entails q only if S knows q.
However, each version of the straight principle is false, since we can know one thing, p, but fail to see that p entails q, or for some other reason fail to believe q. Since knowledge entails belief (according to nearly all theorists), we fail to know q. A less obvious worry is that we might reason badly in coming to believe that p entails q. Perhaps we think that p entails q because we think everything entails everything, or because we have a warm tingly feeling between our toes. Hawthorne (2005) raises the possibility that, in the course of grasping that p entails q, S will cease to know p. He also notes that SP1 is defensible on the (deviant) assumption that a thought, p, is identical to another, q, if p and q hold in all of the same possible worlds. Suppose p entails q. Then p is equivalent to the conjunction of p and q, and so the thought p is identical to the thought p and q. Hence in knowing p S knows p and q. Assuming that, in knowing p and q, S knows p and S knows q, then when S knows p S knows q, as SP1 says.
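Hawthorne’s observation can be set out schematically. The sketch below is merely illustrative: it writes [p] for the set of worlds at which p holds, treats the deviant individuation assumption as an identity of thoughts, and adds the further assumption that knowledge distributes over conjunction.

```latex
% Schematic reconstruction (assumes amsmath; K abbreviates "S knows").
% Deviant assumption: thoughts are identical iff true at the same worlds.
% Added assumption: knowledge distributes over conjunction.
\begin{align*}
&\text{Suppose } p \text{ entails } q, \text{ i.e., } [p] \subseteq [q].\\
&\text{Then } [p \wedge q] = [p] \cap [q] = [p],
 \text{ so the thought } p \text{ is the thought } p \wedge q.\\
&\text{Hence } Kp \rightarrow K(p \wedge q) \quad \text{(identity of the two thoughts)}\\
&\text{and } K(p \wedge q) \rightarrow Kq \quad \text{(distribution over conjunction)},\\
&\text{so } Kp \rightarrow Kq, \text{ as SP1 claims.}
\end{align*}
```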
The straight principle needs qualifying, but this should not concern us so long as the qualifications are natural given the idea we are trying to capture, namely, that we can extend our knowledge by recognizing, and accepting thereby, things that follow from something that we know. The qualifications embedded in the following principle (construed as a material conditional) seem natural enough:
- (K)
- If, while knowing p, S believes q because S knows that p entails q, then S knows q.
As Williamson (2000) notes, the idea that we can extend our knowledge by applying deduction to what we know supports a closure principle that is stronger than K. It is a principle that says we know things we believe on the grounds that they are jointly implied by several separate known items. Suppose I know Mary is tall and I know Mary is left handed. K does not authorize my putting these two pieces of knowledge together so as to know that Mary is tall and left handed. But the following generalized closure principle covers deductions involving separate known items:
- (GK)
- If, while knowing various propositions, S believes p because S knows that they entail p, then S knows p.
Some theorists distinguish between something they call “single-premise” and something they call “multiple-premise” closure. Such theorists would deny that K captures “single-premise” closure, because K says that S knows q if S knows that two things are true: that p is true as well as that p entails q. The “single-premise” closure principle is usually formulated roughly as follows (following Williamson 2000 and Hawthorne 2004):
- (SPK)
- If, while knowing p, S believes q by competently deducing q from p, then S knows q.
However, it is far from clear that one may competently deduce q from p without relying on any knowledge aside from p. Fortunately, it seems that nothing hinges on this possibility, except perhaps for people interested in whether we can identify something that can appropriately be labeled “single premise closure principle”.
Proponents of closure might accept both K and GK, perhaps further qualified in natural ways (but they might not: see the concerns about justification closure raised in section 6). By contrast, Fred Dretske and Robert Nozick reject K and therefore GK as well. They reject any closure principle, no matter how narrowly restricted, that warrants our knowing that skeptical hypotheses (e.g., I am a brain in a vat) are false on the basis of mundane knowledge claims (e.g., I am in San Antonio). In addition to rejecting K and GK, they deny knowledge closure across instantiation and simplification, but not across equivalence (Nozick 1981: 227–229):
- (KI)
- If, while knowing that all things are F, S believes a particular thing a is F because S knows it is entailed by the fact that all things are F, then S knows a is F.
- (KS)
- If, while knowing p and q, S believes q because S knows that q is entailed by p and q, then S knows q.
- (KE)
- If, while knowing p, S believes q because S knows q is equivalent to p, then S knows q.
Let us turn to their arguments.
2. The Argument From the Analysis of Knowledge
The argument from the analysis of knowledge says that the correct account of knowledge leads to K failure. We can distinguish two versions. According to the first version, K fails because knowledge requires belief tracking. According to the second, any relevant alternatives account, such as Dretske’s and Nozick’s, leads to K failure. According to Dretske (2003: 112–3; 2005: 19), any relevant alternatives account leads “naturally” but “not inevitably” to K failure.
2.1 Closure Fails Due to the Tracking Condition on Knowledge
In rough outline, the first version involves defending, say, Dretske’s or Nozick’s tracking analysis of knowledge, then showing that it undermines K (versions of the tracking account are also defended by Becker 2009, by Murphy and Black 2007, and by Roush 2005, the last of whom modifies the tracking account so as to preserve closure; for criticisms of Roush see Brueckner 2012). We can skip the defense, which consists largely in showing that tracking does a better job than competitors in dealing with our epistemic intuitions about cases of purported knowledge. We may also simplify the analyses. According to Nozick, to know p is, very roughly (and ignoring his thoroughly discredited fourth condition for knowledge, criticized, e.g., in Luper 1984 and 2009 and in Kripke 2011), to have a belief p which meets the following condition (‘BT’ for belief tracking):
- (BT)
- were p false, S would not believe p.
That is, in the close worlds to the actual world in which not-p holds, S does not believe p. The actual world is one’s situation as it is when one arrives at the belief p. BT requires that in all nearby not-p worlds S fails to believe p. (The semantics of subjunctive conditionals is clarified in Stalnaker 1968 and Lewis 1973, and modified by Nozick 1981, note 8.) On Dretske’s view knowing p is roughly a matter of having a reason R for believing p which meets the following condition (‘CR’ for conclusive reason):
- (CR)
- were p false, R would not hold.
That is, in the close worlds to the actual world in which not-p holds, R does not. When R meets this condition, Dretske says R is a conclusive reason for believing p.
Dretske pointed out (2003, n. 9; 2005, n. 4) that his view does not face one of the objections which Saul Kripke (2011, 162–224; Dretske had access to a draft circulated prior to publication) deploys against Nozick’s account. Suppose I am driving through a neighborhood in which, unbeknownst to me, papier-mâché barns are scattered, and I see that the object in front of me is a barn. I also notice that it is red. Because I have barn-before-me percepts, I believe barn: the object in front of me is a (ordinary) barn (the example is attributed to Ginet in Goldman 1976). Our intuitions suggest that I fail to know barn. And so say BT and CR. But now suppose that the neighborhood has no fake red barns; the only fake barns are blue. (Call this the red barn case.) Then on Nozick’s view I can track the fact that there is a red barn, since I would not believe there was a red barn (via my red-barn percepts) if no red barn were there, but I cannot track the fact that there is a barn, since I might believe there was a barn (via blue-barn percepts) even if no barn were there. Dretske said that this juxtaposition, in which I know something yet fail to know a second thing that is intimately related to the first (there being a red barn, which I know, entails there being a barn, which I do not), “is an embarrassment,” and in this respect, he thought, his view is superior to Nozick’s. Let R, my basis for belief, be the fact that I have red-barn percepts. If no barn were there, R would fail to hold, so I know a barn is there. Further, if no red barn were there, R would still fail to hold, so I know a red barn is there. So Dretske can avoid the objectionable juxtaposition. Still, it is surprising that Dretske cited the red barn case as the basis for preferring his version of tracking over Nozick’s. First, Dretske himself accepted juxtapositions of knowledge and ignorance that are at least equally bizarre, as we shall see. Second, Nozick avoided the very juxtaposition Dretske discussed by restating his account to make reference to the methods via which we come to believe things (Hawthorne 2005). On a more polished version of his account, Nozick said that to know p is, roughly, to have a belief p, arrived at through a method M, which meets the following condition (‘BMT’ for belief method tracking):
- (BMT)
- were p false, S would not believe p via M.
If no red barn were there I would believe neither that there was a barn, nor that there was a red barn, via red-barn percepts.
Third, the red barn case is one about which intuitions will vary. It is not obvious that I do know there is a red barn in the circumstances Dretske sketches, which differ from those in Ginet’s original barn case (where I fail to know barn) only in the stipulations that I see a red barn and that none of the barn simulacra are red. What is more, both Dretske’s and Nozick’s accounts have the odd implication that I know there is a barn if I base my belief on my red barn percepts yet I fail to know this if, in basing it on my barn percepts, I ignore the barn’s color. Presumably the barn’s color is not relevant to its being a barn.
The tracking accounts permit counterexamples to K. Dretske’s well-known illustration is the zebra case (1970): suppose you are at a zoo in ordinary circumstances standing in front of a cage marked ‘zebra’; the animal in the cage is a zebra, and you believe zeb, the animal in the cage is a zebra, because you have zebra-in-a-cage visual percepts. It occurs to you that zeb entails not-mule, it is not the case that the animal in the cage is a cleverly disguised mule rather than a zebra. You then believe not-mule by deducing it from zeb. What do you know? You know zeb, since, if zeb were false, you would not have zebra-in-a-cage visual percepts; instead, you would have empty-cage percepts, or aardvark-in-a-cage percepts, or the like. Do you know not-mule? If not-mule were false, you would still have zebra-in-a-cage visual percepts (and you would still believe zeb, and you would still believe not-mule by deducing it from zeb). So you do not know not-mule. But notice that we have:
- (a) You know zeb
- (b) You believe not-mule by recognizing that zeb entails not-mule
- (c) You do not know not-mule.
In view of (a)–(c), we have a counterexample to K, which entails that if (a) you know zeb, and (b) you believe not-mule by recognizing that zeb entails not-mule, then you do know not-mule, contrary to (c).
Having rejected K, and denying that we know things like not-mule, Nozick also had to deny closure across simplification. For if some proposition p entails another proposition q, then p is equivalent to the conjunction p & q; accordingly, given closure across equivalence, which Nozick accepted, if we know zeb we can know the conjunction zeb & not-mule, but if we also accept closure across simplification, we will be able to know not-mule.
In response to this first version of the argument from the analysis of knowledge, some theorists (e.g., Luper 1984, BonJour 1987, DeRose 1995) argued that K has great plausibility in its own right (which Dretske acknowledged in 2005: 18) so it should be abandoned only in the face of compelling reasons, yet there are no such reasons.
To show there are no compelling reasons to abandon K, theorists have provided accounts of knowledge that (a) handle our intuitions at least as successfully as the tracking analyses and yet (b) underwrite K. One way to do this is to weaken the tracking analysis so that we know things that we track or that we believe because we know that they follow from things that we track (this sort of option has been turned against Nozick by various theorists; Roush defends it in 2005, 41–51). Another approach is as follows. Knowing p is roughly a matter of having a reason R for believing p which meets the following condition (‘SI’ for safe indication):
- (SI)
- if R held, p would be true.
SI requires that p be true in the nearby R worlds. When R meets this condition, let us say that R is a safe indicator that p is true. (Different versions of the safety condition have been defended; see, for example, Luper 1984; Sosa 1999, 2003, 2007, 2009; Williamson 2000; and Pritchard 2007.) SI is the contraposition of CR, but the contraposition of a subjunctive conditional is not equivalent to the original.
Let us suppose without argument that SI handles cases of knowledge and ignorance as intuitively as CR. Why say SI underwrites K? The key point is that if R safely indicates that p is true, then it safely indicates that q is true, where q is any of p’s consequences. Put another way, the point is that the following reasoning is valid (an instance of weakening the consequent):
- If R held, p would be true (i.e., R safely indicates that p)
- p entails q
- So if R held, q would be true (i.e., R safely indicates that q)
Hence, if a person S knows p on the basis of R, S is in a position to know q on the basis of R, where q follows from p. S is also in a position to know q on the basis of the conjunction of R together with the fact that p entails q. Thus if S knows p on some basis R, and believes q on the basis of R (on which p rests) together with the fact that p entails q, then S knows q. Again: if
- S knows p (on the basis of R), and
- S believes q by recognizing that p entails q (so that S believes q on the basis of R, on which p rests, together with the fact that p entails q),
then
- S knows q (on the basis of R and the fact that p entails q),
as K requires. To illustrate, let us use Dretske’s example. Having based your belief zeb on your zebra-in-the-cage percepts, you know zeb according to SI: given your circumstances, if you had those percepts, zeb would be true. Moreover, when you believe not-mule by first believing zeb on the basis of your zebra-in-the-cage percepts then deducing not-mule from zeb, you know not-mule according to SI: if you had those percepts not only would zeb hold, so would its consequence not-mule.
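The validity appealed to above can be displayed in the standard closest-worlds semantics for subjunctives (Stalnaker 1968, Lewis 1973). The sketch below is only illustrative; it writes ‘R > p’ as shorthand for ‘if R held, p would be true’, read as: p holds at all of the closest R-worlds.

```latex
% Why SI transmits across entailment while CR need not
% (assumes amsmath and amssymb).
\begin{align*}
&R > p && \text{every closest } R\text{-world is a } p\text{-world}\\
&p \text{ entails } q && \text{every } p\text{-world is a } q\text{-world}\\
&\therefore\; R > q && \text{every closest } R\text{-world is a } q\text{-world}
\end{align*}
% CR, by contrast, is evaluated at the closest not-p worlds; the closest
% not-q worlds need not be among them, which is why a conclusive reason for p
% need not be a conclusive reason for p's consequences.
```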
Let us digress briefly in order to note that some versions of the safety account will not uphold closure (Murphy 2005 presses this objection against Sosa’s version of the safety account). For example, at one point Ernest Sosa discussed the following version of the condition:
If S were to believe p, p would be true.
This is to require that one’s belief safely indicates its own truth. However, it is entirely possible to be so situated that one’s belief safely indicates its truth even though the requisite condition is not met for something that follows from that belief. The point can be illustrated with a version of the red barn case. Suppose that (on the basis of my red-barn percepts) I believe red barn: there is a red barn in front of me. Suppose, too, that there is indeed a red barn there. However (you guessed it) many fake barns are scattered through the neighborhood, all of which are blue, not red. In the close worlds in which I believe red barn, I am correct, so I meet the requisite condition for knowing red barn, which is that my believing red barn safely indicates its own truth. Now, red barn entails barn: there is a barn in front of me. But, according to the view on offer, the requisite condition for knowing barn is not that my belief red barn safely indicates that barn holds. What is required instead is that my belief barn safely indicates its own truth. Assuming that I would believe barn if I saw one of the blue fakes, my belief barn does not safely indicate its truth.
To pick up the thread again: now, K fails if knowledge entails CR but not if knowledge entails SI, but it may not be possible to underwrite K merely by replacing CR with SI, since some other condition for knowledge might block closure. We can underwrite closure if we assume that believing p on “safe” grounds is sufficient for knowing p, but this assumption is dubious. As we have understood safety, we can believe things on safe grounds without knowing them. An obvious example is any necessary truth: because it holds in all possible worlds we can safely believe it for any reason. For another example, recall the red barn case discussed earlier: despite the many fake blue barns in the neighborhood, my red-barn percepts are safe indicators that the object in front of me is a barn and that it is a red barn, so no objectionable juxtaposition (such as I know there is a red barn but not there is a barn) occurs, but some theorists will insist that, in the circumstances sketched, I know neither that the object is a barn nor that it is a red barn.
2.2 Closure Fails on a Relevant Alternatives Approach
The second version of the argument from the analysis of knowledge has it that any relevant alternatives view, not just tracking accounts, is in tension with K. An analysis is a relevant alternatives account when it meets two conditions. First, it yields an appropriate understanding of ‘relevant alternative.’ Dretske’s approach qualifies since it allows us to say that an alternative A to p is relevant if and only if:
- (CRA)
- were p false, A might hold.
According to the second condition, the analysis must say that knowing p requires ruling out all relevant alternatives to p but not all alternatives to p. Dretske’s approach qualifies once again. It says an alternative A is ruled out on the basis of R if and only if the following condition is met:
- (CRR)
- were A to hold, R would not hold.
And, on Dretske’s approach, an alternative A must be ruled out if and only if A meets CRA.
So the tracking account is a relevant alternatives approach. But why say that relevant alternatives accounts of knowledge are in tension with K? We will say this if, like Dretske, we accept the following crucial tenet: the negation of a proposition p is automatically a relevant alternative to p (no matter how bizarre or remote not-p might be) but often not a relevant alternative to things that imply p. For a relevant alternatives theorist, this tenet suggests that we can know something p only if we can rule out not-p, but we can know things that entail p even if we cannot rule out not-p, which opens up the possibility that there are cases that violate K. For while our inability to rule out not-p stops us from knowing p, it does not stop us from knowing things that entail p. And an example is ready to hand: the zebra case. Perhaps you cannot rule out mule; but that stops you from knowing not-mule without stopping you from knowing zeb. These points can be restated in terms of the conclusive reasons account. For Dretske, the negation of a proposition p is automatically a relevant alternative since condition CRA is automatically met; that is, it is vacuously true that:
were p false, not-p might hold.
Therefore mule is a relevant alternative to not-mule. Furthermore, you fail to know not-mule since you cannot rule out mule: you believe not-mule on the basis of your zebra-in-the-cage percepts, but you would still have these if mule held, contrary to CRR. Yet you know zeb in spite of your inability to rule out mule, for were zeb false you would not have your zebra-in-the-cage percepts.
According to the second version of the argument from the analysis of knowledge any relevant alternatives view is in tension with K. How compelling is this argument? As Dretske acknowledged (2003), it is actually a weak challenge to K since some relevant alternatives accounts are fully consistent with K. For an example, we have only to adapt the safe indication view so as to make it clear that it is a relevant alternatives account (Luper 1984, 1987c, 2006).
The safe indication view can be adapted in two steps. First, we say that an alternative to p, A, is relevant if and only if the following condition is met:
- (SRA)
- In S’s circumstances, A might hold.
Thus any possibility that is remote is automatically irrelevant, failing SRA. Second, we say that A is ruled out on the basis of R if and only if the following condition is met:
- (SIR)
- were R to hold, A would not hold.
This way of understanding relevant alternatives upholds K. The key point is that if S knows p on the basis of R, and is thus able to rule out p’s relevant alternatives, then S can also rule out q’s relevant alternatives, where q is anything p implies. If R were to hold, q’s alternatives would not.
Apparently, the relevant alternatives account can be construed in a way that supports K as well as a way that does not. Hence Dretske is not well positioned to claim that the relevant alternatives view leads “naturally” to closure failure.
2.3 Closure and Reliabilism
On one version of reliabilism (defended by Ramsey 1931 and Armstrong 1973, among others) one knows p if and only if one arrives at (or sustains) the belief p via a reliable method. Is the reliabilist committed to K? The answer depends on precisely how the relevant notion of reliability is understood. If we understand reliability as tracking theorists do, we will reject closure. But there are other versions of reliabilism which sustain K. For example, the safe indication account is a type of reliabilism. Also, we could say that a true belief p is reliably formed if and only if it is based on an event that usually would occur only if p (or a p-type belief) were true. Any event that, in this sense, reliably indicates that p is true will also reliably indicate that p’s consequences are true.
3. The Argument From Nonclosure of Knowledge Modes
Dretske argued (2003, 2005) that we should expect K failure because none of the modes of gaining, preserving or extending knowledge are individually closed. Dretske made his point in the form of a rhetorical question: “how is one supposed to get closure on something when every way of getting, extending and preserving it is open?” (2003: 113–4)
3.1 Knowledge Modes and Nonclosure
As examples of modes of gaining, sustaining and extending knowledge Dretske suggested perception, testimony, proof, memory, indication, and information. To say of these items that they are not individually closed is to say that the following modes closure principles, with or without the parenthetical qualifications, are false:
- (PC)
- If S perceives p, and (S believes q because S knows) p entails q, then S perceives q.
- (TC)
- If S has received testimony that p, and (S believes q because S knows) p entails q, then S has received testimony that q.
- (OC)
- If S has proven p, and (S believes q because S knows) p entails q, then S has proven q.
- (RC)
- If S remembers p, and (S believes q because S knows) p entails q, then S remembers q.
- (IC)
- If R indicates p, and (S believes q because S knows) p entails q, then R indicates q.
- (NC)
- If R carries the information p, and (S believes q because S knows) p entails q, then R carries the information q.
And, according to Dretske, each of these principles fails. We may perceive that we have hands, for example, without perceiving that there are physical things.
3.2 Responses to Dretske
There have been various rejoinders to Dretske’s argument from nonclosure of knowledge modes.
First, failure of one or more of the modes closure principles does not imply that K fails. What matters is whether the various modes of knowledge Dretske discusses position us to know the consequences of the things we know. In other words, the issue is whether the following principle is true:
- (T)
- If, while knowing p via perception, testimony, proof, memory, or something that indicates or carries the information that p, S believes q because S knows that p entails q, then S knows q.
Second, theorists have defended some of these modes closure principles, such as PC, IC and NC. Dretske rejects these three principles because he thinks perception, indication and information are best analyzed in terms of conclusive reasons, which undermines closure. But the three principles (or something very much like them) may be defended if we analyze perception, indication and information in terms of safe indication. Consider IC and NC. Both are true if we analyze indication and information as follows:
R indicates p iff p would be true if R held.
R carries the information that p iff p would be true if R held.
A version of PC may be defended if we make use of Dretske’s own notion of indirect perception (1969). Consider a scientist who studies the behavior of electrons by watching bubbles they leave behind in a cloud chamber. The electrons themselves are invisible, but the scientist can perceive that the (invisible) electrons are moving in certain ways by perceiving that the (visible) bubbles left behind are arranging themselves in specific ways. What we directly perceive positions us to perceive various things indirectly. Now assume that when we directly or indirectly perceive p, and this causes us to believe q, where p entails q, we are positioned to perceive q indirectly. Then we are well on our way to accepting some version of PC, such as, for example:
- (SPC)
- If S perceives p, and this causes S to believe q, where p entails q, then S perceives q.
4. The Argument From Not (Easily) Knowable Propositions
Another anticlosure argument is that there are some sorts of propositions we cannot know unless perhaps we take extraordinary measures, yet such propositions are entailed by mundane claims whose truth we do know. Since this would be impossible if K were correct, K must be false. The same difficulty is sometimes discussed under the heading problem of easy knowledge, since some theorists (Cohen 2002) believe that certain things are difficult to know, in the sense that they cannot be known by deduction from banal knowledge. The argument has different versions depending on which propositions are said to be hard to know. According to Dretske (and perhaps Nozick as well), we cannot easily know that limiting propositions or heavyweight propositions are true. These resemble propositions Moore (1959) considered certainly true and that Wittgenstein (1969) declared to be unknowable (but Wittgenstein considered them unknowable on the dubious grounds that they must be true if we are to entertain doubts). Another possibility is that we cannot easily know lottery propositions. A special case of the argument from unknowable propositions starts with the claim that we cannot know the falsity of skeptical hypotheses. We will consider this third view in the next section.
4.1 The Argument from Limiting Propositions
Dretske did not clearly delineate the class of propositions he called “limiting” (in 2003) or “heavyweight” (in 2005). Some of the examples he provided are “There is a past,” “There are physical objects,” and “I am not being fooled by a clever deception.” He appeared to think that these propositions have a property we may call “elusiveness,” where p is elusive for me if and only if p’s falsity would not change my experiences. But being limiting does not coincide with being elusive. If there were no physical objects, my experiences would be changed dramatically, since I would not exist. So some limiting propositions are not elusive. As to whether all elusive claims are limiting, it is hard to say, because of the squishiness of the term “limiting”. Not-mule is elusive, but is it limiting?
Can’t we know limiting propositions? If not, and if we do know things that entail them, Dretske thought he had further support for his conclusive reasons view, assuming, as he did, that his view rules out our knowing limiting propositions (while allowing knowledge of things that entail them). However, this assumption is false (Hawthorne 2005, Luper 2006). We do have conclusive reason to believe some limiting propositions, such as that there are physical objects. Still, Dretske might abandon the notion of a limiting proposition in favor of the notion of elusive propositions, and cite, in favor of his conclusive reasons view, and against K, the fact that we cannot know elusive claims even though we can know things that imply them.
In order to rule out knowledge of limiting/elusive propositions, Dretske offered two sorts of argument, which we may call the argument from perception and the argument from pseudocircularity.
The argument from perception starts with the claims that (a) we do not perceive that limiting/elusive claims hold and (b) we do not know, via perception, that limiting/elusive claims hold. Since it is hard to see how else we could know limiting/elusive propositions, (a) and (b) are good grounds for concluding that we just do not know that they hold.
There is no doubt that (a) and (b) have considerable plausibility. Nonetheless, they are controversial. To explain the truth of (a) and (b), Dretske counted on his conclusive reasons analysis of perception. His critics may cite the safe indication account of perception as the basis for rejecting (a) and (b). Luper (2006), for example, argues against both, chiefly on the grounds that we can perceive and know some elusive claims (such as not-mule) indirectly, by directly perceiving claims (such as zeb) that entail them.
Dretske suggested another reason for ruling out knowledge of limiting/elusive claims. He thought we can know banal facts (e.g., we ate breakfast) without knowing limiting/elusive claims they entail (e.g., the past is real) so long as those limiting/elusive claims are true, but we cannot then turn around and employ the former as our basis for knowing the latter. Suppose we take ourselves to know some claim, q, by inferring it from another claim, p, which we know, but our knowing p in the first place depends on the truth of q. Call this pseudocircular reasoning. According to Dretske, pseudocircular reasoning is unacceptable, and yet it is precisely what we rely on when we attempt to know limiting/elusive claims such as denials of skeptical hypotheses by deducing them from ordinary knowledge claims that entail them: we will not know the latter in the first place unless the former are true. The problem Dretske here raised was pressed earlier by critics of broadly reliabilist accounts of knowledge, such as Richard Fumerton (1995, 178). Jonathan Vogel (2000) discusses it under the heading bootstrapping, the procedure employed when, e.g., someone who has no initial evidence about the reliability of a gas gauge comes to believe p on several different occasions because the gauge indicates p, thereby knowing p according to reliabilist accounts of knowledge, and then infers, by induction, that the gauge is reliable. By bootstrapping we may move—illegitimately, according to Vogel—from beliefs formed through a reliable process to the knowledge that those beliefs were arrived at through a reliable process. One may know p using a gauge in the first instance only if that gauge is reliable; hence, to conclude it is reliable solely on the basis of its track record involves pseudocircular reasoning.
Theorists have long objected to knowledge claims whose truth depends on a fact that itself has not been established, especially if that fact is merely taken for granted. It is also standard to reject any knowledge claim whose pedigree smacks of circularity. Both worries arise if we claim to know that one proposition, q, is true on the grounds that it is entailed by a second proposition, p, even though the truth of q was taken for granted in coming to know that p is true. Many theorists will reject pseudocircular reasoning on precisely these traditional grounds. Dretske did not share the first worry but he did raise the second, the concern about pseudocircular reasoning. But there is a growing body of work that breaks with tradition and defends some forms of epistemic circularity (this work is heavily criticized, in turn, on the grounds that it is open to versions of traditional objections). Max Black (1949) and Nelson Goodman (1955) were early examples; others include Van Cleve 1979 and 2003; Luper 2004; Papineau 1992; and Alston 1993. Dretske himself meant to break with tradition, writing under the banner of ‘externalism.’ He explicitly said that most, if not all, of our mundane knowledge claims depend on facts we have not established. Indeed, he cited this as a virtue of his conclusive reasons view. Yet nothing in the nature of the conclusive reasons account rules out our knowing limiting propositions using pseudocircular reasoning, which leaves his reservations mysterious. A set of jar-ish experiences can constitute a conclusive reason for believing jar, a jar of cookies is in front of me. If I then believe objects, there are physical objects, because it is entailed by jar, I have conclusive reason for believing objects, a limiting proposition. (If objects were false, jar would be too, and I would lack my jar-ish experiences.)
Dretske might have fallen back on the view that the conclusive reasons account rules out knowing elusive, as opposed to limiting, claims through pseudocircular reasoning, because we lack conclusive reasons for elusive claims no matter what sort of reasoning we employ. But this does not put Dretske’s account at odds with pseudocircular reasoning. And even this more limited position can be challenged (adapting a charge against Nozick in Shatz 1987). We might insist that p itself is a conclusive reason for believing q when we know p and p entails q. After all, assuming p entails q, if q were false, p would be false as well. On this strategy we have a further argument for K: if S knows p (relying on some conclusive reason R), and S believes q because S knows p entails q, S has a conclusive reason for believing q, namely p (rather than R), and hence S knows q.
Another doubt about knowing elusive claims deductively via mundane claims is that this maneuver is improperly ampliative. Cohen claims that knowing the table is red does not position us to know “I am not a brain-in-a-vat being deceived into believing that the table is red” nor “it’s not the case that the table is white [but] illuminated by red lights” (2002: 313). In the transition from the former to the latter, our knowledge appears to have been amplified improperly. This concern may be due at least in large part to lack of precision in the application of entailment or deductive implication (Klein 2004). Let red be the proposition that the table is red, white the proposition that the table is white, and light the proposition that the table is being illuminated by a red light. Red does not entail anything about the conditions under which the table is illuminated. In particular it does not entail the conjunction, light & not-white. The most we can infer is that the conjunction, white & light, is false, and that gives us no information whatever about the lighting conditions of the table. One could as easily infer the falsity of the conjunction, white & not-light. No amplification of the original known proposition, red, has come about.
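The point can be made explicit with a small schematic sketch, which assumes only that the table cannot be red and white at once.

```latex
% What red does and does not entail about the lighting (assumes amsmath).
% Sole assumption: the table cannot be red and white at once.
\begin{align*}
&\textit{red} \text{ entails } \neg \textit{white}
  && \text{(colour incompatibility)}\\
&\neg \textit{white} \text{ entails } \neg(\textit{white} \wedge \textit{light})
  && \text{(denying a conjunct denies the conjunction)}\\
&\neg \textit{white} \text{ entails } \neg(\textit{white} \wedge \neg \textit{light})
  && \text{(equally, for the other conjunction)}\\
&\textit{red} \text{ entails neither } \textit{light} \text{ nor } \neg \textit{light}
  && \text{(no information about the lighting itself)}
\end{align*}
```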
4.2 The Argument from Lottery Propositions
It seems apparent that I do not know not-win, I will not win the state lottery tonight, even though my odds for hitting it big are vanishingly small. But suppose my heart’s desire is to own a 10 million dollar villa in the French Riviera. It seems plausible to say that I know not-buy, I will not buy that villa tomorrow, since I lack the means, and that I know the conditional, if win then buy, i.e., tomorrow I will buy the villa if I win the state lottery tonight. From the conditional and not-buy it follows that not-win, so, given closure, knowing the conditional and not-buy positions me to know not-win. As this reasoning shows, the unknowability of claims like not-win together with the knowability of claims like not-buy positions us to launch another challenge to closure.
Let a lottery proposition be a proposition, like not-win, that (at least normally) is supportable only on the grounds that its probability is very high but less than 1. Vogel (1990, 2004) and Hawthorne (2004, 2005) have noted that a great number of propositions that do not actually involve lotteries resemble lottery propositions in that they can be given a probability that is close to but less than 1. Such propositions might be described as lotteryesque. The events mentioned in a claim can be subsumed under indefinitely many reference classes, and there is no authoritative way to choose which among these determines the probability of the subsumed events. By carefully selecting among these classes we can often find ways to suggest that the probability of a claim is less than 1. Take, for example, not-stolen, the proposition that the car you just parked in front of the house has not been stolen: by selecting the class, red cars parked in front of your house during the last hour, we can portray the statistical probability of not-stolen as 1. But by selecting the class, cars parked in the U.S., we can portray the probability as significantly less than 1. If, like lottery propositions, lotteryesque propositions are not easily known, they increase the pressure on the closure principle, since they are entailed by a wide range of mundane propositions which become unknowable, given closure.
How great a threat to K (and GK) are lottery and lotteryesque propositions? The matter is somewhat controversial. However, there is a great deal to be said for treating lottery propositions one way and lotteryesque propositions another.
As for lottery propositions: several theorists suggest that we do not in fact know that they are true because knowing them requires believing them because of something that establishes their truth, and we (normally) cannot establish the truth of lottery propositions. There are various ways to understand what is meant by “establishing” the truth of a claim. Dretske, as we have seen, thinks that knowledge entails having a conclusive reason for thinking as we do. David Armstrong (1973, p. 187) said that knowledge entails having a belief state that “ensures” truth. Safe indication theorists suggest that we know things when we believe them because of something that safely indicates their truth. And Harman and Sherman (2004, p. 492) say that knowledge requires believing as we do because of something “that settles the truth of that belief.” On all four views, we fail to know that a claim is true when our only grounds for believing it is that it is highly likely. However, the unknowability of lottery propositions is not a substantial threat to closure, since it is not obvious that there are propositions that are both known to be true and that entail lottery propositions. Consider the example discussed earlier: the conditional if win then buy together with not-buy. If I know these, then, by GK, I know not-win, a lottery proposition. But it is quite plausible to deny that I do know these. After all, I might win the lottery.
Now consider lotteryesque propositions. We cannot defend closure by denying that we know any mundane proposition that entails a lotteryesque proposition since it is clear that we know that many things are true that entail lotteryesque propositions. To defend closure we must instead say that lotteryesque propositions are knowable. They differ from genuine lottery propositions in that they may be supportable on grounds that establish their truth. If I base my belief not-stolen solely on crime statistics, I will fail to know that it is true. But I can instead base it on observations, such as having just parked it in my garage, and so forth, that, under the circumstances, establish that not-stolen holds.
5. The Argument From Skepticism
According to Dretske and Nozick, we can account for the appeal of skepticism and explain where it goes wrong if we accept their view of knowledge and reject K. Rejecting knowledge closure is therefore the key to resolving skepticism. Given the importance of insight into the problem of skepticism, they would seem to have a good case for denying closure. Let us consider the story they present, and some worries about its acceptability.
5.1 Skepticism and Antiskepticism
Dretske and Nozick focused on a form of skepticism that combines K with the assumption that we do not know that skeptical hypotheses are false. For example, I do not know not-biv: I am not a brain in a vat on a planet far from earth being deceived by alien scientists. On the strength of these assumptions, skeptics argue that we do not know all sorts of commonsense claims that entail the falsity of skeptical hypotheses. For example, since not-biv is entailed by h, I am in San Antonio, skeptics may argue as follows:
- (1)
- K is true; i.e., if, while knowing p, S believes q because S knows that p entails q, then S knows q.
- (2)
- h entails not-biv.
- (3)
- So if I know h and I believe not-biv because I know it is entailed by h then I know not-biv.
- (4)
- But I do not know not-biv.
- (5)
- Hence I do not know h.
Dretske and Nozick were well aware that this argument can be turned on its head, as follows:
- (1)
- K is true; i.e., if, while knowing p, S believes q because S knows that p entails q, then S knows q.
- (2)
- h entails not-biv.
- (3)
- So if I know h and I believe not-biv because I know it is entailed by h then I know not-biv.
- (4)′
- I do know h.
- (5)′
- Hence I do know not-biv.
Turning the tables on the skeptic in this way was roughly Moore’s (1959) antiskeptical strategy. (Tendentiously, some writers now call this strategy dogmatism.) However, instead of K, Moore presupposed the truth of a stronger principle:
- (PK)
- If, while knowing p, S believes q because S knows that q is entailed by S’s knowing p, then S knows q.
Unlike K, PK underwrites Moore’s famous argument: Moore knows he is standing; his knowing that he is standing entails that he is not dreaming; therefore, he knows (or rather is in a position to know) that he is not dreaming.
5.2 Tracking and Skepticism
According to Dretske and Nozick, skepticism is appealing because skeptics are partially right. They are correct when they say that we do not know that skeptical hypotheses fail to hold. For I do not track not-biv: if biv were true, I would still have the experiences that lead me to believe that biv is false. Something similar can be said about antiskepticism: antiskeptics are correct when they say we know all sorts of commonsense claims that entail the falsity of skeptical hypotheses. Having gotten this far, however, skeptics appeal to K, and argue that since I would know not-biv if I knew h, then I must not know h after all, while Moore-style antiskeptics appeal to K in order to conclude that I do know not-biv. But this is precisely where skeptics and antiskeptics alike go wrong, for K is false. Consider the position skeptics are in. Having accepted the tracking view—as they do when they deny that we know skeptical hypotheses are false—skeptics cannot appeal to the principle of closure, which is false on the tracking theory. We track (hence know) the truth of ordinary knowledge claims yet fail to track (or know) the truth of things that follow, such as that incompatible skeptical hypotheses are false.
One shortcoming of this story is that it cannot come to terms with all types of skepticism. There are two main forms of skepticism (and various sub-categories): regress (or Pyrrhonian) skepticism, and indiscernibility (Cartesian) skepticism. At best, Dretske and Nozick have provided a way of dealing with the latter.
Another worry about Dretske’s and Nozick’s response to Cartesian skepticism is that it forces us to give up K (as well as GK, and closure across instantiation and simplification). Given the intuitive appeal of these principles, some theorists have looked for alternative ways of explaining skepticism, which they then offer as superior in part on the grounds that they do no violence to K. Consider two possibilities, one offered by advocates of the safe indication theory, and one by contextualists.
5.3 Safe Indication and Skepticism
Advocates of the safe indication theory accept the gist of the tracking theorists’ explanation of the appeal of skepticism but retain the principle of closure. One reason skepticism tempts us is that we tend to confuse CR with SI (Sosa 1999, Luper 1984, 1987c, 2003a). After all, CR—if p were false, R would not hold—closely resembles SI—R would hold only if p were true. When we run the two together, we sometimes apply CR and conclude that we do not know that skeptical scenarios do not hold. Then we shift back to the safe indication account, and go along with skeptics when they appeal to the principle of closure, which is sustained by the safe indication account, and conclude that ordinary knowledge claims are false. But, as Moore claimed, skeptics are wrong when they say we do not know that skeptical hypotheses are false. Roughly, we know skeptical possibilities do not hold since (given our circumstances) they are remote.
Skepticism might also result from the assumption that, if a belief formation method M were, in some situation, to yield a belief without enabling us to know the truth of that belief, then it cannot ever generate bona fide knowledge (of that sort of belief), no matter what circumstances it is used in. (M must be strengthened somehow, say with a supplemental method, or with evidence about the circumstances at hand, if knowledge is to be procured.) This assumption might rest on the idea that any belief M yields is, at best, accidentally correct, if in any circumstances M yields a false or an accidentally correct belief (Luper 1987b,c). On this assumption, we can rule out a method of belief formation M as a source of knowledge merely by sketching circumstances in which M yields a belief that is false or accidentally correct. Traditional skeptical scenarios suffice; so do Gettieresque situations. Externalist theorists reject the assumption, saying that M can generate knowledge when used in circumstances under which the belief it yields is not accidentally correct. In highly Gettierized circumstances M must put us in an especially strong epistemic position if M is to generate knowledge; in ordinary circumstances, less exacting methods can produce knowledge. The standards a method must meet to produce knowledge depend on the context in which it is used. This view, on which the requirements for a subject or agent S to know p vary with S’s context (e.g., how exacting S’s method of belief formation must be to yield knowledge depends on S’s circumstances), might be called agent-centered (or subject) contextualism. Both tracking theorists and safe indication theorists defend agent-centered contextualism.
5.4 Contextualism and Skepticism
Theorists writing under the label “contextualism,” such as David Lewis (1979, 1996), Stewart Cohen (1988, 1999), and Keith DeRose (1995), offer a related way of explaining skepticism without denying closure. For clarity, we might call them speaker-centered (or attributor) contextualists since they contrast their view with agent-centered contextualism. According to (speaker-centered) contextualists, whether it is correct for a judge to attribute knowledge to someone depends on that judge’s context, and the standards for knowledge differ from context to context. When the man on the street judges knowledge, the applicable standards are relatively modest. But an epistemologist takes all sorts of possibilities seriously that are ignored by ordinary folk, and so must apply quite stringent standards in order to reach correct assessments. What passes for knowledge in ordinary contexts does not qualify for knowledge in contexts where heightened criteria apply. Skepticism is explained by the fact that the contextual variation of epistemic standards is easily overlooked. Skeptics note that in the epistemic context it is inappropriate to grant anyone knowledge. However, skeptics assume—falsely—that what goes in the epistemic context goes in all contexts. They assume that since those who take skepticism seriously must deny anyone knowledge, then everyone, regardless of context, should deny anyone knowledge. Yet people in ordinary contexts are perfectly correct in claiming that they know all sorts of things.
Furthermore, the closure principle is correct, contextualists say, so long as it is understood to operate within given contexts, not across contexts. That is, so long as we stay within a given context, we know the things we deduce from other things we know. But if I am in an ordinary context, knowing I am in San Antonio, I cannot come to know, via deduction, that I am not a brain in a vat on a distant planet, since the moment I take that skeptical possibility seriously, I transform my context into one in which heightened epistemic standards apply. When I take the vat possibility seriously, I must wield demanding standards that rule out my knowing I am not a brain in a vat. By the same token, these standards preclude my knowing I am in San Antonio. Thinking seriously about knowledge undermines our knowledge.
6. Closure of Rational Belief
To say that justified belief is closed under entailment is to say that something like one of the following principles is correct (or that both are):
- (J)
- If, while justifiably believing p, S believes q because S knows p entails q, then S justifiably believes q.
- (GJ)
- If, while justifiably believing various propositions, S believes p because S knows that they entail p, then S justifiably believes p.
However, GJ generates paradoxes (Kyburg 1961). To see why, notice that if the chances of winning a lottery are sufficiently remote, I am justified in believing that my ticket, ticket 1, will lose. I am also justified in believing that ticket 2 will lose, and that 3 will lose, and so on. However, I am not justified in believing the conjunction of these propositions. If I were, I would justifiably believe that no ticket will win. If a proposition is justified when probable enough, lottery examples undermine GJ. No matter how great the probability that suffices for justification, unless that probability is 1, in some lotteries we will be justified in believing, of an arbitrary ticket, that it will lose, and thus, by GJ, we will be justified in believing that all of the tickets will lose.
Even if we reject GJ, it does not follow that we must reject GK, which concerns knowledge closure. Consider the lottery example again. How justified we are in believing that ticket 1 will lose depends on how probable its losing is. Now, the probability that ticket 2 will lose is equal to the probability that ticket 1 will lose. The same goes for each ticket. However, consider the conjunction, Ticket 1 will lose & ticket 2 will lose. The probability of this conjunctive proposition is less than the probability of either of its conjuncts. Suppose we continue to add conjuncts. For example, next in line will be: Ticket 1 will lose & ticket 2 will lose & ticket 3 will lose. Each time a conjunct is added, the probability of the resulting proposition is still lower. This illustrates the fact that we can begin with a collection of propositions each of which surpasses some threshold level of justification (let it be whatever is necessary for a belief to count as “justified” according to GJ) and, by conjoining them, we can end up with a proposition which falls below that threshold level of justification. We may “justifiably believe” each conjunct, but not the conjunction, so GJ fails. However, we need not reject GK on these grounds. Even if we grant that we justifiably believe that Ticket 1 will lose is true, we might deny that we know that this proposition is true. We might take the position that if we believe some proposition p on the basis of its probability, nothing less than a probability of 1 will suffice to enable us to know that it is true. In that case GK will not succumb to our objection to GJ, for if the probability of two or more propositions is 1 then the probability of their conjunction is also 1.
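A bit of illustrative arithmetic makes the contrast vivid. Suppose, purely for the sake of the example, a fair lottery of 1,000 tickets with exactly one winner, and let the threshold for justification be a probability of 0.99.

```latex
% Illustrative numbers only: 1,000 tickets, exactly one winner, threshold 0.99.
\begin{align*}
P(\text{ticket } i \text{ loses}) &= 1 - \tfrac{1}{1000} = 0.999 > 0.99
  && \text{each conjunct is above the threshold}\\
P(\text{tickets } 1\text{--}200 \text{ all lose}) &= \tfrac{800}{1000} = 0.8 < 0.99
  && \text{the conjunction already falls below it}\\
P(\text{all } 1000 \text{ tickets lose}) &= 0
  && \text{yet GJ would license believing this}
\end{align*}
```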
We can reject GJ. Should we also reject J? The status of this principle is much more controversial. Some theorists argue against it using counterexamples like Dretske’s own zebra case: because the zebra is in plain sight, you seem fully justified in believing zeb, but it is not so clear that you are justified in believing not-mule, even if you deduce this belief from zeb. Anyone who rejects K on the grounds that K sanctions the knowledge of limiting or heavyweight propositions (discussed earlier) is likely to reject J on similar grounds: justifiably believing that we have hands, it might seem, does not position us to justifiably believe that there are physical objects even if we see that the former entails the latter.
One response is that cases such as Dretske’s do not count against J, but rather against the following principle (of the transmissibility of evidence):
- (E)
- If e is evidence for p, and p entails q, then e is evidence for q.
Even if we reject this principle, it does not follow that justification is not closed under entailment, as Peter Klein (1981) pointed out. Arguably, for justification closure, all that is necessary is that when, given all of our relevant evidence e, we are justified in believing p, we also have sufficient justification for believing each of p’s consequences. Our justification for p’s consequences need not be e. Instead, it might be p itself, which is, after all, a justified belief. And since p entails its consequences, it is sufficient to justify them. Moreover, any good evidence we have against a consequence of p counts against p itself, preventing us from being justified in believing p in the first place, so if we are justified in believing p, considering all our evidence, pro and con, we will not have overwhelming evidence against propositions entailed by p. (A similar move could be defended against the tracking theorists when they deny the closure of knowledge: if we track p, and believe q by deducing it from p, then we track q if we take p as our basis for believing q.) Looked at in this way, J seems plausible. (There is a substantial literature on the transmissibility of evidence and its failure; see, for example, Crispin Wright (1985) and Martin Davies (1998).)
Some final observations can be made using Roderick Firth’s (1978) distinction between propositional and doxastic justification. Proposition p has propositional justification for S if and only if, given the grounds S possesses, believing p would be rational. That p has propositional justification for S does not require that S actually base p on these grounds, or even that S believe p. Whether p has doxastic justification for S depends on S’s actual grounds for believing p: if, on those grounds, believing p is rational, then p is doxastically justified for S. Consider the following principles:
- (JD)
- If p is doxastically justified for S, and p entails q, then q is doxastically justified for S.
- (JP)
- If p is propositionally justified for S, and p entails q, then q is propositionally justified for S.
Clearly JD faces two fatal objections. First, we might fail to believe some of the things implied by our beliefs. Second, we may have perfectly respectable reasons for believing some proposition p, yet, failing to see that p entails q, we might not be aware of any grounds for believing q, or, worse, we might believe q for bogus reasons. But neither difficulty threatens JP. First, propositional justification does not entail belief. Second, S might be propositionally justified in believing q on the basis of p even if S fails to see that p entails q, and even if S believes q for bogus reasons. As further support for JP, we might cite the fact that, if p entails q, whatever counts against q also counts against p.
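The closing thought, that whatever counts against a consequence counts against what entails it, can be given a simple probabilistic gloss. The probabilistic reading of “counts against” is my assumption rather than anything in the text, and is offered only as a sketch.

```latex
% Assumption: "e counts against q" is read as e making q improbable
% (pushing its probability below some threshold t).
% If p entails q, every case in which p holds is a case in which q holds,
% so p can never be more probable than q on any evidence e:
\[
  p \models q \;\Longrightarrow\; P(p \mid e) \le P(q \mid e),
  \qquad\text{hence}\qquad
  P(q \mid e) < t \;\Longrightarrow\; P(p \mid e) < t .
\]
```

On this reading, evidence that undermines q undermines p at least as much, which is just what the support claimed for JP requires.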
Bibliography
- Alston, W., 1993, The Reliability of Sense Perception, Ithaca: Cornell University Press.
- Armstrong, D., 1973, Belief, Truth and Knowledge, Cambridge: Cambridge University Press.
- Audi, R., 1995, “Deductive Closure, Defeasibility and Scepticism: A Reply to Feldman.” Philosophical Quarterly, 45: 494–499.
- Becker, K., 2009, Epistemology Modalized, New York: Routledge.
- Black, M., 1949, “The Justification of Induction,” in Language and Philosophy, Ithaca: Cornell University Press.
- Black, T., and Murphy, P., 2007, “In Defense of Sensitivity”, Synthese, 154(1): 53–71.
- Bogdan, R.J., 1985, “Cognition and Epistemic Closure,” American Philosophical Quarterly, 22: 55–63.
- BonJour, L., 1987, “Nozick, Externalism, and Skepticism,” in Luper 1987a, 297–313.
- Brueckner, A., 1985a, “Losing Track of the Sceptic,” Analysis, 45: 103–104.
- –––, 1985b, “Skepticism and Epistemic Closure,” Philosophical Topics, 13: 89–117.
- –––, 1985c, “Transmission for Knowledge Not Established,” Philosophical Quarterly, 35: 193–196.
- –––, 2012, “Roush on Knowledge: Tracking Redux?,” in K. Becker and T. Black (eds.), The Sensitivity Principle in Epistemology, Cambridge: Cambridge University Press.
- Cohen, S., 1987, “Knowledge, Context, and Social Standards,” Synthese, 73: 3–26.
- –––, 1988, “How to be a Fallibilist,” Philosophical Perspectives 2: Epistemology, Atascadero, CA: Ridgeview, 91–123.
- –––, 1999, “Contextualism, Skepticism, and the Structure of Reasons,” Philosophical Perspectives 13: Epistemology, Atascadero, CA: Ridgeview, 57–89.
- –––, 2002, “Basic Knowledge and the Problem of Easy Knowledge,” Philosophy and Phenomenological Research, 65(2): 309–329.
- Davies, M., 1998, “Externalism, Architecturalism, and Epistemic Warrant,” in Crispin Wright, Barry Smith, and Cynthia Macdonald (eds.), Knowing Our Own Minds, Oxford: Oxford University Press, pp. 321–361.
- DeRose, K., 1995, “Solving the Skeptical Problem,” Philosophical Review, 104: 1–52.
- Dretske, F., 1969, Seeing and Knowing, Chicago: University of Chicago Press.
- –––, 1970, “Epistemic Operators,” Journal of Philosophy, 67: 1007–1023.
- –––, 1971, “Conclusive Reasons,” Australasian Journal of Philosophy, 49: 1–22.
- –––, 1972, “Contrastive Statements,” Philosophical Review, 81: 411–430.
- –––, 2003, “Skepticism: What Perception Teaches,” in Luper 2003b, pp. 105–118.
- –––, 2005, “Is Knowledge Closed Under Known Entailment?” in Steup and Sosa 2005.
- Feldman, R., 1995, “In Defense of Closure,” Philosophical Quarterly, 45: 487–494.
- Firth, R., 1978, “Are Epistemic Concepts Reducible to Ethical Concepts?” in Alvin Goldman and Jaegwon Kim (eds.), Values and Morals, Dordrecht: D. Reidel Publishing Co.
- Fumerton, R., 1995, Metaepistemology and Skepticism, Lanham, MD: Rowman and Littlefield.
- Goldman, A., 1976, “Discrimination and Perceptual Knowledge,” Journal of Philosophy, 73: 771–791.
- –––, 1979, “What is Justified Belief?,” in Justification and Knowledge, G.S. Pappas (ed.), Dordrecht: D. Reidel.
- Goodman, N., 1955, Fact, Fiction, and Forecast, Cambridge, MA: Harvard University Press; 4th edition, 1983.
- Hales, S., 1995, “Epistemic Closure Principles,” Southern Journal of Philosophy, 33: 185–201.
- Harman, G. and Sherman, B., 2004, “Knowledge, Assumptions, Lotteries,” Philosophical Issues, 14: 492–500.
- Hawthorne, J., 2004, Knowledge and Lotteries, Oxford: Oxford University Press.
- –––, 2005, “The Case for Closure,” in Steup and Sosa 2005.
- Jaeger, C., 2004, “Skepticism, Information, and Closure: Dretske’s Theory of Knowledge,” Erkenntnis, 61: 187–201.
- Klein, P., 1981, Certainty: A Refutation of Skepticism, Minneapolis, MN: University of Minnesota Press.
- –––, 1995, “Skepticism and Closure: Why the Evil Genius Argument Fails,” Philosophical Topics, 23: 213–236.
- –––, 2004, “Closure Matters: Academic Skepticism and Easy Knowledge,” Philosophical Issues, 14(1): 165–184.
- Kripke, S., 2011, “Nozick on Knowledge,” in Philosophical Troubles (Collected Papers, Volume 1), New York: Oxford University Press.
- Kyburg, H., 1961, Probability and the Logic of Rational Belief, Dordrecht: Kluwer.
- Lewis, D., 1973, Counterfactuals, Cambridge, MA: Harvard University Press.
- –––, 1979, “Scorekeeping in a Language Game,” Journal of Philosophical Logic, 8: 339–359.
- –––, 1996, “Elusive Knowledge,” Australasian Journal of Philosophy, 74: 549–567.
- Luper, S., 1984, “The Epistemic Predicament: Knowledge, Nozickian Tracking, and Skepticism,” Australasian Journal of Philosophy, 62: 26–50.
- ––– (ed.), 1987a, The Possibility of Knowledge: Nozick and His Critics, Totowa, NJ: Rowman and Littlefield.
- –––, 1987b, “The Possibility of Skepticism,” in Luper 1987a.
- –––, 1987c, “The Causal Indicator Analysis of Knowledge,” Philosophy and Phenomenological Research, 47: 563–587.
- –––, 2003a, “Indiscernability Skepticism,” in S. Luper 2003b, pp. 183–202.
- –––, (ed.) 2003b, The Skeptics, Hampshire: Ashgate Publishing, Limited.
- –––, 2004, “Epistemic Relativism,” Philosophical Issues, 14 (a supplement to Noûs): 271–295.
- –––, 2006, “Dretske on Knowledge Closure,” Australasian Journal of Philosophy, 84(3): 379–394.
- –––, 2012, “False Negatives,” in K. Becker and T. Black (eds.), The Sensitivity Principle in Epistemology, Cambridge: Cambridge University Press.
- Moore, G. E., 1959, “Proof of an External World,” and “Certainty,” in Philosophical Papers, London: George Allen & Unwin, Ltd.
- Murphy, P., 2005, “Closure Failures for Safety,” Philosophia, 33: 331–334.
- Nozick, R., 1981, Philosophical Explanations, Cambridge: Cambridge University Press.
- Papineau, D., 1992, “Reliabilism, Induction, and Scepticism,” The Philosophical Quarterly, 42: 1–20.
- Pritchard, D., 2007, “Anti-Luck Epistemology,” Synthese, 158: 227–298.
- Ramsey, F. P., 1931, The Foundations of Mathematics and Other Logical Essays, London: Routledge and Kegan Paul.
- Roush, S., 2005, Tracking Truth: Knowledge, Evidence and Science, Oxford: Oxford University Press.
- Sextus Empiricus, 1933a, Outlines of Pyrrhonism, R.G. Bury (trans), London: W. Heinemann, Loeb Classical Library.
- Shatz, D., 1987, “Nozick’s Conception of Skepticism,” in Luper 1987a.
- Sosa, E., 1999, “How to Defeat Opposition to Moore,” Philosophical Perspectives, 13: 141–152.
- –––, 2003, “Neither Contextualism Nor Skepticism,” in Luper 2003b, pp. 165–182.
- –––, 2007, A Virtue Epistemology: Apt Belief and Reflective Knowledge Volume I, Oxford: Oxford University Press.
- –––, 2009, A Virtue Epistemology: Apt Belief and Reflective Knowledge Volume II, Oxford: Oxford University Press.
- Stalnaker, R., 1968, “A Theory of Conditionals,” American Philosophical Quarterly (Monograph No. 2), 98–112.
- Steup, M. and Sosa, E. (eds.), 2005, Contemporary Debates in Epistemology, Malden, MA: Blackwell.
- Stine, G.C., 1971, “Dretske on Knowing the Logical Consequences,” Journal of Philosophy, 68: 296–299.
- –––, 1976, “Skepticism, Relevant Alternatives, and Deductive Closure,” Philosophical Studies, 29: 249–261.
- Van Cleve, J., 1979, “Foundationalism, Epistemic Principles, and the Cartesian Circle,” Philosophical Review, 88: 55–91.
- –––, 2003, “Is Knowledge Easy—or Impossible? Externalism as the Only Alternative to Skepticism,” in S. Luper 2003b, pp. 45–60.
- Vogel, J., 1990, “Are There Counterexamples to the Closure Principle?” in Doubting: Contemporary Perspectives on Skepticism, M. Roth and G. Ross (eds.), Dordrecht: Kluwer Academic Publishers.
- –––, 2000, “Reliabilism Leveled,” Journal of Philosophy, 97: 602–623.
- –––, 2004, “Speaking of Knowledge,” Philosophical Issues, 14: 501–509.
- Williamson, T., 2000, Knowledge and Its Limits, Oxford: Oxford University Press.
- Wittgenstein, L., 1969, On Certainty, G.E.M. Anscombe (trans.), New York: Harper and Row, Inc.
- Wright, C., 1985, “Facts and Certainty,” Proceedings of the British Academy, 71: 429–472.
Academic Tools
- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.
Other Internet Resources
[Please contact the author with suggestions.]