Science and Pseudo-Science
The problem of distinguishing between science and pseudoscience is part of the larger task of determining which beliefs are epistemically warranted. This entry clarifies the specific nature of pseudoscience in relation to other categories of non-scientific doctrines and practices, including science denial(ism) and resistance to facts. The major proposed criteria for defining and identifying pseudoscience are discussed and some of their weaknesses are pointed out. There is much more agreement on particular cases than on the general criteria that such judgments should be based upon. This is an indication that there is still much important philosophical work to be done on the relation between science and pseudoscience.
- 1. The importance of disclosing false science
- 2. The “science” of pseudoscience
- 3. The “pseudo” of pseudoscience
- 4. Alternative characterizations of pseudoscience
- 5. Multicriterial approaches
- 6. Two forms of pseudoscience
- 7. Some related terms
- 8. Unity in diversity
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. The importance of disclosing false science
We can have both theoretical and practical reasons for distinguishing between real and false science (Mahner 2007, 516). From a theoretical point of view, this distinction is an illuminating perspective that contributes to the philosophy of science in much the same way that the study of fallacies contributes to our knowledge of informal logic and rational argumentation. From a practical point of view, the distinction is important for decision guidance in both private and public life. Since science is our most reliable source of knowledge in a wide range of areas, we need to distinguish scientific knowledge from statements that are falsely claimed to be scientific. There are many areas in which reliance on such statements can have disastrous consequences:
- Climate policy: The scientific consensus on ongoing anthropogenic climate change leaves no room for reasonable doubt (Cook et al. 2016; Powell 2019). Science denial has considerably delayed climate action, and it is still one of the major factors that impede efficient measures to reduce climate change (Oreskes and Conway 2010; Lewandowsky et al. 2019). Decision-makers and the public need to know how to distinguish between competent climate science and science-mimicking disinformation on the climate.
- Healthcare: Medical science develops and evaluates treatments according to evidence of their effectiveness and safety. Pseudoscience in healthcare gives rise to ineffective and sometimes dangerous interventions and often lures people away from science-based healthcare. Pseudoscience in preventive medicine makes its victims refrain from vaccination and other efficient means to reduce the risk of disease. Healthcare providers, insurers, government authorities and – most importantly – patients and the general public need guidance on how to distinguish between medical science and medical pseudoscience.
- Environmental policies: In order to be on the safe side against potential disasters, it may be legitimate to take preventive measures when there is valid but not yet sufficient evidence of an environmental hazard. This must be distinguished from taking measures against an alleged hazard for which there is no valid evidence at all. Therefore, decision-makers in environmental policy must be able to distinguish between scientific and pseudoscientific claims.
- Expert testimony: It is essential for the rule of law that courts get the facts right. The reliability of different types of evidence must be correctly determined, and expert testimony must be based on the best available knowledge. Sometimes it is in the interest of litigants to present non-scientific claims as solid science. Therefore courts must be able to distinguish between science and pseudoscience. Philosophers can contribute effectively to the defence of science against pseudoscience in such contexts (Pennock 2011; Ruse 2021).
- Science education: The promoters of some pseudosciences (notably creationism) try to introduce their teachings into school curricula. Teachers and school authorities need to have clear criteria of inclusion that protect students against unreliable and disproved teachings.
- Journalism: When there is scientific uncertainty, or relevant disagreement in the scientific community, this should be covered and explained in media reports on the issues in question. Equally importantly, differences of opinion between legitimate scientific experts on the one hand and proponents of scientifically unsubstantiated claims on the other should be described as what they are. Public understanding of topics such as climate change and vaccination has been considerably hampered by organised campaigns that succeeded in making the media portray standpoints that have been thoroughly disproved in science as legitimate scientific standpoints (Boykoff and Boykoff 2004; Boykoff 2008). The media need tools and practices to distinguish between legitimate scientific controversies and attempts to peddle pseudoscientific claims as science.
Some discussants have claimed that the very concept of pseudoscience is unnecessary. In their view it would be sufficient, for instance, to note that creationism is wrong, without also asserting that it is a pseudoscience. The major problem with this standpoint is that it erases important differences between pseudoscience and other forms of erroneous beliefs about scientific issues. Contrary to most other types of misbeliefs, pseudoscience comes with immunizing strategies and defence mechanisms that protect it against counterevidence and critical scrutiny (Boudry and Braeckman 2011). This can be illustrated by comparing a typical creationist who believes that humans lived alongside Tyrannosaurus rex to a typical paleontologist who has misidentified a femur from an Early Jurassic dinosaur and believes that it belonged to a mammal. The mistaken paleontologist will take colleagues’ arguments seriously and discuss them in depth. (S)he will in all probability be convinced by their arguments if they are strong enough. In contrast, the typical creationist will steer away from the crucial counterarguments. She will make use of the inbuilt subterfuges and evasive strategies of creationism to avoid serious reconsideration of her beliefs (Freudenburg et al. 2008; Nieminen and Mustonen 2014). This type of manoeuvre makes pseudoscience impervious to counterarguments. In consequence, normal scientific argumentation usually has little or no impact on pseudoscientific beliefs (Hornsey 2020). Since pseudoscience has to be dealt with differently from other forms of erroneous beliefs, we need criteria to identify it.
2. The “science” of pseudoscience
Attempts to define what we today call science have a long history, and they have sometimes been traced back to Aristotle’s Posterior Analytics (Laudan 1983). Cicero’s arguments for dismissing certain methods of divination in his De divinatione have considerable similarities with modern criteria for distinguishing between science and pseudoscience (Fernandez-Beanato 2020a). The Latin word “pseudoscientia” was already in use in the first half of the 17th century in discussions about the relationship between religion and empirical investigations (Guldentops 2020, 288n). The oldest known use of the English word “pseudoscience” dates from 1796, when the historian James Pettit Andrew referred to alchemy as a “fantastical pseudo-science” (Oxford English Dictionary). The word has been in frequent use since the 1880s (Thurs and Numbers 2013). Throughout its history the word has had a clearly defamatory meaning (Laudan 1983, 119; Dolby 1987, 204). It would be as strange for someone to proudly describe her own activities as pseudoscience as to boast that they are bad science. Since the derogatory connotation is an essential characteristic of the word “pseudoscience”, an attempt to extricate a value-free definition of the term would not be meaningful. An essentially value-laden term has to be defined in value-laden terms. This is often difficult since the specification of the value component tends to be controversial.
This problem is not specific to pseudoscience, but follows directly from a parallel but somewhat less conspicuous problem with the concept of science. The common usage of the term “science” can be described as partly descriptive, partly normative. When an activity is recognized as science this usually involves an acknowledgement that it has a positive role in our strivings for knowledge. On the other hand, the concept of science has been formed through a historical process, and many contingencies influence what we call and do not call science. Whether we call a claim, doctrine, or discipline “scientific” depends both on its subject area and its epistemic qualities. The former part of the delimitation is largely conventional, whereas the latter is highly normative, and closely connected with fundamental epistemological and metaphysical issues.
Against this background, in order not to be unduly complex, a definition of science has to go in one of two directions. It can focus on the descriptive contents, and specify how the term is actually used. Alternatively, it can focus on the normative element, and clarify the more fundamental meaning of the term. The latter approach has been the choice of most philosophers writing on the subject, and will be the focus here. It involves, of necessity, some degree of idealization in relation to common usage of the term “science”, in particular concerning the delimitation of the subject-area of science.
The English word “science” is primarily used to refer to the natural sciences and other fields of research that are considered to be similar to them. Hence, political economy and sociology are counted as sciences, whereas studies of literature and history are usually not. The corresponding German word, “Wissenschaft”, has a much broader meaning and includes all the academic specialties, including the humanities. The German term has the advantage of more adequately delimiting the type of systematic knowledge that is at stake in the conflict between science and pseudoscience. The misrepresentations of history presented by Holocaust deniers and other pseudo-historians are very similar in nature to the misrepresentations of natural science promoted by creationists and homeopaths.
More importantly, the natural, mathematical and social sciences and the humanities are all parts of the same human endeavour, namely systematic and critical investigations aimed at acquiring the best possible understanding of the workings of nature, people, and human society. The disciplines that form this community of knowledge disciplines are increasingly interdependent. Since the second half of the 20th century, integrative disciplines such as astrophysics, evolutionary biology, biochemistry, ecology, quantum chemistry, the neurosciences, and game theory have developed at dramatic speed and contributed to tying together previously unconnected disciplines. These increased interconnections have also linked the sciences and the humanities closer to each other, as can be seen for instance from how historical knowledge relies increasingly on advanced scientific analysis of archaeological findings.
The conflict between science and pseudoscience is best understood with this extended sense of science. On one side of the conflict we find the community of knowledge disciplines that includes the natural, mathematical and social sciences and the humanities. On the other side we find a wide variety of movements and doctrines, such as creationism, astrology, homeopathy, Holocaust denialism and climate science denialism that are in conflict with results and methods that are generally accepted in the community of knowledge disciplines.
Another way to express this is that the problem of how to distinguish between science and pseudoscience has a deeper concern than that of drawing a line around the collection of human activities that we have for various reasons chosen to call “sciences”. The ultimate issue is “how to determine which beliefs are epistemically warranted” (Fuller 1985, 331). In a wider approach, the sciences are fact-finding practices, i.e., human practices aimed at finding out, as far as possible, how things really are (Hansson 2018). Other examples of fact-finding practices in modern societies are journalism, criminal investigations, and the methods used by mechanics to search for the defect in a malfunctioning machine. Fact-finding practices are also prevalent in indigenous societies, for instance in the forms of traditional agricultural experimentation and the methods used for tracking animal prey (Liebenberg 2013). The distinction between science and pseudoscience has much in common with that between accurate and inaccurate journalism, as well as that between properly and improperly performed criminal investigations (Hansson 2018).
3. The “pseudo” of pseudoscience
3.1 Non-, un-, and pseudoscience
The task of distinguishing between science and pseudoscience is often called “the demarcation of science” or “the demarcation of science from pseudoscience.” These phrases are often used interchangeably, and many authors seem to have regarded them as equal in meaning. In their view, the task of drawing the outer boundaries of science is essentially the same as that of drawing the boundary between science and pseudoscience.
This picture is oversimplified. Not all non-science is pseudoscience, and science has non-trivial borders to other non-scientific phenomena, such as metaphysics, religion, and various types of non-scientific systematized knowledge. (Mahner (2007, 548) proposed the term “parascience” to cover non-scientific practices that are not pseudoscientific.) Science also has the internal problem of distinguishing between good and bad science.
A comparison of the negated terms related to science can contribute to clarifying the conceptual distinctions. “Unscientific” is a narrower concept than “non-scientific” (not scientific), since the former but not the latter term implies some form of contradiction or conflict with science. “Pseudoscientific” is in turn a narrower concept than “unscientific”. The latter term differs from the former in covering inadvertent mismeasurements, miscalculations, and other forms of bad science performed by scientists who are recognized as trying but failing to produce good science.
Etymology provides us with an obvious starting-point for clarifying what characteristics pseudoscience has in addition to being merely non- or un-scientific. “Pseudo-” (ψευδο-) means false. In accordance with this, the Oxford English Dictionary (OED) defines pseudoscience as follows:
“A pretended or spurious science; a collection of related beliefs about the world mistakenly regarded as being based on scientific method or as having the status that scientific truths now have.”
3.2 Non-science posing as science
Many writers on pseudoscience have emphasized that pseudoscience is non-science posing as science. The foremost modern classic on the subject (Gardner 1957) bears the title Fads and Fallacies in the Name of Science. According to Brian Baigrie (1988, 438), “[w]hat is objectionable about these beliefs is that they masquerade as genuinely scientific ones.” These and many other authors assume that to be pseudoscientific, an activity or a teaching has to satisfy the following two criteria (Hansson 1996):
- (1) it is not scientific, and
- (2) its major proponents try to create the impression that it is scientific.
The former of the two criteria is central to the concerns of the philosophy of science. Its precise meaning has been the subject of major controversies among philosophers, to be discussed below in Section 4. The second criterion has been less discussed by philosophers, but it needs careful treatment not least since many discussions of pseudoscience (in and out of philosophy) have been confused due to insufficient attention to it. Proponents of pseudoscience often attempt to mimic science by arranging conferences, journals, and associations that share many of the superficial characteristics of science, but do not satisfy its quality criteria. Naomi Oreskes (2019) called this phenomenon “facsimile science”. Blancke and coworkers (2017) called it “cultural mimicry of science”.
3.3 The doctrinal component
An immediate problem with the definition based on (1) and (2) is that it is too permissive. There are phenomena that satisfy both criteria but are not commonly called pseudoscientific. One of the clearest examples of this is fraud committed to further the perpetrator’s career, rather than to promote some deviant claim or doctrine. This type of fraud is clearly an unscientific activity with a high degree of scientific pretence. Thus it satisfies both criteria, (1) and (2). Nevertheless, such fraud is seldom if ever called “pseudoscience”. The reason for this can be clarified with the following hypothetical examples (Hansson 1996).
Case 1: A biochemist performs an experiment that she interprets as showing that a particular protein has an essential role in muscle contraction. There is a consensus among her colleagues that the result is a mere artefact, due to experimental error.
Case 2: A biochemist goes on performing one sloppy experiment after the other. She consistently interprets them as showing that a particular protein has a role in muscle contraction not accepted by other scientists.
Case 3: A biochemist performs various sloppy experiments in different areas. One is the experiment referred to in case 1. Much of her work is of the same quality. She does not propagate any particular unorthodox theory.
According to common usage, 1 and 3 are regarded as cases of bad science, and only 2 as a case of pseudoscience. What is present in case 2, but absent in the other two, is a deviant doctrine. Isolated breaches of the requirements of science are not commonly regarded as pseudoscientific. Pseudoscience, as it is commonly conceived, involves a sustained effort to promote standpoints different from those that have scientific legitimacy at the time.
This explains why scientific fraud committed to boost one’s own career is not usually regarded as pseudoscientific. This type of fraud is seldom associated with a deviant or unorthodox doctrine. To the contrary, fraudulent scientists tend to be anxious that their results be in conformity with the predictions of established scientific theories. Deviations from these would lead to a much higher risk of disclosure.
The term “science” has both an individuated and an unindividuated sense. In the individuated sense, biochemistry and astronomy are different sciences, one of which includes studies of muscle proteins and the other studies of supernovae. The Oxford English Dictionary (OED) defines this sense of science as “a particular branch of knowledge or study; a recognized department of learning”. In the unindividuated sense, the study of muscle proteins and that of supernovae are parts of “one and the same” science. In the words of the OED, unindividuated science is “the kind of knowledge or intellectual activity of which the various ‘sciences’ are examples”.
Pseudoscience is an antithesis of science in the individuated rather than the unindividuated sense. There is no unified corpus of pseudoscience corresponding to the corpus of science. For a phenomenon to be pseudoscientific, it must belong to one or the other of the particular pseudosciences. In order to accommodate this feature, the above definition can be modified by replacing (2) by the following (Hansson 1996):
- (2′) it is part of a non-scientific doctrine whose major proponents try to create the impression that it is scientific.
Most philosophers of science, and most scientists, prefer to regard science as constituted by methods of inquiry rather than by particular doctrines. There is an obvious tension between (2′) and this conventional view of science. This, however, may be as it should since pseudoscience often involves a representation of science as a closed and finished doctrine rather than as a methodology for open-ended inquiry.
3.4 A wider sense of pseudoscience
Sometimes the term “pseudoscience” is used in a wider sense than that which is captured in the definition constituted of (1) and (2′). Contrary to (2′), doctrines that conflict with science are sometimes called “pseudoscientific” in spite of not being advanced as scientific. Hence, Grove (1985, 219) included among the pseudoscientific doctrines those that “purport to offer alternative accounts to those of science or claim to explain what science cannot explain.” Similarly, Lugg (1987, 227–228) maintained that “the clairvoyant’s predictions are pseudoscientific whether or not they are correct”, despite the fact that most clairvoyants do not profess to be practitioners of science. In this sense, pseudoscience is assumed to include not only doctrines contrary to science proclaimed to be scientific but doctrines contrary to science tout court, whether or not they are put forward in the name of science. Arguably, the crucial issue is not whether something is called “science” but whether it is claimed to have the function of science, namely to provide the most reliable information about its subject-matter. To cover this wider sense of pseudoscience, (2′) can be modified as follows (Hansson 1996; 2013):
- (2″) it is part of a doctrine whose major proponents try to create the impression that it represents the most reliable knowledge on its subject matter.
Common usage seems to vacillate between the definitions (1)+(2′) and (1)+(2″), and it does so in an interesting way: in their comments on the meaning of the term, critics of pseudoscience tend to endorse a definition close to (1)+(2′), but their actual usage is often closer to (1)+(2″).
The following examples serve to illustrate the difference between the two definitions and also to clarify why clause (1) is needed:
- (a) A creationist book gives a correct account of the structure of DNA.
- (b) An otherwise reliable chemistry book gives an incorrect account of the structure of DNA.
- (c) A creationist book denies that the human species shares common ancestors with other primates.
- (d) A preacher who denies that science can be trusted also denies that the human species shares common ancestors with other primates.
(a) does not satisfy (1), and is therefore not pseudoscientific on either account. (b) satisfies (1) but neither (2′) nor (2″) and is therefore not pseudoscientific on either account. (c) satisfies all three criteria, (1), (2′), and (2″), and is therefore pseudoscientific on both accounts. Finally, (d) satisfies (1) and (2″) and is therefore pseudoscientific according to (1)+(2″) but not according to (1)+(2′). As the last two examples illustrate, pseudoscience and anti-science are sometimes difficult to distinguish. Promoters of some pseudosciences (notably homeopathy) tend to be ambiguous between opposition to science and claims that they themselves represent the best science.
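The logic of the two definitions can also be rendered schematically. The following Python sketch is purely illustrative: it models criteria (1), (2′), and (2″) as Boolean attributes with names invented for the example (they are not drawn from the literature), and it reproduces the classification of examples (a)–(d) given above.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    label: str
    conflicts_with_science: bool            # criterion (1)
    doctrine_posed_as_science: bool         # criterion (2'): part of a doctrine presented as scientific
    doctrine_posed_as_most_reliable: bool   # criterion (2''): part of a doctrine presented as the most reliable knowledge

def pseudoscientific_narrow(c: Claim) -> bool:
    """Definition (1)+(2')."""
    return c.conflicts_with_science and c.doctrine_posed_as_science

def pseudoscientific_wide(c: Claim) -> bool:
    """Definition (1)+(2'')."""
    return c.conflicts_with_science and c.doctrine_posed_as_most_reliable

examples = [
    Claim("a", False, True, True),   # correct DNA account in a creationist book
    Claim("b", True, False, False),  # isolated error in an otherwise reliable chemistry book
    Claim("c", True, True, True),    # creationist denial of common ancestry
    Claim("d", True, False, True),   # preacher who rejects science denies common ancestry
]

for c in examples:
    print(c.label, pseudoscientific_narrow(c), pseudoscientific_wide(c))
# Output: a False False / b False False / c True True / d False True,
# matching the classification in the text above.
```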
3.5 The objects of definition
Various proposals have been put forward on exactly what types of entities a definition of pseudoscience should be applied to. Most commonly, the entities described as pseudoscientific are either whole doctrines (e.g. homeopathy) or specific claims (e.g. the claim that MMR vaccine gives rise to autism). In the philosophical discussion, characterizations of pseudoscience have been applied to a wide variety of entities, such as research programs (Lakatos 1974a, 248–249), groups of people with common knowledge aims, and their practices (Bunge 1982, 2001; Mahner 2007), theories (Popper 1962, 1974), practices (Lugg 1992; Morris 1987), scientific problems and questions (Siitonen 1984), and particular inquiries (Kuhn 1974; Mayo 1996). It is probably fair to say that the notion of pseudoscience can be meaningfully applied on each of these levels of description. A much more difficult problem is whether one of these levels is the fundamental level to which assessments on the other levels are reducible. However, it should be noted that appraisals on different levels may be interdefinable. For instance, it is not an unreasonable assumption that a pseudoscientific doctrine is one that contains pseudoscientific statements as its core or defining claims. Conversely, a pseudoscientific statement may be characterized in terms of being endorsed by a pseudoscientific doctrine but not by legitimate scientific accounts of the same subject area.
Derksen (1993) differs from most other writers on the subject in placing the emphasis on the pseudoscientist, i.e. the individual person conducting pseudoscience. His major argument for this is that pseudoscience has scientific pretensions, and such pretensions are associated with a person, not a theory, practice or entire field. However, as was noted by Settle (1971), it is the rationality and critical attitude built into institutions, rather than the personal intellectual traits of individuals, that distinguishes science from non-scientific practices such as magic. The individual practitioner of magic in a pre-literate society is not necessarily less rational than the individual scientist in modern Western society. What she lacks is an intellectual environment of collective rationality and mutual criticism. “It is almost a fallacy of division to insist on each individual scientist being critically-minded” (Settle 1971, 174).
3.6 A time-bound distinction
Some authors have maintained that the distinction between science and pseudoscience must be timeless, i.e., unchanging over time. If this were true, then it would be contradictory to label something as pseudoscience at one but not another point in time. Hence, after showing that creationism is in some respects similar to some doctrines from the early 18th century, one author maintained that “if such an activity was describable as science then, there is a cause for describing it as science now” (Dolby 1987, 207). This argument is based on a fundamental misconception of science. It is an essential feature of science that it methodically strives for improvement through empirical testing, intellectual criticism, and the exploration of new topics. A standpoint or theory cannot be scientific unless it relates adequately to this process of improvement. This means as a minimum that well-founded rejections of previous scientific standpoints have to be accepted. Pseudoscience is a relational concept in the sense that it can only be defined in relation to science (Hecht 2018, 7–8). Therefore, it is no surprise that yesterday’s science can sometimes become today’s pseudoscience. In the nineteenth century, Newton’s theory of gravity was excellent science. Today, claiming its validity (other than as a highly useful approximation) would be pseudoscientific.
Nevertheless, the mutability of science is one of the factors that renders the distinction between science and pseudoscience difficult. Derksen (1993, 19) rightly pointed out three major reasons why this distinction is sometimes difficult: science changes over time, science is heterogeneous, and established science itself is not free of the defects characteristic of pseudoscience.
4. Alternative characterizations of pseudoscience
Philosophical discussions on the characterization of pseudoscience have usually focused on the missing scientific quality of pseudoscience (rather than on its attempt to mimic science). One option is to base the characterization on the fundamental function that science shares with other fact-finding processes, namely to provide us with the most reliable information about its subject-matter that is currently available. This could lead to the specification of criterion (1) from Section 3.2 as follows:
- (1′) it is at variance with the most reliable knowledge about its subject matter that is currently available.
This criterion has the advantages of (i) being applicable across disciplines with highly different methodologies and (ii) allowing for a statement to be pseudoscientific at present although it was not so in an earlier period (or, although less commonly, the other way around) (Hansson 2013). At the same time it removes the practical determination of whether a statement or doctrine is pseudoscientific from the purview of armchair philosophy to that of scientists specialized in the subject-matter that the statement or doctrine relates to. However, philosophers have usually opted for criteria that can be applied without specialized knowledge in the pertinent subject area.
4.1 The logical positivists
Around 1930, the logical positivists of the Vienna Circle developed various verificationist approaches to science. The basic idea was that a scientific statement could be distinguished from a metaphysical statement by being at least in principle possible to verify. This standpoint was associated with the view that the meaning of a proposition is its method of verification (see the section on Verificationism in the entry on the Vienna Circle). This proposal has often been included in accounts of the demarcation between science and pseudoscience. However, this is not historically quite accurate since the verificationist proposals had the aim of solving a distinctly different demarcation problem, namely that between science and metaphysics.
4.2 Karl Popper and the notion of demarcation
In 1933, Karl Popper published a short text in which he introduced the German term “Abgrenzungsproblem” (later translated as “demarcation problem”) for the identification of claims that should not be classified as scientific. He defined this problem as “the request for a criterion to differentiate between ‘empirical-scientific’ and ‘metaphysical’ assertions (sentences, systems of sentences)” (Popper 1933, 426). In his Logik der Forschung (1935), he described the demarcation problem as that of finding a criterion to “distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other” (Popper 1935, 7; Popper 1959, 34). Pseudoscience was not mentioned in these texts.
In a lecture in 1957, Popper gave a different account of the purpose of a criterion of demarcation. He now said that he had worked on the problem already in 1919, with the aim to “distinguish between science and pseudo-science” (Popper 1957, 155). In an essay published in 1962, he specified his purpose as “drawing a line of demarcation between those statements and systems of statements which could be properly described as belonging to empirical science, and others which might, perhaps, be described as ‘pseudo-scientific’ or (in certain contexts) as ‘metaphysical’, or which belonged, perhaps, to pure logic or to pure mathematics” (Popper 1962, 255).
Popper’s choice of the metaphorical term “demarcation” has had considerable influence on the philosophical discussion on pseudoscience. Since the late fifteenth century, the word “demarcate” has denoted the setting of geopolitical borders. It usually refers to “the construction of boundary markers in the landscape” to mark the border between two countries (Prescott and Triggs 2008, 12). In discussions on science and pseudoscience, demarcation refers to the process of determining whether some particular claim (or other entity, cf. Section 3.5) is scientific or pseudoscientific. This is a different task than that of providing a complete definition of pseudoscience, and also a different task than that of providing a complete definition of science (Debray 2023). Contrary to a definition of pseudoscience, a demarcation between science and pseudoscience does not tell us how to distinguish between pseudoscience and other types of non-scientific belief systems.
The demarcation metaphor has been criticized for furthering characterizations of pseudoscience that do not amount to full definitions of the concept. It has also been argued that the metaphor is misleading in other ways. Perhaps most importantly, when the boundary between two countries has been demarcated — that is, shown with physical markers in the landscape — then no expertise is required to determine on which side of the border an object is. The use of the demarcation metaphor for the identification of pseudoscience can therefore give the impression that the task of determining whether a particular claim is pseudoscientific or not can be performed without any specialized knowledge in the subject matter (Hansson 2025).
4.3 Popper’s falsificationism
Karl Popper described the demarcation problem as the “key to most of the fundamental problems in the philosophy of science” (Popper 1962, 42). He rejected verifiability as a criterion for a theory or hypothesis to be scientific, rather than pseudoscientific or metaphysical. Instead he proposed as a criterion that the theory be falsifiable, or more precisely that “statements or systems of statements, in order to be ranked as scientific, must be capable of conflicting with possible, or conceivable observations” (Popper 1962, 39).
Popper’s account of falsifiability presupposes a set of potential observational sentences, called the “basic sentences”. The system of basic statements consists of “all self-consistent singular statements of a certain logical form — all conceivable singular statements of fact, as it were” (Popper 1935, 45; Popper 1959, 84; cf. Popper 1974, 997). In order to be scientific, a statement has to be (potentially) falsifiable “in the simple logical sense of being logically incompatible with some basic statements” (Popper 1974, 987). “It follows from this,” said Popper, “that, for example, universal statements of laws can belong to science, provided they are testable; but it certainly does not follow that only universal statements can belong to science: singular statements especially can also belong to it, and all singular test statements (basic statements) do.” In this way he used “the ability of clashing with singular statements as a touchstone of the empirical or scientific character of other statements” (Popper 1974, 987–988). Since the property of being a basic statement is a matter of logical form, this criterion of falsifiability depends entirely on logical form. In what seems to be his last statement of his position, Popper again emphasized that the falsifiability criterion “only has to do with the logical structure of sentences and classes of sentences” (Popper [1989] 1994, 82). A (theoretical) sentence, he said, is falsifiable if and only if it logically contradicts some (empirical) sentence that describes a logically possible event that it would be logically possible to observe (Popper [1989] 1994, 83). A statement can be falsifiable in this sense although it is not in practice possible to falsify it. It would seem to follow from this interpretation that a statement’s status as scientific or non-scientific does not shift with time. On previous occasions he seems to have interpreted falsifiability differently, and maintained that “what was a metaphysical idea yesterday can become a testable scientific theory tomorrow; and this happens frequently” (Popper 1974, 981, cf. 984).
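The logical core of this criterion can be rendered schematically as follows (a paraphrase for illustration, not Popper’s own notation), where B is the set of basic statements:

\[
\text{Falsifiable}(T) \iff \exists\, b \in B \ \text{such that}\ T \models \neg b
\]

That is, the theory rules out at least one conceivable observation; whether anyone can in practice carry out the corresponding test is irrelevant to this purely logical criterion.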
Popper presented this proposal as a way to draw the line between statements belonging to the empirical sciences and “all other statements – whether they are of a religious or of a metaphysical character, or simply pseudoscientific” (Popper 1962, 39; cf. Popper 1974, 981). This was both an alternative to the logical positivists’ verification criterion and a criterion for distinguishing between science and pseudoscience. Although Popper did not emphasize the distinction, these are of course two different issues (Bartley 1968). Popper conceded that metaphysical statements may be “far from meaningless” (1974, 978–979) but showed no such appreciation of pseudoscientific statements.
Popper’s demarcation criterion has been criticized both for excluding legitimate science (Hansson 2006) and for assigning scientific status to some pseudosciences (Agassi 1991; Mahner 2007, 518–519). Several critics have pointed out that most pseudosciences are characterized by having been thoroughly falsified, rather than by being impossible or even difficult to falsify. Astrology, rightly taken by Popper as an unusually clear example of a pseudoscience, has in fact been tested and thoroughly refuted (Culver and Ianna 1988; Carlson 1985). Similarly, the major threats to the scientific status of psychoanalysis, another of his major targets, do not come from claims that it is untestable but from claims that it has been tested and failed the tests. However, it should be noted that Popper was well aware of this criticism. In response, he said that a doctrine loses its scientific status if its promoters break “the methodological rule that we must accept falsification” (Popper 1974, 985). He took Marxism to be exemplary of a formerly scientific theory, which lost this status when some of its predictions were refuted.
Popper once adopted the view that natural selection is not a proper scientific theory, arguing that it comes close to only saying that “survivors survive”, which is tautological. “Darwinism is not a testable scientific theory, but a metaphysical research program” (Popper 1976, 168). This statement has been criticized by evolutionary scientists who pointed out that it misrepresents evolution. The theory of natural selection has given rise to many predictions that have withstood tests both in field studies and in laboratory settings (Ruse 1977; 2000). However, in a lecture in Darwin College in 1977, Popper retracted his previous view that the theory of natural selection is tautological. He now admitted that it is a testable theory although “difficult to test” (Popper 1978, 344). In 1981, Popper’s criterion of falsifiability was used successfully in an American court to defend science education against creationist interference (Ruse 2021).
4.4 The criterion of puzzle-solving
Thomas Kuhn is one of many philosophers for whom Popper’s view on the identification of pseudoscience was a starting-point for developing their own ideas. Kuhn criticized Popper for characterizing “the entire scientific enterprise in terms that apply only to its occasional revolutionary parts” (Kuhn 1974, 802). Popper’s focus on falsifications of theories led to a concentration on the rather rare instances when a whole theory is at stake. According to Kuhn, the way in which science works on such occasions cannot be used to characterize the entire scientific enterprise. Instead it is in “normal science”, the science that takes place between the unusual moments of scientific revolutions, that we find the characteristics by which science can be distinguished from other activities (Kuhn 1974, 801).
In normal science, the scientist’s activity consists in solving puzzles rather than testing fundamental theories. In puzzle-solving, current theory is accepted, and the puzzle is indeed defined in its terms. In Kuhn’s view, “it is normal science, in which Sir Karl’s sort of testing does not occur, rather than extraordinary science which most nearly distinguishes science from other enterprises”, and therefore a criterion for making this distinction must refer to the workings of normal science (Kuhn 1974, 802). Kuhn’s own criterion is the capability of puzzle-solving, which he sees as an essential characteristic of normal science.
Kuhn exemplified this criterion with a comparison between astronomy and astrology. Since antiquity, astronomy has been a puzzle-solving activity and therefore a science. If an astronomer’s prediction failed, then this was a puzzle that he could hope to solve for instance with more measurements or adjustments of the theory. In contrast, the astrologer had no such puzzles since in that discipline “particular failures did not give rise to research puzzles, for no man, however skilled, could make use of them in a constructive attempt to revise the astrological tradition” (Kuhn 1974, 804). Therefore, according to Kuhn, astrology has never been a science.
Popper disapproved thoroughly of Kuhn’s criterion of puzzle-solving. According to Popper, astrologers are engaged in puzzle-solving, and consequently Kuhn’s criterion commits him to recognize astrology as a science. (Contrary to Kuhn, Popper defined puzzles as “minor problems which do not affect the routine”.) In his view Kuhn’s proposal leads to “the major disaster” of a “replacement of a rational criterion of science by a sociological one” (Popper 1974, 1146–1147).
4.5 Criteria based on scientific progress
Popper’s criterion of falsifiability concerns the logical structure of theories. Imre Lakatos described this criterion as “a rather stunning one. A theory may be scientific even if there is not a shred of evidence in its favour, and it may be pseudoscientific even if all the available evidence is in its favour. That is, the scientific or non-scientific character of a theory can be determined independently of the facts” (Lakatos 1981, 117).
Instead, Lakatos (1970; 1974a; 1974b; 1981) proposed a modification of Popper’s criterion that he called “sophisticated (methodological) falsificationism”. Instead of assessing the scientific standing of isolated hypotheses or theories, he proposed that the objects of assessments should be whole research programmes, each of which produces a series of theories that successively replace each other. In his view, a research program is progressive if the new theories make surprising predictions that are confirmed. In contrast, a degenerating research programme is characterized by theories being fabricated only in order to accommodate known facts. Progress in science is only possible in research programmes in which each new theory has a larger empirical content than its predecessor. If a research program does not satisfy this requirement, then it is pseudoscientific.
According to Paul Thagard (1978, 228), a theory or discipline is pseudoscientific if it satisfies two criteria. One of these is that the theory fails to progress, and the other that “the community of practitioners makes little attempt to develop the theory towards solutions of the problems, shows no concern for attempts to evaluate the theory in relation to others, and is selective in considering confirmations and disconfirmations”. A major difference between this approach and that of Lakatos is that Lakatos would classify a nonprogressive discipline as pseudoscientific even if its practitioners work hard to improve it and turn it into a progressive discipline. (In later work, Thagard has abandoned this approach and instead promoted a form of multi-criterial demarcation (Thagard 1988, 157–173).)
In a somewhat similar vein, Daniel Rothbart (1990) emphasized the distinction between the standards to be used when testing a theory and those to be used when determining whether a theory should at all be tested. The latter, the eligibility criteria, include that the theory should encapsulate the explanatory success of its rival, and that it should yield testable implications that are inconsistent with those of the rival. According to Rothbart, a theory is unscientific if it is not testworthy in this sense.
George Reisch proposed that in order to be scientific, a discipline has to be adequately integrated into the other sciences. The various scientific disciplines have strong interconnections that are based on methodology, theory, similarity of models, etc. Creationism, for instance, is not scientific because its basic principles and beliefs are incompatible with those that connect and unify the sciences. More generally speaking, says Reisch, an epistemic field is pseudoscientific if it cannot be incorporated into the existing network of established sciences (Reisch 1998; cf. Bunge 1982, 379).
Paul Hoyningen-Huene (2013) identifies science with systematic knowledge, and proposes that systematicity can be used as a demarcation criterion. However, as shown by Naomi Oreskes, this is a problematic criterion, not least since some pseudosciences seem to satisfy it (Oreskes 2019).
4.6 Epistemic norms
A different approach, namely to base demarcation criteria on the value base of science, was proposed by sociologist Robert K. Merton ([1942] 1973). According to Merton, science is characterized by an “ethos”, i.e. spirit, that can be summarized as four sets of institutional imperatives. The first of these, universalism, asserts that whatever their origins, truth claims should be subjected to preestablished, impersonal criteria. This implies that the acceptance or rejection of claims should not depend on the personal or social qualities of their protagonists.
The second imperative, communism, says that the substantive findings of science are the products of social collaboration and therefore belong to the community, rather than being owned by individuals or groups. This is, as Merton pointed out, incompatible with patents that reserve exclusive rights of use to inventors and discoverers. The term “communism” is somewhat infelicitous; “communality” probably captures better what Merton aimed at.
His third imperative, disinterestedness, imposes a pattern of institutional control that is intended to curb the effects of personal or ideological motives that individual scientists may have. The fourth imperative, organized scepticism, implies that science allows detached scrutiny of beliefs that are dearly held by other institutions. This is what sometimes brings science into conflicts with religions and ideologies. Merton described these criteria as belonging to the sociology of science, and thus as empirical statements about norms in actual science rather than normative statements about how science should be conducted (Merton [1942] 1973, 268).
Bright and Heesen (2023) have proposed that communism, Merton’s second norm, can be used as a criterion for distinguishing between science and pseudoscience. In their view, a claim is scientific to the extent that “it is made appropriately available to the scientific community and proprietary rights are not claimed in any way that interferes with fellow researchers accessing, using, or evaluating it” (Bright and Heesen 2023, 254–255). According to this criterion, commercial research that does not comply with academic standards of openness is pseudoscientific even if its methodology and outcomes are flawless. This seems to be an intended consequence of the criterion, since the authors explicitly set out to “criticize commercial research as pseudo-scientific” (ibid, 251). It remains to investigate whether a more extensive Merton-style norm system can be used to distinguish between science and pseudoscience in a plausible way.
5. Multicriterial approaches
Popper’s method of demarcation applied only the single criterion of falsifiability. Most of the other proposals discussed in Section 4 are similarly mono-criterial, of course with Merton’s proposal as a major exception.
5.1 Lists of criteria
Many authors have proposed that a list of criteria, rather than a single criterion, should be used to identify pseudoscience. A large number of such lists (usually with 5–10 criteria) have been published, for instance by Langmuir ([1953] 1989), Gruenberger (1964), Dutch (1982), Bunge (1982), Radner and Radner (1982), Kitcher (1982, 30–54), Grove (1985), Thagard (1988, 157–173), Glymour and Stalker (1990), Derksen (1993, 2001), Vollmer (1993), Ruse (1996, 300–306), Mahner (2007; 2013), Dawes (2013) and Fernandez-Beanato (2020b). One such list reads as follows:
- Belief in authority: It is contended that some person or persons have a special ability to determine what is true or false. Others have to accept their judgments.
- Unrepeatable experiments: Reliance is put on experiments that cannot be repeated by others with the same outcome.
- Handpicked examples: Handpicked examples are used although they are not representative of the general category that the investigation refers to.
- Unwillingness to test: A theory is not tested although it is possible to test it.
- Disregard of refuting information: Observations or experiments that conflict with a theory are neglected.
- Built-in subterfuge: The testing of a theory is so arranged that the theory can only be confirmed, never disconfirmed, by the outcome.
- Explanations abandoned without replacement: Tenable explanations are given up without being replaced, so that the new theory leaves much more unexplained than the previous one. (Hansson 1983)
There are several ways in which a claim or a doctrine can deviate from what we require of science. Multicriterial approaches have the advantage over monocriterial ones that they can take this diversity into account. Bunge (1982, 372) asserted that many philosophers have failed to provide an adequate definition of science since they have presupposed that a single attribute will do; in his view the combination of several criteria is needed. However, in order to cover all ways in which one can deviate from science an inordinate number of criteria may be needed. Mahner (2013, 38–40) listed twenty criteria for distinguishing between science and pseudoscience and indicated that there could be as many as thirty or fifty of them. Fasce (2017) compiled a list of 70 criteria, gleaned from 21 published lists. It does not seem possible to work with so many criteria, and it is no easy task to choose between them.
5.2 Attempts at unifying multicriterial approaches
Several authors have proposed ways to construct a multi-criterial approach that is more unified and systematic than an unstructured list of criteria. Hirvonen and Karisto proposed that weights can be assigned to the criteria. A limit to the sum of the weights of the satisfied criteria can then be used to distinguish between science and pseudoscience (Hirvonen and Karisto 2022, 708). Such an approach would be more systematic than a mere list, but no principled way of determining these weights seems to be readily available.
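As a rough illustration of how such a weighted checklist could work, the following sketch assigns weights to the seven criteria listed in Section 5.1 and compares the sum of the weights of the satisfied criteria to a threshold. The weights and the threshold are invented for the example; Hirvonen and Karisto do not propose specific values.

```python
# The seven criteria from the list in Section 5.1, with invented placeholder weights.
WEIGHTS = {
    "belief in authority": 1.0,
    "unrepeatable experiments": 1.5,
    "handpicked examples": 1.0,
    "unwillingness to test": 2.0,
    "disregard of refuting information": 2.0,
    "built-in subterfuge": 2.5,
    "explanations abandoned without replacement": 1.0,
}

THRESHOLD = 3.0  # invented cut-off; not a value proposed in the literature

def weighted_score(satisfied):
    """Sum the weights of the criteria that a doctrine is judged to satisfy."""
    return sum(WEIGHTS[c] for c in satisfied)

def classify(satisfied):
    """Classify a doctrine by comparing its weighted score to the threshold."""
    return "pseudoscience" if weighted_score(satisfied) >= THRESHOLD else "below threshold"

# Example: a doctrine judged to disregard refuting information and to rely on built-in subterfuge
print(classify(["disregard of refuting information", "built-in subterfuge"]))  # -> pseudoscience
```

The sketch makes the underlying difficulty concrete: the classification depends entirely on the chosen weights and threshold, for which, as noted above, no principled method of determination seems to be available.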
Dupré (1993, 242) proposed that science is best understood as a Wittgensteinian family resemblance concept. This would mean that there is a set of features that are characteristic of science, but although every part of science will have some of these features, we should not expect any science or scientific practice to have all of them. A list of criteria for distinguishing between science and pseudoscience can then be seen as a list of common features rather than a list of criteria that are either necessary or sufficient. Several authors have argued that the family resemblance approach can be useful in science education (Irzik and Nola 2011; Park and Brock 2023). As noted by Resnik and Elliott, on this approach “one cannot immediately dismiss a hypothesis (theory, or field of inquiry) as unscientific because it fails to conform to a particular norm; one must engage in a broader, more holistic assessment of the hypothesis” (Resnik and Elliott 2023, 264). Schindler has questioned this application of the family resemblance approach, arguing that it is “questionable whether similarities can really justify kind membership without some prior relevance determination through the kind in question” (Schindler 2018, 224).
Maarten Boudry has proposed another take on lists with several criteria. He endorses the definitional approach that was presented above in Section 3, according to which pseudoscience is defined in terms of the unreliability of its claims and its attempts to present them as reliable knowledge. Furthermore, he proposes that the latter of these defining characteristics can be operationalized as a set of indicators of pseudoscientificity. “While there are myriad ways in which a theory can fail to be epistemically warranted, there are comparatively fewer ways to create a false impression of epistemic warrant, and these ways are largely similar across different fields of inquiry” (Boudry 2022, 96). A list of ways to create a false impression of reliability can therefore serve as a set of warning signals for the detection of pseudoscience. Such a multicriterial approach has a theoretical basis that makes it more methodical and unified than other lists of criteria for identifying pseudoscience.
6. Two forms of pseudoscience
Some forms of pseudoscience have as their main objective the promotion of a particular theory of their own, whereas others are driven by a desire to combat some scientific theory or branch of science. The former type of pseudoscience has been called pseudo-theory promotion, and the latter science denial(ism) (Hansson 2017). Pseudo-theory promotion is exemplified by homeopathy, astrology, and ancient astronaut theories. The term “denial” was first used about the pseudoscientific claim that the Nazi Holocaust never took place. The phrase “Holocaust denial” was in use already in the early 1980s (Gleberzon 1983). The term “climate change denial” became common around 2005 (e.g. Williams 2005). Other forms of science denial are relativity theory denial, tobacco disease denial, HIV denialism, and vaccination denial.
Many forms of pseudoscience combine pseudo-theory promotion with science denialism. For instance, creationism and its skeletal version “intelligent design” are constructed to support a fundamentalist interpretation of Genesis. However, as practiced today, creationism has a strong focus on attempts to repudiate evolution, and it is therefore predominantly a form of science denialism.
The most prominent difference between pseudo-theory promotion and science denial is their different attitudes to conflicts with established science. Science denialism usually proceeds by producing false controversies with legitimate science, i.e., claims that there is a scientific controversy when there is in fact none. This is an old strategy, applied already in the 1930s by relativity theory deniers (Wazeck 2009, 268–269). It has been much used by tobacco disease deniers sponsored by the tobacco industry (Oreskes and Conway 2010; Dunlap and Jacques 2013), and it is currently employed by climate science denialists (Boykoff and Boykoff 2004; Boykoff 2008). However, whereas the fabrication of fake controversies is a standard tool in science denial, it is seldom if ever used in pseudo-theory promotion. To the contrary, advocates of pseudosciences such as astrology and homeopathy tend to describe their theories as conformable to mainstream science.
7. Some related terms
7.1 Scepticism
The term scepticism (skepticism) has at least three distinct usages that are relevant for the discussion on pseudoscience. First, scepticism is a philosophical method that proceeds by casting doubt on claims usually taken to be trivially true, such as the existence of the external world. This has been, and still is, a highly useful method for investigating the justification of what we in practice consider to be certain beliefs. Secondly, criticism of pseudoscience is often called scepticism. This is the term most commonly used by organisations devoted to the disclosure of pseudoscience. Thirdly, opposition to the scientific consensus in specific areas is sometimes called scepticism. For instance, climate science deniers often call themselves “climate sceptics”.
To avoid confusion, the first of these notions can be specified as “philosophical scepticism”, the second as “scientific scepticism” or “defence of science”, and the third as “science denial(ism)”. Adherents of the first two forms of scepticism can be called “philosophical sceptics” and “science defenders”, respectively. Adherents of the third form can be called “science deniers” or “science denialists”. Torcello (2016) proposed the term “pseudoscepticism” for so-called climate scepticism.
7.2 Resistance to facts
Unwillingness to accept strongly supported factual statements is a traditional criterion of pseudoscience. (See for instance item 5 on the list of seven criteria cited in Section 5.1.) The terms “fact resistance” and “resistance to facts” were already in use in the 1990s, for instance by Arthur Krystal (1999, 8), who complained about a “growing resistance to facts”, consisting in people being “simply unrepentant about not knowing things that do not reflect their interests”. The term “fact resistance” can refer to unwillingness to accept well-supported factual claims whether or not that support originates in science. It is particularly useful in relation to fact-finding practices that are not parts of science. (Cf. Section 2.)
7.3 Conspiracy theories
Generally speaking, conspiracy theories are theories according to which there exists some type of secret collusion for any type of purpose. In practice, the term mostly refers to implausible such theories that are used to explain social facts that have other, considerably more plausible explanations. Many pseudosciences are connected with conspiracy theories. For instance, one of the difficulties facing anti-vaccinationists is that they have to explain the overwhelming consensus among medical experts that vaccines are efficient. This is often done by claims of a conspiracy:
At the heart of the anti-vaccine conspiracy movement [lies] the argument that large pharmaceutical companies and governments are covering up information about vaccines to meet their own sinister objectives. According to the most popular theories, pharmaceutical companies stand to make such healthy profits from vaccines that they bribe researchers to fake their data, cover up evidence of the harmful side effects of vaccines, and inflate statistics on vaccine efficacy. (Jolley and Douglas 2014)
Conspiracy theories have peculiar epistemic characteristics that contribute to their pervasiveness (Keeley 1999). In particular, they are often associated with a type of circular reasoning that allows evidence against the conspiracy to be interpreted as evidence for it.
7.4 Bullshit
The term “bullshit” was introduced into philosophy by Harry Frankfurt, who first discussed it in a 1986 essay and later developed the discussion into a book (Frankfurt 1986; 2005). Frankfurt used the term to describe a type of falsehood that does not amount to lying. A person who lies deliberately chooses not to tell the truth, whereas a person who utters bullshit is not interested in whether what (s)he says is true or false, only in its suitability for his or her purpose. Moberger (2020) has proposed that pseudoscience should be seen as a special case of bullshit, understood as “a culpable lack of epistemic conscientiousness”.
7.5 Epistemic relativism
Epistemic relativism is a term with many meanings; the meaning most relevant in discussions on pseudoscience is denial of the common assumption that there is intersubjective truth in scientific matters, which scientists can and should try to approach. Epistemic relativists claim that (natural) science has no special claim to knowledge, but should be seen “as ordinary social constructions or as derived from interests, political-economic relations, class structure, socially defined constraints on discourse, styles of persuasion, and so on” (Buttel and Taylor 1992, 220). Such ideas have been promoted under different names, including “social constructivism”, the “strong programme”, “deconstructionism”, and “postmodernism”. The distinction between science and pseudoscience has no obvious role in epistemic relativism. Some academic epistemic relativists have actively contributed to the promotion of doctrines such as AIDS denial, vaccination denial, creationism, and climate science denial (Hansson 2020; Pennock 2010). However, the connection between epistemic relativism and pseudoscience is controversial. Some proponents of epistemic relativism have acknowledged that relativism “is almost always more useful to the side with less scientific credibility or cognitive authority” (Scott et al. 1990, 490). Others have denied that epistemic relativism facilitates or encourages standpoints such as denial of anthropogenic climate change or other environmental problems (Burningham and Cooper 1999, 306).
8. Unity in diversity
Kuhn observed that although his own and Popper’s criteria for identifying pseudoscience are profoundly different, they lead to essentially the same conclusions on what should be counted as science and what as pseudoscience (Kuhn 1974, 803). This convergence of theoretically divergent criteria is a quite general phenomenon. Philosophers and other theoreticians of science differ widely in their views on what science is. Nevertheless, there is virtual unanimity in the community of knowledge disciplines on what claims and doctrines should be classified as pseudoscience. There is widespread agreement for instance that creationism, astrology, homeopathy, Kirlian photography, dowsing, ufology, ancient astronaut theory, Holocaust denialism, Velikovskian catastrophism, and climate change denialism are pseudosciences. There are a few points of controversy, for instance concerning the status of various psychodynamic theories, but the general picture is one of consensus rather than controversy on particular issues of demarcation.
It is in a sense paradoxical that so much agreement has been reached on particular issues in spite of almost complete disagreement on the general criteria that these judgments should presumably be based upon. This puzzle is a sure indication that there is still much important philosophical work to be done on the distinction between science and pseudoscience.
Philosophical reflection on pseudoscience has brought forth other interesting problem areas in addition to the definition and identification of pseudoscience. Examples include related distinctions such as that between science and religion, the relationship between science and reliable non-scientific knowledge (for instance everyday knowledge), the scope for justifiable simplifications in science education and popular science, the nature and justification of methodological naturalism in science (Boudry et al. 2010), and the meaning or meaninglessness of the concept of a supernatural phenomenon. Several of these problem areas have not yet received much philosophical attention.
Bibliography
Cited Works
- Agassi, Joseph, 1991. “Popper’s Demarcation of Science Refuted”, Methodology and Science, 24: 1–7.
- Baigrie, B.S., 1988. “Siegel on the Rationality of Science”, Philosophy of Science, 55: 435–441.
- Bartley III, W. W., 1968. “Theories of Demarcation Between Science and Metaphysics”, pp. 40–64 in Imre Lakatos and Alan Musgrave (eds.), Problems in the Philosophy of Science, Proceedings of the International Colloquium in the Philosophy of Science, London 1965 (Volume 3), Amsterdam: North-Holland Publishing Company.
- Blancke, Stefaan, Maarten Boudry and Massimo Pigliucci, 2017. “Why Do Irrational Beliefs Mimic Science? The Cultural Evolution of Pseudoscience”, Theoria, 83(1): 78–97.
- Boudry, Maarten, 2022. “Diagnosing Pseudoscience—by Getting Rid of the Demarcation Problem”, Journal for General Philosophy of Science, 53: 83–101.
- Boudry, Maarten, Stefaan Blancke, and Johan Braeckman, 2010. “How Not to Attack Intelligent Design Creationism: Philosophical Misconceptions About Methodological Naturalism”, Foundations of Science, 15(3): 227–244.
- Boudry, Maarten and Johan Braeckman, 2011. “Immunizing Strategies and Epistemic Defense Mechanisms”, Philosophia, 39(1): 145–161.
- Boykoff, M. T., 2008. “Lost in Translation? United States Television News Coverage of Anthropogenic Climate Change, 1995–2004”, Climatic Change, 86: 1–11.
- Boykoff, M. T. and J. M. Boykoff, 2004. “Balance as Bias: Global Warming and the U.S. Prestige Press”, Global Environmental Change, 14: 125–136.
- Bright, Liam Kofi and Remco Heesen, 2023. “To Be Scientific Is To Be Communist”, Social Epistemology, 37(3): 249–258.
- Bunge, Mario, 1982. “Demarcating Science from Pseudoscience”, Fundamenta Scientiae, 3: 369–388.
- –––, 2001. “Diagnosing pseudoscience”, in Mario Bunge, Philosophy in Crisis. The Need for Reconstruction, Amherst, N.Y.: Prometheus Books, pp. 161–189.
- Burningham, K., and G. Cooper, 1999. “Being Constructive: Social Constructionism and the Environment”, Sociology, 33(2): 297–316.
- Buttel, Frederick H. and Peter J. Taylor, 1992. “Environmental Sociology and Global Environmental Change: A Critical Assessment”, Society and Natural Resources, 5(3): 211–230.
- Carlson, Shawn, 1985. “A Double Blind Test of Astrology”, Nature, 318: 419–425.
- Cioffi, Frank, 1985. “Psychoanalysis, Pseudoscience and Testability”, pp 13–44 in Gregory Currie and Alan Musgrave, (eds.) Popper and the Human Sciences, Dordrecht: Martinus Nijhoff Publishers.
- Cook, John, Naomi Oreskes, Peter T. Doran, William RL Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, et al., 2016. “Consensus on Consensus: A Synthesis of Consensus Estimates on Human-Caused Global Warming”, Environmental Research Letters, 11: 048002.
- Culver, Roger and Ianna, Philip, 1988. Astrology: True or False, Buffalo: Prometheus Books.
- Dawes, Gregory W., 2018. “Identifying Pseudoscience: A Social Process Criterion”, Journal for General Philosophy of Science, 49(3): 283–298.
- Debray, Stéphanie. 2023. “La Définition de la Pseudoscience chez Sven Ove Hansson: Enjeux, Limites, Perspectives”, Lato Sensu, 10(1): 13–23.
- Derksen, A.A., 1993. “The Seven Sins of Pseudoscience”, Journal for General Philosophy of Science, 24: 17–42.
- –––, 2001. “The Seven Strategies of the Sophisticated Pseudoscience: A Look into Freud’s Rhetorical Tool Box”, Journal for General Philosophy of Science, 32: 329–350.
- Dolby, R.G.A., 1987. “Science and Pseudoscience: The Case of Creationism”, Zygon, 22: 195–212.
- Dunlap, Riley E., and Peter J. Jacques, 2013. “Climate Change Denial Books and Conservative Think Tanks: Exploring the Connection”, American Behavioral Scientist, 57(6): 699–731.
- Dupré, John, 1993. The Disorder of Things: Metaphysical Foundations of the Disunity of Science, Cambridge, MA: Harvard University Press.
- Dutch, Steven I., 1982. “Notes on the Nature of Fringe Science”, Journal of Geological Education, 30: 6–13.
- Fasce, Angelo, 2017. “What Do We Mean When We Speak of Pseudoscience? The Development of a Demarcation Criterion Based on the Analysis of Twenty-one Previous Attempts”, Disputatio. Philosophical Research Bulletin, 6(7): 459–488.
- Feleppa, Robert, 1990. “Kuhn, Popper, and the Normative Problem of Demarcation”, pp. 140–155 in Patrick Grim (ed.) Philosophy of Science and the Occult, 2nd edition, Albany: State University of New York Press.
- Fernandez-Beanato, Damian, 2020a. “Cicero’s Demarcation of Science: A Report of Shared Criteria”, Studies in History and Philosophy of Science (Part A), 83: 97–102.
- –––, 2020b. “The Multicriterial Approach to the Problem of Demarcation”, Journal for General Philosophy of Science, 51: 375–390.
- Frankfurt, Harry G., 1986. “On Bullshit”, Raritan, 6(2): 81–100.
- –––, 2005. On Bullshit, Princeton: Princeton University Press.
- Freudenburg, William R., Robert Gramling and Debra J. Davidson, 2008. “Scientific Certainty Argumentation Methods (SCAMs): Science and the Politics of Doubt”, Sociological Inquiry, 78(1): 2–38.
- Fuller, Steve, 1985. “The Demarcation of Science: a Problem Whose Demise Has Been Greatly Exaggerated”, Pacific Philosophical Quarterly, 66: 329–341.
- Gardner, Martin, 1957. Fads and Fallacies in the Name of Science, New York: Dover; expanded version of his In the Name of Science, 1952.
- Gleberzon, William, 1983. “Academic Freedom and Holocaust Denial Literature: Dealing With Infamy”, Interchange, 14(4): 62–69.
- Glymour, Clark and Stalker, Douglas, 1990. “Winning Through Pseudoscience”, pp 92–103 in Patrick Grim (ed.) Philosophy of Science and the Occult, 2nd edition, Albany: State University of New York Press.
- Grove, J.W., 1985. “Rationality at Risk: Science Against Pseudoscience”, Minerva, 23: 216–240.
- Gruenberger, Fred J., 1964. “A Measure for Crackpots”, Science, 145: 1413–1415.
- Guldentops, Guy, 2020. “Nicolaus Ellenbog’s ‘Apologia for the Astrologers’: A Benedictine’s View on Astral Determinism”, Bulletin de Philosophie Médiévale, 62: 251–334.
- Hansson, Sven Ove, 1983. Vetenskap och ovetenskap, Stockholm: Tiden.
- –––, 1996. “Defining Pseudoscience”, Philosophia Naturalis, 33: 169–176.
- –––, 2006. “Falsificationism Falsified”, Foundations of Science, 11: 275–286.
- –––, 2013. “Defining Pseudoscience and Science”, pp. 61–77 in Pigliucci and Boudry (eds.) 2013.
- –––, 2017. “Science Denial as a Form of Pseudoscience”, Studies in History and Philosophy of Science, 63: 39–47.
- –––, 2018. “How Connected are the Major Forms of Irrationality? An Analysis of Pseudoscience, Science Denial, Fact Resistance and Alternative Facts”, Mètode Science Study Journal, 8: 125–131.
- –––, 2020. “Social Constructivism and Climate Science Denial”, European Journal for Philosophy of Science, 10: 37.
- –––, 2025. “Demarcating, Defining and Diagnosing Pseudoscience”, Philosophy of Science, in press.
- Hecht, David K., 2018. “Pseudoscience and the Pursuit of Truth”, in Allison B. Kaufman and James C. Kaufman (eds.) Pseudoscience. The Conspiracy Against Science, Cambridge, Mass.: MIT Press, pp. 3–20.
- Hirvonen, Ilmari and Janne Karisto, 2022. “Demarcation Without Dogmas”, Theoria, 88(3): 701–720.
- Hornsey, Matthew J., 2020. “Why Facts Are Not Enough: Understanding and Managing the Motivated Rejection of Science”, Current Directions in Psychological Science, 29(6): 583–591.
- Hoyningen-Huene, Paul, 2013. Systematicity. The Nature of Science, Oxford: Oxford University Press.
- Irzik, Gürol, and Robert Nola, 2011. “A Family Resemblance Approach to the Nature of Science for Science Education”, Science and Education, 20(7): 591–607.
- Jolley, Daniel, and Karen M. Douglas, 2014. “The Effects of Anti-Vaccine Conspiracy Theories on Vaccination Intentions”, PloS One, 9(2): e89177.
- Keeley, Brian L., 1999. “Of Conspiracy Theories”, The Journal of Philosophy, 96(3): 109–126.
- Kitcher, Philip, 1982. Abusing Science. The Case Against Creationism, Cambridge, MA: MIT Press.
- Krystal, Arthur, 1999. “At Large and at Small: What Do You Know?”, American Scholar, 68(2): 7–13.
- Kuhn, Thomas S., 1974. “Logic of Discovery or Psychology of Research?”, pp. 798–819 in P.A. Schilpp, The Philosophy of Karl Popper (The Library of Living Philosophers, Volume 14, Book 2), La Salle: Open Court.
- Lakatos, Imre, 1970. “Falsification and the Methodology of Scientific Research Programmes”, pp. 91–197 in Imre Lakatos and Alan Musgrave (eds.), Criticism and the Growth of Knowledge, Cambridge: Cambridge University Press.
- –––, 1974a. “Popper on Demarcation and Induction”, pp. 241–273 in P.A. Schilpp, The Philosophy of Karl Popper (The Library of Living Philosophers, Volume 14, Book 1). La Salle: Open Court.
- –––, 1974b. “Science and Pseudoscience”, Conceptus, 8: 5–9.
- –––, 1981. “Science and Pseudoscience”, pp. 114–121 in S. Brown, et al. (eds.) Conceptions of Inquiry: A Reader, London: Methuen.
- Langmuir, Irving, [1953] 1989. “Pathological Science”, Physics Today, 42(10): 36–48.
- Laudan, Larry, 1983. “The Demise of the Demarcation Problem”, in R.S. Cohen and L. Laudan (eds.), Physics, Philosophy, and Psychoanalysis, Dordrecht: Reidel, pp. 111–127.
- Lewandowsky, Stephan, Toby D. Pilditch, Jens K. Madsen, Naomi Oreskes, and James S. Risbey, 2019. “Influence and Seepage: An Evidence-Resistant Minority Can Affect Public Opinion and Scientific Belief Formation”, Cognition, 188: 124–139.
- Liebenberg, L., 2013. The Origin of Science. The Evolutionary Roots of Scientific Reasoning and Its Implications for Citizen Science, Cape Town: CyberTracker.
- Lugg, Andrew, 1987. “Bunkum, Flim-Flam and Quackery: Pseudoscience as a Philosophical Problem”, Dialectica, 41: 221–230.
- –––, 1992. “Pseudoscience as Nonsense”, Methodology and Science, 25: 91–101.
- Mahner, Martin, 2007. “Demarcating Science from Non-Science”, pp. 515–575 in Theo Kuipers (ed.) Handbook of the Philosophy of Science: General Philosophy of Science – Focal Issues, Amsterdam: Elsevier.
- –––, 2013. “Science and Pseudoscience. How to Demarcate After the (Alleged) Demise of the Demarcation Problem”, pp. 29–43 in Pigliucci and Boudry (eds.) 2013.
- Mayo, Deborah G., 1996. “Ducks, Rabbits and Normal Science: Recasting the Kuhn’s-Eye View of Popper’s Demarcation of Science”, British Journal for the Philosophy of Science, 47: 271–290.
- Merton, Robert K., [1942] 1973. “Science and Technology in a Democratic Order”, Journal of Legal and Political Sociology, 1: 115–126, 1942; reprinted as “The Normative Structure of Science”, in Robert K. Merton, The Sociology of Science. Theoretical and Empirical Investigations, Chicago: University of Chicago Press, pp. 267–278.
- Moberger, Victor, 2020. “Bullshit, Pseudoscience and Pseudophilosophy”, Theoria, 86(5): 595–611.
- Morris, Robert L., 1987. “Parapsychology and the Demarcation Problem”, Inquiry, 30: 241–251.
- Nieminen, Petteri and Anne-Mari Mustonen, 2014. “Argumentation and Fallacies in Creationist Writings Against Evolutionary Theory”, Evolution: Education and Outreach, 7: 1–14.
- Oreskes, Naomi, 2019. “Systematicity is Necessary But Not Sufficient: On the Problem of Facsimile Science”, Synthese, 196(3): 881–905.
- Oreskes, Naomi and Erik M. Conway, 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming, New York: Bloomsbury Press.
- Park, Wonyong and Richard Brock, 2023. “Is There a Limit to Resemblances? Teaching About Science and Pseudoscience from a Family Resemblance Perspective”, Science and Education, 32(5): 1265–1286.
- Pennock, Robert T., 2010. “The Postmodern Sin of Intelligent Design Creationism” Science and Education, 19(6–8): 757–778.
- –––, 2011. “Can’t Philosophers Tell the Difference Between Science and Religion?: Demarcation Revisited”, Synthese, 178(2): 177–206.
- Pigliucci, Massimo, 2013. “The Demarcation Problem. A (Belated) Response to Laudan”, in Pigliucci and Boudry (eds.) 2013, pp. 9–28.
- Pigliucci, Massimo and Maarten Boudry (eds.), 2013. Philosophy of Pseudoscience. Reconsidering the Demarcation Problem, Chicago: University of Chicago Press.
- Popper, Karl R., 1933. “Ein Kriterium des empirischen Charakters theoretischer Systeme”, Erkenntnis, 3: 426–428.
- –––, 1935. Logik der Forschung. Zur Erkenntnistheorie der modernen Naturwissenschaft, Wien: Springer.
- –––, 1957. “Philosophy of Science: a Personal Report”, in Cecil Alec Mace (ed.) British Philosophy in the Mid-Century, London: George Allen and Unwin, pp. 155–191.
- –––, 1959. The Logic of Scientific Discovery, London: Hutchinson.
- –––, 1962. Conjectures and Refutations. The Growth of Scientific Knowledge, New York: Basic Books.
- –––, 1974. “Reply to My Critics”, in P.A. Schilpp, The Philosophy of Karl Popper (The Library of Living Philosophers, Volume 14, Book 2), La Salle: Open Court, pp. 961–1197.
- –––, 1976. Unended Quest, London: Fontana.
- –––, 1978. “Natural Selection and the Emergence of the Mind”, Dialectica, 32: 339–355.
- –––, [1989] 1994. “Falsifizierbarkeit, zwei Bedeutungen von”, pp. 82–86 in Helmut Seiffert and Gerard Radnitzky, Handlexikon zur Wissenschaftstheorie, 2nd edition, München: Ehrenwirth GmbH Verlag.
- Powell, James, 2019. “Scientists Reach 100% Consensus on Anthropogenic Global Warming”, Bulletin of Science, Technology and Society, 37(4): 183–184.
- Prescott, Victor and Gillian D. Triggs, 2008. International Frontiers and Boundaries. Law, Politics and Geography, Leiden: Martinus Nijhoff.
- Radner, Daisie and Michael Radner, 1982. Science and Unreason, Belmont CA: Wadsworth.
- Reisch, George A., 1998. “Pluralism, Logical Empiricism, and the Problem of Pseudoscience”, Philosophy of Science, 65: 333–348.
- Resnik, David B., and Kevin C. Elliott, 2023. “Science, Values, and the New Demarcation Problem”, Journal for General Philosophy of Science 54(2): 259–286.
- Rothbart, Daniel, 1990. “Demarcating Genuine Science from Pseudoscience”, in Patrick Grim (ed.) Philosophy of Science and the Occult, 2nd edition, Albany: State University of New York Press, pp. 111–122.
- Ruse, Michael, 1977. “Karl Popper’s Philosophy of Biology”, Philosophy of Science, 44: 638–661.
- –––, 2000. “Is Evolutionary Biology a Different Kind of Science?”, Aquinas, 43: 251–282.
- Ruse, Michael (ed.), 1996. But is it science? The Philosophical Question in the Creation/Evolution Controversy, Amherst, NY: Prometheus Books.
- –––, 2021. “The Arkansas Creationism Trial Forty Years On”, in Zuzana Parusniková and David Merritt (eds.) Karl Popper’s Science and Philosophy, Cham: Springer, pp. 257–276.
- Schindler, Samuel, 2018. Theoretical Virtues in Science: Uncovering Reality Through Theory, Cambridge: Cambridge University Press.
- Scott, P., Richards, E., and Martin, B., 1990. “Captives of Controversy. The Myth of the Neutral Social Researcher in Contemporary Scientific Controversies”, Science, Technology, and Human Values, 15(4): 474–494.
- Settle, Tom, 1971. “The Rationality of Science versus the Rationality of Magic”, Philosophy of the Social Sciences, 1: 173–194.
- Siitonen, Arto, 1984. “Demarcation of Science From the Point of View of Problems and Problem-Stating”, Philosophia Naturalis, 21: 339–353.
- Thagard, Paul R., 1978. “Why Astrology Is a Pseudoscience”, Philosophy of Science Association (PSA 1978), 1: 223–234.
- –––, 1988. Computational Philosophy of Science, Cambridge, MA: MIT Press.
- Thurs, Daniel P. and Ronald L. Numbers, 2013. “Science, Pseudoscience and Science Falsely So-Called”, in Pigliucci and Boudry (eds.) 2013, pp. 121–144.
- Torcello, Lawrence, 2016. “The Ethics of Belief, Cognition, and Climate Change Pseudoskepticism: Implications for Public Discourse”, Topics in Cognitive Science, 8: 19–48.
- Vollmer, Gerhard, 1993. Wissenschaftstheorie im Einsatz, Beiträge zu einer selbstkritischen Wissenschaftsphilosophie, Stuttgart: Hirzel Verlag.
- Wazeck, Milena, 2009. Einsteins Gegner. Die öffentliche Kontroverse um die Relativitätstheorie in den 1920er Jahren, Frankfurt: Campus.
- Williams, Nigel, 2005. “Heavyweight Attack on Climate-Change Denial”, Current Biology, 15(4): R109–R110.
Philosophically Informed Literature on Pseudosciences and Contested Doctrines
Anthroposophy
- Hansson, Sven Ove, 1991. “Is Anthroposophy Science?”, Conceptus, 25: 37–49.
- –––, 2022. “Anthroposophical climate science denial”, Critical Research on Religion, 10(3): 281–297.
- Staudenmaier, Peter, 2014. Between Occultism and Nazism. Anthroposophy and the Politics of Race in the Fascist Era, Leiden: Brill.
Astrology
- James, Edward W., 1990. “On Dismissing Astrology and Other Irrationalities”, in Patrick Grim (ed.) Philosophy of Science and the Occult, 2nd edition, Albany: State University of New York Press, pp. 28–36.
- Kanitscheider, Bernulf, 1991. “A Philosopher Looks at Astrology”, Interdisciplinary Science Reviews, 16: 258–266.
- Thagard, Paul R., 1978. “Why Astrology Is a Pseudoscience”, Philosophy of Science Association (PSA 1978), 1: 223–234.
Climate science denialism
- Afzali, Mansoor, Gonul Colak and Sami Vähämaa, 2025. “Climate Change Denial and Corporate Environmental Responsibility”, Journal of Business Ethics, 196(1): 31–59.
- Hansson, Sven Ove, 2020. “Social constructionism and climate science denial”, European Journal for Philosophy of Science, 10(3): 1–27.
- McKinnon, Catriona, 2016. “Should We Tolerate Climate Change Denial?”, Midwest Studies in Philosophy, 40(1): 205–216.
- Slater, Matthew H., Joanna K. Huxster, Julia E. Bresticker and Victor LoPiccolo, 2020. “Denialism as Applied Skepticism: Philosophical and Empirical Considerations”, Erkenntnis, 85(4): 871–890.
- Torcello, Lawrence, 2016. “The Ethics of Belief, Cognition, and Climate Change Pseudoskepticism: Implications for Public Discourse”, Topics in Cognitive Science, 8(1): 19–48.
Creationism
- Kitcher, Philip, 1982. Abusing Science. The Case Against Creationism, Cambridge, MA: MIT Press.
- Lambert, Kevin, 2006. “Fuller’s folly, Kuhnian paradigms, and intelligent design”, Social Studies of Science, 36(6): 835–842.
- Pennock, Robert T., 2010. “The postmodern sin of intelligent design creationism”, Science and Education, 19(6–8): 757–778.
- –––, 2011. “Can’t philosophers tell the difference between science and religion?: Demarcation revisited”, Synthese, 178(2): 177–206.
- Pennock, Robert T. and Michael Ruse (eds.), 2009. But is it science? The philosophical question in the creation/evolution controversy, updated edition, Prometheus Books.
- Pigliucci, Massimo, 2007. “The evolution-creation wars: why teaching more science just is not enough”, McGill Journal of Education, 42(2): 285–306.
Feng Shui
- Matthews, Michael R., 2019. Feng Shui: Teaching about science and pseudoscience, Springer.
Holocaust denial
- Lipstadt, Deborah E., 1993. Denying the Holocaust: the growing assault on truth and memory, New York: Free Press.
Parapsychology
- Edwards, Paul, 1996. Reincarnation: A Critical Examination, Amherst NY: Prometheus.
- Flew, Antony, 1980. “Parapsychology: Science or Pseudoscience”, Pacific Philosophical Quarterly, 61: 100–114.
- Hales, Steven D., 2001. “Evidence and the afterlife”, Philosophia, 28(1–4): 335–346.
Psychoanalysis
- Boudry, Maarten, and Filip Buekens, 2011. “The epistemic predicament of a pseudoscience: Social constructivism confronts Freudian psychoanalysis”, Theoria, 77(2): 159–179.
- Cioffi, Frank, 1998. Freud and the Question of Pseudoscience. Chicago: Open Court.
- –––, 2013. “Pseudoscience. The case of Freud’s sexual etiology of the neuroses”, in Pigliucci and Boudry (eds.) 2013, pp. 321–340.
- Grünbaum, Adolf, 1979. “Is Freudian psychoanalytic theory pseudoscientific by Karl Popper’s criterion of demarcation?”, American Philosophical Quarterly, 16: 131–141.
Quackery and non–scientific medicine
- Jerkert, Jesper, 2013. “Why alternative medicine can be scientifically evaluated. Countering the evasions of pseudoscience”, in Pigliucci and Boudry (eds.) 2013, pp. 305–320.
- Smith, Kevin, 2012a. “Against homeopathy–a utilitarian perspective”, Bioethics, 26(8): 398–409.
- –––, 2012b. “Homeopathy is unscientific and unethical”, Bioethics, 26(9): 508–512.
Academic Tools
- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.
Other Internet Resources
- The Skeptic’s Dictionary, contains information, links and references about a wide variety of contested claims and phenomena.
- Committee for Skeptical Inquiry, the major international organization promoting scientific investigations of contested phenomena.
- Quackwatch, devoted to critical assessment of scientifically unvalidated health claims.
- Views of modern philosophers, a summary of the views that modern philosophers have taken on astrology, expanded from an article published in Correlation: Journal of Research into Astrology, 14/2 (1995): 33–34.