Pathological Science
or, Some Lists Useful in Dealing with Pseudoscience
There is good science, there is pathological science, and then there is
pseudoscience (which is, simply, a theory, methodology, or practice that is
considered to be without scientific foundation). The several lists below were
culled from different sources and are presented as tools for discernment. Though
most of the rules apply to experimental sciences, physics in particular, the
basic tenets can be generalized without much difficulty to wider applications.
The term "pathological science" was coined by Nobel laureate chemist Irving
Langmuir in a lecture he gave at General Electric's Knolls Atomic Power
Laboratory in 1953. Langmuir offered several examples of pathological science,
and concluded that:
These are cases where there is no dishonesty involved but where people are
tricked into false results by a lack of understanding about what human beings
can do to themselves in the way of being led astray by subjective effects,
wishful thinking or threshold interactions. These are examples of pathological
science. These are things that attracted a great deal of attention. Usually
hundreds of papers have been published on them. Sometimes they have lasted for
15 or 20 years and then gradually have died away. Now here are the
characteristic symptoms of pathological science:
 (1) The maximum effect that is observed is produced by a causative agent of
barely detectable intensity. For example, you might think that if one onion
root would affect another due to ultraviolet light then by putting on an
ultraviolet source of light you could get it to work better. Oh no! Oh no! It
had to be just the amount of intensity that's given off by an onion root. Ten
onion roots wouldn't do any better than one and it didn't make any difference
about the distance of the source. It didn't follow any inverse square law or
anything as simple as that. And so on. In other words, the effect is
independent of the intensity of the cause. That was true in the mitogenetic
rays and it was true in the N rays. Ten bricks didn't have any more effect
than one. It had to be of low intensity. We know why it had to be of low
intensity: so that you could fool yourself so easily. Otherwise, it wouldn't
work. Davis-Barnes worked just as well when the filament was turned off. They
 (2) Another characteristic thing about them all is that these observations
are near the threshold of visibility of the eyes. Any other sense, I suppose,
would work as well. Or many measurements are necessary because of the very low
statistical significance of the results. With the
mitogenetic rays particularly, [people] started out by seeing something that
was bent. Later on, they would take a hundred onion roots and expose them to
something, and they would get the average position of all of them to see
whether the average had been affected a little bit ... Statistical
measurements of a very small ... were thought to be significant if you took
large numbers. Now the trouble with that is this. [Most people have a habit,
when taking] measurements of low significance, [of finding] a means of
rejecting data. They are right at the threshold value and there are many
reasons why [they] can discard data. Davis and Barnes were doing that right
along. If things were doubtful at all, why, they would discard them or not
discard them depending on whether or not they fit the theory. They didn't know
that, but that's the way it worked out.
 (3) There are claims of great accuracy. Barnes was going to get the Rydberg
constant more accurately than the spectroscopists could. Great sensitivity or
great specificity: we'll come across that particularly in the Allison effect.
 (4) Fantastic theories contrary to experience. In the Bohr theory, the
whole idea of an electron being captured by an alpha particle when the alpha
particles aren't there, just because the waves are there, [isn't] a very
plausible one.
 (5) Criticisms are met by ad hoc excuses thought up on the spur of the
moment. They always had an answer. Always.
 (6) The ratio of the supporters to the critics rises up somewhere near 50%
and then falls gradually to oblivion. The critics couldn't reproduce the
effects. Only the supporters could do that. In the end, nothing was salvaged.
Why should there be? There isn't anything there. There never was. That's
characteristic of the effect.
(From I. Langmuir, "Pathological Science: scientific
studies based on non-existent phenomena," Physics Today, October 1989,
pp. 36-47. Transcribed and edited by R. N. Hall.)
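Langmuir's account of Davis and Barnes discarding doubtful readings can be made concrete with a short simulation. The numbers and the discard rule below are illustrative, not taken from the original experiments: the point is that averaging pure noise while preferentially rejecting readings that contradict the hoped-for effect manufactures a positive result.

```python
import random

random.seed(42)

def filtered_mean(n=10000, discard_prob=0.5):
    """Average n noise-only readings (the true effect is zero),
    discarding 'doubtful' readings that contradict the theory."""
    kept = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)  # pure noise: there is no effect
        # Readings that argue against the hoped-for positive effect
        # are deemed doubtful and sometimes thrown away.
        if x < 0 and random.random() < discard_prob:
            continue
        kept.append(x)
    return sum(kept) / len(kept)

honest_mean = sum(random.gauss(0.0, 1.0) for _ in range(10000)) / 10000
print(f"honest mean:   {honest_mean:+.3f}")    # hovers near zero
print(f"filtered mean: {filtered_mean():+.3f}")  # a spurious 'effect'
```

No single rejection is dishonest in isolation, which is exactly Langmuir's point: each borderline reading offers some plausible reason to be set aside, and the bias accumulates invisibly.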
A. Cromer, commenting on the above characteristics of pathological science,
made the following observations:
(1) Scientists themselves are often poor judges of the scientific process.
(2) Scientific research is very difficult. Anything that can go wrong will
go wrong.
(3) Science isn't dependent on the honesty or wisdom of scientists.
(4) Real discoveries of phenomena contrary to all previous scientific
experience are very rare, while fraud, fakery, foolishness, and error
resulting from overenthusiasm and delusion are all too common.
Peter Sturrock, Professor of Space Science at Stanford University in
California, offered the following as guidelines to those dealing with anomalous
phenomena:
(1) In studying any phenomenon, face up to the strongest evidence you can
find, even if it is in conflict with current orthodoxies.
(2) Go to the original sources for your data. Do not trust secondary
sources.
(3) Deal with "degrees of belief", which can be conveniently characterized
by probabilities. It is important to avoid assigning probability P=0 (complete
disbelief) or P=1 (complete certainty) to any proposition since, if you adopt
either of these values, that value can never be changed no matter how much
evidence you subsequently receive.
(4) Focus on evidence and testing.
(5) Subdivide the work into categories so different people take on
different parts of the problem.
(6) Where possible, work in teams: first, because a combination of expertise
may be required, and second, because a team is more likely to be
self-correcting than someone working alone.
(7) In theoretical analyses, list all assumptions. This seems a simple,
innocuous request, yet it will not always be easy to put into practice.
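Sturrock's third guideline is Bayes' rule at work: an update can move any probability strictly between 0 and 1, but the endpoints are fixed points that no amount of evidence can budge. A minimal sketch, with invented likelihood numbers for illustration:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# A skeptical but nonzero prior can still be moved by evidence...
p = 0.01
for _ in range(5):
    p = update(p, 0.9, 0.1)  # each observation is 9x likelier if H holds
print(f"prior 0.01 after 5 updates: {p:.4f}")  # close to 1

# ...but P = 0 is frozen: the numerator stays zero forever.
p = 0.0
for _ in range(5):
    p = update(p, 0.9, 0.1)
print(f"prior 0.00 after 5 updates: {p:.4f}")  # still exactly 0
```

The same lock-in holds symmetrically at P = 1, which is why Sturrock warns against assigning complete certainty as well as complete disbelief.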
Langmuir's observations do not imply that scientists should avoid
controversial topics, but rather that any scientist must exercise prudence and
caution, tentativeness, an appreciation of contextual implications within the
greater scope of systematized scientific knowledge, and an awareness of the
imperfections of human nature which can snare even the wisest of researchers in
a net of fallacy.
Proper scientific methodology usually requires four steps:
(1) Observation. Objectivity is very important at this stage.
(2) The induction of general hypotheses or possible explanations for what
has been observed. Here one must be imaginative yet logical. Occam's Razor
should be considered but need not be strictly applied: Entia non sunt
multiplicanda (entities should not be multiplied unnecessarily), or as it is
usually paraphrased, the simplest hypothesis is the best.
(3) The deduction of corollary assumptions that must be true if the
hypothesis is true. Specific testable predictions are made based on the
hypothesis.
(4) Testing the hypothesis by investigating and confirming the deduced
implications. Observation is repeated and data is gathered with the goal of
confirming or falsifying the initial hypothesis.
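The four steps can be walked through on a toy case. Suppose (a made-up observation, used only for illustration) that 100 tosses of a coin yield 62 heads; the hypothesis of a fair coin deduces that so lopsided a count should be rare, and an exact binomial test measures how rare:

```python
from math import comb

def binom_two_sided_p(n, k, p=0.5):
    """Exact two-sided p-value: the total probability, under the
    hypothesis, of every outcome no more likely than the one observed."""
    def pmf(i):
        return comb(n, i) * p**i * (1 - p)**(n - i)
    observed = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed + 1e-12)

# (1) Observation: 62 heads in 100 tosses.
# (2) Hypothesis: the coin is fair, P(heads) = 0.5.
# (3) Deduction: if so, counts far from 50 should be improbable.
# (4) Test: how improbable is a result at least this extreme?
print(f"p-value: {binom_two_sided_p(100, 62):.4f}")  # about 0.02
```

A small p-value does not prove the coin is biased; it only says the observation is improbable under the hypothesis, which is the sense in which step (4) can falsify but never finally confirm.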
Pseudoscience often omits the last two steps above. The boundary between
pathological science and outright pseudoscience is not distinct. Both are
usually marked by a strenuous objection to allowing others to try to prove one's
fantastic theories to be wrong, while immediately meeting every objection with
ad hoc hypotheses, denials of conflicting data, and ad hominem attacks. The
importance of peer review is rejected wholesale and criticism is often
discounted altogether or at best vaguely addressed.
Charles Babbage (1792-1871), professor of mathematics at Cambridge
University, described three forms of outright scientific dishonesty with regard
to the handling of data:
(1) Trimming: the smoothing of irregularities to make the data look
extremely accurate and precise.
(2) Cooking: retaining only those results that fit the theory while
discarding others that do not.
(3) Forging: inventing some or all of the research data that are reported,
and even reporting experiments or procedures to obtain those data that were
never performed.
Kenneth Feder listed six basic motives for scientific fraud:
(1) Financial gain. Books and television programs proposing outlandish
theories earn millions each year.
(2) The pursuit of fame. The desire to find the first, the oldest, the
long-lost, and the thought-to-have-been-mythological provides incentive for
the invention, alteration, or exaggeration of data.
(3) Nationalistic or racial pride. Many attempt through deception to
glorify their ancestors, and by extension themselves, by attributing to them
grand and important achievements that are in reality undeserved.
(4) Religious interests. Adherents to particular religions sometimes
succumb to the temptation to falsely prove through archaeology the validity of
their beliefs.
(5) The desire for a more "romantic" past. There are those who reject what
they view as "mundane" theories in favor of those that are more exciting, such
as the proposal of lost continents, ancient astronauts, and advanced ancient
civilizations.
(6) Mental instability. Some unsound claims are the fruits of unsound
(Liberally edited and paraphrased. K. Feder, Frauds,
Myths, and Mysteries: Science and Pseudoscience in Archaeology, Mayfield
Publishing.)
Michael Shermer listed 25 fallacies that lead us to believe weird things:
(1) Theory influences observation. Heisenberg wrote, "What we observe is
not nature itself but nature exposed to our method of questioning." Our
perception of reality is influenced by the theories framing our examination of
it.
(2) The observer changes the observed. The act of studying an event can
change it, an effect particularly profound in the social sciences, which is
why psychologists use blind and double-blind controls.
(3) Equipment constructs results. How we make and understand measurements
is highly influenced by the equipment we use.
(4) Anecdotes do not make science. Stories recounted in support of a claim
are not scientific without corroborative evidence from other sources or
physical proof of some sort.
(5) Scientific language does not make a science. Dressing up a belief in
jargon, often with no precise or operational definitions, means nothing
without evidence, experimental testing, and corroboration.
(6) Bold statements do not make claims true. The more extraordinary the
claim, the more extraordinarily well-tested the evidence must be.
(7) Heresy does not equal correctness. Being laughed at by the mainstream
does not mean one is right. The scientific community cannot be expected to
test every fantastic claim that comes along, especially when so many are
logically inconsistent. If you want to do science, you have to learn to play
the game of science. This involves exchanging data and ideas with colleagues
informally, and formally presenting results in conference papers,
peer-reviewed journals, books, and the like.
(8) Burden of proof. It is the person who makes the extraordinary claim who
has the burden of proving the validity of the evidence.
(9) Rumors do not equal reality. Repeated tales are not of necessity true.
(10) Unexplained is not inexplicable. Many people think that if they
themselves cannot explain something, it must be inexplicable and therefore
a true mystery of the paranormal.
(11) Failures are rationalized. In science, the value of negative findings
is high, and honest scientists will readily admit their mistakes.
Pseudoscientists ignore or rationalize failures.
(12) After-the-fact reasoning. Also known as "post hoc, ergo propter
hoc," literally "after this, therefore because of this." At its basest
level, this is a form of superstition. As Hume taught us, the fact that two
events follow each other in sequence does not mean they are connected
causally. Correlation does not mean causation.
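The danger is sharpest with trends: two quantities that each drift over time will often correlate strongly even when they are completely independent. This is a standard statistical illustration; the walk length and trial counts below are arbitrary choices.

```python
import random

random.seed(7)

def random_walk(n):
    """Cumulative sum of n independent +/-1 steps: a drifting trend."""
    level, path = 0, []
    for _ in range(n):
        level += random.choice((-1, 1))
        path.append(level)
    return path

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Count how often two unrelated trends look strongly 'connected'.
strong = sum(abs(corr(random_walk(200), random_walk(200))) > 0.5
             for _ in range(500))
print(f"{strong}/500 independent pairs correlate beyond +/-0.5")
```

Finding a correlation is therefore only the start of an argument; a causal claim needs a mechanism and a controlled test.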
(13) Coincidence. In the paranormal world, coincidences are often seen as
deeply significant. As the behavioral psychologist B.F. Skinner proved in the
laboratory, the human mind seeks relationships between events and often finds
them even when they are not present.
(14) Representativeness. As Aristotle said, "The sum of the coincidences
equals certainty." We forget most of the insignificant coincidences and
remember the meaningful ones. We must always remember the larger context in
which a seemingly unusual event occurs, and we must always analyze unusual
events for their representativeness of their class of phenomena.
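How common "unlikely" coincidences really are can be computed directly. A classic example (a standard illustration, not drawn from Shermer's text) is the birthday problem: in a group of only 23 people, a shared birthday is more likely than not.

```python
from math import prod

def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday,
    assuming 365 equally likely days and independent birthdays."""
    return 1.0 - prod((365 - i) / 365 for i in range(n))

for n in (10, 23, 60):
    print(f"{n:3d} people: {p_shared_birthday(n):.3f}")
# 23 people already gives a probability just over one half
```

Intuition judges the coincidence against one particular pair of people; the calculation counts every possible pair, which is the larger context the text warns us to remember.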
(15) Emotive words and false analogies. Emotive words are used to provoke
emotion and sometimes to obscure rationality. Likewise, metaphors and
analogies can cloud thinking with emotion and steer us onto a side path. Like
anecdotes, analogies and metaphors do not constitute proof. They are merely
tools of rhetoric.
(16) Ad ignorantiam. This is an appeal to ignorance or lack of
knowledge, where someone claims that if you cannot disprove a claim it must be
true. In science, belief should come from positive evidence, not a lack of
evidence for or against a claim.
(17) Ad hominem and tu quoque. Literally "to the man" and
"you also," these fallacies redirect the focus from thinking about the idea to
thinking about the person holding the idea. The goal of an ad hominem
attack is to discredit the claimant in hopes that it will discredit the claim.
Similarly for tu quoque. As a defense, the critic is accused of making
the same mistakes attributed to the criticized, and nothing is proved one way
or the other.
(18) Hasty generalization. In logic, the hasty generalization is a form of
improper induction. In life it is called prejudice. In either case,
conclusions are drawn before the facts warrant it.
(19) Overreliance on authorities. We tend to rely heavily on authorities in
our culture, especially if the authority is considered to be highly
intelligent. Authorities, by virtue of their expertise in a field, may have a
better chance of being right in that field, but correctness is certainly not
guaranteed, and their expertise does not necessarily qualify them to draw
conclusions in other areas.
(20) Either-or. Also known as the fallacy of negation or the
false dilemma, this is the tendency to dichotomize the world so that if
you discredit one position, the observer is forced to accept the other. A new
theory needs evidence in favor of it, not just against the opposition.
(21) Circular reasoning. Also known as fallacy of redundancy,
begging the question, or tautology, this occurs when the
conclusion or claim is merely a restatement of one of the premises.
(22) Reductio ad absurdum and the slippery slope. Reductio ad
absurdum is the refutation of an argument by carrying the argument to its
logical end and so reducing it to an absurd conclusion. Surely, if an
argument's consequences are absurd, it must be false. This is not necessarily
so, though sometimes pushing an argument to its limits is a useful exercise in
critical thinking; often this is a way to discover whether a claim has
validity, especially when an experiment testing the actual reduction can be
run. Similarly, the slippery slope fallacy involves constructing a scenario in
which one thing leads ultimately to an end so extreme that the first step
should never be taken.
(23) Effort inadequacies and the need for certainty, control, and
simplicity. Most of us, most of the time, want certainty, want to control our
environment, and want nice, neat, simple explanations. Scientific and critical
thinking does not come naturally. It takes training, experience, and effort.
We must always work to suppress our need to be absolutely certain and in total
control and our tendency to seek the simple and effortless solution to a
problem.
(24) Problem-solving inadequacies. All critical and scientific thinking is,
in a fashion, problem solving. There are numerous psychological disruptions
that cause inadequacies in problem solving. We must all make the effort to
overcome them.
(25) Ideological immunity, or the Planck Problem. In day-to-day life, as in
science, we all resist fundamental paradigm change. Social scientist Jay
Stuart Snelson calls this resistance an ideological immune system:
"educated, intelligent, and successful adults rarely change their most
fundamental presuppositions." As individuals accumulate more knowledge,
theories become more well-founded, and confidence in ideologies is
strengthened. The consequence of this, however, is that we build up an
"immunity" against new ideas that do not corroborate previous ones. Historians
of science call this the Planck Problem, after physicist Max Planck,
who made this observation on what must happen for innovation to occur in
science: "An important scientific innovation rarely makes its way by gradually
winning over and converting its opponents: it rarely happens that Saul becomes
Paul. What does happen is that its opponents gradually die out and that the
growing generation is familiarized with the idea from the beginning."
(Liberally edited and paraphrased. M. Shermer, Why
People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions
of Our Time, W.H. Freeman and Company, 1997)
(Catchpenny Mysteries, © 2000.)