++++++++++++++++++++++++++++++++++++++++
Article #1
MIND & MATTER
July 20, 2012
When Bad Theories Happen to Good Scientists
By MATT RIDLEY
There's a myth out there that has gained the status of a cliché: that scientists love proving themselves wrong, that the first thing they do after constructing a hypothesis is to try to falsify it. Professors tell students that this is the essence of science.
Yet
most scientists behave very differently in practice. They not only become
strongly attached to their own theories; they perpetually look for evidence
that supports rather than challenges their theories. Like defense attorneys
building a case, they collect confirming evidence.
In this they're only human. In all walks of life we look for evidence to support our beliefs, rather than to counter them. This pervasive phenomenon is known to psychologists as "confirmation bias." It is what keeps all sorts of charlatans in business, from religious cults to get-rich-quick schemes. As the philosopher/scientist Francis Bacon noted in 1620: "And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happen much oftener, neglect and pass them by."
Just as hypochondriacs and depressives gather ample evidence that they're ill or ill-fated, ignoring whatever implies they are well or fortunate, so physicians managed to stick with ineffective measures such as bleeding, cupping and purging for centuries, because the natural recovery of the body in most cases provided ample false confirmation of the cures' efficacy. Homeopathy relies on the same phenomenon to this day.
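To make that mechanism concrete, here is a minimal simulation sketch of my own (not from the column): an entirely useless "treatment" applied to patients who mostly recover by themselves. The 80% recovery rate and the patient count are illustrative assumptions.

import random

# A toy simulation: bleeding has no effect, yet recoveries pile up as
# apparent confirmations. The recovery rate and patient count are
# illustrative assumptions, not figures from the column.
random.seed(0)
NATURAL_RECOVERY_RATE = 0.8   # assumed: most ailments resolve on their own
N_PATIENTS = 1000

cures_credited = 0   # patient was bled, then recovered
failures = 0         # patient was bled, then did not recover

for _ in range(N_PATIENTS):
    recovered = random.random() < NATURAL_RECOVERY_RATE  # treatment plays no role
    if recovered:
        cures_credited += 1
    else:
        failures += 1

print(f"'Cures' credited to bleeding: {cures_credited}/{N_PATIENTS}")
print(f"Apparent failures:            {failures}/{N_PATIENTS}")
# With an 80% natural recovery rate, a useless treatment looks about 80%
# effective; the failures, as Bacon said, are neglected and passed by.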
Moreover, though we tell students in school that, as Karl Popper argued, science works by falsifying hypotheses, we teach them the very opposite—to build a case by accumulating evidence in support of an argument.
The phrase "confirmation bias" itself was coined by a British psychologist named Peter Wason in 1960. His classic demonstration of the problem was to give people the triplet of numbers "2-4-6" and ask them to propose other triplets to test what rule the first triplet followed. Most people propose a series of even numbers, such as "8-10-12," and, on being told that yes, these numbers also obey the rule, quickly conclude that the rule is "ascending even numbers." In fact, the rule was simply "ascending numbers." Proposing odd numbers would have been more illuminating.
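Wason's point can be restated as a few lines of code. This is an illustrative reconstruction of the task's logic, not anything from his paper; the probe triplets are my own choices.

def hidden_rule(t):
    """Wason's actual rule: any strictly ascending triplet."""
    return t[0] < t[1] < t[2]

def guessed_rule(t):
    """The subject's premature hypothesis: ascending even numbers."""
    return hidden_rule(t) and all(n % 2 == 0 for n in t)

# Confirming probes (chosen to fit the guess) versus disconfirming ones.
confirming_probes = [(8, 10, 12), (20, 22, 24), (100, 102, 104)]
disconfirming_probes = [(1, 3, 5), (2, 4, 7), (6, 4, 2)]

for probe in confirming_probes + disconfirming_probes:
    agree = hidden_rule(probe) == guessed_rule(probe)
    print(f"{probe}: hidden rule={hidden_rule(probe)}, "
          f"guessed rule={guessed_rule(probe)}, indistinguishable={agree}")
# Every confirming probe leaves the two rules indistinguishable; the odd
# triplet (1, 3, 5) fits the hidden rule but not the guess, exposing it.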
An example of how such reasoning can lead scientists astray was published last year. An experiment had seemed to confirm the Sapir-Whorf hypothesis that language influences perception. It found that people reacted faster when discriminating a green from a blue patch than when discriminating two green patches (of equal dissimilarity) or two blue patches, but that they did so only if the patch was seen in the right visual field, which feeds the brain's left hemisphere, where language resides.
Despite several confirmations by other teams, the result is now known to be a fluke, following a comprehensive series of experiments by Angela Brown, Delwin Lindsey and Kevin Guckes of Ohio State University. Knowing the word for a color difference makes it no quicker to spot.
One of the alarming things about confirmation bias is that it seems to get worse with greater expertise. Lawyers and doctors (but not weather forecasters, who get regularly mugged by reality) become more confident in their judgment as they become more senior, requiring less positive evidence to support their views than negative evidence to drop them.
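One way to see why that asymmetry matters is a toy Bayesian model, offered as my own sketch rather than anything from the studies the column alludes to; the likelihood ratios and the discount factor are assumptions.

def update(prior, likelihood_ratio):
    """One Bayesian update on the probability the expert's view is right."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

CONFIRM_LR = 2.0     # assumed: each confirming observation doubles the odds
DISCONFIRM_LR = 0.5  # a fair judge would let each disconfirmation halve them
DISCOUNT = 0.5       # assumed: the expert only half-registers disconfirmation

belief = 0.5
for _ in range(10):
    belief = update(belief, CONFIRM_LR)
    # The disconfirming half of the evidence is discounted toward 1 (no effect):
    belief = update(belief, DISCONFIRM_LR ** DISCOUNT)  # 0.5**0.5 is about 0.71

print(f"Belief after perfectly balanced evidence: {belief:.3f}")
# An unbiased judge would end at 0.500; the discounting expert ends near
# 0.970, ever more confident on evidence that warrants no confidence at all.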
The origin of our tendency to confirmation bias is fairly obvious. Our brains were not built to find the truth but to make pragmatic judgments, check them cheaply and win arguments, whether we are in the right or in the wrong.
++++++++++++++++++++++++++++++++++++++
Article #2
MIND & MATTER
July 27, 2012
Three Cheers for Scientific Backbiting
By MATT RIDLEY
If, as I argued last week, scientists are just as prone as everybody else to confirmation bias—the tendency to look for evidence to support rather than to test your own ideas—then how is it that science, unlike cults and superstitions, does change its mind and find new things?
The answer was spelled out by the psychologist Raymond Nickerson of Tufts University in a 1998 paper: "It is not so much the critical attitude that individual scientists have taken with respect to their own ideas that has given science the success it has enjoyed…but more the fact that individual scientists have been highly motivated to demonstrate that hypotheses that are held by some other scientist(s) are false."
Most scientists do not try to disprove their ideas; rivals do it for them. Only when those rivals fail is the theory bombproof. The physicist Robert Millikan (who showed minor confirmation bias in his own work on the charge of the electron by omitting outlying observations that did not fit his hypothesis) devoted more than 10 years to trying to disprove Einstein's theory that light consists of particles (photons). His failure convinced almost everybody but himself that Einstein was right.
The solution to confirmation bias in science, then, is not to try to teach it out of people; it is a deeply ingrained tendency of human nature. Dr. Nickerson noted that science is replete not only with examples of great scientists tenaciously persisting with theories "long after the evidence against them had become sufficiently strong to persuade others without the same vested interests to discard them" but also with brilliant people who remained wedded to their pet hates. Galileo rejected Kepler's lunar explanation of tides; Huygens objected to Newton's concept of gravity; Humphry Davy detested John Dalton's atomic theory; Einstein denied quantum theory.
No, the reason that science progresses despite confirmation bias is partly that it makes testable predictions, but even more that it prevents monopoly. By dispersing its incentives among many different centers, it lets scientists check each other's prejudices. When a discipline defers to a single authority and demands adherence to a set of beliefs, it becomes a cult.
A recent example is the case of malaria and climate. In the early days of global-warming research, scientists argued that warming would worsen malaria by increasing the range of mosquitoes. "Malaria and dengue fever are two of the mosquito-borne diseases most likely to spread dramatically as global temperatures head upward," said the Harvard Medical School's Paul Epstein in Scientific American in 2000, in a warning typical of many.
Carried away by confirmation bias, scientists modeled the future worsening of malaria, and the Intergovernmental Panel on Climate Change accepted this as a given. When Paul Reiter, an expert on insect-borne diseases at the Pasteur Institute, begged to differ—pointing out that malaria's range was shrinking and was limited by factors other than temperature—he had an uphill struggle. "After much effort and many fruitless discussions," he said, "I…resigned from the IPCC project [but] found that my name was still listed. I requested its removal, but was told it would remain because 'I had contributed.' It was only after strong insistence that I succeeded in having it removed."
Yet Dr. Reiter has now been vindicated. In a recent paper, Peter Gething of Oxford University and his colleagues concluded that widespread claims that rising mean temperatures had already worsened malaria mortality were "largely at odds with observed decreasing global trends" and that proposed future effects of rising temperatures are "up to two orders of magnitude smaller than those that can be achieved by the effective scale-up of key control measures."
The IPCC, in other words, learned the hard way the value of letting mavericks and gadflies challenge confirmation bias.
+++++++++++++++++++++++++++++++++
Article #3
MIND & MATTER
August 3, 2012
How Bias Heats Up the Warming Debate
By MATT RIDLEY
I argued last week that the way to combat confirmation bias—the tendency to behave like a defense attorney rather than a judge when assessing a theory in science—is to avoid monopoly. So long as there are competing scientific centers, some will prick the bubbles of theory reinforcement in which other scientists live.
For constructive critics, this is the problem with modern climate science. They don't think it's a conspiracy, but a monopoly that clings to one hypothesis (that carbon dioxide will cause dangerous global warming) and brooks less and less dissent. Again and again, climate skeptics are told they should respect the consensus, an admonition wholly against the tradition of science.
Last month saw two media announcements of preliminary new papers on climate. One, by a team led by the physicist Richard Muller of the University of California, Berkeley, concluded that "the carbon dioxide curve gives a better match than anything else we've tried" for the (modest) 0.8-degree-Celsius rise in global average temperatures over land during the past half-century—less, if the ocean is included. He may be right, but such curve-fitting reasoning is an example of confirmation bias. The other, by a team led by the meteorologist Anthony Watts, a skeptical gadfly, confirmed his view that the Muller team's numbers are too high—because "reported 1979-2008 U.S. temperature trends are spuriously doubled" by bad thermometer siting and unjustified "adjustments."
Much published research on the impact of climate change consists of confirmation bias by if-then modeling, but critics also see an increasing confusion between model outputs and observations. For example, in estimating how much warming is expected, the most recent report of the Intergovernmental Panel on Climate Change uses three methods, two based entirely on model simulations.
The late novelist Michael Crichton, in his prescient 2003 lecture criticizing climate research, said: "To an outsider, the most significant innovation in the global-warming controversy is the overt reliance that is being placed on models.... No longer are models judged by how well they reproduce data from the real world—increasingly, models provide the data. As if they were themselves a reality."
It isn't just models, but the interpretation of real data, too. The rise and fall in both temperature and carbon dioxide, evident in Antarctic ice cores, was at first thought to be evidence of carbon dioxide driving climate change. Then it emerged that the temperature had begun rising centuries earlier than carbon dioxide. Rather than abandon the theory, scientists fell back on the notion that the data jibed with the possibility that rising carbon dioxide levels were reinforcing the warming trend in what's called a positive feedback loop. Maybe—but there's still no empirical evidence that this was a significant effect compared with a continuation of whatever first caused the warming.
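For readers curious how a lead is detected in two records, here is a lagged-correlation sketch on synthetic data, my own illustration rather than an ice-core analysis; the shared AR(1) signal and the eight-step delay are assumptions.

import numpy as np

rng = np.random.default_rng(7)
n, true_lag = 400, 8

# One slowly varying shared signal, generated as an AR(1) process.
eps = rng.normal(0, 1, n + true_lag)
driver = np.zeros(n + true_lag)
for t in range(1, n + true_lag):
    driver[t] = 0.9 * driver[t - 1] + eps[t]

# "temperature" carries the signal first; "co2" carries it 8 steps later.
temperature = driver[true_lag:] + rng.normal(0, 0.5, n)
co2 = driver[:n] + rng.normal(0, 0.5, n)

def corr_at_lag(a, b, lag):
    """Correlation of a[t] with b[t + lag]; a positive lag means a leads b."""
    if lag == 0:
        return np.corrcoef(a, b)[0, 1]
    return np.corrcoef(a[:-lag], b[lag:])[0, 1]

best = max(range(25), key=lambda k: corr_at_lag(temperature, co2, k))
print(f"Correlation peaks with temperature leading by {best} steps")
# A peak at a positive lag means temperature moved first, the same kind of
# ordering the ice cores showed; it says nothing yet about later feedbacks.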
The reporting of climate in the media is full of confirmation bias. Hot summers (in the U.S.) or wet ones (in the U.K.) are invoked as support for climate alarmism, whereas cold winters are dismissed as weather. Yale University's Dan Kahan and colleagues polled 1,500 Americans and found that, as they learned more about science, both believers and nonbelievers in dangerous climate change "become more skillful in seeking out and making sense of—or if necessary explaining away—empirical evidence relating to their groups' positions on climate change and other issues."
As one practicing scientist wrote anonymously to a blog in 2009: "honestly, if you know anything about my generation, we will do or say whatever it is we think we're supposed to do or say. There is no conspiracy, just a slightly cozy, unthinking myopia. Don't rock the boat."