Oreskes and Conway argue that the “merchants of doubt”—scientists who undermine the scientific consensus on environmental and public health issues—have had an outsized impact on U.S. public policy largely because they know how to exploit the media. Self-interested corporations pay them millions of dollars to defend dangerous products by any means necessary, including outright disinformation. Yet they present themselves as legitimate, independent experts who are merely raising serious questions about other scientists’ research, and the public often believes them.

Oreskes and Conway attribute this pattern to a mismatch between how science works and how the media works. When scientists first take up a new question, they frequently disagree and formulate numerous competing hypotheses. But once enough research accumulates and they reach a consensus, only one “side” remains in the scientific debate. For instance, in the 1960s, scientists weren’t yet sure whether humanity was emitting enough CO2 to permanently change the earth’s climate. After years of research, however, nearly every climate scientist reached the same answer: yes.
Nevertheless, no matter how far along this research process is, the U.S. media generally tries to present two different “sides” of the story. The U.S. government first established this norm through an FCC policy called the Fairness Doctrine; although the doctrine was abolished in 1987, Oreskes and Conway suggest that the norm persists as an unwritten rule. Yet when the media looks for multiple “sides” to a settled scientific question, it often ends up giving one person’s unproven opinion the same weight as a consensus that hundreds of scientists have reached after gathering and analyzing evidence for years. This approach can give viewers the false impression that the science is not yet settled and that each “side”—the scientists and the fringe contrarians—is making an equally legitimate point. And when corporate public relations departments back such fringe opinions, those opinions often receive far more attention than the actual science. Thus, Oreskes and Conway conclude that to communicate effectively to the public, the popular media must learn to cover science in a new way. This entails taking peer review seriously, investigating the funding sources behind contrarian spokespeople, and, most importantly, treating consensus as consensus.
Media Bias Quotes in Merchants of Doubt
Millions of pages of documents released during tobacco litigation demonstrate these links. They show the crucial role that scientists played in sowing doubt about the links between smoking and health risks. These documents—which have scarcely been studied except by lawyers and a handful of academics—also show that the same strategy was applied not only to global warming, but to a laundry list of environmental and health concerns, including asbestos, secondhand smoke, acid rain, and the ozone hole.
Call it the “Tobacco Strategy.” Its target was science, and so it relied heavily on scientists—with guidance from industry lawyers and public relations experts—willing to hold the rifle and pull the trigger.
Balance was interpreted, it seems, as giving equal weight to both sides, rather than giving accurate weight to both sides.
Did they deserve equal time?
The simple answer is no. While the idea of equal time for opposing opinions makes sense in a two-party political system, it does not work for science, because science is not about opinion. It is about evidence. It is about claims that can be, and have been, tested through scientific research—experiments, experience, and observation—research that is then subject to critical review by a jury of scientific peers. Claims that have not gone through that process—or have gone through it and failed—are not scientific, and do not deserve equal time in a scientific debate.
Likens tried to set the record straight with an article in Environmental Science and Technology entitled “Red Herrings in Acid Rain Research.” But in a pattern that was becoming familiar, the scientific facts were published in a place where few ordinary people would see them, whereas the unscientific claims—that acid rain was not a problem, that it would cost hundreds of billions to fix—were published in mass circulation outlets. It was not a level playing field.
Bad Science was a virtual self-help book for regulated industries, and it began with a set of emphatic sound-bite-sized “MESSAGES”:
1. Too often science is manipulated to fulfill a political agenda.
2. Government agencies … betray the public trust by violating principles of good science in a desire to achieve a political goal.
3. No agency is more guilty of adjusting science to support preconceived public policy prescriptions than the Environmental Protection Agency.
4. Public policy decisions that are based on bad science impose enormous economic costs on all aspects of society.
5. Like many studies before it, EPA’s recent report concerning environmental tobacco smoke allows political objectives to guide scientific research.
6. Proposals that seek to improve indoor air quality by singling out tobacco smoke only enable bad science to become a poor excuse for enacting new laws and jeopardizing individual liberties.
This was the Bad Science strategy in a nutshell: plant complaints in op-ed pieces, in letters to the editor, and in articles in mainstream journals to whom you’d supplied the “facts,” and then quote them as if they really were facts. Quote, in fact, yourself. A perfect rhetorical circle. A mass media echo chamber of your own construction.
Scientists are confident they know bad science when they see it. It’s science that is obviously fraudulent—when data have been invented, fudged, or manipulated. Bad science is where data have been cherry-picked—when some data have been deliberately left out—or it’s impossible for the reader to understand the steps that were taken to produce or analyze the data. It is a set of claims that can’t be tested, claims that are based on samples that are too small, and claims that don’t follow from the evidence provided. And science is bad—or at least weak—when proponents of a position jump to conclusions on insufficient or inconsistent data.
Imagine providing “balance” to the issue of whether the Earth orbits the Sun, whether continents move, or whether DNA carries genetic information. These matters were long ago settled in scientists’ minds. Nobody can publish an article in a scientific journal claiming the Sun orbits the Earth, and for the same reason, you can’t publish an article in a peer-reviewed journal claiming there’s no global warming. Probably well-informed professional science journalists wouldn’t publish it either. But ordinary journalists repeatedly did.
Scientists have faced an ongoing misrepresentation of scientific evidence and historical facts that brands them as public enemies—even mass murderers—on the basis of phony facts.
There is a deep irony here. One of the great heroes of the anti-Communist political right wing—indeed one of the clearest, most reasoned voices against the risks of oppressive government, in general—was George Orwell, whose famous 1984 portrayed a government that manufactured fake histories to support its political program. Orwell coined the term “memory hole” to denote a system that destroyed inconvenient facts, and “Newspeak” for a language designed to constrain thought within politically acceptable bounds.