Fundamentals of the Scientific Method Applied to Climate Change

A friend of mine recently sent me an article from the New Yorker: “The Truth Wears Off: Is there something wrong with the scientific method?” by Jonah Lehrer. In it, the author tracks various scientific hypotheses, from the effectiveness of pharmaceutical drugs to symmetry as a driver of sexual selection (i.e. beauty = good genes). Each of these hypotheses had something in common: the hypothesis was tested, a positive result was found, and widespread acceptance followed, but retesting of the hypothesis by others (or, in some cases, by the original author) could not reproduce the positive results! Thus, Lehrer concludes that there might be something wrong with the scientific method, given that so many studies can produce positive results for phenomena that appear not to be real.

I liked this article a lot. It brought up some good points about publication bias, bias in experimental design, and entrenchment of scientists in their ideas despite evidence to the contrary. These are indeed problems with the field of science as a whole, but they do not indicate that there is anything wrong with the scientific method itself. In fact, this article demonstrates that the scientific method is working like a charm!

Consider this: suppose a psychologist is interested in studying ‘verbal overshadowing’, where forcing a subject to verbally describe an object reduces his or her ability to accurately recall the object later (this example is from the article). Even if subjects are chosen at random from a general population, as they should be, there is still a chance that the psychologist will detect a positive effect (i.e. reject the null hypothesis) purely by chance: at the conventional significance threshold of α = 0.05, roughly one in twenty tests of a nonexistent effect will come back positive.
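That false-positive rate is easy to see for yourself. The sketch below is a toy simulation, not a reanalysis of any actual overshadowing study: it runs thousands of pretend experiments in which the “treatment” truly does nothing, and counts how often a standard two-sample t-test nevertheless declares a significant effect at α = 0.05.

```python
import random
import statistics

random.seed(42)

def one_experiment(n=20):
    """One 'verbal overshadowing' experiment under the null hypothesis:
    both groups are drawn from the same population, so any measured
    difference is pure sampling noise."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]  # no true effect
    # Two-sample t statistic (equal group sizes, pooled variance)
    diff = statistics.mean(treated) - statistics.mean(control)
    pooled_var = (statistics.variance(control) + statistics.variance(treated)) / 2
    t = diff / (2 * pooled_var / n) ** 0.5
    return abs(t) > 2.02  # two-sided critical value for alpha = 0.05, df = 38

false_positives = sum(one_experiment() for _ in range(10_000))
print(f"False-positive rate: {false_positives / 10_000:.3f}")
```

Run it and the rate comes out near 0.05, exactly as the significance threshold promises: about one experiment in twenty “finds” an effect that was never there.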

Now, suppose the psychologist happens to detect a positive effect. Scientists are skeptical by nature (people forget this). So the experiment will be replicated by others, under identical or differing conditions, in an attempt to validate the first psychologist’s results. The new studies find no effect. Is there something wrong with the scientific method?

Not at all. It’s working just fine! A new study comes out and proposes a new hypothesis or theory. The new theory undergoes rigorous testing by multiple experimenters in multiple scenarios. If the new theory fails to hold up, it is tossed aside, improved, or reconsidered. If the new theory holds up, it is continuously tested until it is generally accepted. Note that general acceptance only occurs after many independent tests of the theory. This is the scientific method at its best, and it is exactly the process Lehrer describes. Finding a false positive is not a failure of the scientific method, because those claims will be checked, those experiments will be validated, and that theory will be refined.
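The power of that filtering can be made concrete with another toy simulation (the power and false-positive values below are conventional textbook numbers, not estimates from any real field): a single study is fairly easy to fool, but requiring several independent replications makes it very hard for a false claim to reach general acceptance.

```python
import random

random.seed(1)

ALPHA = 0.05   # false-positive rate of a single study
POWER = 0.8    # chance a single study detects a real effect

def study(effect_is_real):
    """One independent test: returns True if it reports a positive result."""
    p = POWER if effect_is_real else ALPHA
    return random.random() < p

def survives_replication(effect_is_real, n_replications=3):
    """A claim is 'generally accepted' only if every replication succeeds."""
    return all(study(effect_is_real) for _ in range(n_replications))

trials = 100_000
false_accepted = sum(survives_replication(False) for _ in range(trials)) / trials
true_accepted = sum(survives_replication(True) for _ in range(trials)) / trials
print(f"False hypotheses accepted after 3 replications: {false_accepted:.5f}")
print(f"Real effects accepted after 3 replications:     {true_accepted:.3f}")
```

With these assumed numbers, a false hypothesis slips through three replications only about once in 8,000 tries (0.05³), while a real effect still gets through roughly half the time (0.8³). Replication is brutal on flukes and merely inconvenient for real effects.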

So can anything be considered to have passed the test? I mean, it seems fairly easy to take a theory, try to replicate it, and fail. But the big scientific ideas, the concepts that make it into textbooks, the hypotheses and theories accepted as canon by scientists, are the very ones that have withstood this testing: Evolution, Competition, Exponential Growth (under ideal conditions), Sexual Selection, Phenotypic Plasticity, the Structure of DNA, Mitosis/Meiosis and the Mechanisms of Sexual Reproduction, Climate Change (you had to know this was coming), etc. Yes, these are broad concepts, and we likely need very specific information to apply them to very specific systems, but they work.

Climate Change? Yes. Climate change is a fantastic example of the scientific method verifying a process, the opposite of the ‘verbal overshadowing’ story. A new phenomenon was proposed; it has been tested thousands of times by tens of thousands of scientists around the world using hundreds of metrics (e.g. atmospheric CO2 concentrations, ice cores, tree rings) and hundreds of statistical tests, and it has been peer-reviewed thousands of times. In almost every one of these thousands of experiments and analyses, the answer is the same: climates are changing, the earth is warming, these changes correlate well with atmospheric CO2 concentrations, and it all started, oddly, shortly after the Industrial Revolution. Scientists have tested this hypothesis so many times that, if it weren’t real, we would have found out by now.

Yes, there are issues with the implementation of the scientific method: publication bias towards positive results, unfriendly reviewers, and experimental bias. (Reviewers are anonymous, so they can be as nasty as they like. If you are reporting negative results for a pet hypothesis of one of the reviewers, don’t expect constructive criticism, or even to have your article accepted. Of course, there’s no way to know this beforehand.) As for experimental bias: pharmaceutical companies have a big monetary incentive to get positive results, so they may not sample as randomly as they ought to, and they certainly have no incentive to double-check their positive results with follow-up experiments (leading to high rates of follow-up tests failing to find an effect of the drug). Or suppose an ecologist really wants to make a statement about a certain process, say, that marine protected areas are effective. They might choose unprotected sites they know to be in poor condition to compare with protected sites they know to be incredibly healthy, even if these sites are not representative of the choices as a whole (i.e. the ecologist is picking only the few protected sites that work from an array of protected sites that, on average, don’t). They are biasing their site selection rather than randomly choosing multiple sites of each type. It works exactly the same for people wanting to show that protected areas don’t work.
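A quick back-of-the-envelope calculation shows why these biases matter, and why replication is the antidote. Assuming some illustrative values (these are placeholders I picked for the arithmetic, not measured rates for any field) for the false-positive rate, statistical power, and the fraction of tested hypotheses that are actually true, we can ask: given a published positive result, what are the odds the effect is real?

```python
# Positive predictive value of a positive result, Bayes-style.
# All three inputs are illustrative assumptions, not empirical estimates.
ALPHA = 0.05   # false-positive rate of a single study
POWER = 0.8    # chance a single study detects a real effect
PRIOR = 0.1    # assumed fraction of tested hypotheses that are actually true

# A single positive result
true_pos = PRIOR * POWER            # real effects that test positive
false_pos = (1 - PRIOR) * ALPHA     # null effects that test positive anyway
ppv_single = true_pos / (true_pos + false_pos)

# The same claim after one successful independent replication
true_pos2 = true_pos * POWER
false_pos2 = false_pos * ALPHA
ppv_replicated = true_pos2 / (true_pos2 + false_pos2)

print(f"Chance a single positive result is real:        {ppv_single:.2f}")
print(f"Same claim after one successful replication:    {ppv_replicated:.2f}")
```

Under these assumptions, a lone positive finding is real only about two-thirds of the time, while one successful independent replication pushes that above 95%. Publication bias floods the literature with the lone positives; replication is what sorts them out.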

There’s nothing wrong with proposing a hypothesis that turns out to be mostly wrong. It’s the testing and retesting of this hypothesis that stimulates new research, generates new ideas, and gets people thinking about critical questions. For example, the Metabolic Theory of Ecology (MTE) was a vast, sweeping hypothesis that almost all biological and ecological interactions can be predicted on the basis of metabolism, which varies with temperature (if you’re an ectotherm) and body size. The original article has been cited over 1,000 times in 8 years, and a good number of these citations show that MTE is pretty flawed (it was too big not to be). Does that mean it’s worthless? No! Even if it is wrong, MTE has generated hundreds of new papers, spawned a huge number of hypotheses, and forced ecologists to reconsider the role of temperature in ecological interactions (like plant-herbivore interactions, because those are the most important, clearly). Or, if you’re me, you started out as a huge fan of MTE, thought it could be used to predict absolutely everything (I still think there’s a theory out there for that), based an entire dissertation on MTE because you thought it was awesome, wound up disproving it whenever you were actually trying to validate it (oops! I totally did not see that coming…), and now have a much more nuanced view of temperature effects on ecological interactions. From the standpoint of the scientific method, MTE was a huge success. I mean, my whole scientific career to this point is predicated on it and the general concepts it outlines, even if I’m not sure it can do all the things it was initially proposed to do.

So, are there problems? Yes, but they’re not the ones listed in the New Yorker article. If anything, that article is, to me, a great exposé on the scientific method doing exactly what it’s supposed to do: propose, test, retest, retest again, and reduce/refine/recycle.


4 thoughts on “Fundamentals of the Scientific Method Applied to Climate Change”

  1. What worries me about this article is that many more decision makers will read it than any rebuttal published in a high-ranking scientific journal such as Science or PNAS. The scientific community, and more specifically the non-medical science community, is going through a major restructuring process. It is well known that the large government agencies that fund basic scientific research (NSF, NASA, EPA) have seen huge budget cuts over the last decade. Universities, NGOs, and state and federal agencies are all struggling to fund their basic research, which is the foundation of larger-scale projects. When, say, a politician (especially a Republican) reads this article in the New Yorker, it will only seem to justify his/her past and future cuts to scientific funding, even though the author points out the exact issues that could be easily fixed (i.e. publish more null results, reduce the number of biased studies, increase sample sizes). This is a great article, but I just hope it isn’t used in the wrong context.

    • I agree that this article gives a troublingly bad impression of the scientific method. But I don’t think increased sample size is a solution (I don’t think there is a solution, because it’s working fine). Increasing sample size within a study still assumes that a single study can make a hypothesis generally accepted. It’s the replicability across studies that’s important, so what Lehrer seems to have done is picked a few high profile examples of the scientific method disproving hypotheses, where there are just as many examples of hypotheses being supported.

      I guess what I’m saying is, ‘it ain’t broke’. If it is, the solution isn’t more samples within a study, it’s more studies, the exact opposite of what budget cuts will achieve. In fact, by reducing the number of studies attempting to verify previous results, budget cuts will handicap the scientific method and lead to a lot of ‘false positives’. People will publish their positive data (because of publication bias) and there will be fewer retests of that hypothesis to try and debunk it or generalize it.

  2. Pingback: The Spread of the "Debate Is Over" Syndrome - Page 2

  3. You’re correct about what the scientific method is, but completely mistaken about how it has been applied to climate studies. Numerous predictions have indeed been made to falsify the hypothesis of anthropogenic global warming, and the hypothesis has failed these tests every single time. The predictive value of the hypothesis stands at zero. The very term “hockey stick graph” is now a term of derision.
