
Sometimes it is extremely rewarding to get into the details and nitty-gritty of an issue, and other times it is pure torture and wheel-spinning. For an example of the former, one can do no better than A.W. Montford’s incredible reconstruction (pardon the pun for those of you in the know) of the events leading up to, and the debate since, the publication of the famous climate “hockey stick.”

Just a brief background. The hockey stick is the picture that is typically used to demonstrate that the warming the earth is experiencing today is unprecedented in recent human history. The controversy surrounds the possibility that this is not exactly true. Note that this is not a denier position at all. The debate surrounding the hockey stick says nothing about whether the world is warming today, but rather whether warming that we know took place in the past was of a similar magnitude. The famous IPCC graph, courtesy of Michael Mann, says “no” — that today’s warming is unprecedented. Others argue that the answer is yes, that today’s warming is well within the range of “recent” human experience. The principal drivers of this counter-evidence are Steve McIntyre and Ross McKitrick.

The book is a thorough examination of the tools scientists must use to reconstruct historical temperature records. After all, reliable thermometer readings go back only 100 years or so. Earlier temperatures must be estimated by various indirect methods, including the extraction of ice cores and the measurement of tree rings. I recommend the book simply as a way to understand how this is done, and how these data are combined with current temperature records and climate models to predict temperature changes due to current and future CO2 emissions.

The startling part of the book is simply the breathtaking examples of both how difficult the science actually is (and how uncertain it is) and especially the shenanigans played by researchers, journal editors and international governing bodies in massaging data, suppressing data, suppressing research inquiries, and more. I’ll just spend some time illustrating a few random difficulties.

First, here is one way that tree ring records are used to reconstruct historical temperature series (pp. 47-8):

The other way … involves taking lots of proxy series, which are sometimes not even responding to their local temperatures, and seeing if some sort of correlation can be found with temperature measurements somewhere in the wider vicinity.

Brief scientific interlude here. Tree rings are thought to grow faster and wider when it is warmer and to grow more slowly and narrower when it is cooler. So, if one measures tree ring widths from old-growth trees, one can approximate whether temperatures were warmer during a given period. Of course, lots of other things affect tree ring growth, and sound science would control for all of the other factors that influence it, which of course is impossible here. The problem indicated in the above quote is that a good many tree ring series do not seem to exhibit the predicted behavior. But never fear, the paleoclimatologists have found a way around it!

The Fritts method involves a certain leap of faith to trust that trees that are not responding to their own local temperature can nevertheless detect a signal in a wider temperature index. You have to believe in the existence of something called ‘teleconnections’, whereby temperatures in a possibly distant part of the world affect the climate in the locale of the tree in such a way as to affect its growth, and in a consistent manner. If this sounds implausible to you, then you are not alone. However, the reality of the mechanism is accepted by the paleoclimate community …
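The statistical worry with this kind of screening can be made concrete. Here is a hypothetical sketch (my own illustration, not anything from the book or from the actual reconstructions): generate a large pile of pure-noise “proxy” series, keep only the ones that happen to correlate with a temperature index, and average the survivors. The composite tracks the index by construction, even though no proxy contains any temperature signal at all.

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A made-up 50-year "temperature" index with a warming trend plus noise.
n_years = 50
temperature = [0.02 * t + random.gauss(0, 0.5) for t in range(n_years)]

# 1000 candidate "proxies" that are nothing but white noise -- by
# construction they contain zero temperature information.
proxies = [[random.gauss(0, 1) for _ in range(n_years)] for _ in range(1000)]

# "Screen" the proxies: keep only those that happen to correlate with
# the temperature index above some threshold.
kept = [p for p in proxies if correlation(p, temperature) > 0.3]

# Average the survivors into a composite reconstruction.
composite = [sum(p[t] for p in kept) / len(kept) for t in range(n_years)]

print(f"kept {len(kept)} of {len(proxies)} pure-noise proxies")
print(f"correlation of screened composite with temperature: "
      f"{correlation(composite, temperature):.2f}")
```

A few percent of the noise series clear the threshold by luck, and averaging them cancels the parts of each series that are uncorrelated with temperature while reinforcing the part that got them selected, so the composite ends up strongly correlated with the index. That is the screening problem in miniature.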

No comment necessary. Try this episode on for size (p. 236):

Her first bombshell was a slide in which she discussed (wintercow: she being Rosanne D’Arrigo, a widely published and famous climatologist from Columbia University) the issue of ‘cherrypicking’ – a term used to describe scientists examining the data records before processing and removing those which might give the wrong answer (wintercow: a bit like economists do, in fact)…

D’Arrigo was startlingly straightforward on the subject. Cherrypicking, she said, was necessary if you wanted to make cherry pie … (nobody) took her up on this admission.

In fact, D’Arrigo was not alone in her apparent belief that it is scientifically acceptable to cherrypick data. She and her close collaborator, Gordon Jacoby, had published a widely cited paper in which they selected ten sites from a total of 36 studied, justifying the omission of 26 on the grounds that they had selected only the most temperature influenced (wintercow emphasis). What made it worse … (they) refused to archive the data from the 26 eliminated series, arguing that because they didn’t have a temperature signal, they were better left out of the archive. When McIntyre had written to the journal concerned, asking that they obtain the missing data on his behalf, Jacoby promptly refused the request.

Maybe I’m cherrypicking incidents? Um, no. The book is just startlingly dizzying in the number of cases just like this. I’m certainly not a conspiracy theorist, but these incidents say a hell of a lot about the “climate” of climate science. Imagine if a major econometric study did the same thing. Suppose I wanted to estimate the impact of an increase in the minimum wage on unemployment. Imagine that I took all 50 states, removed from the data set the 36 states that showed no increase in unemployment when the minimum wage went up, and used only the 14 states where unemployment did increase (after all, that’s what the theory predicts). I’d be banned from the economics profession for publishing a paper claiming that the minimum wage caused unemployment based on this methodology.
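To put a rough number on that hypothetical, here is a minimal simulation of my own (the 50 states, the zero true effect, and the noise level are all assumptions for illustration, not real data): if the minimum wage has no effect at all, each state’s change in unemployment is pure noise, and about half the states will show an increase by chance. Averaging only those “confirming” states manufactures a positive effect out of nothing.

```python
import random

random.seed(1)

# Assume the minimum wage has NO true effect: each state's post-hike
# change in unemployment (in percentage points) is pure noise.
n_states = 50
true_effect = 0.0
changes = [true_effect + random.gauss(0, 1.0) for _ in range(n_states)]

# Honest analysis: average all 50 states.
honest_estimate = sum(changes) / len(changes)

# Cherry-picked analysis: drop every state where unemployment did not
# rise, then average the rest.
kept = [c for c in changes if c > 0]
cherry_estimate = sum(kept) / len(kept)

print(f"all {n_states} states: estimated effect = {honest_estimate:+.2f}")
print(f"only {len(kept)} 'confirming' states: "
      f"estimated effect = {cherry_estimate:+.2f}")
```

The honest estimate hovers near zero, while the cherry-picked one lands near +0.8 (the mean of the positive half of a standard normal), a completely spurious “finding” produced by the selection rule alone.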

Maybe this episode trumps the other two (pp. 264-5). An NAS panel was convened to dig deeper into some of the questions raised in the book:

In a talk he gave at his own Texas A&M University, North (chairman of the panel) explained to his audience the way the panel had worked.

We didn’t do any research in this project, we just took a look at the papers that were existing and we tried to draw some kinds of conclusions from them. So here we had twelve people around the table, all with very different backgrounds from one another and we just kind of winged it to see … so that’s what you do in that kind of expert panel…

North said these words, not with any sense of dissatisfaction … It was just one more dismaying revelation from the Hockey Stick affair — faced with the most important scientific questions for decades, asked to study and report on a subject of incalculable economic, political and social importance, a group of distinguished scientists got round a table, talked about some papers and just ‘kind of winged it.’

3 Responses to “How to Make Cherry Pie, or How to Formulate a Consensus”

  1. Speedmaster says:

    No worries, if they lie it’s only because they need to as they know what’s best for you.

  2. Michael says:

    You can make a lot of money by telling people what they want to hear. Goes back to biblical times, even.

  3. Rod says:

    Back in 1992 I served on a township committee whose mission was to write an “Open Space and Environmental Protection Plan,” and I was one of four farmers on the committee, which also had four non-farmers on it. We four farmers sought to protect our property rights against the open spacers.

    At one point in our deliberations, our county planning commission wrote a few paragraphs on what quantity of public open space would be enough to ensure that people who did not own any property would be happy. The county planner cited two sources for the acreage quantities: the National Park and Recreation Association, headquartered in Alexandria, Virginia; and the Delaware Valley Regional Planning Commission.

    The NPRA, headquartered in Alexandria, Virginia, was the park lobby.

    The DVRPC was headquartered in Bensalem, PA, so I called these folks to find out what source or research led them to the conclusion that we needed about 4,000 acres per ten thousand residents. The guy on the other end of the phone was candid and honest: he said that he and three other planners at the DVRPC had huddled in a bull session and came up with an eyeball estimate of how much land was enough to satisfy the public’s needs. In other words, it was a WAG, a wild-assed guess. He felt comfortable with this estimate because it came from the somewhat considered judgment of experienced planners.

    So here’s how it works: your planning consultant cites sources that do not depend on any research or sound method, and then your report becomes the source for someone else, who can say that the Upper Hanover Township Open Space and Environmental Protection Plan says you need 4,214 acres of public open space per ten thousand serfs.
