The detection of global warming and its attribution to the human cause had always been a task fraught with seemingly irreducible uncertainties. These had not subsided towards the end of 1995 when the pressure was mounting to deliver on political expectations introduced in the late 1980s. In a previous post we considered whether the Nov 1995 IPCC Working Group 1 meeting in Madrid was a tipping point in the corruption of climatology. Here we take a closer look at the science behind the ‘Chapter 8 Controversy’ in a longer essay broken into 2 posts (Part II here).
Mirror, Mirror hanging in the sky
Won’t you look down what’s hap’n here below
Could this really be it? The first faint image of man in the sky?
Ben Santer had just placed a transparency under the lens to project this colour pattern high upon the conference wall. It is the first afternoon of the Working Group 1 Plenary in Madrid, and this great council of nations from across the entire globe is persuaded to study the significance of its strange contours before getting down to their principal task. And so they should study it, for this is a game-changer striking at the nub of what the IPCC is all about. Although obscure, here is an image of the impact of human industry on the atmosphere above. At least part of the recent warming has at last been attributed to industrial emissions. If not for this, then why these near one hundred delegations flown in from all corners of the globe? There they are carefully positioned at arched rows of labelled bureaus across this cavernous auditorium. As they listen to live translations of Santer’s explanation, not a few of them must be gazing up in wonder: Could this really be what man hath wrought?
The science of Detection and Attribution had come a long way fast, and in recognising the significance of these recent developments the chairman, Sir John Houghton, opened the conference with a surprising change to the agenda. The main purpose of this Working Group 1 Plenary is to consider, modify and unanimously approve—as a faithful synopsis of the entire scientific report—every line of a drafted non-technical summary. To inform this approval process, the lead authors of the chapters would first give a 10-minute overview of their finished chapters and respond to any queries from the delegations. But in addition to this, Houghton proposed a special extended presentation on the Detection and Attribution issue, the subject of the contentious Chapter 8. This is apposite given the great interest and controversy so evidently building around this topic. But it is also necessary, as Santer in his presentation ‘went to considerable lengths to stress,’ because the recent developments in the science ‘justify a stronger statement on attribution’ than that given in the draft Executive Summary that the delegates hold in their hands. On this topic, the Executive Summary is entirely out-of-date. [AustDelRpt p6]
The pace of Detection and Attribution (D&A) research in the early 1990s was such that the avant garde was evolving way ahead of, and out of pace with, the plodding progress of peer-review publication. Moreover, through 1995 it was becoming clearer that the science had advanced beyond the early drafts of the Second Assessment, from which was drafted the Summary for PolicyMakers and its Executive Summary. Not only was the research advancing, but a breakthrough had been achieved such that the human signal in the warming could now be observed for the first time emerging above the noise of natural variability.
For the significance of this finding in the history of the IPCC, consider that back in 1990 Tom Wigley had submitted a draft of the First Assessment that was conclusive: The human impact on climate had NOT been detected. And so he was asked: How long before we can expect to see the signal? His response is recorded in the final section of the published Chapter 8. Given expected advances in modelling and another ½ degree of warming, attribution should be achievable. But this much warming is not expected for ‘a decade or more,’ and this assessment was repeated in the Rio Supplement of 1992 [FAR p253-4]. Wigley’s prediction turned out to be wrong because he did not anticipate the breakthrough in the research that was just around the corner. This breakthrough meant that the human signal could be seen even despite the pause in warming of the early 1990s.
Most of the scientists involved in this advanced D&A work are on board as authors contributing to the new Chapter 8. Moreover, its co-ordinating lead author is also the lead author of the two most important recent papers delivering these D&A results, including the one that explains the significance of our ‘Mirror in the Sky’ above. So here in Madrid, at the final plenary of Working Group 1, Ben Santer is the man. Here is the man leading this advance in the science who is now explaining it directly to those who most needed to know.
As Bert Bolin had just put it in his opening address, the Summary for PolicyMakers is ‘the scientist’s window into politics.’ And this is the meeting that serves to polish that glass. In fact, the first task for the conference is to polish the Executive Summary of the policymakers’ summary—those few pages that the policymakers back home are most likely to look through. The profound policy implications of the proposed updates to this Executive Summary are known to all, and so the idea is that once these advances in the science are explained, the Plenary will agree to seize the day, and incorporate them into a revised Summary.
Santer’s presentation is well received by an overwhelming majority of the delegates who are evidently on side with the proposal to update the Report according to this new evidence. An ad hoc ‘small group’ is quickly formed to rewrite the D&A section of the Executive Summary. The group first convenes that evening, following Santer’s talk. After much discussion the following day, it manages to deliver a new version of the D&A section for approval at the Plenary on the morning of the 3rd and final day. [AustDelRpt]
What is remarkable about this late intervention is that few people in the world, few scientists—even climate scientists—had seen this representation of the terrible distortion human industry had effected in the sky; few had seen it before Ben Santer places this transparency under the lens on the first day of Madrid. But by the time he finishes his talk the most important audience in the world evidently understands its significance. This is shaping up as the realisation of a spectacular communion of the most advanced science with inter-national governance—more than the scientist-founders of the inter-governmental panel could have ever anticipated—a marvellous triumph for the communication of knowledge to power, and, in every way, it is happening on a global scale.
A marvel it might have been if it weren’t for those present at the conference with no interest in the message so communicated. One or two delegations with great stake in the continuing growth of the fossil fuel industry seemed to take every opportunity to push back, compromise, slow down, derail, and sabotage a consensus on this issue. And so the triumph at Madrid, later celebrated by Houghton—a triumph of science over vested interests—was challenged on every line, on every word that might so much as suggest attribution of climate change to the human cause. It had been a long journey to Madrid, and even when the word-by-word battle was over on that final desperate night—even when they had just managed to deliver that fateful phrase, ‘a discernible human influence’—still the war was not yet won.
We now know that the political war had only just begun. But the question for this blog is not about the politics, but about the science. Had it really triumphed here? Or had it got lost in the enthusiasm for some greater cause?
One False Dawn
Few of those familiar with the natural heat exchange of the atmosphere, which go into the making of our climates and weather, would be prepared to admit that the activities of man could have any influence upon phenomena of so vast a scale. In the following paper I hope to show that such influence is not only possible, but is actually occurring at the present time.
So Guy Callendar began his presentation one Wednesday afternoon in the winter of 1938. He had been cordially invited to present his findings to a meeting of the Royal Meteorological Society. Although an amateur in this field, he proceeded to impress the experts with extraordinary learning and research into every aspect of the topic, bringing it all together in support of the conclusion that more than half of the warming over the last 30 years could be attributed to increasing ‘sky radiation’ due to industrial emissions of CO2.
Early speculation that emissions of CO2 from human industry might one day contribute some (welcome) warming was rejected at the turn of the 20th century, when it was established that most of the wavelength bands of reflected heat blocked by CO2 were already blocked by (the almost ubiquitous) water vapour. These results were announced at the beginning of what is often regarded as the most sustained and widespread warming trend in the instrumental records of the northern climes. Now at the end of this period, a British steam engineer, Guy Callendar, had gathered some new data, revived the theory and announced that he had found in this warming a detectable human influence.
Impressed though they were by the depth and breadth of research, the assembled experts rejected this conclusion. There was no question ‘that there had been a real climatic change during the past thirty or forty years,’ but this recent warming is easily explained as but a phase in the natural variability of climate, similar to those evident in the not so distant past. And anyway, if emissions were having an impact, it would not be so straightforward as the calculations suggest. ‘An increase in the absorbing power of the atmosphere would not be a simple change in temperature’ but it would have a more complicated impact on energy transfer in the general circulation, including on vertical heat dissipation into the upper atmosphere and so to space. [Discussion follows the paper]
While climatic change on a geological scale had been generally accepted for decades, British meteorology was also already familiar and comfortable with natural climatic change on smaller time scales, even down to changes across the centuries of historical times. In fact, some Society Fellows at Callendar’s presentation had been investigating climatic change in and through its impact on civilisation at least since one of the most distinguished among them, C E P Brooks, had returned enthused by a famous geological congress in Stockholm, 1910, convened to discuss the new evidence (from varves) of climatic changes during the Holocene. And there were already various theories to account for these fluctuations and trends across the centuries. The most popular candidates for ‘external climate forcing’ are those that continue to this day. They all concerned variations in the radiation entering the lower atmosphere. This would be due to variations in the energy output of the sun related to the sunspot cycle, or otherwise due to variations in the transparency of the atmosphere, especially due to fine volcanic dust. [Brooks 1926 Ch 17 & 22]
Unperturbed by this polite rejection of his attribution theory—or perhaps driven by it—Callendar continued to gather more evidence illustrating the mechanism of the warming. But what he really needed if he had any chance of overcoming the scepticism of the entire meteorological establishment was to show some evidence of causation in the climate data itself. He needed to find evidence in the records that at least some of the recent warming was not entirely due to the natural ‘internal’ variability of climate, but that there was a general trend. And then he needed to somehow find a way to show that this trend was caused by emissions, and not by any of the other candidates for external climate forcing. Callendar did find a way, but not until the 1960s, and by then it was too late.
Others had already noticed, as he had, that the pattern of the recent warming was uneven across the latitudinal zones. But now Callendar began to look more closely at these zonal patterns for signs of the cause. He did this by modelling the pattern of warming that might be expected from the various proposed causes, and then compared them to observations. What he found was that the solar cycle and volcanic dust are indeed necessary to explain fluctuations of a few years duration. As for the overall trend, greater warming in high latitudes had been observed, especially in the northern hemisphere, while the tropics were not only experiencing less warming but also less rain. Callendar determined that a decline in stratospheric volcanic dust over this time may have influenced this trend, but the main features of the observed pattern were incompatible with this cause. However, the observed warming pattern was ‘not incompatible with the hypothesis of increased carbon dioxide radiation.’ [Callendar 1961]
Why this pattern analysis failed to gain traction at this time is unclear. Where Callendar was noticed by professional climatologists it was often with derision. Even after sending Keeling off to set up the monitoring station on Mauna Loa, Roger Revelle would not even let Callendar off first base, refusing to believe that CO2 could have increased as much as Callendar claimed (and as is now confirmed). Callendar’s book Climate and Carbon Dioxide was never published. Just how embittered Callendar was towards his critics is disputed, but perhaps it was the climate itself that had the last word. While the data in Callendar’s warming pattern analysis cut off around 1950, the article was not published until 1961, after which time it was hard to find any warming to analyse. In the early 1960s England was hit with a series of famously harsh winters, and in Fleming’s biography there is a marvellous photo, oozing symbolism, of Callendar shovelling snow after a blizzard in 1962. The following winter of 1963 was the one everyone remembers as The Big Freeze. One more winter and Callendar, with his global warming theory, was dead. By this time all the talk was about What’s causing the cooling?
A New Beginning in the Little Cooling
As the cooling extended through the 1960s and into the 1970s, the Ice Age scare was slipping in and out of the popular consciousness. But the scientific debate was now wide open. It was not only about whether the cooling was cyclic or a trend. There was also growing speculation about the human impact and whether this was, or would lead to, a net cooling…or warming? Either way, the signature of modern man in the atmosphere was now all too apparent. Look up at the jet contrails seeding linear clouds across the sky. Could this be having a cooling impact? Look down on the Indian plains totally obscured by tropospheric plumes of dust. Was this dust blocking the sunlight actually cooling the atmosphere over the entire sub-continent? What about soot and photochemical smog? And then there were all those tonnes of fine transparent particles thrown up by our everyday industry—what Reid Bryson famously called the ‘human volcano.’ These ‘aerosols’ suspended at various levels in the troposphere were considered for their cooling effect. Or their warming effect? No one was really sure. Accurate monitoring of CO2 was now showing a steady rise, but there were still the old questions about how much of its welcome warming could be realised in the climate on top of the already considerable effect of water vapour and the poorly understood dampening feedback in clouds and circulation. So while popular anxiety was on the side of cooling—Were we actually bringing on the Ice Age?—the scientific establishment was holding out pretty firmly and saying in reports and even sometimes in the press: Well, we don’t really know.1
Then the cooling was off, and during the 1980s the popular anxiety was soon pushed the other way. Warming was now bad. And so, whereas CO2 emissions had first played the hero (warding off the next Ice Age), now it played the villain in a locus of controversy that had shifted (as had the funding) into the Manichean polity of the USA. Already in 1981 Revelle and Schneider were giving testimony to Congress, and a paper by the new head of NASA’s Goddard Institute delivered the first New York Times front page story for the Global Warming Scare.
In this paper James Hansen introduced a new global mean temperature anomaly based on the instrumental record, and it quickly usurped a Russian chart popular since it was introduced to Anglophone science in 1969. The Russians and the Americans followed Callendar with their line emerging out of a late-century dip after the warmer 1860s. While Callendar’s chart showed a steady rise, the Russians showed a warming followed by a smaller cooling off (and presented this curve as suggesting solar forcing). Hansen’s chart initially showed this cool-warm-cool pattern, but with a new warming commencing in the 1970s that had not yet surpassed Callendar’s peak. Then, as the years rolled by, NASA’s graph was extended to show a persistent, if jagged, climb. Already in 1988 Hansen was able to show the US Congress how the anomaly had now sailed past the mid-century peak. As other global temperature indicators emerged, it became conventional to chart from the 1880s a century-long warming trend, albeit with a penultimate hesitation. And then it also became common to start the line in that 1960s-70s dip. [see this blog GMT page]
While these global mean temperature anomalies clearly show a warming trend, they are also used to support attribution to the human cause. The recent warming is so extraordinarily steep and extended by comparison with normal climate variability, so the argument goes, that it is extremely unlikely to be entirely due to natural climate variability. If there is no natural explanation then this supports the expected human cause. This kind of negative argument for human attribution is found in Callendar’s 1938 paper, and it appears again to support Hansen’s claim that there is only a 1% chance that the warming since 1975 is ‘accidental.’ This ‘accidental’ refers to the normal random natural variability which is established by Hansen as outside 3 standard deviations from the 1951-80 mean in the instrumental record.
One of the great difficulties of such arguments is the question of how to establish natural variability. In this respect Hansen’s argument is narrower than most. In the first place it is based on the assumption that global temperature over 1951-80 is randomly distributed around this mean, and then, secondly, on the expectation that future temperature will conform to the normal distribution of this random variable so defined. Such an analysis must exclude any knowledge we may have about trends, cycles and their causes. And so perhaps it is no surprise that when this extraordinary attribution argument was subject to the first assessment by the IPCC, it was dismissed with all the others in much the same way as the Royal Meteorological Society had dismissed Callendar’s claim 50 years earlier, viz., upon ‘strong evidence that changes of similar magnitude and rate have occurred prior to this century [FAR p245].’
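To make the shape of Hansen's exceedance criterion concrete, here is a minimal sketch in Python. The numbers are synthetic stand-ins generated for illustration only, not the GISS record, and the imposed 0.02 °C-per-step trend is an arbitrary assumption; only the form of the test (anomalies compared against 3 standard deviations of a 1951-80 baseline) follows the argument described above.

```python
import random

random.seed(0)

# Synthetic illustration only: stand-in annual global-mean anomalies (°C).
# The 1951-80 "baseline" years fluctuate randomly about zero; the later
# years carry an imposed warming trend purely for demonstration.
baseline = [random.gauss(0.0, 0.1) for _ in range(30)]                # 1951-1980
recent = [0.02 * t + random.gauss(0.0, 0.1) for t in range(1, 26)]    # later years

mean = sum(baseline) / len(baseline)
var = sum((x - mean) ** 2 for x in baseline) / (len(baseline) - 1)
sd = var ** 0.5

# Hansen-style criterion: a year counts as "non-accidental" if its anomaly
# lies more than 3 standard deviations above the baseline mean.
exceedances = [x for x in recent if x > mean + 3 * sd]
print(f"baseline sd = {sd:.3f} °C, years beyond 3 sigma: {len(exceedances)}")
```

Note that, as the surrounding text observes, the whole force of the test depends on the baseline period chosen to define "normal" variability.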
But this line of argument refused to die, and it re-appears in the Second Assessment Report that was accepted at Madrid. The new Chapter 8 cites six studies published since the First Assessment finding a significant trend, and four other studies that find no significant trend. Behind claims of non-significance is the suggestion that the data show ‘a low-frequency cyclic component that is in phase with and explains most of the observed trend.’ The First Assessment had been much more explicit on this point, suggesting that we might be experiencing a continuation of the warming out of the Little Ice Age, back up into the realm of the Medieval Warm Period [FAR 7.2.1 p203]. In the Second Assessment such a natural explanation is no longer the null hypothesis. As an ‘alternative explanation’ it is rejected upon stronger evidence of computer simulated unforced natural variability [SAR p422b pdf]. The argument in the new Chapter 8 turns on an analysis of three climate models that unanimously re-establish Hansen’s conclusion by finding an ‘accidental’ probability below 5% for the warming since the mid-1970s. But not only since the 1970s! The warming from the mid-1960s, from the 50s, and for every time interval right back to the 1890s: these are all beyond the 95th percentile in the random distribution given by all three models.
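The trend test described above, asking whether an observed warming trend exceeds the 95th percentile of trends arising from unforced variability, can be sketched as follows. Here plain white noise stands in for a model control run (a crude assumption: real studies use long coupled-model simulations with autocorrelated variability), and the "observed" series is synthetic, so this only illustrates the logic of the test, not its actual data or models.

```python
import random

random.seed(1)

def linear_trend(series):
    """Ordinary least-squares slope per time step."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Stand-in "control run": white noise as a surrogate for a model's
# unforced internal variability.
control = [random.gauss(0.0, 0.1) for _ in range(1000)]
window = 30

# Null distribution: trends of every overlapping 30-year segment of noise.
null_trends = sorted(linear_trend(control[i:i + window])
                     for i in range(len(control) - window))
p95 = null_trends[int(0.95 * len(null_trends))]

# A synthetic "observed" series with an imposed trend of 0.01 °C per step.
observed = [0.01 * t + random.gauss(0.0, 0.1) for t in range(window)]
obs_trend = linear_trend(observed)
print(f"observed trend {obs_trend:.4f} vs 95th-percentile noise trend {p95:.4f}")
```

The conclusion of such a test is only as good as the model's rendering of unforced variability, which is precisely the point the surrounding text presses.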
Of course an explanation for such an extraordinary trend could have been sought among some of the standing natural candidates for external forcing, such as solar and volcanic. In fact, it is rare in the entire field of the D&A research surveyed by the Chapter to properly consider these natural forces even as ‘alternative explanations’ for the warming. This is of concern on the very terms established in the Chapter’s own introduction: anthropogenic attribution ‘requires the consideration and elimination of all other plausible non-anthropogenic mechanisms’ [SAR p413]; and these should all be ‘tested in a rigorous way’ [p414b]. Moreover, the failure to do so is obscured in various concluding remarks—more so in the published Chapter than the earlier drafts—by an ambiguity in the term ‘natural variability.’
The repeated conclusion of the unlikelihood of attribution to ‘natural variability’ suggests the unlikelihood of attribution to any natural cause. And it is often unclear to the reader when and which external forces are included in, or excluded from, the account of natural variability. If only chance fluctuations are removed then no other external causes are eliminated at all. Indeed, the Chapter impresses upon the reader the importance of the recent discovery in the background ‘noise’ of the negative forcing due to industrial sulphate aerosols [see below]. What we realise is that this background ‘noise’ is only what you don’t know at the time, or otherwise choose not to acknowledge. Thus, our probability arguments-by-exclusion only exclude the random internal variability of global mean temperature as supposed by the modellers on the evidence they have, or otherwise as they chose to acknowledge. Perhaps we could grant them this conclusion:
…the warming trend to date is unlikely to have occurred by chance due to internally-generated variability of the climate system, although this explanation cannot be ruled out. This, however, does not preclude the possibility that a significant part of the trend is due to natural forcing factors. Trend significance provides circumstantial support for the existence of an anthropogenic component to climate change, but does not directly address the attribution issue. (SAR 18 Apr p20, 21-26; SAR 9 Oct 8.9)
So read the circulated drafts of Chapter 8. In the published version the mention of the possibility of natural forcing is removed. Also removed is the warning at the beginning of the section that ‘while none of these studies has specifically considered the attribution issue, they often draw some attribution-related conclusions, for which there is little justification.’ The road is now clear for the addition of a ‘weak attribution statement’: that some part of this trend can be ‘attributed to human influences.’ [SAR p423-4]
By the Third Assessment, global mean temperature hardly rates a mention in the D&A section (now Ch 12). When it does, there is a quick deferral to the Hockey Stick Graph (as paraded in Ch 2)—where the unprecedented nature of the warming is so visibly apparent that it obviates any trend analysis. But it helps us to reflect that already in the previous assessment we find presumed, or implicit, Hockey Sticks already shooting goals for human attribution. Whether low frequency cycles or long range trends, these are brushed aside in the uncertainty statements, or excluded from the conclusions, or obscured beyond a truncated shaft—like ‘warmest since 1400.’ It is no wonder then, that the Hockey Team were surprised by all the fuss when their graphically realised Hockey Stick appeared in the Third Assessment—however much straighter and more extended the proxy shaft and the instrumental blade.
From the Idiot Number to the Fingerprint
Now this argument upon global mean temperature could not, and never did, stand alone. The ‘idiot number’ (as one interviewed D&A Lead Author called it) has always been criticised as an oversimplified, inaccurate and biased indicator of the global climate—and anyway, so long as its variability remained minimal (<1°C), it is virtually meaningless. [see, this blog, GMT page]
Additional evidence is required. And so in the 1980s, Callendar’s (belatedly applied) warming pattern argument was revived. In fact, Hansen uses pattern analysis in his congressional testimony, right after giving his 1% chance of ‘accidental’ warming:
Moreover, if you look at the next level of detail in the global temperature change, there are clear signs of the greenhouse effect. Observational data suggests a cooling in the stratosphere while the ground is warming. The data suggest somewhat more warming over land and sea ice regions than over open ocean, more warming at high latitudes than at low latitudes, and more warming in the winter than in summer. [p40 pdf]
Neither Hansen nor anyone else ever saw a perfect fit, nor did they expect one, because other forces were at play. But with these pattern results coming on top of the negative argument on the global mean temperature trend, the overall case is very strong: ‘the greenhouse effect has been detected, and it is changing our climate now.’
During the late 1980s the analysis of warming patterns was revolutionised on the one side by advances in computer modelling and on the other by the availability of observational data. Improving time-series observations of the surface and vertical patterns of change in the thermal structure of the atmosphere were now compared with increasingly sophisticated 3-D computer models, and these ‘pattern correlation studies’ dominated the renewed attempts to find the human ‘fingerprint.’ In fact, it was this quest for the so-called human ‘fingerprint’ that would eventually bring home the bacon for the D&A section of the IPCC Second Assessment. But success did not come right away. Despite what they might claim, all the studies of the late 1980s, and not just Hansen’s work, failed to detect the human ‘fingerprint’—at least according to the IPCC First Assessment: Tom Wigley had slapped a giant FAIL over each and every one of them.
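The core statistic of these ‘pattern correlation studies’ is a centred spatial correlation between a model-predicted change field and the observed change field. The sketch below shows that statistic in its simplest form; the two six-point ‘fields’ are arbitrary illustrative numbers, not any published fingerprint, and the actual studies compute this over latitude-height or latitude-longitude grids, often tracked as a time series as the observed pattern evolves.

```python
def pattern_correlation(model, obs):
    """Centred spatial correlation between two fields of equal size,
    each flattened to a simple list of grid-point values."""
    n = len(model)
    mbar = sum(model) / n
    obar = sum(obs) / n
    cov = sum((m - mbar) * (o - obar) for m, o in zip(model, obs))
    m_norm = sum((m - mbar) ** 2 for m in model) ** 0.5
    o_norm = sum((o - obar) ** 2 for o in obs) ** 0.5
    return cov / (m_norm * o_norm)

# Illustrative fields only: an "observed" pattern that partly
# resembles the model's predicted pattern of change.
model_field = [0.5, 0.3, -0.2, -0.6, 0.1, 0.4]
obs_field = [0.4, 0.2, -0.1, -0.5, 0.0, 0.3]
r = pattern_correlation(model_field, obs_field)
print(f"pattern correlation r = {r:.2f}")
```

A high correlation says the spatial shapes match; it says nothing by itself about amplitude, which is why detection work pairs such statistics with significance tests against modelled noise.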
Hints of the breakthrough are already evident in the 1992 Supplementary Report, but it came just too late for the Earth Summit. At this time the D&A folks were already running simulations to give not just the warming impact of greenhouse gas emissions but also the localised cooling impact of the industrial sulphate aerosols being pumped into the troposphere. Now the models began predicting more subdued warming—especially over the industrial north—and these results better resembled observations. Global mean temperature simulations now dipped at the peak of the post-war boom, before sulphate pollution controls had an impact in the 1970s—a nod to the Little Cooling that was deeper in the north. More importantly, the zonal, regional and vertical pattern of the mixed human signal (CO2 + SO4) was now evident in the observations.
But this human impact is not detected when Callendar claimed to find it. Rather, our Mirror in the Sky pattern at the top of this blog—the one that had such an impact at Madrid—this gives the warming since 1963. [Santer 1995b pdf, also Karoly 1994] That is, the signal only emerges from the depth of those harsh winters—the very year of the Big Freeze—that helped kill off human attribution the first time around. The rejection of earlier attribution is not entirely about diminished data. As the IPCC moved with increasing conviction towards human attribution of the later warming, it simultaneously moved to attribute most of this earlier warming to natural causes [see e.g., TAR p699]. And so, after the false start under Wigley, casting and recasting the argument on new evidence, the IPCC arrived at a consistent vindication of Hansen’s testimony: the human signal is detectable, but only in the 2nd half of the 20th century. And this is perhaps why you still hear a lot about Hansen’s heroic (but dubious!) attribution argument, and why the pioneering work of Guy Callendar is left out in the cold. Yes Callendar was an outsider, but he also cuts a dismal figure shovelling snow off the human warming theory while it lay buried in the ditch of the Little Cooling.