Tag Archives: UAH

The House Science Climate Model Show Trial

The show is over, and it went pretty much as Alice F. predicted it would. Lamar Smith has passed his verdict on the morning’s proceedings in strangely untheatrical style:

https://twitter.com/jim_hunt/status/847123725963198464

My own mileage certainly varied from Lamar’s! Here’s a hasty summary of events via the distorting lens of Twitter:

 

A more detailed analysis of the United States’ House Committee on Science, Space and Technology’s “show trial” of climate models will follow in due course, but for now, if you so desire, you can watch the entire event on YouTube:

I’ll have to at least watch the bit where my live feed cut out as Dana Rohrabacher slowly went ballistic with Mike Mann:

https://twitter.com/jim_hunt/status/847109097103216643

Please bear in mind that correlation does not necessarily imply causation!

Rohrabacher-20170329-1

I wonder whether at this juncture Mike wishes he’d taken David Titley’s advice?

Nevertheless, given our long-running campaign against the climate science misinformation frequently printed in the Mail on Sunday, it gives us great pleasure to reprint in full the following extract from his written testimony today:

For proper context, we must consider the climate denial myth du jour that global warming has “stopped”. Like most climate denial talking points, the reality is pretty much the opposite of what is being claimed by the contrarians. All surface temperature products, including the controversial UAH satellite temperature record, show a clear long-term warming trend over the past several decades:

Mann-ExhibitA

We have now broken the all-time global temperature record for three consecutive years and a number of published articles have convincingly demonstrated that global warming has continued unabated when one properly accounts for the vagaries of natural short-term climate fluctuations. A prominent such study was published by Tom Karl and colleagues in 2015 in the leading journal Science. The article was widely viewed as the final nail in the “globe has stopped warming” talking point’s coffin.

Last month, opinion writer David Rose of the British tabloid the Daily Mail, known for his serial misrepresentations of climate change and his serial attacks on climate scientists, published a commentary online attacking Tom Karl, accusing him of having “manipulated global warming data” in the 2015 Karl et al article. This fake news story was built entirely on an interview with a single disgruntled former NOAA employee, John Bates, who had been demoted from a supervisory position at NOAA for his inability to work well with others.

Bates’ allegations were also published on the blog of climate science denier Judith Curry (I use the term carefully—reserving it for those who deny the most basic findings of the scientific community, which includes the fact that human activity is substantially or entirely responsible for the large-scale warming we have seen over the past century — something Judith Curry disputes). That blog post and the Daily Mail story have now been thoroughly debunked by the actual scientific community. The Daily Mail claim that data in the Karl et al. Science article had been manipulated was not supported by Bates. When the scientific community pushed back on the untenable “data manipulation” claim, noting that other groups of scientists had independently confirmed Karl et al’s findings, Bates clarified that the real problem was that data had not been properly archived and that the paper was rushed to publication. These claims too quickly fell apart.

Though Bates claimed that the data from the Karl et al study was “not in machine-readable form”, independent scientist Zeke Hausfather, lead author of a study that accessed the data and confirmed its validity, wrote in a commentary “…for the life of me I can’t figure out what that means. My computer can read it fine, and it’s the same format that other groups use to present their data.” As for the claim that the paper was rushed to publication, Editor-in-chief of Science Jeremy Berg says, “With regard to the ‘rush’ to publish, as of 2013, the median time from submission to online publication by Science was 109 days, or less than four months. The article by Karl et al. underwent handling and review for almost six months. Any suggestion that the review of this paper was ‘rushed’ is baseless and without merit. Science stands behind its handling of this paper, which underwent particularly rigorous peer review.”

Shortly after the Daily Mail article went live, a video attacking Karl (and NOAA and even NASA for good measure) was posted by the Wall Street Journal. Within hours, the Daily Mail story spread like a virus through the right-wing blogosphere, appearing on numerous right-wing websites and conservative news sites. It didn’t take long for the entire Murdoch media empire in the U.S., U.K. and elsewhere to join in, with the execrable Fox News for example alleging Tom Karl had “cooked” climate data and, with no sense of irony, for political reasons.

Rep. Lamar Smith (R-TX), chair of this committee, has a history of launching attacks on climate science and climate scientists. He quickly posted a press release praising the Daily Mail article, placing it on the science committee website, and falsely alleging that government scientists had “falsified data”. Smith, it turns out, had been planning a congressional hearing timed to happen just days after this latest dustup, intended to call into question the basis for the EPA regulating carbon emissions. His accusations against Karl and NOAA of tampering with climate data were used in that hearing to claim that the entire case for concern over climate change was now undermined.

That’s pretty much the way we see things too, Mike!

 

[Edit – March 31st]

In the aftermath of Wednesday’s hearing, the accusations are flying in all directions. By way of example:

https://twitter.com/jim_hunt/status/847443788880429057

No clarification has yet been forthcoming from Dr. Pielke.

The denialosphere is of course now spinning like crazy attempting to pin something, anything, on Michael Mann. Over at Climate Depot Marc Morano assures his loyal readers that:

Testifying before Congress, climate scientist Michael Mann denies any affiliation or association to the Climate Accountability Institute despite his apparent membership on the Institute’s Council of Advisors.

Whilst correctly quoting Dr. Mann as saying:

I can provide – I’ve submitted my CV you can see who I’m associated with and who I am not.

Here’s the video Marc uses to support his case:

Meanwhile over on Twitter:

 

[Edit – April 1st]

Today is All Fools’ Day, but this is no joke. Last night Judith Curry posted an article on her “Climate Etc.” blog entitled “‘Deniers,’ lies and politics“. Here is an extract from it:

Mann ‘denies’ being associated with the Climate Accountability Institute [link to above Marc Morano video]. Julie Kelly writes in an article Michael Mann Embarrasses Himself Before Congress:

“Turns out Mann appears to be a bit of a denier himself. Under questioning, Mann denied being involved with the Climate Accountability Institute even though he is featured on its website as a board member. CAI is one of the groups pushing a scorched-earth approach to climate deniers, urging lawmakers to employ the RICO statute against fossil-fuel corporations. When asked directly if he was either affiliated or associated with CAI, Mann answered “no.” [JC note: Mann also lists this affiliation on his CV]

Some additional ‘porkies’ are highlighted in an article by James Delingpole.

Now the first thing to note is that I’d already explained the context of Mr. Mann’s “interrogation” by Rep. Clay Higgins on Judith’s blog several times:

At the risk of repeating myself Mann said, and I quote:

“I’ve submitted my CV. You can see who I’m ‘associated’ with”

His CV states, quoted by McIntyre:

McIntyreMannCV

Why on Earth Judith chose to repeat the “CAI” allegation is beyond me.

Secondly, Prof. Mann is NOT featured on the CAI website as a board member. He is instead listed as a member of their “Council of Advisors”.

Thirdly, quoting James Delingpole as a source of reliable information about anything “climate change” related is also beyond me. Needless to say Mr. Delingpole also repeats the CAI nonsense, whilst simultaneously plagiarising our long standing usage of the term “Porky pie“!

All of which brings me on to my next point. In the video clip above Rep. Higgins can be heard to say:

These two organisations [i.e the Union of Concerned Scientists & the Climate Accountability Institute], are they connected directly with organised efforts to prosecute man influenced climate sceptics via RICO statutes?

to which Dr. Mann replied:

The way you’ve phrased it, I would find it extremely surprising if what you said was true.

Higgins-20170329-1

Now please skip to the 1 hour 31:33 mark in the video of the full hearing to discover what Marc Morano left out. Rep. Higgins asks Dr. Mann:

Would you be able to at some future date provide to this committee evidence of your lack of association with the organisation Union of Concerned Scientists and lack of your association with the organisation called Climate Accountability Institute? Can you provide that documentation to this committee Sir?

This is, of course, a “when did you stop beating your wife” sort of a question. How on Earth do you prove a “lack of association with an organisation”? Supply a video of your entire life? Dr. Mann responded less pedantically:

You haven’t defined what “association” even means here, but it’s all in my CV which has already been provided to Committee.

So what on Earth are Rep. Higgins and ex-Prof. Curry on about with all this “RICO” business? With thanks to Nick Stokes on Judith’s blog, the document he refers to seems to be the only evidence for the insinuations:

It turns out that what the congressman was probably referring to was a workshop they mounted in 2012 (not attended by Mann), which explored the RICO civil lawsuit mounted against tobacco companies.

It does mention for example “the RICO case against the tobacco companies” but it never mentions anything that might conceivably be (mis)interpreted as “pushing a scorched-earth approach to climate deniers”.

That being the case, why on Earth do you suppose Judith Curry chose to mention that phrase on her blog last night and why did Clay Higgins choose to broach the subject on Wednesday?

 

[Edit – April 2nd]

Perhaps this really is an April Fools’ joke? Over on Twitter Stephen McIntyre continues to make my case for me. Take a look:

https://twitter.com/jim_hunt/status/848397908802248704

And he’s not the only one! Alice F.’s sixth sense tells her that another Storify slideshow will be required to do this saga justice!

Should Climate Scientists Boycott Congressional Hearings?

In answer to the question posed in our title for today, retired Rear Admiral David Titley certainly seems to think so. According to his article for the Washington Post‘s “Capital Weather Gang”:

Unless you’ve been living under a (melting) ice shelf recently, you know by now the U.S. House of Representatives Committee on Science Space and Technology is holding a climate science hearing Wednesday to probe the “assumptions, policy implications and scientific method.”

This hearing, whose witnesses consist of one mainstream climate scientist and three other witnesses whose views are very much in the minority, is remarkably similar in structure and scope to the climate hearing Sen. Ted Cruz (R-Tex.) conducted in December 2015 titled “Data or Dogma”? So similar that two of the five witnesses from the Cruz hearing will also testify on Wednesday.

In the past, the science community has participated in these hearings, even though questioning the basics of climate change is akin to holding a hearing to examine whether Earth orbits the sun.

Enough!

As our regular reader(s) will be aware, we have been characterising today’s hearing as a “show trial“, and David Titley agrees:

For years, these hearings have been designed not to provide new information or different perspectives to members of Congress but, rather, to perpetuate the myth that there is a substantive and serious debate within the science community regarding the fundamental causes or existence of human-caused climate change.

Quite so David, but next comes a more controversial message seemingly aimed at his Penn State colleague Michael Mann, who is due to appear before the House Committee on Science, Space and Technology later today:

We should no longer be duped into playing along with this strategy.

Despite sending many skilled science communicators to testify at the hearings over the years and even when scoring tactical victories, the strategic effect of participating at these hearings has been to sustain the perception of false equivalence, a perception only exaggerated by the majority’s ability to select a grossly disproportionate number of witnesses far removed from mainstream science (it’s not coincidence that Judith Curry, professor emeritus, Georgia Institute of Technology, and John Christy, professor of atmospheric sciences, University of Alabama at Huntsville, are called upon so often by the Republicans).

A better response would be to simply boycott future hearings of this kind and to call out these hearings for what they are: a tactic to distract the public from a serious policy debate over how to manage both the short- and long-term risks of climate change. These hearings are designed to provide theatrics, question knowledge that has been well understood for more than 150 years, and leave the public with a false sense that significant uncertainty and contention exist within the science community on this issue.

“Boycott future hearings” then, but perhaps not today’s? We will discover what Michael Mann has to say later today, assuming he turns up! Ex Rear Admiral Titley does have experience of similar “show trials”. Here is a recording of what he said to Senator Ted Cruz’s so-called “Data or Dogma: Promoting Open Inquiry in the Debate over the Magnitude of Human Impact on Earth’s Climate” hearing:

According to David Titley’s written testimony:

A combination of multiple, independent sources of data provide the basis to the latest conclusion from the Intergovernmental Panel on Climate Change:

“Warming of the climate system is unequivocal, and since the 1950’s, many of the observed changes are unprecedented over decades to millennia…
Human influence on the climate system is clear. This is evident from the increasing greenhouse gas concentrations in the atmosphere, positive radiative forcing, observed warming, and understanding of the climate system.”

We should not be surprised; these conclusions rest on science discovered in the 19th century by Fourier, Tyndall, Arrhenius and their colleagues and validated by many scientists in the subsequent decades.

It is worth noting that private industry independently arrived at these same conclusions decades ago. Recently released documents show that in 1980 Exxon researchers projected the impacts on global temperature due to increasing greenhouse gasses with astonishing accuracy.

From David Titley’s verbal testimony:

The more we looked at the data, the more we saw that not only were the air temperatures coming up, but the water temperatures were coming up, the sea level was coming up, the glaciers were retreating, the oceans were acidifying. When you put all those independent lines of evidence together, coupled with a theory that was over 100 years old that had stood the test of time, it kinda made sense.

Does it mean we know everything? No, but does it mean we know enough that we should be considering this and acting? Yes, it’s called risk management and that’s what we were doing.

Apparently David thinks Ted Cruz wasn’t listening particularly carefully in December 2015, and that Lamar Smith won’t be listening carefully to Prof. Mann today.

A Report on the State of the Arctic in 2017

Our title for today is borrowed then modified from the title of a Global Warming Policy Foundation report entitled “The State of the Climate in 2016”. The associated GWPF press release assures us that:

A report on the State of the Climate in 2016 which is based exclusively on observations rather than climate models is published today.

Compiled by Dr Ole Humlum, Professor of Physical Geography at the University Centre in Svalbard (Norway), the new climate survey is in sharp contrast to the habitual alarmism of other reports that are mainly based on computer modelling and climate predictions.

Prof Humlum said: “There is little doubt that we are living in a warm period. However, there is also little doubt that current climate change is not abnormal and not outside the range of natural variations that might be expected.”

However, it seems as though the sharp contrast to other reports is that the GWPF’s effort is evidently hot off their porky pie production line. By way of example, Prof. Humlum’s “white paper” is not “based exclusively on observations rather than climate models”, nor is it “The World’s first” such “State of the Climate Survey”. As Dr. Roy Spencer of the University of Alabama in Huntsville pointed out on Watts Up With That, of all places:

Ummm… I believe the Bulletin of the AMS (BAMS) annual State of the Climate report is also observation-based…been around many years.

Meanwhile on Twitter Victor Venema of the University of Bonn pointed out that:

and Mark McCarthy of the UK Met Office added that:

All in all there are several “alternative facts” in just the headline and opening paragraph of the GWPF’s press release, which doesn’t augur well for the contents of the report itself!

It’s no coincidence (IMHO!) that a day later the United States’ House Committee on Science, Space and Technology announced their planned “show trial” hearing on March 29th, entitled “Climate Science: Assumptions, Policy Implications, and the Scientific Method“:

Date: Wednesday, March 29, 2017 – 10:00am
Location: 2318 Rayburn House Office Building

Dr. Judith Curry

President, Climate Forecast Applications Network; Professor Emeritus, Georgia Institute of Technology

Dr. John Christy

Professor and Director, Earth System Science Center, NSSTC, University of Alabama at Huntsville; State Climatologist, Alabama

Dr. Michael Mann

Professor, Department of Meteorology and Atmospheric Science, Pennsylvania State University

Dr. Roger Pielke Jr.

Professor, Environmental Studies Department, University of Colorado

John Christy doesn’t seem to have a Twitter account, but the other three “expert witnesses” announced their involvement, as revealed in this slideshow of learned (and not so learned!) comments on Twitter:

 

You may have noticed that in response to the GWPF’s propaganda I pointed them at a “State of the Arctic in 2017” report of my own devising which is in actual fact “based exclusively on observations rather than climate models” and looks like this:

NSIDC-Max-2017

NASA Worldview “false-color” image of the Bering Sea on March 22nd 2017, derived from the MODIS sensor on the Terra satellite
NASA Worldview “false-color” image of the Kara Sea on March 22nd 2017, derived from the MODIS sensor on the Terra satellite
Synthetic aperture radar image of the Wandel Sea on March 21st 2017, from the ESA Sentinel 1B satellite

We feel sure that Lamar Smith and the House Committee on Science, Space and Technology won’t comprehend the significance of those observations, but will nonetheless be pleased to see the GWPF’s report become public knowledge shortly before their planned hearing next week.

We also feel sure they were pleased to view the contents of another recent “white paper” published under the GWPF banner. The author was ex Professor Judith Curry, and the title was “Climate Models for the Layman“. Lamar Smith et al. certainly seem to qualify as laymen, and Judith’s conclusion that:

There is growing evidence that climate models are running too hot and that climate sensitivity to carbon dioxide is at the lower end of the range provided by the IPCC.

will no doubt be grist to their climate science bashing mill next Wednesday. Unfortunately that conclusion is yet another “alternative fact” according to the non-laymen.

This report, however, does little to help public understanding; well, unless the goal is to confuse public understanding of climate models so as to undermine our ability to make informed decisions. If this is the goal, this report might be quite effective.

That certainly seems to be the goal of the assorted parties involved, and consequently we cannot help but wonder if the David and Judy Show will put on another performance this coming Sunday morning? Paraphrasing William Shakespeare:

Friends, Romans, countrymen, lend me your ears;
Lamar Smith comes to bury Michael Mann, not to praise him

Shock News! 19 years without warming?

In the wake of the 2015/16 El Niño, recent weeks have seen the denialoblogosphere inundated with assorted attempts to proclaim yet again that there have been “19 years without warming”. In another of his intermittent articles, Great White Con guest author Bill the Frog once again proves beyond any shadow of a doubt that all the would-be emperors are in actual fact wearing no clothes. Without further preamble:

Ho-hum! So here we go again. The claim has just been made that the UAH data now shows no warming for about 19 years. A couple of recently Tweeted comments on Snow White’s Great White Con blog read as follows…

For the nth time, nearly 19 years with no significant warming. Not at all what was predicted!!

and

Just calculated UAH using jan: R^2 = 0.019 It’s PROVABLE NO DISCERNIBLE TREND. You’re talking complete bollocks!!!

Before one can properly examine this “no warming in 19 years” claim, not only is it necessary to establish the actual dataset upon which the claim is based, but one must also establish the actual period in question. A reasonable starting point within the various UAH datasets would be the Lower Troposphere product (TLT), as most of us do live in the lower troposphere – particularly at the very bottom of the lower troposphere.

However, one also has to consider which version of the TLT product is being considered. The formally released variant is Version 5.6, but Version 6 Beta 5 is the one that self-styled “climate change sceptics” have instantly and uncritically taken to their hearts.

The formally released variant (Ver 5.6) is the one currently used by NOAA’s National Centers for Environmental Information (NCEI) on their Microwave Sounding Unit climate monitoring page. As can be seen from the chart and the partial table (sorted on UAH values), the warmest seven years in the data include each of the four most recent years…

image002

 

The above chart clearly doesn’t fit the bill for the “no warming in 19 years” claim, so instead we must look to the Version 6 Beta 5 data. Before doing so, a comparison of the two versions can be quite revealing…

image003

 

In terms of its impact on trend over the last 19 years or so, an obvious feature of the version change is that it boosts global temperatures in 1998 by just over 0.06° C, whilst lowering current temperatures by just over 0.08° C. Therefore, the net effect is to raise 1998 temperatures by about 0.15° C relative to today’s values. That is why some people refer to a mere 0.02° C difference between 1998 and 2016, whereas NOAA NCEI shows this as 0.17° C.

(It is not difficult to imagine the hue and cry that would have gone up if one of the other datasets, such as NASA Gistemp, NOAA’s own Global Land & Ocean anomalies or the UK’s HadCRUT, had had a similar adjustment, but in the opposite direction.)

Anyway, it seemed pretty obvious which variant would have been used, so the task was merely to establish the exact start point. It transpires that if one ignores all the data before January 1998, and then superimposes a linear trend line, one does indeed get an R² value of ~ 0.019, as mentioned in one of the Tweets above. (The meaning of an R² value, as well as its use and misuse, will be discussed later in this piece.) In the interim, here is the chart…

image005

 

Those with some skill in dealing with such charts will recognise the gradient in the equation of the linear trend line,

Y = 0.0004X + 0.1119

As the data series is in terms of monthly anomalies, this value needs to be multiplied by 120 in order to convert it to the more familiar decadal trend form of +0.048° C/decade. Whilst this is certainly less than the trend using the full set of data, namely +0.123° C/decade, it is most assuredly non-zero.
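For anyone who would like to check that arithmetic without a spreadsheet, here is a minimal Python sketch of the same calculation. The anomaly series below is a synthetic placeholder of my own; substitute the real UAH monthly values (January 1998 onwards) to reproduce the figures quoted above.

```python
import numpy as np
from scipy.stats import linregress

# Placeholder series: 229 months (Jan 1998 - Jan 2017) of a small warming trend
# plus month-to-month noise. Replace with the real UAH TLT anomalies.
rng = np.random.default_rng(0)
months = np.arange(229)
anomalies = 0.0004 * months + rng.normal(0.0, 0.25, months.size)

fit = linregress(months, anomalies)
print(f"gradient : {fit.slope:+.4f} deg C per month")
print(f"trend    : {fit.slope * 120:+.3f} deg C per decade")   # 120 months per decade
print(f"R squared: {fit.rvalue ** 2:.4f}")
```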

The full dataset (shown below) gives a far more complete picture, as well as demonstrating just how egregious the selection of the start point had been. Additionally, this also highlights just how much of an outlier the 1998 spike represents. Compare the size of the upward 1998 spike with the less impressive downward excursion shown around 1991/92. This downward excursion was caused by the spectacular Mt Pinatubo eruption(s) in June 1991. The Pinatubo eruption(s) did indeed have a temporary effect upon the planet’s energy imbalance, owing, in no small measure, to the increase in planetary albedo caused by the ~ 20 million tons of sulphur dioxide blasted into the stratosphere. On the other hand, the 1997/98 El Nino was more of an energy-redistribution between the ocean and the atmosphere. (The distinction between these types of event is sadly lost on some.)

image007

 

The approach whereby inconvenient chunks of a dataset are conveniently ignored is a cherry-picking technique known as “end point selection”. It can also be used to actually identify those areas of the data which are most in line with the “climate change is a hoax” narrative. Once again, it is extremely easy to identify such regions of the data. A simple technique is to calculate the trend from every single starting date up until the present, and to then only focus on those which match the narrative. Doing this for the above dataset gives the following chart…

image009
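The chart above is straightforward to reproduce; here is a rough Python sketch of the loop it implies, again using a stand-in anomaly series rather than the real UAH data:

```python
import numpy as np

# Stand-in monthly anomalies (oldest first); swap in the real UAH TLT series.
rng = np.random.default_rng(1)
anomalies = 0.001 * np.arange(456) + rng.normal(0.0, 0.2, 456)
months = np.arange(anomalies.size)

# Least-squares trend (deg C per decade) from every possible start month to the end.
trends = [
    np.polyfit(months[start:], anomalies[start:], 1)[0] * 120
    for start in range(anomalies.size - 24)   # ignore the very short, unstable trends
]
```

With the real data, plotting those trends against their start months should show the stabilisation towards roughly +0.12° C/decade discussed below.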

 

It can clearly be seen that the trend value becomes increasingly stable (at ~ +0.12° C/decade) as one utilises more and more of the available data. The trend also clearly reaches a low point – albeit briefly – at 1998. This is entirely due to the impact that the massive 1997/98 El Nino had on global temperatures – in particular on satellite derived temperatures.

The observant reader will have undoubtedly noticed that the above chart ends at ~ 2006. The reason is simply because trend measurements become increasingly unstable as the length of the measurement period shortens. Not only do “short” trend values become extreme and meaningless, they also serve to compress the scale in areas of genuine interest. To demonstrate, here is the remainder of the data. Note the difference in the scale of the Y-axis…

image011

 

Now, what about this thing referred to as the R² value? It is also known as the Coefficient of Determination, and can be a trap for the unwary. First of all, what does it actually mean? One description reads as follows…

It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.
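For readers who prefer symbols to words, that boils down to comparing the residuals left over after fitting with the total variation in the data. Using the usual notation, with y the observed values, ŷ the fitted values and ȳ the mean of the observations:

R² = 1 − Σ(y − ŷ)² / Σ(y − ȳ)²

A value near 1 means the fitted line accounts for almost all of the variation; a value near 0 means it accounts for almost none of it.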

In their online statistics course, Penn State feel obliged to offer the following warning…

Unfortunately, the coefficient of determination r² and the correlation coefficient r have to be the most often misused and misunderstood measures in the field of statistics.

The coefficient of determination r² and the correlation coefficient r quantify the strength of a linear relationship. It is possible that r² = 0% and r = 0, suggesting there is no linear relation between x and y, and yet a perfect curved (or “curvilinear” relationship) exists.

Here is an example of what that means. Consider a data series in which the dependent variable comprises two distinct signals. The first of these is a pure Sine-wave and the second is a slowly rising perfectly linear function.

The formula I used to create such a beast was simply… Y = SinX + 0.00005X (where x is simply angular degrees)
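For those without a spreadsheet to hand, here is a short Python sketch of the same construction. The 21-cycle span matches the NB further down; the exact range used for the chart below is an assumption on my part.

```python
import numpy as np

x = np.arange(0.0, 360.0 * 21 + 1.0, 1.0)      # 21 complete cycles, in degrees
y = np.sin(np.radians(x)) + 0.00005 * x        # the "sine + ramp" test function

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r_squared = 1.0 - residuals.var() / y.var()
print(f"fitted gradient: {slope:.6f} per degree (the ramp itself is 0.000050)")
print(f"R squared      : {r_squared:.4f}")

# Removing the known sine component recovers the ramp exactly, as noted later on.
print(np.polyfit(x, y - np.sin(np.radians(x)), 1)[0])
```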

The chart, with its accompanying R² value, is shown below, and, coincidentally, does indeed bear quite a striking resemblance to the Keeling Curve

image013

 

The function displayed is 100% predictable in its nature, and therefore it would be easy to project this further along the X-axis with 100% certainty of the prediction. If one were to select ANY point on the curve, moving one complete cycle to the right would result in the Y-value incrementing by EXACTLY +0.018 (360 degrees x 0.00005 = 0.018 per cycle). A similar single-cycle displacement to the left would result in a -0.018 change in the Y-value. In other words, the pattern is entirely deterministic. Despite this, the R² value is only 0.0139 – considerably less than the value of 0.0196 derived from the UAH data.

(NB There is a subtlety in the gradient which might require explanation. Although the generating formula clearly adds 0.00005 for each degree, the graph shows this as 0.00004 – or, more accurately as 0.000038. The reason is because the Sine function itself introduces a slight negative bias. Even after 21 full cycles, this bias is still -0.000012. Intuitively, one can visualise this as being due to the fact that the Sine function is positive over the 0 – 180 degree range, but then goes negative for the second half of each cycle. As the number of complete cycles rises, this bias tends towards zero. However, when one is working with such functions in earnest, this “seasonal” bias would have to be removed.)

Summing this up, trying to fit a linear trend to the simple (sine + ramp) function is certain to produce an extremely low R² value, as, except for the two crossover points each cycle, the difference between the actual value and the trend leaves large unaccounted-for residuals. To be in any way surprised by such an outcome is roughly akin to being surprised that London has more daylight hours in June than it does in December.

So what is the paradox with the low R² value for the UAH data? There isn’t one. Just as it would be daft to expect a linear trend line to map onto the future outcomes on the simple (sine + ramp) graph above, it would be equally daft to expect a linear trend line to be in any way an accurate predictor for the UAH monthly anomalies over the months and years to come.

Nevertheless, even though a linear trend produces such a low R² value when applied to the (sine + ramp), how does it fare as regards calculating the actual trend? Once the “seasonal” bias introduced by the presence of the sine function is removed, the calculated trend is EXACTLY equal to the actual trend embedded in the generating function.

Anyway, even if the period from January 1998 until January 2017 did indeed comprise the entirety of the available data, why would anyone try to use a linear trend line? If someone genuinely believed that the mere fact of a relatively higher R² value provided a better predictor, then surely that person might play around with some of the other types of trend line instantly available to anyone with Excel? (Indeed, any application possessing even rudimentary charting features would be suitable.) For example, a far better fit to the data is obtained by using a 5th order polynomial trend line, as shown below…

image015
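In Python, the equivalent of Excel’s 5th order polynomial trend line is a one-liner. A minimal sketch, again assuming the monthly anomalies have been loaded into an array (placeholder values are used here so the snippet runs on its own):

```python
import numpy as np

# "anomalies" should hold the monthly UAH TLT values from January 1998 onwards;
# random placeholder values are used here purely so the snippet is self-contained.
anomalies = np.random.default_rng(2).normal(0.0, 0.25, 229)
months = np.arange(anomalies.size)

coeffs = np.polyfit(months, anomalies, 5)      # 5th order polynomial "trend line"
fitted = np.polyval(coeffs, months)
```

As the next paragraph stresses, this is curve fitting rather than physics: a higher-order fit will always hug the data more closely, whatever the data happen to be.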

 

Although this is something of a “curve fitting” exercise, it seems difficult to argue with the case that this polynomial trend line looks a far better “fit” to the data than the simplistic linear trend. So, when someone strenuously claims that a linear trend line applied to a noisy dataset, producing an R² value of 0.0196, means there has been no warming, then there are some questions which need to be asked and adequately answered:

Does that person genuinely not know what they’re talking about, or is their intention to deliberately mislead? (NB It should be obvious that these two options are not mutually exclusive, and therefore both could apply.)

Given that measurements of surface temperature, oceanic heat content, sea level rise, glacial mass balance, ice-sheet mass balance, sea ice coverage and multiple studies from the field of phenology all confirm global warming, why, one wonders, would somebody concentrate on an egregiously selected subset of a single carefully selected variant of a single carefully selected dataset?

How to Make a Complete RSS of Yourself (With Sausages)

In the wake of the recent announcement from NASA’s Goddard Institute for Space Studies that global surface temperatures in February 2016 were an extraordinary 1.35 °C above the 1951-1980 baseline we bring you the third in our series of occasional guest posts.

Today’s article is a pre-publication draft prepared by Bill the Frog, who has also authorised me to reveal to the world the sordid truth that he is in actual fact the spawn of a “consummated experiment” conducted between Kermit the Frog and Miss Piggy many moons ago. Please ensure that you have a Microsoft Excel compatible spreadsheet close at hand, and then read on below the fold.


In a cleverly orchestrated move immaculately timed to coincide with the build up to the CoP21 talks in Paris, Christopher Monckton, the 3rd Viscount Monckton of Brenchley, announced the following startling news on the climate change denial site, Climate Depot …

Morano1

Upon viewing Mr Monckton’s article, any attentive reader could be forgiven for having an overwhelming feeling of déjà vu. The sensation would be entirely understandable, as this was merely the latest missive in a long-standing series of such “revelations”, stretching back to at least December 2013. In fact, there has even been a recent happy addition to the family, as we learned in January 2016 that …

Morano2

The primary eye-candy in Mr Monckton’s November article was undoubtedly the following diagram …

https://lh3.googleusercontent.com/-xAdiohdkcU4/VjpSKNYP9SI/AAAAAAACa8Q/639el4qIzpM/s720-Ic42/monckton1.png

Fig 1: Copied from Nov 2015 article in Climate Depot

It is clear that Mr Monckton has the ability to keep churning out virtually identical articles, and this is a skill very reminiscent of the way a butcher can keep churning out virtually identical sausages. Whilst on the subject of sausages, the famous 19th Century Prussian statesman, Otto von Bismarck, once described legislative procedures in a memorably pithy fashion, namely that … “Laws are like sausages, it is better not to see them being made”.

One must suspect that those who are eager and willing to accept Mr Monckton’s arguments at face value are somehow suffused with a similar kind of “don’t need to know, don’t want to know” mentality. However, some of us are both able and willing to scratch a little way beneath the skin of the sausage. On examining one of Mr Monckton’s prize sausages, it takes all of about 2 seconds to work out what has been done, and about two minutes to reproduce it on a spreadsheet. That simple action is all that is needed to see how the appropriate start date for his “pause” automatically pops out of the data.

However, enough of the hors d’oeuvres, it’s time to see how those sausages get made. Let’s immediately get to the meat of the matter (pun intended) by demonstrating precisely how Mr Monckton arrives at his “no global warming since …” date. The technique is incredibly straightforward, and can be done by anyone with even rudimentary spreadsheet skills.

One basically uses the spreadsheet’s built-in features, such as the SLOPE function in Excel, to calculate the rate of change of monthly temperature over a selected time period. The appropriate command would initially be inserted on the same row as the first month of data, with its range set to run to the latest date available. This would be repeated (using a feature such as Auto Fill) on each subsequent row, down as far as the penultimate month. On each row, the start date therefore advances by one month, but the end date remains fixed. (As the SLOPE function is measuring rate of change, there must be at least two items in the range, which is why the penultimate month would also be the latest possible start date.)
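For anyone allergic to spreadsheets, the same recipe takes only a few lines of Python. This is a sketch of the procedure just described, not Mr Monckton’s own workbook, and it uses a placeholder anomaly series; feed it the real RSS TLT monthly values to reproduce his “no warming since …” date.

```python
import numpy as np

# Placeholder monthly anomalies, oldest first; replace with the real RSS TLT series.
rng = np.random.default_rng(3)
anomalies = 0.001 * np.arange(425) + rng.normal(0.0, 0.2, 425)
months = np.arange(anomalies.size)

# Equivalent of putting SLOPE() on every row: the trend from each start month to
# the fixed end date, expressed in deg C per decade.
gradients = [
    np.polyfit(months[start:], anomalies[start:], 1)[0] * 120
    for start in range(anomalies.size - 1)        # at least two points needed
]

# The "no warming since ..." date corresponds to the earliest start month whose
# gradient dips below zero (see below).
first_negative = next((i for i, g in enumerate(gradients) if g < 0), None)
print(first_negative)
```

Re-running the same loop with later and later end dates generates the family of curves shown in Figs 3a to 4.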

That might sound slightly complex, but if one then displays the results graphically, it becomes obvious what is happening, as shown below…

Fig 2: Variation in temperature gradient. End date June 2014

On the above chart (Fig 2), it can clearly be seen that, after about 13 or 14 years of stability, the rate of change of temperature starts to oscillate wildly as one looks further to the right. Mr Monckton’s approach has been simply to note the earliest transition point, and then declare that there has been no warming since that date. One could, if being generous, describe this as a somewhat naïve interpretation, although others may feel that a stronger adjective would be more appropriate. However, given his classical education, it is difficult to say why he does not seem to comprehend the difference between veracity and verisimilitude. (The latter being the situation when something merely has the appearance of being true – as opposed to actually being the real thing.)

Fig 2 is made up from 425 discrete gradient values, each generated (in this case) using Excel’s SLOPE function. Of these, 122 are indeed below the horizontal axis, and can therefore be viewed as demonstrating a negative (i.e. cooling) trend. However, that also means that just over 71% show a trend that is positive. Indeed, if one performs a simple arithmetic average across all 425 data points, the value thus obtained is 0.148 degrees Celsius per decade.

(In the spirit of honesty and openness, it must of course be pointed out that the aggregated warming trend of 0.148 degrees Celsius/decade thus obtained has just about the same level of irrelevance as Mr Monckton’s “no warming since mm/yy” claim. Neither has any real physical meaning, as, once one gets closer to the end date(s), the values can swing wildly from one month to the next. In Fig 2, the sign of the temperature trend changes 8 times from 1996 onwards. A similar chart created at the time of his December 2013 article would have had no fewer than 13 sign changes over a similar period. This is because the period in question is too short for the warming signal to unequivocally emerge from the noise.)

As one adds more and more data, a family of curves gradually builds up, as shown in Fig 3a below.

Fig 3a: Family of curves showing how end-date also affects temperature gradient

It should be clear from Fig 3a that each temperature gradient curve migrates upwards (i.e. more warming) as each additional 6-month block of data comes in. This is only to be expected, as the impact of isolated events – such as the temperature spike created by the 1997/98 El Niño – gradually wanes as it gets diluted by the addition of further data. The shaded area in Fig 3a is expanded below as Fig 3b in order to make this effect more obvious.

Fig 3b: Expanded view of curve family

By the time we are looking at an end date of December 2015, the relevant curve now consists of 443 discrete values, of which just 39, or 9%, are in negative territory. Even if one only considers values to the right of the initial transition point, a full 82% of these are positive. The quality of Mr Monckton’s prize-winning sausages is therefore revealed as being dubious in the extreme. (The curve has not been displayed, but the addition of a single extra month – January 2016 – further reduces the number of data points below the zero baseline to just 26, or 6%.) To anyone tracking this, there was only ever going to be one outcome: eventually, the curve was going to end up above the zero baseline. The ongoing El Niño conditions have merely served to hasten the inevitable.

With the release of the February 2016 data from RSS, this is precisely what happened. We can now add a fifth curve using the most up-to-date figures available at the time of writing. This is shown below as Fig 4.

Fig 4: Further expansion of curve families incorporating latest available data (Feb 2016)

As soon as the latest (Feb 2016) data is added, the fifth member of the curve family (in Fig 4) no longer intersects the horizontal axis – anywhere. When this happens, all of Mr Monckton’s various sausages reach their collective expiry date, and his entire fantasy of “no global warming since mm/yy” simply evaporates into thin air.

Interestingly, although Mr Monckton chooses to restrict his “analysis” to only the Lower Troposphere Temperatures produced by Remote Sensing Systems (RSS), another TLT dataset is available from the University of Alabama in Huntsville (UAH). Now, this omission seems perplexing, as Mr Monckton took time to emphasise the reliability of the satellite record in his article dated May 2014.

In his Technical Note to this article, Mr Monckton tells us…

The satellite datasets are based on measurements made by the most accurate thermometers available – platinum resistance thermometers, which not only measure temperature at various altitudes above the Earth’s surface via microwave sounding units but also constantly calibrate themselves by measuring via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years.

Now, that certainly makes it all sound very easy. It’s roughly the metaphorical equivalent of the entire planet being told to drop its trousers and bend over, as the largest nurse imaginable approaches, all the while gleefully clutching at a shiny platinum rectal thermometer. Perhaps a more balanced perspective can be gleaned by reading what RSS themselves have to say about the difficulties involved in Brightness Temperature measurement.

When one looks at Mr Monckton’s opening sentence referring to “the most accurate thermometers available”, one would certainly be forgiven for thinking that there must perforce be excellent agreement between the RSS and UAH datasets. This meme, that the trends displayed by the RSS and UAH datasets are in excellent agreement, is one that appears to be very pervasive amongst those who regard themselves as climate change sceptics. Sadly, few of these self-styled sceptics seem to understand the meaning behind the motto “Nullius in verba”.

Tellingly, this “RSS and UAH are in close agreement” meme is in stark contrast to the views of the people who actually do that work for a living.

Carl Mears (of RSS) wrote an article back in September 2014 discussing the reality – or otherwise – of the so-called “Pause”. In the section of this article dealing with measurement errors, he wrote that …

 A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!) [my emphasis]

The views of Roy Spencer from UAH concerning the agreement (or, more accurately, the disagreement) between the two satellite datasets must also be considered. Way back in July 2011, Dr Spencer wrote

… my UAH cohort and boss John Christy, who does the detailed matching between satellites, is pretty convinced that the RSS data is undergoing spurious cooling because RSS is still using the old NOAA-15 satellite which has a decaying orbit, to which they are then applying a diurnal cycle drift correction based upon a climate model, which does not quite match reality.

So there we are, Carl Mears and Roy Spencer, who both work independently on satellite data, have views that are somewhat at odds with those of Mr Monckton when it comes to agreement between the satellite datasets. Who do we think is likely to know best?

The closing sentence in that paragraph from the Technical Note did give rise to a wry smile. I’m not sure what relevance Mr Monckton thinks there is between global warming and a refined value for the Hubble Constant, but, for whatever reason, he sees fit to mention that the Universe was born nearly 14 billion years ago. The irony of Mr Monckton mentioning this in an article which treats his target audience as though they were born yesterday appears to have passed him by entirely.

Moving further into Mr Monckton’s Technical Note, the next two paragraphs basically sound like a used car salesman describing the virtues of the rust bucket on the forecourt. Instead of trying to make himself sound clever, Mr Monckton could simply have said something along the lines of … “If you want to verify this for yourself, it can easily be done by simply using the SLOPE function in Excel”. Of course, Mr Monckton might prefer his readers not to think for themselves.

The final paragraph in the Technical Note reads as follows…

Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because the data are highly variable and the trend is flat.

Well, this is an example of the logical fallacy known as “Argument from Authority” combined with a blatant attempt at misdirection. The accuracy of the “… algorithm that determines the trend …” has absolutely nothing to do with Mr Monckton’s subsequent interpretation of the results, although that is precisely what the reader is meant to think. The good professor may well be seriously gifted at statistics, but that doesn’t mean he speaks with any authority about atmospheric science or about satellite datasets.

Also, for the sake of the students at Melbourne University, I would hope that Mr Monckton was extemporizing at the end of that paragraph. It is simply nonsense to suggest that the “flatness” of the trend displayed in his Fig 1 is in any way responsible for the trend equation also having an R² value of (virtually) zero. The value of the coefficient of determination (R²) ranges from 0 to 1, and wildly variable data can most certainly result in an R² value of zero, or thereabouts, but the value of the trend itself has little or no bearing upon this.

The phraseology used in the Technical Note would appear to imply that, as both the trend and the coefficient of determination are effectively zero, this should be interpreted as two distinct and independent factors which serve to corroborate each other. Actually, nothing could be further from the truth.

The very fact that the coefficient of determination is effectively zero should be regarded as a great big blazing neon sign which says “the equation to which this R² value relates should be treated with ENORMOUS caution, as the underlying data is too variable to infer any firm conclusions”.

To demonstrate that a (virtually) flat trend can have an R² value of 1, anyone can try inputting the following numbers into a spreadsheet …

10.0 10.00001 10.00002 10.00003 etc.

Use the Auto Fill capability to do this automatically for about 20 values. The slope of this set of numbers is a mere one part in a million, and is therefore, to all intents and purposes, almost perfectly flat. However, if one adds a trend line and asks for the R² value, it will return a value of 1 (or very, very close to 1). (NB When I tried this first with a single recurring integer – i.e. absolutely flat – Excel returned an error value. That’s why I suggest using a tiny increment, such as the 1 in a million slope mentioned above.)
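The same demonstration in Python, for anyone who would rather not fire up Excel (a sketch using exactly the twenty values suggested above):

```python
import numpy as np

y = 10.0 + 0.00001 * np.arange(20)      # rises by one part in a million per step
x = np.arange(20.0)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r_squared = 1.0 - residuals.var() / y.var()

print(slope)        # ~0.00001: to all intents and purposes flat
print(r_squared)    # ~1.0: a "perfect" fit nonetheless
```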

Enough of the Technical Note nonsense, let’s look at the UAH dataset as well. Fig 5 (below) is a rework of the earlier Fig 2, but this time with the UAH dataset added, as well as an equally weighted (RSS+UAH) composite.

Fig 5: Comparison between RSS and UAH (June 2014)

The difference between the RSS and UAH results makes it clear why Mr Monckton chose to focus solely on the RSS data. At the time of writing this present article, the RSS and UAH datasets each extended to February 2016, and Fig 6 (below) shows graphically how the datasets compare when that end date is employed.

Fig 6: Comparison between RSS and UAH (Feb 2016)

In his sausage with the November 2015 sell-by-date, Mr Monckton assured his readers that “…The UAH dataset shows a Pause almost as long as the RSS dataset.”

Even just a moment or two spent considering the UAH curves on Fig 5 (June 2014) and then on Fig 6 (February 2016) would suggest precisely how far that claim is removed from reality. However, for those unwilling to put in this minimal effort, Fig 7 is just for you.

Fig 7: UAH TLT temperature gradients over three different end dates.

From the above diagram, it is rather difficult to see any remote justification for Mr Monckton’s bizarre assertion that “…The UAH dataset shows a Pause almost as long as the RSS dataset.”

Moving on, it is clear that, irrespective of the exact timeframe, both the datasets exhibit a reasonably consistent “triple dip”. To understand the cause(s) of this “triple dip” in the above diagrams (at about 1997, 2001 and 2009), one needs to look at the data in the usual anomaly format, rather than in gradient format used in Figs 2 – 7.

Fig 8: RSS TLT anomalies smoothed over 12-month and 60-month periods

The monthly data looks very messy on a chart, but the application of 12-month and 60-month smoothing used in Fig 8 perhaps makes some details easier to see. The peaks resulting from the big 1997/98 El Niño and the less extreme 2009/10 event are very obvious on the 12-month data, but the impact of the prolonged series of 4 mini-peaks centred around 2003/04 shows up more on the 60-month plot. At present, the highest 60-month rolling average is centred near this part of the time series. (However, that may not be the case for much longer. If the next few months follow a similar pattern to the 1997/98 event, both the 12- and 60-month records are likely to be surpassed. Given that the March and April RSS TLT values recorded in 2015 were the two coolest months of that year, it is highly likely that a new rolling 12-month record will be set in April 2016.)
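The smoothing itself is nothing exotic. Here is a minimal pandas sketch of centred 12-month and 60-month rolling averages; the variable names and the placeholder series are mine, not Bill the Frog’s.

```python
import numpy as np
import pandas as pd

# Placeholder monthly RSS TLT anomalies indexed by month; swap in the real series.
dates = pd.date_range("1979-01-01", "2016-02-01", freq="MS")
rss = pd.Series(np.random.default_rng(4).normal(0.0, 0.2, dates.size), index=dates)

smoothed_12 = rss.rolling(window=12, center=True).mean()   # 12-month centred average
smoothed_60 = rss.rolling(window=60, center=True).mean()   # 60-month centred average
```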

Whilst this helps explain the general shape of the curve families, it does not explain the divergence between the RSS and the UAH data. To show this effect, two approaches can be adopted: one can plot the two datasets together on the same chart, or one can derive the differences between RSS and UAH for every monthly value and plot that result.

In the first instance, the equivalent UAH rolling 12- and 60-month values have effectively been added to the above chart (Fig 8), as shown below in Fig 9.

Fig 9: RSS and UAH TLT anomalies using 12- and 60-month smoothing

On this chart (Fig 9) it can be seen that the smoothed anomalies start a little way apart, diverge near the middle of the time series, and then gradually converge as one looks toward the more recent values. Interestingly, although the 60-month peak at about 2003/04 in the RSS data is also present in the UAH data, it has long since been overtaken.

The second approach would involve subtracting the UAH monthly TLT anomaly figures from the RSS equivalents. The resulting difference values are plotted on Fig 10 below, and are most revealing. The latest values on Figs 9 and 10 are for February 2016.

Fig 10: Differences between RSS and UAH monthly TLT values up to Feb 2016

Even without the centred 60-month smoothed average, the general shape emerges clearly. The smoothed RSS values start off about 0.075 Celsius above the UAH values, but by about 1999 or 2000, this delta has risen to +0.15 Celsius. It then begins a virtually monotonic drop such that the 6 most recent rolling 60-month values have gone negative.

NB It is only to be expected that the dataset comparison begins with an offset of this magnitude. The UAH dataset anomalies are based upon a 30-year climatology stretching from 1981 – 2010. However, RSS uses instead a 20-year baseline running from 1979 – 1998. The mid points of the two baselines are therefore 7 years apart. Given that the overall trend is currently in the order of 0.12 Celsius per decade, one would reasonably expect the starting offset to be pretty close to 0.084 Celsius. The actual starting point (0.075 Celsius) was therefore within about one hundredth of a degree Celsius from this figure.
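Spelling out that back-of-the-envelope calculation: the baseline mid points are 7 years (0.7 decades) apart, so

0.7 decades × 0.12 Celsius per decade = 0.084 Celsius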

Should anyone doubt the veracity of the above diagram, here is a copy of something similar taken from Roy Spencer’s web pages. Apart from the end date, the only real difference is that whereas Fig 10 has the UAH monthly values subtracted from the RSS equivalent, Dr Spencer has subtracted the RSS data from the UAH equivalent, and has applied a 3-month smoothing filter. This is reproduced below as Fig 11.

Fig 11: Differences between UAH and RSS (copied from Dr Spencer’s blog)

This actually demonstrates one of the benefits of genuine scepticism. Until I created the plot on Fig 10, I was sure that the 97/98 El Niño was almost entirely responsible for the apparent “pause” in the RSS data. However, it would appear that the varying divergence from the equivalent UAH figures also has a very significant role to play. Hopefully, the teams from RSS and UAH will, in the near future, be able to offer some mutually agreed explanation for this divergent behaviour. (Although both teams are about to implement new analysis routines – RSS going from Ver 3.3 to Ver 4, and UAH going from Ver 5.6 to Ver 6.0 – mutual agreement appears to be still in the future.)

Irrespective of this divergence between the satellite datasets, the October 2015 TLT value given by RSS was the second largest in that dataset for that month. That was swiftly followed by monthly records for November, December and January. The February value went that little bit further and was the highest in the entire dataset. In the UAH TLT dataset, September 2015 was the third highest for that month, with each of the 5 months since then breaking the relevant monthly record. As with its RSS equivalent, the February 2016 UAH TLT figure was the highest in the entire dataset. In fact, the latest rolling 12-month UAH TLT figure is already the highest in the entire dataset. This would certainly appear to be strange behaviour during a so-called pause.

As sure as the sun rises in the east, these record breaking temperatures (and their effect on temperature trends) will be written off by some as merely being a consequence of the current El Niño. It does seem hypocritical that these people didn’t feel that a similar argument could be made about the 1997/98 event. An analogy could be made concerning the measurement of Sea Level Rise. Imagine that someone – who rejects the idea that sea level is rising – starts their measurements using a high tide value, and then cries foul because a subsequent (higher) reading was also taken at high tide.

This desperate clutching of straws will doubtless continue unabated, and a new “last, best hope” has already appeared in the guise of Solar Cycle 25. Way back in 2006, an article by David Archibald appeared in Energy & Environment telling us how Solar Cycles 24 & 25 were going to cause temperatures to plummet. In the Conclusion to this paper, Mr Archibald wrote that …

A number of solar cycle prediction models are forecasting weak solar cycles 24 and 25 equating to a Dalton Minimum, and possibly the beginning of a prolonged period of weak activity equating to a Maunder Minimum. In the former case, a temperature decline of the order of 1.5°C can be expected based on the temperature response to solar cycles 5 and 6.

Well, according to NASA, the peak for Solar Cycle 24 passed almost 2 years ago, so it’s not looking too good at the moment for that prediction. However, Solar Cycle 25 probably won’t peak until about 2025, so that will keep the merchants of doubt going for a while.

Meanwhile, back in the real world, it is very tempting to make the following observations …

  • The February TLT value from RSS seems to have produced the conditions under which certain allotropes of the fabled element known as Moncktonite will spontaneously evaporate, and …
  • If Mr Monckton’s sausages leave an awfully bad taste in the mouth, it could be due to the fact that they are full of tripe.

Inevitably however, in the world of science at least, those who seek to employ misdirection and disinformation as a means to further their own ideological ends are doomed to eventual failure. In the closing paragraph to his “Personal observations on the reliability of the Shuttle”, the late, great Richard Feynman used a phrase that should be seared into the consciousness of anyone writing about climate science, especially those who are economical with the truth…

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

Remember that final phrase – “nature cannot be fooled”, even if vast numbers of voters can be!