Tag Archives: Guest Post

How to Make a Complete RSS of Yourself (With Sausages)

In the wake of the recent announcement from NASA’s Goddard Institute for Space Studies that global surface temperatures in February 2016 were an extraordinary 1.35 °C above the 1951-1980 baseline, we bring you the third in our series of occasional guest posts.

Today’s article is a pre-publication draft prepared by Bill the Frog, who has also authorised me to reveal to the world the sordid truth that he is in actual fact the spawn of a “consummated experiment” conducted between Kermit the Frog and Miss Piggy many moons ago. Please ensure that you have a Microsoft Excel compatible spreadsheet close at hand, and then read on below the fold.


In a cleverly orchestrated move immaculately timed to coincide with the build up to the CoP21 talks in Paris, Christopher Monckton, the 3rd Viscount Monckton of Brenchley, announced the following startling news on the climate change denial site, Climate Depot …

[Image: screenshot of the announcement on Climate Depot, November 2015]

Upon viewing Mr Monckton’s article, any attentive reader could be forgiven for having an overwhelming feeling of déjà vu. The sensation would be entirely understandable, as this was merely the latest missive in a long-standing series of such “revelations”, stretching back to at least December 2013. In fact, there has even been a recent happy addition to the family, as we learned in January 2016 that …

[Image: screenshot of the follow-up announcement on Climate Depot, January 2016]

The primary eye-candy in Mr Monckton’s November article was undoubtedly the following diagram …

https://lh3.googleusercontent.com/-xAdiohdkcU4/VjpSKNYP9SI/AAAAAAACa8Q/639el4qIzpM/s720-Ic42/monckton1.png

Fig 1: Copied from Nov 2015 article in Climate Depot

It is clear that Mr Monckton has the ability to keep churning out virtually identical articles, and this is a skill very reminiscent of the way a butcher can keep churning out virtually identical sausages. Whilst on the subject of sausages, the famous 19th Century Prussian statesman, Otto von Bismarck, once described legislative procedures in a memorably pithy fashion, namely that … “Laws are like sausages, it is better not to see them being made”.

One must suspect that those who are eager and willing to accept Mr Monckton’s arguments at face value are somehow suffused with a similar kind of “don’t need to know, don’t want to know” mentality. However, some of us are both able and willing to scratch a little way beneath the skin of the sausage. On examining one of Mr Monckton’s prize sausages, it takes all of about 2 seconds to work out what has been done, and about two minutes to reproduce it on a spreadsheet. That simple action is all that is needed to see how the appropriate start date for his “pause” automatically pops out of the data.

However, enough of the hors d’oeuvres, it’s time to see how those sausages get made. Let’s immediately get to the meat of the matter (pun intended) by demonstrating precisely how Mr Monckton arrives at his “no global warming since …” date. The technique is incredibly straightforward, and can be done by anyone with even rudimentary spreadsheet skills.

One basically uses the spreadsheet’s built-in features, such as the SLOPE function in Excel, to calculate the rate of change of monthly temperature over a selected time period. The appropriate formula would initially be inserted on the same row as the first month of data, and it would set the range to run up to the latest month available. This would be repeated (using a feature such as Auto Fill) on each subsequent row, down as far as the penultimate month. On each row, the start date therefore advances by one month, but the end date remains fixed. (As the SLOPE function is measuring rate of change, there must be at least two items in the range, which is why the penultimate month is the latest possible start date.)
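For anyone who would rather skip the spreadsheet altogether, the same calculation takes only a few lines of Python. The sketch below is purely illustrative – the function and variable names are mine, and the monthly anomalies would need to be obtained from RSS separately – but it reproduces the fixed end date / advancing start date procedure just described:

```python
import numpy as np
from scipy.stats import linregress

def trailing_trends(anomalies):
    """For every possible start month, fit a least-squares trend from that
    month through to the final month (the end date stays fixed) and return
    the slopes in degrees Celsius per decade - the code equivalent of
    dragging Excel's SLOPE function down the column."""
    anomalies = np.asarray(anomalies, dtype=float)
    time_in_decades = np.arange(len(anomalies)) / 120.0   # 120 months per decade
    slopes = []
    for start in range(len(anomalies) - 1):               # penultimate month is the last valid start
        fit = linregress(time_in_decades[start:], anomalies[start:])
        slopes.append(fit.slope)
    return np.array(slopes)
```

The earliest start date whose slope drops to (or below) zero is the one that ends up being promoted to “no global warming since mm/yy”.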

That might sound slightly complex, but if one then displays the results graphically, it becomes obvious what is happening, as shown below…

Fig 2: Variation in temperature gradient. End date June 2014

On the above chart (Fig 2), it can clearly be seen that, after about 13 or 14 years of stability, the rate of change of temperature starts to oscillate wildly as one looks further to the right. Mr Monckton’s approach has been simply to note the earliest transition point, and then declare that there has been no warming since that date. One could, if being generous, describe this as a somewhat naïve interpretation, although others may feel that a stronger adjective would be more appropriate. However, given his classical education, it is difficult to say why he does not seem to comprehend the difference between veracity and verisimilitude. (The latter being the situation when something merely has the appearance of being true – as opposed to actually being the real thing.)

Fig 2 is made up from 425 discrete gradient values, each generated (in this case) using Excel’s SLOPE function. Of these, 122 are indeed below the horizontal axis, and can therefore be viewed as demonstrating a negative (i.e. cooling) trend. However, that also means that just over 70% show a trend that is positive. Indeed, if one performs a simple arithmetic average across all 425 data points, the figure thus obtained is 0.148 degrees Celsius per decade.
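Continuing the sketch above, the bookkeeping in this paragraph amounts to a few one-liners on that array of slopes (here `rss_anomalies` is a hypothetical variable holding the monthly RSS TLT values from January 1979 through June 2014):

```python
slopes = trailing_trends(rss_anomalies)         # 425 gradient values for a June 2014 end date
negative = int((slopes < 0).sum())              # start dates yielding a cooling trend
positive_share = 100.0 * (slopes >= 0).mean()   # share of start dates yielding a warming trend
mean_trend = slopes.mean()                      # simple arithmetic average, in deg C per decade
print(negative, round(positive_share), round(mean_trend, 3))
```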

(In the spirit of honesty and openness, it must of course be pointed out that the aggregated warming trend of 0.148 degrees Celsius/decade thus obtained has just about the same level of irrelevance as Mr Monckton’s “no warming since mm/yy” claim. Neither has any real physical meaning, as, once one gets closer to the end date(s), the values can swing wildly from one month to the next. In Fig 2, the sign of the temperature trend changes 8 times from 1996 onwards. A similar chart created at the time of his December 2013 article would have had no fewer than 13 sign changes over a similar period. This is because the period in question is too short for the warming signal to unequivocally emerge from the noise.)

As one adds more and more data, a family of curves gradually builds up, as shown in Fig 3a below.

Fig 3a: Family of curves showing how end-date also affects temperature gradient

It should be clear from Fig 3a that each temperature gradient curve migrates upwards (i.e. more warming) as each additional 6-month block of data comes in. This is only to be expected, as the impact of isolated events – such as the temperature spike created by the 1997/98 El Niño – gradually wanes as it gets diluted by the addition of further data. The shaded area in Fig 3a is expanded below as Fig 3b in order to make this effect more obvious.

Fig 3b: Expanded view of curve family

By the time we are looking at an end date of December 2015, the relevant curve now consists of 443 discrete values, of which just 39, or 9%, are in negative territory. Even if one only considers values to the right of the initial transition point, a full 82% of these are positive. The quality of Mr Monckton’s prize-winning sausages is therefore revealed as being dubious in the extreme. (The curve has not been displayed, but the addition of a single extra month – January 2016 – further reduces the number of data points below the zero baseline to just 26, or 6%.) To anyone tracking this, there was only ever going to be one outcome: eventually, the curve was going to end up above the zero baseline. The ongoing El Niño conditions have merely served to hasten the inevitable.

With the release of the February 2016 data from RSS, this is precisely what happened. We can now add a fifth curve using the most up-to-date figures available at the time of writing. This is shown below as Fig 4.

Fig 4: Further expansion of curve families incorporating latest available data (Feb 2016)

As soon as the latest (Feb 2016) data is added, the fifth member of the curve family (in Fig 4) no longer intersects the horizontal axis – anywhere. When this happens, all of Mr Monckton’s various sausages reach their collective expiry date, and his entire fantasy of “no global warming since mm/yy” simply evaporates into thin air.

Interestingly, although Mr Monckton chooses to restrict his “analysis” to only the Lower Troposphere Temperatures produced by Remote Sensing Systems (RSS), another TLT dataset is available from the University of Alabama in Huntsville (UAH). Now, this omission seems perplexing, as Mr Monckton took time to emphasise the reliability of the satellite record in his article dated May 2014.

In his Technical Note to this article, Mr Monckton tells us…

The satellite datasets are based on measurements made by the most accurate thermometers available – platinum resistance thermometers, which not only measure temperature at various altitudes above the Earth’s surface via microwave sounding units but also constantly calibrate themselves by measuring via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years.

Now, that certainly makes it all sound very easy. It’s roughly the metaphorical equivalent of the entire planet being told to drop its trousers and bend over, as the largest nurse imaginable approaches, all the while gleefully clutching at a shiny platinum rectal thermometer. Perhaps a more balanced perspective can be gleaned by reading what RSS themselves have to say about the difficulties involved in Brightness Temperature measurement.

When one looks at Mr Monckton’s opening sentence referring to “the most accurate thermometers available”, one would certainly be forgiven for thinking that there must perforce be excellent agreement between the RSS and UAH datasets. This meme, that the trends displayed by the RSS and UAH datasets are in excellent agreement, is one that appears to be very pervasive amongst those who regard themselves as climate change sceptics. Sadly, few of these self-styled sceptics seem to understand the meaning behind the motto “Nullius in verba”.

Tellingly, this “RSS and UAH are in close agreement” meme is in stark contrast to the views of the people who actually do that work for a living.

Carl Mears (of RSS) wrote an article back in September 2014 discussing the reality – or otherwise – of the so-called “Pause”. In the section of this article dealing with measurement errors, he wrote that …

 A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!) [my emphasis]

The views of Roy Spencer from UAH concerning the agreement (or, more accurately, the disagreement) between the two satellite datasets must also be considered. Way back in July 2011, Dr Spencer wrote

… my UAH cohort and boss John Christy, who does the detailed matching between satellites, is pretty convinced that the RSS data is undergoing spurious cooling because RSS is still using the old NOAA-15 satellite which has a decaying orbit, to which they are then applying a diurnal cycle drift correction based upon a climate model, which does not quite match reality.

So there we are, Carl Mears and Roy Spencer, who both work independently on satellite data, have views that are somewhat at odds with those of Mr Monckton when it comes to agreement between the satellite datasets. Who do we think is likely to know best?

The closing sentence in that paragraph from the Technical Note did give rise to a wry smile. I’m not sure what relevance Mr Monckton thinks there is between global warming and a refined value for the Hubble Constant, but, for whatever reason, he sees fit to mention that the Universe was born nearly 14 billion years ago. The irony of Mr Monckton mentioning this in an article which treats his target audience as though they were born yesterday appears to have passed him by entirely.

Moving further into Mr Monckton’s Technical Note, the next two paragraphs basically sound like a used car salesman describing the virtues of the rust bucket on the forecourt. Instead of trying to make himself sound clever, Mr Monckton could simply have said something along the lines of … “If you want to verify this for yourself, it can easily be done by simply using the SLOPE function in Excel”. Of course, Mr Monckton might prefer his readers not to think for themselves.

The final paragraph in the Technical Note reads as follows…

Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because the data are highly variable and the trend is flat.

Well, this is an example of the logical fallacy known as “Argument from Authority” combined with a blatant attempt at misdirection. The accuracy of the “… algorithm that determines the trend …” has absolutely nothing to do with Mr Monckton’s subsequent interpretation of the results, although that is precisely what the reader is meant to think. The good professor may well be seriously gifted at statistics, but that doesn’t mean he speaks with any authority about atmospheric science or about satellite datasets.

Also, for the sake of the students at Melbourne University, I would hope that Mr Monckton was extemporizing at the end of that paragraph. It is simply nonsense to suggest that the “flatness” of the trend displayed in his Fig 1 is in any way responsible for the trend equation also having an R2 value of (virtually) zero. The value of the coefficient of determination (R2) ranges from 0 to 1, and wildly variable data can most certainly result in an R2 value of zero, or thereabouts, but the value of the trend itself has little or no bearing upon this.

The phraseology used in the Technical Note would appear to imply that, as both the trend and the coefficient of determination are effectively zero, this should be interpreted as two distinct and independent factors which serve to corroborate each other. Actually, nothing could be further from the truth.

The very fact that the coefficient of determination is effectively zero should be regarded as a great big blazing neon sign which says “the equation to which this R2 value relates should be treated with ENORMOUS caution, as the underlying data is too variable to infer any firm conclusions”.

To demonstrate that a (virtually) flat trend can have an R2 value of 1, anyone can try inputting the following numbers into a spreadsheet …

10.0 10.00001 10.00002 10.00003 etc.

Use the Auto Fill capability to do this automatically for about 20 values. The slope of this set of numbers is a mere one part in a million, and is therefore, to all intents and purposes, almost perfectly flat. However, if one adds a trend line and asks for the R2 value, it will return a value of 1 (or very, very close to 1). (NB When I tried this first with a single recurring integer – i.e. absolutely flat – Excel returned an error value. That’s why I suggest using a tiny increment, such as the 1 in a million slope mentioned above.)
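The same sanity check can be run outside a spreadsheet too. Here is a minimal Python sketch of the near-flat series suggested above, using scipy in place of Excel’s trend line:

```python
import numpy as np
from scipy.stats import linregress

x = np.arange(20)           # 20 evenly spaced points, as suggested above
y = 10.0 + 0.00001 * x      # 10.0, 10.00001, 10.00002, ... (each step about 1 part in a million of the value)
fit = linregress(x, y)
print(fit.slope)            # ~1e-05: to all intents and purposes flat
print(fit.rvalue ** 2)      # ~1.0: a (near) perfect R-squared despite the flat trend
```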

Enough of the Technical Note nonsense, let’s look at the UAH dataset as well. Fig 5 (below) is a rework of the earlier Fig 2, but this time with the UAH dataset added, as well as an equally weighted (RSS+UAH) composite.

Fig 5: Comparison between RSS and UAH (June 2014)

The difference between the RSS and UAH results makes it clear why Mr Monckton chose to focus solely on the RSS data. At the time of writing this present article, the RSS and UAH datasets each extended to February 2016, and Fig 6 (below) shows graphically how the datasets compare when that end date is employed.

Fig 6: Comparison between RSS and UAH (Feb 2016)

In his sausage with the November 2015 sell-by-date, Mr Monckton assured his readers that “…The UAH dataset shows a Pause almost as long as the RSS dataset.”

Even just a moment or two spent considering the UAH curves on Fig 5 (June 2014) and then on Fig 6 (February 2016) would suggest precisely how far that claim is removed from reality. However, for those unwilling to put in this minimal effort, Fig 7 is just for you.

Fig 7: UAH TLT temperature gradients over three different end dates.

From the above diagram, it is rather difficult to see any remote justification for Mr Monckton’s bizarre assertion that “…The UAH dataset shows a Pause almost as long as the RSS dataset.”

Moving on, it is clear that, irrespective of the exact timeframe, both datasets exhibit a reasonably consistent “triple dip”. To understand the cause(s) of this “triple dip” in the above diagrams (at about 1997, 2001 and 2009), one needs to look at the data in the usual anomaly format, rather than in the gradient format used in Figs 2 – 7.

Fig 8: RSS TLT anomalies smoothed over 12-month and 60-month periods

The monthly data looks very messy on a chart, but the application of 12-month and 60-month smoothing used in Fig 8 perhaps makes some details easier to see. The peaks resulting from the big 1997/98 El Niño and the less extreme 2009/10 event are very obvious on the 12-month data, but the impact of the prolonged series of 4 mini-peaks centred around 2003/04 shows up more on the 60-month plot. At present, the highest 60-month rolling average is centred near this part of the time series. (However, that may not be the case for much longer. If the next few months follow a similar pattern to the 1997/98 event, both the 12- and 60-month records are likely to be surpassed. Given that the March and April RSS TLT values recorded in 2015 were the two coolest months of that year, it is highly likely that a new rolling 12-month record will be set in April 2016.)

Whilst this helps explain the general shape of the curve families, it does not explain the divergence between the RSS and the UAH data. To show this effect, two approaches can be adopted: one can plot the two datasets together on the same chart, or one can derive the differences between RSS and UAH for every monthly value and plot that result.

In the first instance, the equivalent UAH rolling 12- and 60-month values have effectively been added to the above chart (Fig 8), as shown below in Fig 9.

Fig 9: RSS and UAH TLT anomalies using 12- and 60-month smoothing

On this chart (Fig 9) it can be seen that the smoothed anomalies start a little way apart, diverge near the middle of the time series, and then gradually converge as one looks toward the more recent values. Interestingly, although the 60-month peak at about 2003/04 in the RSS data is also present in the UAH data, it has long since been overtaken.

The second approach would involve subtracting the UAH monthly TLT anomaly figures from the RSS equivalents. The resulting difference values are plotted on Fig 10 below, and are most revealing. The latest values on Figs 9 and 10 are for February 2016.

Fig 10: Differences between RSS and UAH monthly TLT values up to Feb 2016
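For completeness, here is a minimal Python sketch of how the curves in Figs 8 to 10 could be reproduced, assuming the RSS and UAH monthly anomalies have already been loaded into two pandas Series sharing the same monthly index (pandas is used purely for its rolling-average convenience, and the function name is mine):

```python
import pandas as pd

def smoothed_comparison(rss: pd.Series, uah: pd.Series) -> pd.DataFrame:
    """Return the smoothed anomalies of Figs 8 and 9, plus the RSS-minus-UAH
    difference of Fig 10, using centred rolling means."""
    diff = rss - uah                                       # RSS minus UAH, month by month
    return pd.DataFrame({
        "rss_12m": rss.rolling(12, center=True).mean(),    # 12-month smoothing
        "rss_60m": rss.rolling(60, center=True).mean(),    # 60-month smoothing
        "uah_12m": uah.rolling(12, center=True).mean(),
        "uah_60m": uah.rolling(60, center=True).mean(),
        "rss_minus_uah": diff,
        "diff_60m": diff.rolling(60, center=True).mean(),  # centred 60-month average of the difference
    })
```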

Even without the centred 60-month smoothed average, the general shape emerges clearly. The smoothed RSS values start off about 0.075 Celsius above the UAH values, but by about 1999 or 2000, this delta has risen to +0.15 Celsius. It then begins a virtually monotonic drop such that the 6 most recent rolling 60-month values have gone negative.

NB It is only to be expected that the dataset comparison begins with an offset of this magnitude. The UAH dataset anomalies are based upon a 30-year climatology stretching from 1981 to 2010, whereas RSS instead uses a 20-year baseline running from 1979 to 1998. The mid points of the two baselines are therefore about 7 years apart. Given that the overall trend is currently in the order of 0.12 Celsius per decade, one would reasonably expect a starting offset of roughly 0.12 × 0.7 ≈ 0.084 Celsius. The actual starting point (0.075 Celsius) was therefore within about one hundredth of a degree Celsius of this figure.

Should anyone doubt the veracity of the above diagram, here is a copy of something similar taken from Roy Spencer’s web pages. Apart from the end date, the only real difference is that whereas Fig 10 has the UAH monthly values subtracted from the RSS equivalent, Dr Spencer has subtracted the RSS data from the UAH equivalent, and has applied a 3-month smoothing filter. This is reproduced below as Fig 11.

Fig 11: Differences between UAH and RSS (copied from Dr Spencer’s blog)

This actually demonstrates one of the benefits of genuine scepticism. Until I created the plot on Fig 10, I was sure that the 97/98 El Niño was almost entirely responsible for the apparent “pause” in the RSS data. However, it would appear that the varying divergence from the equivalent UAH figures also has a very significant role to play. Hopefully, the teams from RSS and UAH will, in the near future, be able to offer some mutually agreed explanation for this divergent behaviour. (Although both teams are about to implement new analysis routines – RSS going from Ver 3.3 to Ver 4, and UAH going from Ver 5.6 to Ver 6.0 – mutual agreement appears to be still in the future.)

Irrespective of this divergence between the satellite datasets, the October 2015 TLT value given by RSS was the second largest in that dataset for that month. That was swiftly followed by monthly records for November, December and January. The February value went that little bit further and was the highest in the entire dataset. In the UAH TLT dataset, September 2015 was the third highest for that month, with each of the 5 months since then breaking the relevant monthly record. As with its RSS equivalent, the February 2016 UAH TLT figure was the highest in the entire dataset. In fact, the latest rolling 12-month UAH TLT figure is already the highest in the entire dataset. This would certainly appear to be strange behaviour during a so-called pause.

As sure as the sun rises in the east, these record breaking temperatures (and their effect on temperature trends) will be written off by some as merely being a consequence of the current El Niño. It does seem hypocritical that these people didn’t feel that a similar argument could be made about the 1997/98 event. An analogy could be made concerning the measurement of Sea Level Rise. Imagine that someone – who rejects the idea that sea level is rising – starts their measurements using a high tide value, and then cries foul because a subsequent (higher) reading was also taken at high tide.

This desperate clutching at straws will doubtless continue unabated, and a new “last, best hope” has already appeared in the guise of Solar Cycle 25. Way back in 2006, an article by David Archibald appeared in Energy & Environment telling us how Solar Cycles 24 & 25 were going to cause temperatures to plummet. In the Conclusion to this paper, Mr Archibald wrote that …

A number of solar cycle prediction models are forecasting weak solar cycles 24 and 25 equating to a Dalton Minimum, and possibly the beginning of a prolonged period of weak activity equating to a Maunder Minimum. In the former case, a temperature decline of the order of 1.5°C can be expected based on the temperature response to solar cycles 5 and 6.

Well, according to NASA, the peak for Solar Cycle 24 passed almost 2 years ago, so it’s not looking too good at the moment for that prediction. However, Solar Cycle 25 probably won’t peak until about 2025, so that will keep the merchants of doubt going for a while.

Meanwhile, back in the real world, it is very tempting to make the following observations …

  • The February TLT value from RSS seems to have produced the conditions under which certain allotropes of the fabled element known as Moncktonite will spontaneously evaporate, and …
  • If Mr Monckton’s sausages leave an awfully bad taste in the mouth, it could be due to the fact that they are full of tripe.

Inevitably however, in the world of science at least, those who seek to employ misdirection and disinformation as a means to further their own ideological ends are doomed to eventual failure. In the closing paragraph to his “Personal observations on the reliability of the Shuttle”, the late, great Richard Feynman used a phrase that should be seared into the consciousness of anyone writing about climate science, especially those who are economical with the truth…

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

Remember that final phrase – “nature cannot be fooled”, even if vast numbers of voters can be!

The Telegraph is Wrong Again on Temperature Adjustments

Regular readers will recall that we recently sent The Telegraph a lesson or two about global surface temperature “adjustments”, both of which included “a video by a scientist who has studied such matters”. It seems nobody at The Telegraph, least of all Christopher Booker, bothered to watch it or do the homework assignments. The stated view of Jess McAree, Head of Editorial Compliance at the Telegraph Media Group, is that:

Only the most egregious inaccuracy could be significantly misleading.

We therefore take great pleasure in welcoming Dr. Kevin Cowtan from the University of York, the “scientist” mentioned above, who has kindly allowed us to reprint an article of his originally published at Skeptical Science. Please read on below the fold, and don’t forget to do your homework!


There has been a vigorous discussion of weather station calibration adjustments in the media over the past few weeks. While these adjustments don’t have a big effect on the global temperature record, they are needed to obtain consistent local records from equipment which has changed over time. Despite this, the Telegraph has produced two highly misleading stories about the station adjustments, the second including the demonstrably false claim that they are responsible for the recent rapid warming of the Arctic.

In the following video I show why this claim is wrong. But more importantly, I demonstrate three tools to allow you to test claims like this for yourself.

The central error in the Telegraph story is the attribution of Arctic warming (and somehow sea ice loss) to weather station adjustments. This conclusion is based on a survey of two dozen weather stations. But you can of course demonstrate anything you want by cherry picking your data, in this case in the selection of stations. The solution to cherry picking is to look at all of the relevant data – in this case all of the station records in the Arctic and surrounding region. I downloaded both the raw and adjusted temperature records from NOAA, and took the difference to determine the adjustments which had been applied. Then I calculated the trend in the adjustment averaged over the stations in each grid cell on the globe, to determine whether the adjustments were increasing or decreasing the temperature trend. The results are shown for the last 50 and 100 years in the following two figures:

Trend in weather station adjustments over the period 1965-2014, averaged by grid cell. Warm colours show upwards adjustments over time, cold colours downwards. For cells with less than 50 years of data, the trend is over the available period.

Trend in weather station adjustments over the period 1915-2014, averaged by grid cell. Warm colours show upwards adjustments over time, cold colours downwards. For cells with less than 100 years of data, the trend is over the available period.
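For readers who would like to see the shape of that calculation in code, here is a minimal Python sketch. It assumes the raw and adjusted GHCN station series have already been parsed into dictionaries keyed by station ID, with a separate dictionary of station coordinates; all the names are mine rather than Dr Cowtan’s, so treat it as an illustration of the approach rather than his actual script:

```python
import numpy as np
from scipy.stats import linregress

def adjustment_trends_by_cell(raw, adjusted, coords, cell_size=5.0):
    """raw / adjusted: dicts mapping station id -> numpy array of monthly
    temperatures (equal length, NaN where data are missing).
    coords: dict mapping station id -> (lat, lon).
    Returns a dict mapping each grid cell to the average trend of the
    adjustments applied within it, in degrees per decade."""
    per_cell = {}
    for sid, raw_series in raw.items():
        adj = adjusted[sid] - raw_series                 # the adjustment applied to this station
        ok = ~np.isnan(adj)
        if ok.sum() < 24:                                # require a couple of years of overlap
            continue
        t = np.arange(len(adj))[ok] / 120.0              # months -> decades
        trend = linregress(t, adj[ok]).slope             # is the adjustment drifting up or down?
        lat, lon = coords[sid]
        cell = (int(np.floor(lat / cell_size)), int(np.floor(lon / cell_size)))
        per_cell.setdefault(cell, []).append(trend)
    return {cell: float(np.mean(trends)) for cell, trends in per_cell.items()}
```

Averaging the per-cell trends and colouring each cell by sign is essentially what the two maps above show.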


The majority of cells show no significant adjustment. The largest adjustments are in the high Arctic, but are downwards, i.e. they reduce the warming trend. This is the opposite of what is claimed in the Telegraph story. You can check these stations using the GHCN station browser.

The upward adjustments to the Iceland stations, referred to in the Telegraph, predate the late 20th century warming. They occur mostly in the 1960’s, so they only appear in the centennial map. Berkeley Earth show a rather different pattern of adjustments for these stations.

Iceland is a particularly difficult case, with a small network of stations on an island isolated from the larger continental networks. The location of Iceland with respect to the North Atlantic Drift, which carries warm water from the tropics towards the poles, may also contribute to the temperature series being mismatched with records from Greenland or Scotland. However given that the Iceland contribution is weighted according to land area in the global records, the impact of this uncertainty is minimal. Global warming is evaluated on the basis of the land-ocean temperature record; the impact of adjustments on recent warming is minimal, and on the whole record it is small compared to the total amount of warming. As Zeke Hausfather has noted, the land temperature adjustments in the early record are smaller than and in the opposite direction to the sea surface temperature adjustments.

Impact of the weather station adjustments on the global land-ocean temperature record, calculated using the Skeptical Science temperature record calculator in ‘CRU’ mode.

Manual recalibration of the Iceland records may make an interesting citizen science project. Most of the stations show good agreement since 1970, however they diverge in the earlier record. The challenge is to work out the minimum number of adjustments required to bring them into agreement over the whole period. But the answer may not be unique, and noise and geographical differences may also cause problems. To facilitate this challenge, I’ve made annualized data available for the eight stations as a spreadsheet file.

In the video I demonstrate three tools which are useful in understanding and evaluating temperature adjustments:

  • A GHCN (global historical climatology network) station report browser. GHCN provide graphical reports on the adjustments made to each station record, but you need to know the station ID to find them. I have created an interactive map to make this easier.
  • The Berkeley Earth station browser. The Berkeley Earth station reports provide additional information to help you understand why particular adjustments have been made.
  • The Skeptical Science temperature record calculator. This allows you to construct your own version of the temperature record, using either adjusted or unadjusted data for both the land and sea surface temperatures.

Data for the temperature calculator may be obtained from the following sources:

Finally, here are some interesting papers discussing why adjustments are required.

  • Menne et al (2009) The U.S. historical climatology network monthly temperature data, version 2.
  • Bohm et al (2010) The early instrumental warm-bias: a solution for long central European temperature series 1760–2007.
  • Brunet et al (2010) The minimization of the screen bias from ancient Western Mediterranean air temperature records: an exploratory statistical analysis.
  • Ellis (1890) On the difference produced in the mean temperature derived from daily maximum and minimum readings, as depending on the time at which the thermometers are read

 

Finally, for the moment at least, and lapsing back into our by now familiar (and adversarial?) style:

 

Us:


 

Them:

We’ll keep you posted!

 

Tricks Used by David Rose to Deceive

Regular readers of our so far somewhat surreal reporting from up here in the penthouse suite at the summit of the Great White Con ivory towers will no doubt have noticed that we like to concentrate on the facts about the Arctic, whilst occasionally naively exploring assorted psychological aspects of journeying through the “denialosphere”.

Today, however, we’re branching out in a different direction with the aid of our first ever guest post. It has been carefully crafted by Sou Bundanga of the HotWhopper blog, on the topic of the “journalistic tricks that professional disinformers use”. It covers some of the same ground as a recent post of our own, albeit from a rather different angle. If you would like to view the original version on Sou’s blog, please click here. Alternatively, please continue below the fold:


This is just a short article to show the journalistic tricks that professional disinformers use. It consists of excerpts from an article by David Rose, who is paid to write rubbish for the Mail on Sunday, a UK tabloid of the sensationalist kind. He’d probably claim that he’s just “doing his job”. His job being to create sensationalist headlines and not bother too much about accuracy, but to try to do it in such a way as to stop the paper ending up in court on the wrong end of a lawsuit. Just. (The paper probably doesn’t mind so much getting taken to the Press Complaints Commission.)

Here is what David Rose wrote last weekend:

The Nasa climate scientists who claimed 2014 set a new record for global warmth last night admitted they were only 38 per cent sure this was true.

First of all, notice the use of the word “admitted” – as if it was something that the scientists were forced into, whereas in fact they provided all the information in their press briefing. Notice also that David has taken one number and used it out of context. The 38% number is the probability that 2014 is the hottest year, compared to the probabilities that 2010 and the other hot years are the hottest. 2010, the next hottest year, only got a 23% probability by comparison. Here is the table showing how, out of 100%, the different probabilities break down:

 

You can see how David misused the 38% number. In fact the odds of it being the hottest year on record are the highest of the lot.

What is David’s next atrocity?

In a press release on Friday, Nasa’s Goddard Institute for Space Studies (GISS) claimed its analysis of world temperatures showed ‘2014 was the warmest year on record’.

The claim made headlines around the world, but yesterday it emerged that GISS’s analysis – based on readings from more than 3,000 measuring stations worldwide – is subject to a margin of error. Nasa admits this means it is far from certain that 2014 set a record at all.

See how David Rose distorts things. How he uses rhetoric, abusing words like “emerged” and “claim” and “admits”. He is also being “economical with the truth” about the “far from certain”. He just made that one up. It may not be “certain”, but it is much more certain than “far from”.  And it is more “certain” that 2014 was the hottest year than that any other year was the hottest year.

If David Rose were arguing that you beat your wife, even though you don’t, he’d probably write it up as:

The so-called scientist claims that he doesn’t beat his wife. He admits that he cannot prove he doesn’t beat his wife. However this journalist can show that it has emerged that his claim is subject to a margin of error.  95% of wife-beaters deny beating their wives.

And I doubt he’d add the confidence limits to the 95% number!

David Rose continues his deception writing:

Yet the Nasa press release failed to mention this, as well as the fact that the alleged ‘record’ amounted to an increase over 2010, the previous ‘warmest year’, of just two-hundredths of a degree – or 0.02C. The margin of error is said by scientists to be approximately 0.1C – several times as much.

That section by David Rose contains the same journalistic tricks of rhetoric, as well as an error of fact. The margin of error of the annual averaged global surface temperature is described in the GISS FAQ as ±0.05°C:

Assuming that the other inaccuracies might about double that estimate yielded the error bars for global annual means drawn in this graph, i.e., for recent years the error bar for global annual means is about ±0.05°C, for years around 1900 it is about ±0.1°C. The error bars are about twice as big for seasonal means and three times as big for monthly means. Error bars for regional means vary wildly depending on the station density in that region. Error estimates related to homogenization or other factors have been assessed by CRU and the Hadley Centre (among others).

If the press release didn’t include any confidence limits, then where did David Rose get his numbers from, you may ask? That’s a very good question. It turns out that NOAA and NASA held a press conference, during which they showed some slides and explained the confidence limits, among other things. So David Rose was being very deceitful, wasn’t he. Which isn’t a surprise.

What bit of deception does he swing to next? Well here it is. You be the judge:

As a result, GISS’s director Gavin Schmidt has now admitted Nasa thinks the likelihood that 2014 was the warmest year since 1880 is just 38 per cent. However, when asked by this newspaper whether he regretted that the news release did not mention this, he did not respond. Another analysis, from the Berkeley Earth Surface Temperature (BEST) project, drawn from ten times as many measuring stations as GISS, concluded that if 2014 was a record year, it was by an even tinier amount.

More rhetorical tricks using words like “admitted”. More deception by David Rose. When and how and where did David Rose ask Gavin Schmidt the question? I don’t know. It looks as if it was via an accusatory tweet of the type “have you stopped beating your wife”, like this one on January 17th:


Yet Gavin Schmidt had already responded to David Rose’s tweets about “uncertainties” on January 16th:


 
That’s about it. I’ll leave it to you to decide who is the grand deceiver.

I’d not trust David Rose, denier journo, with a single fact.  It is alleged that he is a master of deception. He’d probably try to claim he is just doing his job.


Thanks very much for that article Sou, and by way of conclusion here’s yet another tweet from Gavin Schmidt, this time from January 24th: