More Of The Usual Hype About Arctic Sea Ice

No sooner has one of the usual suspects claimed that “Arctic Sea Ice Holds Firm” than a few more jump on the same bandwagon. The Global Warming Policy Forum have republished almost the whole of an article penned by our old friend Paul Homewood entitled “More Of The Usual Hype About Arctic Ice”. According to Paul (and Benny):

Far from collapsing, Arctic sea ice area has been remarkably stable in the last decade

He illustrates his point using a slightly different version of the Cryosphere Today Arctic sea ice area anomaly graph employed by Andrew Montford on March 29th. Here’s the latest version:

seaice-anomaly-20160403

For some strange reason Paul (and Benny) neglect to mention that the current anomaly of -1.199 million square kilometers is the most negative it has ever been for the day of the year in the entirety of the Cryosphere Today record going back to 1979. They also neglect to mention that the CT anomaly is typically much larger in summer than in winter, and that despite that fact the CT anomaly never fell as low as -1.199 at any time of year until 2006.

Make sure to follow the first link above for much more on anomalously misleading area graphs. However Paul (and Benny) are not content with just one misleading interpretation of an anomaly graph. The article continues:

With multi year ice continuing to recover from 2008 lows, ice volume has also been growing in the last few years.

BPIOMASIceVolumeAnomaly-201602

Whilst we all eagerly await the release of the PIOMAS Arctic sea ice volume numbers for March 2016, here is an alternative visualisation of the data from Chris Reynolds:

Reg-Vol-Feb16

Volume is the second lowest on record [for February] since 1978 at 20.660 thousand km cubed

In the Central Arctic, where it matters most, sea ice volume at the end of February was only a whisker above where it was in 2012, according to PIOMAS at least. For those with short memories the CT Arctic sea ice area metric reached an *all time low of 2.23401 million square kilometers on September 13th 2012, and an *all time low anomaly of -2.81817 million square kilometers on October 14th 2012.

*Since the Cryosphere Today records began

Claim – Arctic Sea Ice Holds Firm?

Today’s Arctic sea ice claim comes from the Bishop Hill blog of Andrew Montford, which recently stated that:

This morning’s story appears to be the hoary old “Arctic sea ice in freefall” one.

“The Arctic is in crisis. Year by year, it’s slipping into a new state, and it’s hard to see how that won’t have an effect on weather throughout the Northern Hemisphere,” said Ted Scambos, lead scientist at the Colorado-based NSIDC.

As usual on these occasions, I take a quick look at the Cryosphere Today anomaly page, where I find the sea ice apparently still stuck firmly in “pause” mode.

seaice.anomaly.-20160328

Having inadvertently wended my way onto The Bishop’s Hill via the northerly extension to Eli’s Rabett Warren I felt compelled, as usual, to quibble with Andrew’s “apparently firmly in ‘pause’ mode” claim. Since graphs in comments are not available over on The Hill, or The Rabett Run for that matter, let’s take a look at some graphic representations of the available data over here instead. Commenter “Golf Charlie” asks at The Bishop’s:

With CO₂ levels continuing to rise, why hasn’t temperature risen, and the ice disappeared as predicted?

Let’s see shall we? CO₂ levels are indeed continuing to rise:

Keeling-20160330

Temperature has risen, as predicted:

2015-berk1

Arctic amplification is occurring, as predicted:

Time series of Arctic surface temperature in winter (Dec/Jan/Feb)

Arctic sea ice is disappearing, as predicted:

CT-20160330

CT-Max-2016-Final

Q.E.D?

NSIDC Announce The 2016 Arctic Sea Ice Maximum Extent

In the latest edition of their “Arctic Sea Ice News” the United States’ National Snow and Ice Data Center have announced that:

Arctic sea ice appears to have reached its annual maximum extent on March 24, and is now the lowest maximum in the satellite record, replacing last year’s record low. This year’s maximum extent occurred later than average. A late season surge in ice growth is still possible. NSIDC will post a detailed analysis of the 2015 to 2016 winter sea ice conditions in early April.

NSIDC-20160327

On March 24, 2016, Arctic sea ice likely reached its maximum extent for the year, at 14.52 million square kilometers (5.607 million square miles). This year’s maximum ice extent was the lowest in the satellite record, with below-average ice conditions everywhere except in the Labrador Sea, Baffin Bay, and Hudson Bay. The maximum extent is 1.12 million square kilometers (431,000 square miles) below the 1981 to 2010 average of 15.64 million square kilometers (6.04 million square miles) and 13,000 square kilometers (5,000 square miles) below the previous lowest maximum that occurred last year. This year’s maximum occurred twelve days later than the 1981 to 2010 average date of March 12. The date of the maximum has varied considerably over the years, occurring as early as February 24 in 1996 and as late as April 2 in 2010.

NASA’s Goddard Space Flight Center has also made a similar announcement, which includes this video:

The new record low follows record high temperatures in December, January and February around the globe and in the Arctic. The atmospheric warmth probably contributed to this lowest maximum extent, with air temperatures up to 10 degrees Fahrenheit above average at the edges of the ice pack where sea ice is thin, said Walt Meier, a sea ice scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

The wind patterns in the Arctic during January and February were also unfavorable to ice growth because they brought warm air from the south and prevented expansion of the ice cover. But ultimately, what will likely play a bigger role in the future trend of Arctic maximum extents is warming ocean waters, Meier said.

“It is likely that we’re going to keep seeing smaller wintertime maximums in the future because in addition to a warmer atmosphere, the ocean has also warmed up. That warmer ocean will not let the ice edge expand as far south as it used to,” Meier said. “Although the maximum reach of the sea ice can vary a lot each year depending on winter weather conditions, we’re seeing a significant downward trend, and that’s ultimately related to the warming atmosphere and oceans.” Since 1979, that trend has led to a loss of 620,000 square miles of winter sea ice cover, an area more than twice the size of Texas.

This year’s record low sea ice maximum extent will not necessarily result in a subsequent record low summertime minimum extent, Meier said. Summer weather conditions have a larger impact than the extent of the winter maximum in the outcome of each year’s melt season; warm temperatures and summer storms make the ice melt fast, while if a summer is cool, the melt slows down.

Neither NASA nor the NSIDC comments on one of the striking things about this winter’s NSIDC extent chart, which effectively “plateaued” during March 2016 following an initial peak of 14.48 million square kilometers on March 2nd, a value only recently exceeded. This is also illustrated by the JAXA Arctic sea ice extent metric, for which the 2016 maximum was 13.96 million square kilometers on February 29th:

JAXA-20160328

Now that the start of the 2016 Arctic sea ice melting season has been called, albeit slightly hesitantly, by the experts at the NSIDC, let’s also take a look at Cryosphere Today Arctic sea ice area:

CT-20160327

The preliminary peak which we announced on March 16th has also recently been exceeded, but we now feel supremely confident in predicting that the 2016 CT area maximum will be less than 13 million square kilometers for the first time ever in the satellite record.

Thus begins what promises to be a very interesting 2016 Arctic sea ice melting season! As the NSIDC puts it:

There is little correlation between the maximum winter extent and the minimum summer extent—this low maximum does not ensure that this summer will see record low ice conditions. A key factor is the timing of widespread surface melting in the high Arctic. An earlier melt onset is important to the amount of energy absorbed by the ice cover during the summer. If surface melting starts earlier than average, the snow darkens and exposes the ice below earlier, which in turn increases the solar heat input, allowing more ice to melt. With the likelihood that much of the Arctic cover is somewhat thinner due to the warm winter, early surface melting would favor reduced summer ice cover.

HadISST Historical Arctic Sea Ice Concentration Data

Commenter Henry P over on “Steve Goddard’s” (un)Real Science blog poses the following question:

Unfortunately, we do not have any record of ice around 1929. But there was an eyewitness report of the melting of ice around 1923. Noting that Antarctic ice is currently increasing, my question to you Jim is simply this: why do you think that Arctic melt now is more than 87 years ago?

By way of a partial answer to that question, the United Kingdom’s Meteorological Office (Met Office or UKMO for short) maintains the Hadley Centre Sea Ice and Sea Surface Temperature data set (HadISST for short). Unfortunately Henry P is sadly misinformed, since according to the Met Office:

HadISST is a unique combination of monthly globally-complete fields of SST and sea ice concentration on a 1 degree latitude-longitude grid from 1870 to date.

Here is what the HadISST data set reveals for September 1929:

HadISST-19290916

compared to September 2015:

HadISST-20150916
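
For anyone who would prefer numbers to pictures, here is a rough sketch of how the same comparison might be made directly from the HadISST sea ice concentration file. The “HadISST_ice.nc” file name, the “sic” variable name and the crude cell-area estimate are all assumptions on my part, for illustration only:

```python
# A rough sketch of the September 1929 vs September 2015 comparison using the
# HadISST sea ice concentration file. The file name, the "sic" variable name
# and the simple cell-area estimate are assumptions for illustration only.
import numpy as np
import xarray as xr

ice = xr.open_dataset("HadISST_ice.nc")["sic"]          # monthly concentration on a 1 degree grid

def nh_ice_area(month: str) -> float:
    """Very rough Northern Hemisphere sea ice area (million km^2) for one month."""
    field = ice.sel(time=month).squeeze("time")
    nh = field.where(field["latitude"] > 0)
    cell_km2 = np.cos(np.deg2rad(nh["latitude"])) * 111.0 ** 2   # ~1 deg x 1 deg cell
    return float(((nh > 0.15) * cell_km2).sum()) / 1.0e6

print(f"September 1929: ~{nh_ice_area('1929-09'):.1f} million km^2")
print(f"September 2015: ~{nh_ice_area('2015-09'):.1f} million km^2")
```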

Surely a working pair of Mark 1 eyeballs should be sufficient to allow even a dyed-in-the-wool denizen of (un)Real Science to spot the difference?

Vanishing Svalbard Sea Ice

At the turn of the year we speculated about the potential effect of high temperatures and the swells caused by strong winds on sea ice in the Fram Strait and Barents and Greenland Seas. With the vernal equinox rapidly approaching let’s take stock of the state of Svalbard sea ice. Here’s one of the first “visual” satellite images of the area in 2016, recorded yesterday by the Aqua satellite:

NASA Worldview “true-color” image of the Central Arctic north of Svalbard on March 14th 2016, derived from the MODIS sensor on the Aqua satellite

and here is the equivalent sea ice map from the Norwegian Meteorological Institute:

Svalbard-Map-20160314

Both sources reveal an unseasonable lack of solid sea ice around Svalbard. In fact an intrepid Northwest Passage navigator who didn’t mind the dark might well currently be able to circumnavigate Svalbard!

The Norwegian Meteorological Institute also produce a time series of sea ice area in the Svalbard region based on data from OSI-SAF. It currently looks like this:

osisaf-svalbard_20160314

As sunlight returns to the Central Arctic north of 80 degrees there is an anomalously large area of open water ready to soak up the rays. Here is what the Danish Meteorological Institute time series of Central Arctic temperatures looks like at the moment:

DMI-meanT_20160314

and here is the current Svalbard surf forecast from Magic Seaweed:

MSW-20160315

Whilst we speculate on what all this might mean for the Atlantic side of the Arctic over the coming melting season, here’s our new Svalbard Sea Ice page which contains a variety of graphs and maps to help us keep track of events on that part of our planet.

How to Make a Complete RSS of Yourself (With Sausages)

In the wake of the recent announcement from NASA’s Goddard Institute for Space Studies that global surface temperatures in February 2016 were an extraordinary 1.35 °C above the 1951-1980 baseline we bring you the third in our series of occasional guest posts.

Today’s article is a pre-publication draft prepared by Bill the Frog, who has also authorised me to reveal to the world the sordid truth that he is in actual fact the spawn of a “consummated experiment” conducted between Kermit the Frog and Miss Piggy many moons ago. Please ensure that you have a Microsoft Excel compatible spreadsheet close at hand, and then read on below the fold.


In a cleverly orchestrated move immaculately timed to coincide with the build-up to the CoP21 talks in Paris, Christopher Monckton, the 3rd Viscount Monckton of Brenchley, announced the following startling news on the climate change denial site, Climate Depot …

Morano1

Upon viewing Mr Monckton’s article, any attentive reader could be forgiven for having an overwhelming feeling of déjà vu. The sensation would be entirely understandable, as this was merely the latest missive in a long-standing series of such “revelations”, stretching back to at least December 2013. In fact, there has even been a recent happy addition to the family, as we learned in January 2016 that …

Morano2

The primary eye-candy in Mr Monckton’s November article was undoubtedly the following diagram …


Fig 1: Copied from Nov 2015 article in Climate Depot

It is clear that Mr Monckton has the ability to keep churning out virtually identical articles, and this is a skill very reminiscent of the way a butcher can keep churning out virtually identical sausages. Whilst on the subject of sausages, the famous 19th Century Prussian statesman, Otto von Bismarck, once described legislative procedures in a memorably pithy fashion, namely that … “Laws are like sausages, it is better not to see them being made”.

One must suspect that those who are eager and willing to accept Mr Monckton’s arguments at face value are somehow suffused with a similar kind of “don’t need to know, don’t want to know” mentality. However, some of us are both able and willing to scratch a little way beneath the skin of the sausage. On examining one of Mr Monckton’s prize sausages, it takes all of about 2 seconds to work out what has been done, and about two minutes to reproduce it on a spreadsheet. That simple action is all that is needed to see how the appropriate start date for his “pause” automatically pops out of the data.

However, enough of the hors d’oeuvres, it’s time to see how those sausages get made. Let’s immediately get to the meat of the matter (pun intended) by demonstrating precisely how Mr Monckton arrives at his “no global warming since …” date. The technique is incredibly straightforward, and can be done by anyone with even rudimentary spreadsheet skills.

One basically uses the spreadsheet’s built-in features, such as the SLOPE function in Excel, to calculate the rate of change of monthly temperature over a selected time period. The appropriate command would initially be inserted on the same row as the first month of data, with its range set to run to the latest date available. This would be repeated (using a feature such as Auto Fill) on each subsequent row, down as far as the penultimate month. On each row, the start date therefore advances by one month, but the end date remains fixed. (As the SLOPE function is measuring rate of change, there must be at least two items in the range, which is why the penultimate month is the latest possible start date.)
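
For anyone who prefers code to spreadsheets, here is a minimal Python sketch of the same procedure. It is emphatically not Mr Monckton’s actual spreadsheet: it assumes a hypothetical file of RSS monthly anomalies named “rss_tlt_monthly.csv” (with year, month and anomaly columns) and uses an ordinary least-squares fit in place of Excel’s SLOPE function:

```python
# A minimal sketch of the "advancing start date, fixed end date" calculation
# described above. The input file name and layout are assumptions, not the
# real RSS distribution format.
import numpy as np
import pandas as pd

data = pd.read_csv("rss_tlt_monthly.csv")              # columns: year, month, anomaly
anom = data["anomaly"].to_numpy()
dates = pd.to_datetime(dict(year=data["year"], month=data["month"], day=1))

# For each candidate start month, fit a least-squares trend through to the fixed
# final month - the equivalent of dragging Excel's SLOPE function down the sheet.
slopes = []
for start in range(len(anom) - 1):                      # at least two points needed
    y = anom[start:]
    x = np.arange(len(y)) / 120.0                       # months -> decades
    slopes.append(np.polyfit(x, y, 1)[0])               # deg C per decade

# The "no global warming since mm/yy" date is simply the earliest start month
# from which the fitted trend is not positive.
for start, slope in enumerate(slopes):
    if slope <= 0:
        print(f"'Pause' starts {dates[start]:%b %Y}: trend {slope:+.3f} C/decade")
        break
```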

That might sound slightly complex, but if one then displays the results graphically, it becomes obvious what is happening, as shown below…

Fig 2: Variation in temperature gradient. End date June 2014

On the above chart (Fig 2), it can clearly be seen that, after about 13 or 14 years of stability, the rate of change of temperature starts to oscillate wildly as one looks further to the right. Mr Monckton’s approach has been simply to note the earliest transition point, and then declare that there has been no warming since that date. One could, if being generous, describe this as a somewhat naïve interpretation, although others may feel that a stronger adjective would be more appropriate. However, given his classical education, it is difficult to say why he does not seem to comprehend the difference between veracity and verisimilitude. (The latter being the situation when something merely has the appearance of being true – as opposed to actually being the real thing.)

Fig 2 is made up from 425 discrete gradient values, each generated (in this case) using Excel’s SLOPE function. Of these, 122 are indeed below the horizontal axis, and can therefore be viewed as demonstrating a negative (i.e. cooling) trend. However, that also means that just over 70% show a trend that is positive. Indeed, if one performs a simple arithmetic average across all 425 data points, the value thus obtained is 0.148 degrees Celsius per decade.

(In the spirit of honesty and openness, it must of course be pointed out that the aggregated warming trend of 0.148 degrees Celsius/decade thus obtained has just about the same level of irrelevance as Mr Monckton’s “no warming since mm/yy” claim. Neither has any real physical meaning, as, once one gets closer to the end date(s), the values can swing wildly from one month to the next. In Fig 2, the sign of the temperature trend changes 8 times from 1996 onwards. A similar chart created at the time of his December 2013 article would have had no fewer than 13 sign changes over a similar period. This is because the period in question is too short for the warming signal to unequivocally emerge from the noise.)

As one adds more and more data, a family of curves gradually builds up, as shown in Fig 3a below.

Fig 3a: Family of curves showing how end-date also affects temperature gradient

It should be clear from Fig 3a that each temperature gradient curve migrates upwards (i.e. more warming) as each additional 6-month block of data comes in. This is only to be expected, as the impact of isolated events – such as the temperature spike created by the 1997/98 El Niño – gradually wanes as those events are diluted by the addition of further data. The shaded area in Fig 3a is expanded below as Fig 3b in order to make this effect more obvious.

Fig 3b: Expanded view of curve family

By the time we are looking at an end date of December 2015, the relevant curve now consists of 443 discrete values, of which just 39, or 9%, are in negative territory. Even if one only considers values to the right of the initial transition point, a full 82% of these are positive. The quality of Mr Monckton’s prize-winning sausages is therefore revealed as being dubious in the extreme. (The curve has not been displayed, but the addition of a single extra month – January 2016 – further reduces the number of data points below the zero baseline to just 26, or 6%.) To anyone tracking this, there was only ever going to be one outcome: eventually, the curve was going to end up above the zero baseline. The ongoing El Niño conditions have merely served to hasten the inevitable.

With the release of the February 2016 data from RSS, this is precisely what happened. We can now add a fifth curve using the most up-to-date figures available at the time of writing. This is shown below as Fig 4.

Fig 4: Further expansion of curve families incorporating latest available data (Feb 2016)

As soon as the latest (Feb 2016) data is added, the fifth member of the curve family (in Fig 4) no longer intersects the horizontal axis – anywhere. When this happens, all of Mr Monckton’s various sausages reach their collective expiry date, and his entire fantasy of “no global warming since mm/yy” simply evaporates into thin air.

Interestingly, although Mr Monckton chooses to restrict his “analysis” to only the Lower Troposphere Temperatures produced by Remote Sensing Systems (RSS), another TLT dataset is available from the University of Alabama in Huntsville (UAH). Now, this omission seems perplexing, as Mr Monckton took time to emphasise the reliability of the satellite record in his article dated May 2014.

In his Technical Note to this article, Mr Monckton tells us…

The satellite datasets are based on measurements made by the most accurate thermometers available – platinum resistance thermometers, which not only measure temperature at various altitudes above the Earth’s surface via microwave sounding units but also constantly calibrate themselves by measuring via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years.

Now, that certainly makes it all sound very easy. It’s roughly the metaphorical equivalent of the entire planet being told to drop its trousers and bend over, as the largest nurse imaginable approaches, all the while gleefully clutching at a shiny platinum rectal thermometer. Perhaps a more balanced perspective can be gleaned by reading what RSS themselves have to say about the difficulties involved in Brightness Temperature measurement.

When one looks at Mr Monckton’s opening sentence referring to “the most accurate thermometers available”, one would certainly be forgiven for thinking that there must perforce be excellent agreement between the RSS and UAH datasets. This meme, that the trends displayed by the RSS and UAH datasets are in excellent agreement, is one that appears to be very pervasive amongst those who regard themselves as climate change sceptics. Sadly, few of these self-styled sceptics seem to understand the meaning behind the motto “Nullius in verba”.

Tellingly, this “RSS and UAH are in close agreement” meme is in stark contrast to the views of the people who actually do that work for a living.

Carl Mears (of RSS) wrote an article back in September 2014 discussing the reality – or otherwise – of the so-called “Pause”. In the section of this article dealing with measurement errors, he wrote that …

 A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!) [my emphasis]

The views of Roy Spencer from UAH concerning the agreement (or, more accurately, the disagreement) between the two satellite datasets must also be considered. Way back in July 2011, Dr Spencer wrote

… my UAH cohort and boss John Christy, who does the detailed matching between satellites, is pretty convinced that the RSS data is undergoing spurious cooling because RSS is still using the old NOAA-15 satellite which has a decaying orbit, to which they are then applying a diurnal cycle drift correction based upon a climate model, which does not quite match reality.

So there we are, Carl Mears and Roy Spencer, who both work independently on satellite data, have views that are somewhat at odds with those of Mr Monckton when it comes to agreement between the satellite datasets. Who do we think is likely to know best?

The closing sentence in that paragraph from the Technical Note did give rise to a wry smile. I’m not sure what relevance Mr Monckton thinks there is between global warming and a refined value for the Hubble Constant, but, for whatever reason, he sees fit to mention that the Universe was born nearly 14 billion years ago. The irony of Mr Monckton mentioning this in an article which treats his target audience as though they were born yesterday appears to have passed him by entirely.

Moving further into Mr Monckton’s Technical Note, the next two paragraphs basically sound like a used car salesman describing the virtues of the rust bucket on the forecourt. Instead of trying to make himself sound clever, Mr Monckton could simply have said something along the lines of … “If you want to verify this for yourself, it can easily be done by simply using the SLOPE function in Excel”. Of course, Mr Monckton might prefer his readers not to think for themselves.

The final paragraph in the Technical Note reads as follows…

Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because the data are highly variable and the trend is flat.

Well, this is an example of the logical fallacy known as “Argument from Authority” combined with a blatant attempt at misdirection. The accuracy of the “… algorithm that determines the trend …” has absolutely nothing to do with Mr Monckton’s subsequent interpretation of the results, although that is precisely what the reader is meant to think. The good professor may well be seriously gifted at statistics, but that doesn’t mean he speaks with any authority about atmospheric science or about satellite datasets.

Also, for the sake of the students at Melbourne University, I would hope that Mr Monckton was extemporizing at the end of that paragraph. It is simply nonsense to suggest that the “flatness” of the trend displayed in his Fig 1 is in any way responsible for the trend equation also having an R² value of (virtually) zero. The value of the coefficient of determination (R²) ranges from 0 to 1, and wildly variable data can most certainly result in an R² value of zero, or thereabouts, but the value of the trend itself has little or no bearing upon this.

The phraseology used in the Technical Note would appear to imply that, as both the trend and the coefficient of determination are effectively zero, this should be interpreted as two distinct and independent factors which serve to corroborate each other. Actually, nothing could be further from the truth.

The very fact that the coefficient of determination is effectively zero should be regarded as a great big blazing neon sign which says “the equation to which this R² value relates should be treated with ENORMOUS caution, as the underlying data is too variable to infer any firm conclusions”.

To demonstrate that a (virtually) flat trend can have an R² value of 1, anyone can try inputting the following numbers into a spreadsheet …

10.0 10.00001 10.00002 10.00003 etc.

Use the Auto Fill capability to do this automatically for about 20 values. The slope of this set of numbers is a mere one part in a million, and is therefore, to all intents and purposes, almost perfectly flat. However, if one adds a trend line and asks for the R² value, it will return a value of 1 (or very, very close to 1). (NB When I tried this first with a single recurring integer – i.e. absolutely flat – Excel returned an error value. That’s why I suggest using a tiny increment, such as the one in a million slope mentioned above.)
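
For those who would rather not fire up a spreadsheet at all, here is a small Python sketch of the same point. The near-flat series is the one suggested above; the noisy series is simply random numbers, included to show that it is the variability of the data, not the size of the trend, that drags R² towards zero:

```python
# A quick numerical illustration: a virtually flat series still yields an
# R-squared of (essentially) 1, whereas noisy data drives R-squared towards
# zero whatever the trend happens to be.
import numpy as np

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

x = np.arange(20, dtype=float)
flat = 10.0 + 1.0e-5 * x                              # 10.0, 10.00001, 10.00002, ...
noisy = 0.5 * np.random.default_rng(0).standard_normal(20)

print(f"near-flat series: R^2 = {r_squared(x, flat):.6f}")   # very close to 1
print(f"noisy series:     R^2 = {r_squared(x, noisy):.6f}")  # close to 0
```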

Enough of the Technical Note nonsense, let’s look at the UAH dataset as well. Fig 5 (below) is a rework of the earlier Fig 2, but this time with the UAH dataset added, as well as an equally weighted (RSS+UAH) composite.

Fig 5: Comparison between RSS and UAH (June 2014)

The difference between the RSS and UAH results makes it clear why Mr Monckton chose to focus solely on the RSS data. At the time of writing this present article, the RSS and UAH datasets each extended to February 2016, and Fig 6 (below) shows graphically how the datasets compare when that end date is employed.

Fig 6: Comparison between RSS and UAH (Feb 2016)

In his sausage with the November 2015 sell-by-date, Mr Monckton assured his readers that “…The UAH dataset shows a Pause almost as long as the RSS dataset.”

Even just a moment or two spent considering the UAH curves on Fig 5 (June 2014) and then on Fig 6 (February 2016) would suggest precisely how far that claim is removed from reality. However, for those unwilling to put in this minimal effort, Fig 7 is just for you.

Fig 7: UAH TLT temperature gradients over three different end dates.

From the above diagram, it is rather difficult to see any remote justification for Mr Monckton’s bizarre assertion that “…The UAH dataset shows a Pause almost as long as the RSS dataset.”

Moving on, it is clear that, irrespective of the exact timeframe, both datasets exhibit a reasonably consistent “triple dip”. To understand the cause(s) of this “triple dip” in the above diagrams (at about 1997, 2001 and 2009), one needs to look at the data in the usual anomaly format, rather than in the gradient format used in Figs 2 – 7.

Fig 8: RSS TLT anomalies smoothed over 12-month and 60-month periods

The monthly data looks very messy on a chart, but the 12-month and 60-month smoothing applied in Fig 8 perhaps makes some details easier to see. The peaks resulting from the big 1997/98 El Niño and the less extreme 2009/10 event are very obvious on the 12-month data, but the impact of the prolonged series of 4 mini-peaks centred around 2003/04 shows up more on the 60-month plot. At present, the highest 60-month rolling average is centred near this part of the time series. (However, that may not be the case for much longer. If the next few months follow a similar pattern to the 1997/98 event, both the 12- and 60-month records are likely to be surpassed. Given that March and April 2015 were the two coolest months of that year in the RSS TLT record, it is highly likely that a new rolling 12-month record will be set in April 2016.)
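
For the curious, here is a minimal sketch of the sort of centred smoothing used in Fig 8, once again assuming the hypothetical “rss_tlt_monthly.csv” file of monthly RSS TLT anomalies mentioned earlier:

```python
# A minimal sketch of the centred 12- and 60-month smoothing used in Fig 8,
# reading the same hypothetical "rss_tlt_monthly.csv" file as before.
import pandas as pd

data = pd.read_csv("rss_tlt_monthly.csv")
rss = pd.Series(
    data["anomaly"].to_numpy(),
    index=pd.to_datetime(dict(year=data["year"], month=data["month"], day=1)),
)

def smooth(series: pd.Series, months: int) -> pd.Series:
    """Centred rolling mean, reported only where a full window is available."""
    return series.rolling(window=months, center=True, min_periods=months).mean()

rss_12 = smooth(rss, 12)   # brings out the 1997/98 and 2009/10 El Nino peaks
rss_60 = smooth(rss, 60)   # emphasises the cluster of mini-peaks around 2003/04
print("Highest centred 12-month mean:", rss_12.idxmax(), round(rss_12.max(), 3))
```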

Whilst this helps explain the general shape of the curve families, it does not explain the divergence between the RSS and the UAH data. To show this effect, two approaches can be adopted: one can plot the two datasets together on the same chart, or one can derive the differences between RSS and UAH for every monthly value and plot that result.

In the first instance, the equivalent UAH rolling 12- and 60-month values have effectively been added to the above chart (Fig 8), as shown below in Fig 9.

Fig 9: RSS and UAH TLT anomalies using 12- and 60-month smoothing

On this chart (Fig 9) it can be seen that the smoothed anomalies start a little way apart, diverge near the middle of the time series, and then gradually converge as one looks toward the more recent values. Interestingly, although the 60-month peak at about 2003/04 in the RSS data is also present in the UAH data, it has long since been overtaken.

The second approach would involve subtracting the UAH monthly TLT anomaly figures from the RSS equivalents. The resulting difference values are plotted on Fig 10 below, and are most revealing. The latest values on Figs 9 and 10 are for February 2016.

Fig 10: Differences between RSS and UAH monthly TLT values up to Feb 2016

Even without the centred 60-month smoothed average, the general shape emerges clearly. The smoothed RSS values start off about 0.075 Celsius above the UAH values, but by about 1999 or 2000, this delta has risen to +0.15 Celsius. It then begins a virtually monotonic drop such that the 6 most recent rolling 60-month values have gone negative.

NB It is only to be expected that the dataset comparison begins with an offset of this magnitude. The UAH dataset anomalies are based upon a 30-year climatology stretching from 1981 to 2010. However, RSS uses instead a 20-year baseline running from 1979 to 1998. The midpoints of the two baselines are therefore 7 years apart. Given that the overall trend is currently in the order of 0.12 Celsius per decade, one would reasonably expect the starting offset to be pretty close to 0.084 Celsius. The actual starting point (0.075 Celsius) was therefore within about one hundredth of a degree Celsius of this figure.
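
Purely by way of illustration, here is a sketch of how the Fig 10 differences and that back-of-envelope baseline check might be computed, assuming hypothetical “rss_tlt_monthly.csv” and “uah_tlt_monthly.csv” files of monthly anomalies in the same layout as before:

```python
# A sketch of the Fig 10 construction - the month-by-month difference between
# the RSS and UAH TLT anomalies - plus the back-of-envelope baseline check
# described above. Both input files are hypothetical stand-ins.
import pandas as pd

def load_tlt(path: str) -> pd.Series:
    """Read a hypothetical CSV of monthly TLT anomalies (year, month, anomaly)."""
    data = pd.read_csv(path)
    index = pd.to_datetime(dict(year=data["year"], month=data["month"], day=1))
    return pd.Series(data["anomaly"].to_numpy(), index=index)

rss = load_tlt("rss_tlt_monthly.csv")
uah = load_tlt("uah_tlt_monthly.csv")

diff = (rss - uah).dropna()                              # RSS minus UAH, per month
diff_60 = diff.rolling(60, center=True, min_periods=60).mean()

# Baselines: RSS anomalies are relative to 1979-1998, UAH to 1981-2010. The
# midpoints are ~7 years apart, so at ~0.12 C/decade the expected offset is:
expected_offset = 0.12 * 7 / 10
print(f"Expected baseline offset: ~{expected_offset:.3f} C")
print(f"Earliest smoothed RSS-UAH difference: {diff_60.dropna().iloc[0]:+.3f} C")
```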

Should anyone doubt the veracity of the above diagram, here is a copy of something similar taken from Roy Spencer’s web pages. Apart from the end date, the only real difference is that whereas Fig 10 has the UAH monthly values subtracted from the RSS equivalent, Dr Spencer has subtracted the RSS data from the UAH equivalent, and has applied a 3-month smoothing filter. This is reproduced below as Fig 11.

Fig 11: Differences between UAH and RSS (copied from Dr Spencer’s blog)

This actually demonstrates one of the benefits of genuine scepticism. Until I created the plot in Fig 10, I was sure that the 97/98 El Niño was almost entirely responsible for the apparent “pause” in the RSS data. However, it would appear that the varying divergence from the equivalent UAH figures also has a very significant role to play. Hopefully, the teams from RSS and UAH will, in the near future, be able to offer some mutually agreed explanation for this divergent behaviour. (Although both teams are about to implement new analysis routines – RSS going from Ver 3.3 to Ver 4, and UAH going from Ver 5.6 to Ver 6.0 – mutual agreement appears to be still in the future.)

Irrespective of this divergence between the satellite datasets, the October 2015 TLT value given by RSS was the second largest in that dataset for that month. That was swiftly followed by monthly records for November, December and January. The February value went that little bit further and was the highest in the entire dataset. In the UAH TLT dataset, September 2015 was the third highest for that month, with each of the 5 months since then breaking the relevant monthly record. As with its RSS equivalent, the February 2016 UAH TLT figure was the highest in the entire dataset. In fact, the latest rolling 12-month UAH TLT figure is already the highest in the entire dataset. This would certainly appear to be strange behaviour during a so-called pause.

As sure as the sun rises in the east, these record breaking temperatures (and their effect on temperature trends) will be written off by some as merely being a consequence of the current El Niño. It does seem hypocritical that these people didn’t feel that a similar argument could be made about the 1997/98 event. An analogy could be made concerning the measurement of Sea Level Rise. Imagine that someone – who rejects the idea that sea level is rising – starts their measurements using a high tide value, and then cries foul because a subsequent (higher) reading was also taken at high tide.

This desperate clutching at straws will doubtless continue unabated, and a new “last, best hope” has already appeared in the guise of Solar Cycle 25. Way back in 2006, an article by David Archibald appeared in Energy & Environment telling us how Solar Cycles 24 & 25 were going to cause temperatures to plummet. In the Conclusion to this paper, Mr Archibald wrote that …

A number of solar cycle prediction models are forecasting weak solar cycles 24 and 25 equating to a Dalton Minimum, and possibly the beginning of a prolonged period of weak activity equating to a Maunder Minimum. In the former case, a temperature decline of the order of 1.5°C can be expected based on the temperature response to solar cycles 5 and 6.

Well, according to NASA, the peak for Solar Cycle 24 passed almost 2 years ago, so it’s not looking too good at the moment for that prediction. However, Solar Cycle 25 probably won’t peak until about 2025, so that will keep the merchants of doubt going for a while.

Meanwhile, back in the real world, it is very tempting to make the following observations …

  • The February TLT value from RSS seems to have produced the conditions under which certain allotropes of the fabled element known as Moncktonite will spontaneously evaporate, and …
  • If Mr Monckton’s sausages leave an awfully bad taste in the mouth, it could be due to the fact that they are full of tripe.

Inevitably however, in the world of science at least, those who seek to employ misdirection and disinformation as a means to further their own ideological ends are doomed to eventual failure. In the closing paragraph to his “Personal observations on the reliability of the Shuttle”, the late, great Richard Feynman used a phrase that should be seared into the consciousness of anyone writing about climate science, especially those who are economical with the truth…

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

Remember that final phrase – “nature cannot be fooled”, even if vast numbers of voters can be!

DMIGate Skulduggery In a Nutshell

With apologies to O’Reilly Media Inc. here’s a brief history of the “DMIGate” story, viewed through Anthony Watts’ distorting spectacle lenses.

0) Here is the February 14th 2016 edition of the Danish Meteorological Institute’s long “deprecated” 30% concentration threshold Arctic sea ice extent graph in question:

dmi-30-WUWT-20160214-crop

1) On August 14th 2013 Anthony Watts wrote on his “Watts Up With That” blog:

That’s the old DMI plot, which DMI says we should now use this one on this page:
http://ocean.dmi.dk/arctic/icecover.uk.php

2) On February 22nd 2016 Anthony Watts wrote on WUWT:

There has been so much skulduggery going on in the climate establishment in recent years that it is hard to avoid the conclusion that this graph has been withdrawn simply because it gives the “wrong” results.

3) Anthony Watts refuses to publish any and all comments on his blog pointing out what he himself had confirmed DMI said in August 2013.

4) Anthony Watts states on Judith Curry’s “Climate Etc.” blog that:

You post off topic or disrupt threads with the sort of unsubstantiated nonsense you post above, and both demand to have these off topic comments heard and then play the “look Watts is censoring me!” game when your comments don’t meet our site comment policy and/or are abusive in nature.

5) Judith Curry deletes the following comment (amongst others) on her “Climate Etc.” blog:

David – Are you suffering from acute snow blindness too, just like poor Paul Homewood? Try reading this if you haven’t already. Try reading it again if you have:

https://greatWhiteCon.info/2016/02/gross-deception-about-dmis-missing-graph/

In view of the incontrovertible evidence, why would anyone believe anything Paul Homewood, Anthony Watts and Judith Curry claim about “Climate Etc.” ever again?

The 2016 Arctic Winter Sea Ice Puzzle

Professor Judith Curry recently posed the following rhetorical question on her “Climate Etc.” blog:

Arctic sea ice extent has been anomalously low this winter. The greatest anomalies are in the European sector, specifically in the Barents Sea. To what extent are the anomalies associated with warm temperatures?

Which she answered as follows:

So, what might be causing this particular anomaly? Some possibilities are:

  • Global warming (January 2016 was warmest Jan on record, according to the surface temperature analyses)

  • Multidecadal oscillations (e.g. stadium wave) predict ice recovery to be occurring in the same region (European Arctic) where we see the sea ice decline.

  • Seasonal weather circulation patterns – this has been a year with unusual weather patterns, with both low temperature and high temperature records being set.

As regular readers will already be aware, we have been blogging about anomalously warm temperatures in the Arctic all year, and so felt well qualified to contribute to the “debate”. What a job that turned out to be! Early on in the proceedings the anticipated pronouncement was made by one of Judith’s “denizens”: a link to a ludicrously inaccurate article on Watts Up With That, accompanied by the following words of wisdom:

Other measures are high.

Which of course they aren’t! Instead of stating the bleedin’ obvious Professor Curry replied:

I spotted this, no idea what to make of it.

You would think she and her denizens would therefore have been pleased when I attempted to explain to her what to make of it, but you would have been mistaken. The icing on the ad hominem cake was the aforementioned Anthony Watts driving by to accuse me of all sorts of nefarious activities without providing a single shred of evidence and then running for the hills when invited to actually prove his ludicrous allegations.

Since the denizens of “Climate Etc.” aren’t particularly interested, let’s take stock here instead shall we? After weeks of every Arctic area and extent metric under the sun sitting at “lowest *ever” levels, a recent increase in coverage on the Pacific side of the Arctic has changed that. The most up to date example of that is the JAXA/ADS extent, which currently looks like this:

vishop_extent-20160228

The latest reading is the merest whisker above 2015’s record low maximum. However in other respects things are most certainly not comparable with 2015. See for example this concentration comparison from Andrew Slater of the NSIDC:

ice_con_delt_20160227

There is much more ice on the Pacific periphery, where it will all have disappeared by September, as opposed to much less ice on the Atlantic side, even well to the north of 80 degrees latitude where the sun still does not shine. Here’s a video revealing how the sea ice north of the Pacific Ocean has been reacting to the sequence of hurricane force storms that have been passing through the area over the past couple of months:

Now let’s take a look at “near real time” Arctic sea ice thickness as measured by the CryoSat 2 satellite:

CryoSat2_20160225

Notice the absence of any thick ice in the Beaufort and Chukchi Seas – an obvious difference from last year. Notice too the large area of thick ice that looks as though it’s heading towards the Fram Strait exit from the Central Arctic. Here’s another video, this time of sea ice movement over on the Atlantic side of the Arctic Ocean:

Those dark areas between Svalbard and the North Pole are suddenly starting to look as though they represent reality rather than a mere “artifact”, although perhaps they are merely transient evidence of yet another Arctic “heat wave”?

Gross Deception About MASIE and the Sea Ice Index

Our title for today is a reference back to a 2015 article by Paul Homewood on his “Not A Lot Of People Know That” blog, in which he told a load of old porky pies about the National Snow and Ice Data Center’s graphs of Arctic sea ice extent. Only last week Mr. Homewood cooked up another pile of porky pies concerning the Danish Meteorological Institute’s Arctic sea ice extent metric. Now he has turned his pie baking skills to the Multisensor Analyzed Sea Ice Extent (MASIE for short), which is “a prototype collaborative product of the National Ice Center and NSIDC”.

Mr. Homewood obviously hadn’t done a whole lot of homework on MASIE before firing up his porky pie production line, since he used the selfsame recipe posted on the so-called “Science Matters” blog of Ron Clutz shortly before. In fact he just reprinted the first part of Ron’s article and added a handy link to Ron’s even bigger pile of porky pies beneath it. Hence both Paul and Ron’s web sites currently proudly proclaim that:

Something strange is happening in the reporting of sea ice extents in the Arctic. I am not suggesting that “Something is rotten in the state of Denmark.” That issue about a Danish graph seems to be subsiding, though there are unresolved questions. What if the 30% DMI graph is overestimating and the 15% DMI graph is underestimating?

The MASIE record from NIC shows an average year in progress, with new highs occurring well above the 2015 maximum:

Clutz-masie-2016-to-day-56r

While I am compelled to agree with Ron and Paul that “something strange is happening in the reporting of sea ice extents in the Arctic”, we disagree about everything else! One reason for that is that only a few days ago I interviewed NASA scientist Walt Meier, following a suggestion by Ron Clutz that I do precisely that. Please read the edited highlights of that interview, and note in addition that Walt assured me that he had not previously been contacted by either of Messrs. Clutz and Homewood. Despite never having contacted Dr. Meier himself, here’s what Ron Clutz would have his loyal readership believe this weekend:

NOAA Is Losing Arctic Ice

Why the Discrepancy between SII and MASIE?

The issue also concerns Walter Meier who is in charge of SII, and as a true scientist, he is looking to get the best measurements possible. He and several colleagues compared SII and MASIE and published their findings last October. The purpose of the analysis was stated thus:

Our comparison is not meant to be an extensive validation of either product, but to illustrate as guidance for future use how the two products behave in different regimes.

Here is what Dr. Meier’s peer reviewed paper from October last year concluded on the matter:

Operational modelers require timely data that are as accurate as possible to initialize forecast models. In particular, an accurate ice edge is important because of the influence of the interaction of sea ice and water with the overlying atmosphere on the model fluxes. Consistency of data is also desirable for operational models, but is a secondary concern because the models are regularly reinitialized for their synoptic forecasts. Operational observations like MASIE make the most sense for these applications. However, the quality and amount of information used to produce the operational analyses vary.

Climate modelers desire consistent long-term data to minimize model biases and better understand and potentially improve model physics. The passive microwave record is useful, but has limitations. Regions of thin ice are underestimated and if the ice cover is diffuse with low concentration, ice-covered regions may be detected as open water. Even thin ice modifies heat and moisture transfer and thus may affect atmospheric and oceanic coupling. Surface melt results in an underestimation of concentration. This should be considered when evaluating model concentrations with passive microwave data.

and here once again is what he told me a few short days ago:

Since the quantity and quality of [MASIE] data varies the time series will not be consistent over time.

For some strange reason Mr. Clutz’s article mentions none of this. Needless to say I have attempted to bring this unfortunate oversight to the attention of Ron & Paul:

2016-02-28_1007-RonClutz

Even more unfortunately it seems that their joint acute snow blindness has got even worse over the last few days, since they still haven’t noticed my link to Walt Meier’s words waiting patiently in their WordPress.com “moderation queues”.

Whilst we wish them a speedy recovery from their painful ailment, here’s an alternative interpretation of the MASIE data:

MASIE-Min

Operational modellers for the use of! Here’s what the NSIDC’s Sea Ice Index currently reveals to climate scientists:

Charctic-20160226

NSIDC Arctic sea ice extent is evidently still currently lowest *ever for the date.

The “DMIGate” Dodo is Pushing up the Daisies

Our title this morning is but a brief extract from a conversation I started on Judith Curry’s “Climate Etc.” blog about (believe it or not!) the effects of large wind driven swells on the “Marginal Ice Zone” of sea ice in the Arctic:

2016-02-25_1159-Judy

Whilst “skeptics” like Pethewin keep on flogging that particular “dead parrot”, the DMI have just published an explanation of the erroneously high readings on their now deceased 30% concentration threshold Arctic sea ice extent metric. Here’s an extract, together with a pretty picture for the benefit of those amongst us Cryospheric commenters who are hard of understanding:

The apparent elevated sea ice extent in the data from the old extent algorithm was an artifact, caused by a new and higher resolution coast mask.

Surely that’s easy enough for even the dumbest of all the armchair Arctic analysts scattered across the internet to comprehend?

Going into more detail, the DMI article explains that:

Most of our sea ice extent followers know that the old plot includes a coastal mask, inside which sea ice was not accounted for. In summer 2015 this mask was refined and the masked region was subsequently smaller, thus leaving more area for classified sea ice and open water. The difference in masked area, before and after summer 2015, is approximately 1.4 million km2. This corresponds to the difference of the blue coast lines in figure 2, showing the old and new coastal masks in the left and right panels, respectively. The difference may be difficult to detect on the figure, but the area is quite significant. The increasing sea ice extent that is caused by the new coast mask is not great during summer, because sea ice has a relatively short line of contact with land during summer. But the new and finer coast mask will result in increasingly more sea ice, compared to previous years during winter, as the coast line with sea ice contact is increasing. This is the reason for an increasing sea ice extent during the current freeze-up period, relative to previous winters. A comparison of the 2015/2016 sea ice extent with previous years does therefore not make sense.

Note how the dark blue “coastal mask” in the left hand image from February 22nd 2015 is thicker than the one in the right hand image from the same date this year. DMI conclude with this apology:

Because of the deprecated status of the old plot in the past year, DMI has not been monitoring these irregularities. The old plot should, of course, have been removed when the mask was replaced. DMI apologizes for the confusion and inconvenience this has caused.

Somehow I doubt that the assorted “skeptics” that have recently been making massive mountains out of this minor molehill will apologise for all “the confusion and inconvenience this has caused”. Causing confusion and inconvenience was probably the general idea.