by Nomad
Thu Aug 7th, 2008 at 08:45:50 AM EST
At Colman's request...
I'll briefly return to an old schtick of mine: the conviction that climate models are the be-all and end-all of predicting our future climate rests on very shaky ground indeed.
A new paper from hydrologist Demetris Koutsoyiannis and co-authors has just been released which has the potential to stir up ripples in the climate field:
On the credibility of climate predictions / De la crédibilité des prévisions climatiques
Geographically distributed predictions of future climate, obtained through climate models, are widely used in hydrology and many other disciplines, typically without assessing their reliability. Here we compare the output of various models to temperature and precipitation observations from eight stations with long (over 100 years) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.
Bold mine.
Dug out by afew
The paper is open access (PDF here) and thus freely available for your own scrutiny. I have gone through large parts of it; it is written in a friendly, explanatory style, although it gets subjective at times.
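To give a flavour of the kind of comparison involved, here is a rough sketch of my own - not the paper's exact procedure, and the made-up series, the 30-year smoothing and the efficiency metric below are purely illustrative choices. The idea: take a long station record and the corresponding model grid-cell series, look at them both year by year and smoothed to the climatic (30-year) scale, and ask how well the model tracks the observations.

```python
import numpy as np

def climatic_scale(series, window=30):
    """30-year moving average of an annual series."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

def efficiency(observed, modelled):
    """Nash-Sutcliffe-style coefficient of efficiency: 1 is a perfect fit,
    0 means no better than just using the observed mean, negative is worse."""
    return 1.0 - np.sum((observed - modelled) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Fake 100-year annual temperature series, purely to make the sketch runnable;
# in the paper's setting these would be a station record and the nearest grid cell.
rng = np.random.default_rng(0)
obs = 15.0 + rng.normal(0.0, 0.6, size=100)
mod = 15.3 + rng.normal(0.0, 0.6, size=100)

for label, o, m in [("annual", obs, mod),
                    ("climatic (30-yr)", climatic_scale(obs), climatic_scale(mod))]:
    print(f"{label:>16}: correlation = {np.corrcoef(o, m)[0, 1]:+.2f}, "
          f"efficiency = {efficiency(o, m):+.2f}")
```

The eight real stations and the actual GCM output do the work in the paper itself, of course; the sketch is only meant to show the mechanics of comparing at the annual versus the climatic scale.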
The research has already been picked over by a number of climate scientists, including Gavin Schmidt of RealClimate fame, who didn't seem to like the preview much:
RealClimate
With all due respect to the authors, they do not appear to know very much about either TAR or AR4. Looking at the statistics of local temperature and precipitation is useful but picking just a few long records and comparing to the nearest individual grid cells is not sensible. The differences in topography and local micro-climates are probably large and will make a big difference. A better approach would have been to look at aggregated statistics over larger areas. This has in fact been done though - for instance Blender and Fraedrich (2003), and there was a recent paper that looked at the AR4 models (in GRL maybe? - I can't quickly find the reference). The most curious aspect of this paper's reception in the blogosphere is that the authors use the surface station records which in all other circumstances the cheer squad would be condemning as being horribly contaminated. Just saying. - gavin
But then Schmidt doesn't seem to like anything that creates uncertainty around climate models...
Koutsoyiannis addressed the method of selecting surface records after a conference presentation had appeared on the web:
Climate Audit - by Steve McIntyre » Koutsoyiannis 2008 Presentation
No, we did not do any cherry picking. We retrieved long data series without missing data (or with very few missing data) available on the Internet. We had decided to use eight stations and we retrieved data for eight stations, choosing them with the sample size criterion (> 100 years - with one exception in rainfall in Australia, Alice Springs, because we were not able to find a station with sample sizes > 100 years for both rainfall and temperature) and also a criterion to cover various types of climate and various locations around the world. Otherwise, the selection was random. We did not throw away any station after checking how well it correlates with GCMs. We picked eight stations, we checked these eight stations, and we published the results for these eight stations. Not even one station gave results better than very poor. Anyone who has doubts can try any other station with long observation series. Our experience makes us pretty confident that the results will be similar to ours, those of the eight stations. And anyone who disagrees with our method of falsification/verification of GCM results may feel free to propose and apply a better one. What we insist on is that verification/falsification should be done now (not 100 years after), based on past data. The joke of casting deterministic predictions for a hundred years ahead, without being able (and thus avoiding) to reproduce past behaviours, is really a good one, as things show. But I think it is just a joke.
(Editor's note: Written by Demetris Koutsoyiannis)
Despite Colman's insistence on drawing conclusions, I don't think anything very solid can be taken away from Koutsoyiannis' work at this point - a repeat investigation (by others or the same team) that extends the data set could shed further light on whether this is just a fluke or a genuinely inherent feature.
However, I do feel this is the kind of criticism - coming from serious scientists who stand outside the "core business" of modelling - that should not simply be waved away with vagaries conveying a sense of 'they don't know what they're talking about'. I'm biased, of course, because I give greater credence to measured data than to (predictive) models, and apparently so does Koutsoyiannis...
Going out on a limb, perhaps one can spin at least two thoughts from this:
- Positively: alarmism could be overblown and the narrative based on climate models doesn't hold much water
- Negatively: we really don't know yet what's in store for our future climate.
People who know better than I do can perhaps discuss why a stochastic approach to climate modelling is superior to a deterministic one, as Koutsoyiannis et al. argue, or why that would be rubbish.
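For what it's worth, here is a toy illustration of my own (not from the paper) of one part of that argument: if the climate system has strong persistence, then even 30-year "climatic" averages wander around far more than the usual independent-noise picture suggests, which is one reason a stochastic description of natural variability matters. The AR(1) process below is only a crude stand-in for the Hurst-Kolmogorov behaviour Koutsoyiannis actually works with.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi):
    """Simple AR(1) process: x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def sd_of_30yr_means(series, window=30):
    """Split a long 'annual' series into 30-year blocks and return the
    standard deviation of the block means."""
    blocks = series[: len(series) // window * window].reshape(-1, window)
    return blocks.mean(axis=1).std()

n = 30000                          # a long synthetic "annual" record
white = rng.normal(size=n)         # independent year-to-year noise
persistent = ar1(n, phi=0.9)       # strongly autocorrelated noise
persistent /= persistent.std()     # rescale to the same overall variance as the white noise

print("SD of 30-year means, independent noise:", round(sd_of_30yr_means(white), 3))
print("SD of 30-year means, persistent noise: ", round(sd_of_30yr_means(persistent), 3))
```

On a run like this the 30-year means of the persistent series spread out several times wider than those of the independent noise, even though both look like "just noise" year to year - roughly the point behind treating natural variability stochastically rather than as small residual scatter around a deterministic trajectory.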
And now I'm back to my own writings...