Of course we should be suspicious.

But if the paper is a direct calculation within the framework of a standard model, and the calculation is correct, then the identity of whoever made it should not change the view that it is, indeed, correct.

And I don't dispute that neo-classicals and Austrians judge the author rather than the argument. But since we complain when they do, it seems that we ourselves find it an inappropriate course of action.

That being said, Mankiw was my first example of a clearly paid hack during the discussion with the previously mentioned LSE speaker at the drinks that followed.

Earth provides enough to satisfy every man's need, but not every man's greed. Gandhi

by Cyrille (cyrillev domain yahoo.fr) on Thu Jan 1st, 2015 at 06:07:05 AM EST
But if the paper is a direct calculation within the framework of a standard model, and the calculation is correct, then the identity of whoever made it should not change the view that it is, indeed, correct.
"Proper" peer review would require replicating such calculations. Nobody does that, especially if the paper involves a DSGE model, or if, as with papers presenting the results of a statistical analysis, the raw data are not provided. Remember the case of Reinhart and Rogoff's "evidence" supporting austerity?

A society committed to the notion that government is always bad will have bad government. And it doesn't have to be that way. — Paul Krugman
by Migeru (migeru at eurotrib dot com) on Thu Jan 1st, 2015 at 06:40:54 AM EST
Then peer review is broken, whether it is Mankiw or anybody else writing the paper.

As for R&R (which was not a peer-reviewed paper), it should be grounds for dismissing any such paper until the data and calculations are made available. They were hardly trade secrets (which should not be a valid excuse anyway): they were national statistics...

Earth provides enough to satisfy every man's need, but not every man's greed. Gandhi

by Cyrille (cyrillev domain yahoo.fr) on Thu Jan 1st, 2015 at 07:24:10 AM EST
"Proper" peer review would require replicating such calculations.

I disagree.

Peer review should verify that the methodology used is not insane, that the paper properly references its data, that the author has performed adequate robustness and specification tests, and that the data is available to other investigators who wish to replicate the analysis.

It is possible to imagine cases where the analysis is based on data that cannot be made available to the general public for ethical reasons, or because doing so would be an unreasonable commercial loss for the source of said data. However, in those cases I would argue that journals should demand full independent replication rather than the much more cursory process of peer review.

The above is already a higher standard than current academic peer review observes, and I don't think going beyond this is realistic - or necessarily a desirable use of the reviewers' time.

Now, there's a whole issue of replication not receiving the recognition it ought to receive. But that is a slightly different matter, and one I think can be solved with standard governance methods, such as formalized KPIs requiring researchers to publish two replications for each original result.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Thu Jan 1st, 2015 at 08:49:08 AM EST
But if the paper is a direct calculation within the framework of a standard model, and the calculation is correct, then the identity of whoever made it should not change the view that it is, indeed, correct.

No, but your ability to evaluate whether it is, in fact, correct is impaired by the fact that you cannot assume that the calculation was made in good faith.

Peer review presumes that the paper is written in good faith. It is the hiring board's job to prevent the infiltration of pseudoscientists into academia; doing it at the paper level is simply not feasible. So if the presumption of good faith is visibly inapplicable, then "review" is not the correct tool for evaluating the paper. "Forensic reconstruction" is, and that is well beyond the scope of what can be expected of an unpaid reviewer.

Now, for general equilibrium models that doesn't really matter, because general equilibrium papers should be rejected out of hand as the pseudoscientific nonsense they are. And since the defining feature of most economic pseudoscientists is that they are incapable of, or unwilling to, operate outside a single, proven-false modeling framework, the distinction between rejecting the man and rejecting the model is in practice not very great.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Thu Jan 1st, 2015 at 08:39:17 AM EST