Welcome to European Tribune. It's gone a bit quiet around here these days, but it's still going.
I was told (I will have to put this in a Glenn Beck style of discourse) that in some areas calculation errors were detected in nearly 100% of the papers assessed.

Note that I agree: reviewing complex papers by repeating the calculations is asking too much. When the software is made available I tend to download and evaluate it, but with mathematical formulae even reading them is a big pain.

I know of a top scientist in population genetics who says that when he sees lots of maths in a paper, it is because the authors are trying to make something difficult to detect. ;)

But this only exposes how flawed the current process is: peer review can only go so far. And that "far" is not enough to detect even gross mistakes.

Science is also riddled with the Dunning-Kruger effect. For instance, I work in biology/medicine with a CS background. Most people developing software in bio/med think that because they are such good bio/med people they are automatically fantastic programmers. And then you see results produced with software whose quality is what you would expect from a high school student. Don't even try to suggest that they are completely ignorant of programming.

I once had a discussion with a top scientist, who does only theoretical modeling, about the advantages of indenting code. This person doesn't even indent code. And why? "Well, with 8/9 levels of code, indenting makes it unprintable. And it is impossible to break the code into fewer levels of indentation."
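For what it's worth, deep nesting can almost always be flattened with guard clauses (early returns). A minimal sketch of the idea, using a hypothetical record-validation routine of my own invention, not anything from that scientist's code:

```python
def process_nested(record):
    # The nested style: each check adds a level of indentation.
    if record is not None:
        if "id" in record:
            if record["id"] > 0:
                if "value" in record:
                    return record["value"] * 2
    return None


def process_flat(record):
    # Same logic with guard clauses: each failed check returns early,
    # so the happy path stays at a single indentation level.
    if record is None:
        return None
    if "id" not in record:
        return None
    if record["id"] <= 0:
        return None
    if "value" not in record:
        return None
    return record["value"] * 2
```

With four checks the nested version is already four levels deep; the flat version never exceeds one, no matter how many checks you add, which is exactly why "impossible to use fewer levels" rarely holds up.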

Another suggested that an optimization algorithm always finds the global maximum if the algorithm is stochastic (whereas if it is deterministic, only a local maximum can be found). Code was written, published and used on that basis.
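A tiny counterexample is easy to construct: a stochastic hill climber with small random steps gets stuck on a local maximum of a bimodal function just as a deterministic one would. A sketch under my own assumptions (the function and parameters here are mine, not from the published code):

```python
import math
import random


def f(x):
    # Bimodal: local maximum ~1.0 near x = 0, global maximum ~2.0 near x = 4.
    return math.exp(-x * x) + 2.0 * math.exp(-(x - 4.0) ** 2)


def stochastic_hill_climb(start, steps=2000, sigma=0.1, seed=42):
    # Randomly perturb the current point; accept only improvements.
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, sigma)
        if f(candidate) > f(x):
            x = candidate
    return x


best = stochastic_hill_climb(start=0.0)
# Starting near the lesser peak, the search stays there: it never accepts
# a step down into the valley, so it never reaches the global maximum at x = 4.
```

Randomness on its own guarantees nothing; you need something like restarts, a large enough step size, or an acceptance rule that tolerates downhill moves (as in simulated annealing) to have any hope of escaping local maxima.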

by t-------------- on Sun Nov 22nd, 2009 at 12:34:17 PM EST
