
The nonsense maths effect


Stephen Hawking was once told by an editor that every equation in a book would halve its sales. Curiously, the opposite seems to happen when it comes to research papers. Include a bit of maths in the abstract (a kind of summary) and people rate your paper more highly, even if the maths makes no sense at all. At least this is what a study published in the journal Judgment and Decision Making suggests.

Maths: incomprehensible but impressive?

Kimmo Eriksson, the author of the study, took two abstracts from papers published in respected research journals: one in evolutionary anthropology and the other in sociology. He gave these two abstracts to 200 people, all experienced in reading research papers and all holding a postgraduate degree, and asked them to rate the quality of the research described in the abstracts. What the 200 participants didn't know was that Eriksson had randomly added a bit of maths to one of the two abstracts they were looking at. It came in the shape of the following sentence, taken from a third, unrelated paper:

A mathematical model $(T_{PP} = T_0 - fT_0 d_f^2 - fT_P d_f)$ is developed to describe sequential effects.

That sentence made absolutely no sense in either context.

People with degrees in maths, science and technology weren't fooled by the fake maths, but those with degrees in other areas, such as the humanities, social sciences and education, were: they rated the abstract with the tacked-on sentence higher. "The experimental results suggest a bias for nonsense maths in judgements of quality of research," says Eriksson in his paper.
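To make the shape of such an experiment concrete, here is a minimal simulation sketch of a simplified between-subjects version of the design. All the numbers in it (group sizes, the 0 to 100 rating scale, the size of the boost) are illustrative assumptions, not values from Eriksson's paper.

```python
# A toy simulation of a "nonsense maths" rating experiment.
# All parameters below are illustrative assumptions, not data from the study.
import random
import statistics

random.seed(1)

def rate_abstract(has_nonsense_maths: bool, maths_background: bool) -> float:
    """Simulate one participant's quality rating on a 0-100 scale."""
    rating = random.gauss(60, 10)      # baseline impression of the abstract
    if has_nonsense_maths and not maths_background:
        rating += random.gauss(5, 2)   # hypothesised boost for non-maths readers
    return rating

# 100 non-maths readers rate the plain abstract, 100 rate the maths-padded one
control = [rate_abstract(False, maths_background=False) for _ in range(100)]
treated = [rate_abstract(True, maths_background=False) for _ in range(100)]

print(f"mean rating without maths: {statistics.mean(control):.1f}")
print(f"mean rating with maths:    {statistics.mean(treated):.1f}")
```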

The effect is probably down to a basic feature of human nature: we tend to be in awe of things we feel we can't understand. Maths, with its reassuring ring of objectivity and definiteness, can boost the credibility of research results. This can be perfectly legitimate: maths is a useful tool in many areas outside of hard science. But Eriksson, who moved from pure maths to interdisciplinary work in social science and cultural studies, isn't entirely happy with the way it is being used in these fields. "In areas like sociology or evolutionary anthropology I found mathematics often to be used in ways that from my viewpoint were illegitimate, such as to make a point that would better be made with only simple logic, or to uncritically take properties of a mathematical model to be properties of the real world, or to include mathematics to make a paper look more impressive," he says in his paper. "If mathematics is held in awe in an unhealthy way, its use is not subjected to sufficient levels of critical thinking."

You can read Eriksson's paper here. There is also an interesting article in the Wall Street Journal on this and other bogus maths effects.

Comments

Although I agree that mathematics (especially statistics) is often abused in the social sciences to obtain results that should not survive critical review, from assuming stronger results than the maths actually shows, to failing to apply consistent methodology (for example, not controlling for variable dependency), to plain non sequiturs, I think the effect described here is not so much about mathematics itself as about confidence in competence.

When someone requests a service, especially a knowledge service, there is an implicit trust in the intellectual honesty of the provider. If I request legal counseling, I do not expect the service provider to behave in an incompetent fashion, and if he cites bogus laws, how can I detect it unless I am a legal expert myself? When the social scientist sees a paper with a mathematical model he does not truly understand, he at least assumes competence on the part of the writer. Since a mathematical model usually provides an unambiguous problem description, its presence usually does imply greater rigor, at least to the extent that other scientists can reproduce and test the model. In that sense, the social scientist is making what I think is a reasonable decision when preferring papers with underlying mathematical models, since the underlying experiment will usually be more reproducible.

What does this say about review processes? I think the only lesson we can take from this experiment is that papers should be peer reviewed and rated by those competent to understand the paper in its entirety. If a reviewer is uncomfortable with the maths, he should consult another expert who can help him in that regard; no one can be burdened with the duty to know everything.


", or to uncritically take properties of a mathematical model to be properties of the real world".....string theory for example :-)