Things change. Get used to it.

In the June edition of Australasian Science I wrote about the reproducibility of scientific studies. I was mainly concerned with studies in pseudoscience where replication did not indicate the presence of any effects at all, generally because the original studies or experiments were conducted without proper controls or procedures. The fact that much of this "research" doesn't stand up to closer investigation is generally ignored by pseudoscientists, although they are very quick to point out that much of what is published in real scientific journals also fails the replication test.

There was much glee in woowoo world earlier this year when it was suggested that 50% of the content of medical journals may be either incorrect or out of date. This is no surprise to people who understand how the science of medicine advances. As an example, it is perfectly reasonable to assume that papers about effective treatments for bacterial diseases were largely reduced to the status of historical relics after the discovery of antibiotics. Similarly, imaging techniques like PET and MRI made much of what was known about the treatment of physical conditions obsolete, and the changes that might result from increasing knowledge about the human genome and neuroscience will send a lot of what we now know into the dustbin.

Science is like that. It is a work in progress; we don't know everything, and if we did, science would stop.

More relevant to the issue of reproducibility is a study recently described in Science magazine (note – "described" not "published"). The study looked at 100 papers published in second-tier psychology journals and found that on average only 39% of the results could be replicated. It is interesting to note that this finding united the pseudoscientists and those who claim that psychology is not a science. (I studied cognitive psychology and will leave my defence of it as a science until another time.)

The investigators' findings can be summed up in this quote: "Generally evidence was weaker on replication. The stronger the evidence was to begin with, however, including a larger effect size, the more likely the results were reproduced". The second sentence is hardly news, as you would expect results backed by stronger evidence and larger effects to be closer to what is actually happening in reality, and therefore more likely to appear again when a study is repeated.

My initial reaction was surprise that the figure was as high as 39%. The study looked at papers in the areas of cognitive and social psychology, with the cognitive papers replicating better than the social psychology research. This is not really surprising, because the two areas look at different parts of the human experience.

Cognitive psychology is largely about using measurement to infer internal processes which cannot be observed directly or, in many cases, even described by the subject. This leads to the expectation of somewhat lower variability between test subjects, both within a single study and over a period of time. Social psychology, on the other hand, is much more about observation, motivation, personality and subjective experiences that can be described by the subject. There we would expect high variability across a range of subjects, and even for the same subjects at different times.

People aren't like electrons or photons or atoms of elements or many of the other things dealt with in the so-called "hard" sciences. Every person is unique in the cells that make up their body and the experiences and knowledge that make up their personality, and this even applies to identical twins. I am not the same person that I was ten years ago, because events of those ten years have reshaped both what and how I think. When people talk to me about IQ testing I like to point out that my most recent test showed that I had dropped five points from when I was tested at high school. This doesn't mean that I'm less smart now, because the denominator in the equation is a lot larger.
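(For anyone wondering which equation that is: presumably the classic "ratio IQ" from the old school-age tests, which divides mental age by chronological age:

$$\mathrm{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100$$

The same mental performance divided by a few extra decades of chronological age drags the quotient down without anything actually having been lost.)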

It is this variability, both inter- and intra-person, which makes any form of psychological study difficult to replicate. Even if the same subject group is used on the second occasion and everything else is kept the same, you would expect the results to be different, and this is even more likely with studies in social psychology. None of this, however, invalidates the idea of research in the social sciences.

One aspect that needs to be considered in any examination of replication is publication bias. Journals want to publish material that is new, exciting and different to what has gone before. A paper which says "We have exactly replicated the findings of paper X as published in an earlier edition of journal Y" is hardly going to excite journal editors enough to push it into the next edition, but that is another topic for another day.

This article was published as the Naked Skeptic column in the December 2015 edition of Australasian Science.




