
Medical research plagued by bad reporting

Marianne Freiberger


A new study published in the British Medical Journal has found that medical researchers are still too sloppy when presenting results from randomised trials of medical interventions. The study, conducted by Susan Hopewell from the University of Oxford and colleagues, concludes that "the quality of reporting remains well below an acceptable level," and warns that "without complete and transparent reporting of how a trial was designed and conducted, it is difficult for readers to assess its conduct and validity."

In a randomised trial the volunteers on whom a new intervention is to be tested are allocated to groups at random. For example, the volunteers are randomly divided between the study group, which receives the new treatment or drug, and the control group, which receives a placebo (or an existing treatment for comparison). Not only does randomisation keep bias, whether intentional or unintentional, from creeping into the study, it also balances out any unknown factors which may influence the health of the volunteers.
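
As a rough illustration of the idea (not something taken from the study itself), here is a minimal sketch of random allocation in Python; the volunteer names and group sizes are made up for the example.

```python
import random

# Hypothetical list of volunteers; a real trial would use patient identifiers.
volunteers = [f"volunteer_{i}" for i in range(1, 21)]

random.shuffle(volunteers)  # put the volunteers in a random order

# First half receives the new treatment, second half the placebo
# (or an existing treatment used for comparison).
half = len(volunteers) // 2
study_group = volunteers[:half]
control_group = volunteers[half:]

print("Study group:  ", study_group)
print("Control group:", control_group)
```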

But simply saying that you've tested an intervention in a randomised trial and observed a particular result isn't — or at any rate shouldn't be — enough to convince your colleagues of the trial's merits. You need to be clear about exactly how the trial was set up and the statistical methods used to analyse the results. Sample size is an obvious example: a 75% success rate may sound impressive, but it's not very meaningful if your sample consisted of only four people. Your results may simply have been due to chance, and rare side effects will only be picked up with a much larger sample. There are statistical methods for calculating how big your sample should be, which depend on the size of the effect you're expecting to observe. Revealing your sample size calculations is essential for anyone to be able to assess the power of your trial and the validity of your conclusions. (See the Plus article Evaluating a medical treatment to find out more.)
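
To make this concrete, here is a rough sketch of both points: first, how easily 3 successes out of 4 can arise by luck alone, and second, a standard textbook formula for the sample size needed in a two-arm trial comparing success rates. The target rates, significance level and power below are hypothetical choices for illustration, not figures from the study.

```python
from math import comb, ceil
from statistics import NormalDist

# Chance of seeing at least 3 successes in 4 patients if the treatment
# actually works no better than a coin flip (true success rate 50%).
p_chance = sum(comb(4, k) * 0.5**4 for k in (3, 4))
print(f"P(at least 3/4 successes by luck alone) = {p_chance:.2f}")  # about 0.31

# Approximate sample size per group to detect a rise in success rate
# from 50% (control) to 65% (treatment) at 5% significance with 80% power.
p1, p2 = 0.50, 0.65
alpha, power = 0.05, 0.80
z_a = NormalDist().inv_cdf(1 - alpha / 2)
z_b = NormalDist().inv_cdf(power)
n_per_group = ((z_a + z_b)**2 * (p1*(1-p1) + p2*(1-p2))) / (p1 - p2)**2
print(f"Roughly {ceil(n_per_group)} patients needed in each group")
```

With these (invented) numbers the formula asks for well over a hundred patients per group, which is why a four-person trial tells you very little.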


But it's not just information on statistical methods that needs to be made available: being clear about how a trial was conducted, who took part and how they were involved is also important. Just as volunteers can fall prey to the placebo effect, so the medical staff monitoring outcomes and the researchers analysing them may inadvertently allow their expectations to skew the results. You need to state clearly whether patients and staff knew who was receiving the real treatment and who the placebo (if neither patients nor staff know who is getting which, the trial is said to be blinded). You also need to account for any patients who dropped out during the course of the study and why — if the drop-outs tended to be people with more serious conditions, the study group might appear to have improved when in reality the apparent benefit comes from sicker people leaving the study.
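
A toy simulation (invented for illustration, not part of the study) shows how differential drop-out can conjure up a benefit where none exists: the treatment below does nothing at all, yet the study group ends up looking healthier simply because its sickest patients leave.

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

n = 1000

# Each patient gets a "severity" score (higher = sicker). The imaginary
# treatment has no effect, so both groups are drawn from the same distribution.
study = [random.gauss(0, 1) for _ in range(n)]
control = [random.gauss(0, 1) for _ in range(n)]

# Suppose the sickest 20% of the study group drop out (say, because the
# treatment is unpleasant), while everyone in the control group stays.
study_completers = sorted(study)[:int(0.8 * n)]

print(f"Average severity, study-group completers: {mean(study_completers):+.2f}")
print(f"Average severity, control group:          {mean(control):+.2f}")
# The completers look healthier on average, even though the treatment did nothing.
```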

In 1996, the Consolidated Standards of Reporting Trials (CONSORT) group came up with a minimum set of recommendations for reporting on randomised trials, offering a "standard way for authors to prepare reports of trial findings, facilitating their complete and transparent reporting, and aiding their critical appraisal and interpretation". The publication of the new study into the quality of reporting coincided with a review of the CONSORT recommendations, which was published last week. Hopewell and her colleagues studied over 1000 papers which had been registered on the PubMed publication index, and which reported on randomised trials. Needless to say, the researchers chose the papers they considered at random. Around half of the papers had been indexed in 2000, just before the previous review (in 2001) of the CONSORT recommendations, while the other half had been indexed in 2006.

The study finds clear evidence that reporting on randomised trials improved between 2000 and 2006, but still gives a damning verdict. The authors note that "the worrying relatively small sample sizes seen in [the] cohort may be an indication that many trials are inadequately powered to detect clinically important treatment benefits." They found that fewer than half of the 2006 trial reports provided detailed information on how sample sizes were calculated. Only a third or fewer explained what had been done to prevent participants from guessing whether they belonged to the study group or the control group, and what technique was used to randomise them. Only a third of the 2006 trial reports stated whether an analysis accounting for the effects of participants dropping out during the study had been carried out. Over half of the 2006 trial reports neglected to give further details on blinding. Only very rarely did trial reports give access to a full trial protocol clearly setting out the aim and design of the trial and the statistical methods used.

In their paper Hopewell and her colleagues point to "considerable evidence to show that reports of randomised trials that have inadequate or unclear documentation of [how the trials were designed and conducted] lead to biased estimates of intervention effects." In other words, leave out the nitty-gritty and you're in danger of drawing wrong conclusions about medical interventions that affect many people's lives. On a positive note, the study did find significant improvement in reporting since the CONSORT guidelines were reviewed in 2001. Hopefully the latest review will bring reporting up to the necessary standard.

Further reading

You can read more about evidence based medicine and the role of maths in the biomedical sciences on Plus.