When nothing means something

The movement towards evidence-based medicine (EBM) is gaining attention and momentum. It is really quite a simple concept: EBM is defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett et al. 1996, p. 71), just as its name suggests. Efforts are being made to translate research into practice (e.g. the tagline of the British Medical Journal *applause* is “helping doctors make better decisions”), but this is not without controversy and pitfalls (well, that escalated quickly).

The above-mentioned definition of EBM is by no means the only one, but it is the one that reveals, in my opinion, its greatest weaknesses: it asks practitioners to be informed by current best evidence. Current evidence? Then how do we proceed in the absence of evidence? Or rather, was there ever really an absence of evidence? How could we know? And why might there be such an absence in the first place?

Take the example of oseltamivir, commonly known as Tamiflu. The time is 2003, and three published studies suggest that this drug is effective in reducing complications associated with influenza. At roughly the same time, SARS, avian flu, and later swine flu raged across the globe. Governments panicked and poured billions into stockpiling the drug. It turns out oseltamivir isn’t really the miracle drug everybody thought it was: a 2009 meta-analysis found no good evidence that oseltamivir is effective in any meaningful way. We were blinded to the whole truth: there were not just three trials but at least eight other RCTs conducted on oseltamivir, yet only three ended up being published while the others were locked away by Roche (big pharma) (Ebell et al. 2013). Why were they not published? If you guessed that it’s because those studies didn’t find oseltamivir to be a “good drug”, good for you!

Publication bias, or positive-outcome bias, is a real issue. It happens well beyond clinical trials, even without the “forces of big corporations” at play. Let’s face it: nobody likes to publish negative results (Fanelli 2012).
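To see how quickly selective publication can distort a literature, here is a minimal, purely hypothetical sketch (nothing to do with the actual oseltamivir trials; the numbers and names are made up for illustration): a drug with no effect at all, tested many times, where only the “statistically significant” wins get written up. Pooling just the published trials manufactures a benefit out of thin air.

```python
import random
import statistics

# Hypothetical illustration of publication bias: simulate many small trials of
# a drug with ZERO true effect, then "publish" only the trials whose observed
# benefit clears a nominal significance threshold. The published literature,
# pooled naively, suggests a benefit that does not exist.

random.seed(42)

N_TRIALS = 500       # simulated trials
N_PER_ARM = 50       # patients per arm
TRUE_EFFECT = 0.0    # the drug does nothing

def run_trial():
    """Return the observed treatment-minus-control mean difference."""
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    return statistics.mean(treatment) - statistics.mean(control)

all_results = [run_trial() for _ in range(N_TRIALS)]

# Crude selection rule: only results more than ~1.96 standard errors above
# zero get written up and published.
standard_error = (2 / N_PER_ARM) ** 0.5
published = [d for d in all_results if d > 1.96 * standard_error]

print(f"true effect:                {TRUE_EFFECT:+.3f}")
print(f"mean of all trials:         {statistics.mean(all_results):+.3f}")
if published:
    print(f"mean of 'published' trials: {statistics.mean(published):+.3f} "
          f"({len(published)} of {N_TRIALS})")
```

Run it and the handful of “published” trials average out to a healthy-looking effect, even though every single trial was drawn from a drug that does nothing. The file drawer does the rest.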

My evil twin recently experienced something similar. She has been trying to publish one of her papers since July 2011. Said paper, reporting a negative result, was rejected by four journals, and she kept getting comments like:

So you didn’t find anything. You learned nothing. Then what are we trying to publish here? (paraphrased)

Other reasons for not liking negative results?

You must have done your analysis wrong.

Your experiment was poorly designed.

You need a more robust question. (whatever that means)

Negative results appear to mean no impact, and that is deeply problematic. What should really matter is the scientific rigor of a study’s design, not its outcome. Every single study, regardless of outcome, should be scrutinized in the same way. Studies reporting positive results can be flawed too, and a negative outcome does not automatically mean that something went wrong. A negative outcome is an outcome, and it is nice to know that.

In response to this, there are now journals dedicated to publishing negative results (as well as replication studies, which is a great topic for another post, another day). The Journal of Negative Results (appropriately named) is one of them. I particularly like this quote from the journal’s home page:

“The primary intention of Journal of Negative Results is to provide an online-medium to publish peer-reviewed, sound scientific work in ecology and evolutionary biology that is scientifically rigorous but does not rely upon arbitrary significance thresholds to support conclusions.”

Centuries ago, building on the work of Indian mathematicians, the great Persian mathematician Muhammad ibn Mūsā al-Khwārizmī helped establish one of the greatest breakthroughs in math: acknowledging that the number zero could mean something (more precisely, that 0 has a place on the number line between −1 and +1). Looking back today, this discovery hardly seems groundbreaking, but if not for it we would not have algebra today (good news?!) (Barrow 2001).

Realizing that nothing can mean something, and pursuing and revealing those nothings, might very well lead to our next genuine breakthrough in biomedical research and practice.

References


Barrow, J. (2001). The Book of Nothing. London: Random House.

Ebell, M., Call, M., & Shinholser, J. (2013). Effectiveness of oseltamivir in adults: a meta-analysis of published and unpublished clinical trials. Family Practice, 30, 125-133.

Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90, 891-904.

Sackett, D., Rosenberg, W., Muir Gray, J., Haynes, B., & Richardson, W. (1996). Evidence based medicine: what it is and what it isn’t: it’s about integrating individual clinical expertise and the best external evidence. British Medical Journal, 312, 71-72.
