Saturday, September 19, 2009

Can Pain Research Be Trusted? Caveat Lector!

Understanding Evidence-Based Pain Management (EBPM)

The prime ingredient of evidence-based pain management practices is, obviously, good and reliable evidence. Yet, as discussed below, several recent investigations have described disturbing examples of data falsification, deceptive authorship practices, improprieties in clinical trial reporting, and misrepresentations of research results. The simple truth appears to be that published research reports in the pain management field cannot always be trusted and the best advice is caveat lector — reader beware.

Data Fraud Stuns Pain Care Community
Consumers of medical research literature rarely question the veracity of reported data. So, it is not surprising that the pain-management community was stunned to learn last winter that at least 21 published research studies (maybe more) dating back to 1996 involved the use of falsified data. The research had been conducted by pain specialist Scott S. Reuben, MD, according to articles in Anesthesiology News [Marcus 2009] and the Journal of Pain & Palliative Care Pharmacotherapy [Lipman 2009]. Reuben had conducted clinical trials demonstrating the benefits of various agents for perioperative analgesia, and the fake data was discovered during an internal investigation by Baystate Medical Center in Springfield, Massachusetts, where he had worked.

Motivations behind Reuben’s deception were not reported, and coauthors of the various papers have not been implicated in the fraud. The offending articles have been retracted by the journals in which they appeared, including Anesthesiology, Anesthesia and Analgesia, and others. However, the articles had already become an enduring part of the pain literature, and there is a question of whether future writers or investigators will be aware that the articles were subsequently withdrawn. Additionally, through the years Reuben had published at least 72 articles, and every research article, review paper, or meta-analysis that has included references to or data from his work could be tainted or biased to some degree by his dishonesty.

Authorship Deceptions
A more common form of academic misconduct involves false attribution of authorship in journal articles. A recent report notes that at least a fifth (20%) of articles published in medical journals may have “guest” authors, persons having little or nothing to do with developing the papers but named on an honorary basis [Godlee 2009]. For example, in the above case, Reuben also was found to have committed publishing forgery by naming Evan Ekman, MD, as a coauthor on at least two of the papers even though Ekman claims he had no role in developing the manuscripts [Marcus 2009]. Additionally, as many as 8% of articles may have at least one “ghost” author: someone who wrote or substantially contributed to the article but is not acknowledged at all [Godlee 2009]. Both “guest” and “ghost” authorship were found to be more common in clinical research articles than in reviews or editorials, and the above estimates are believed to understate the problem. While such practices may not invalidate the reported study outcomes, the fact remains that readers are being deceived.

Hidden Clinical Trials
Two recent investigations looked at problems with the registration and reporting of clinical trials. In the first study [Ross et al. 2009], researchers found that fewer than half (46%) of trials registered with the U.S. National Institutes of Health (at ClinicalTrials.gov) were ever published. This means that only about half of the potential evidence for guiding evidence-based medicine practices, whether positive or negative, is ever publicly available. To make matters worse, research results that are published may not reflect the clinical trials as they were designed. A second study [Mathieu et al. 2009] found that nearly a third (31%) of randomized controlled trials (RCTs) had discrepancies between the outcomes that had been registered and those eventually reported in the published accounts; in 8 out of 10 such cases the inconsistencies favored reporting only outcomes that were statistically significant. Going even further, while accurate trial registration helps ensure transparency and accountability, Mathieu and colleagues found that more than a quarter of published trials had never been registered at all, 14% were not registered until after trial completion, and 11% were registered with no descriptions or only vague indications of the primary outcomes. All of this leads to publication bias, whereby trials demonstrating statistically significant and positive results appear in journals much more frequently than others, and the public is left with distorted depictions of investigations involving particular therapies or interventions.

Researchers “Spinning” Research
Just as publicists may “spin” the news in favor of their clients, medical researchers sometimes try to present the results of lackluster studies in a more attractive light. An investigation noted recently in the British Medical Journal [Chew 2009] examined a sampling of 72 published RCTs with essentially unremarkable results and found that 40% of them conveyed “spin,” defined as reporting results in a way intended to convince readers that an experimental treatment was beneficial despite poor or nonsignificant outcomes. Examples of linguistic “spin” included: “[The treatment] is expected to be a very important modality in the treatment strategy of [the disorder]” and “[The treatment effect] approached conventional statistical significance.” Another tactic in the articles examined was to focus on interesting secondary or subgroup analyses rather than on the statistically nonsignificant primary outcomes. A second investigation looked at how benefit claims in RCT articles are sometimes “spun” without any supporting evidence. Of the 695 favorable efficacy statements made about the experimental treatments in 35 RCTs examined, nearly half (338, or 49%) did not specify the statistical significance (if any) of the claimed benefits. And, of the 96 safety claims made, only 2 included the term “significant,” and only 1 of those was supported by a statistical result.

In sum, except for instances of data fraud, which are rare, these practices appear to be more commonplace than is warranted or acceptable. Of course, there also is the possibility that investigations of bias in research articles are themselves biased in some fashion. The most reasonable response seems to be for consumers of medical research literature to develop a healthy skepticism about everything they read in the journals and to accept research conclusions as guides for patient care only with great caution.

> Chew M. Researchers, like politicians, use “spin” in presenting their results, conference hears. BMJ. 2009(Sep 15);339:b3779.
> Godlee F. More than 20% of articles have a “guest” author, study shows. BMJ. 2009(Sep 15);339:b3783.
> Lipman AG. The pain drug fraud scandal: implications for clinicians, investigators, and journals. J Pain Pall Care Pharmacother. 2009;23(3):216-218.
> Marcus A. Fraud case rocks anesthesiology community: Mass. researcher implicated in falsification of data, other misdeeds. Anesthesiology News. 2009(Mar);35(3).
> Mathieu S, Boutron I, Moher D, et al. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302:977-984.
> Ross JS, Mulvey GK, Hines EM, et al. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med. 2009;6(9):e1000144.