Wednesday, May 2, 2012

Fallacies of Evidence in Pain Research

Making Sense of Pain Research Part 12 – How Critical Thinking Goes Awry

The greatest mistake that consumers of pain research can make is to blindly accept that published results and conclusions, or narrative arguments of some sort, are undoubtedly reasonable and true. We can either become wise to the commonplace fallacies in the pain literature, which subvert critical thinking to create flawed perceptions of reality, or be fooled by them. Although uncertainty and doubt are inherent in all scientific endeavors, a most important lesson is depicted in the Latin phrase, Ubi Dubium, Ibi Intellectum — where there is doubt, there also can be understanding.

The Value of Doubt for Critical Thinking

An appreciation of reasonable doubt is essential for all who wish to use research evidence as a measure of proof and, thereby, apply evidence-based pain medicine to provide better pain care for patients. And, as was noted in Part 11 of this Series [here], proof in pain research rarely rises to the highest level of “beyond reasonable doubt.”

Furthermore, there are many ways in which research and conclusions derived from the resulting evidence go awry. Foremost are fallacies of logical thinking, which are used either innocently or intentionally to mislead an audience of readers or listeners. It was further noted in Part 11 that it takes considerable evidence to establish cause-effect relationships and, in a great many instances, evidence that researchers suggest as implying causation of some sort barely rises above reasonable suspicion.

Most certainly, not everything in pain research is fallacious. Yet, critics of medical research have proposed that many wrong, or at least unreliable and invalid, therapeutic answers are being generated due to biases that distort the assessment and presentation of evidence and lead to erroneous conclusions [discussed in Part 3 of this Series here]. These biases almost always either exaggerate, undervalue, or otherwise misrepresent evidence to favor the arguments or perspectives of the writer or speaker; the bias can be evident in what is said as much as in what is selectively elided from discussion, or in the statistics that are presented in a report versus those that are omitted or glossed over.

Unfortunately, a serious danger is that once a scientific theory, medical practice, public health model, or expert opinion becomes accepted and established — no matter how fallacious — it is extremely difficult to overturn [Prasad et al. 2012]. Meanwhile, unwary consumers of the medical literature are faced with misrepresentations of truth; some of these fallacies are subtle while others are more obvious. As famed Italian astronomer and physicist Galileo Galilei put it, “All truths are easy to understand once they are discovered; the point is to discover them.” That is the mission of this article in our Series, “Making Sense of Pain Research.”

What Are Fallacies & Why Are They So Perfidious?

Fallacies are arguments relying on errors in reasoning [Curtis 2012]. Most persons would recognize the logical fallacy of asserting that on days when it rains large numbers of people are carrying umbrellas; therefore, umbrellas are a significant cause of rain. Such arguments are sometimes called “sophisms” (from Plato’s dialog of sophists in Euthydemus), and wrong-headed reasoning is “sophistry” [Curtis 2012]. Some of these fallacies are among the most persistent and pernicious problems burdening the pain research literature today; consequently, they serve as impediments to true knowledge and better patient care.

Logical fallacies also were studied by Aristotle and, later, in Medieval times, which resulted in Latin names for many of them that endure today and are mentioned below. Understanding principles of skewed critical thinking is more important than remembering the names, and the term “fallacy,” itself, is actually a multipurpose term referring to [Dowden 2010]:

    1. a type of error in an argument,
    2. errors in reasoning (involving data, definitions, etc.),
    3. erroneous beliefs or conclusions, or
    4. “rhetorical techniques” of false persuasion.

Among other purposes, fallacies serve to create stereotypes, rationalize, inappropriately assign burden of proof, cast innuendo, build erroneous negative or positive associations, and otherwise manufacture misinformation. Philosophers have long debated the ethical and moral implications of fallacies in scientific research and reporting. In an earlier UPDATE titled “Can Pain Research Be Trusted? Caveat Lector!” [here], we noted that, while instances of reporting fictitious data might be rare, medical researchers do often deceptively present their results as being more auspicious or convincing than justified; just as publicists may “spin” the news in favor of their clients.

Surprisingly, however, irregularities in research data may not be as rare as once thought. A controversial survey recently reported in the British Medical Journal found that 1 in 7 Dutch physicians claimed to have seen research results that were invented [Sheldon 2012]. Nearly a quarter knew of data that had been manipulated to achieve significant results. A similar survey of physicians in the UK found that 13% had witnessed research data being altered or fabricated.

Along those same lines, nearly 20 years ago, Douglas G. Altman, Professor of Statistics in Medicine at Oxford University, wrote [Altman 1994]:

“What should we think about researchers who use the wrong techniques (either wilfully or in ignorance), use the right techniques wrongly, misinterpret their results, report their results selectively, cite the literature selectively, and draw unjustified conclusions? We should be appalled. Yet numerous studies of the medical literature, in both general and specialist journals, have shown that all of the above phenomena are common. This is surely a scandal.”

Around that same time, James Mills, MD, commenting in the New England Journal of Medicine [1993], observed that, if study data are manipulated long and hard enough and in enough different ways — what he called “data torturing” — they can be made to say whatever a researcher/author wants to hear and to support whatever he wants to prove to the public.

“Every investigator wants to present results in the most exciting way,” Mills wrote. “We all look for the most dramatic, positive findings in our data. When this process goes beyond reasonable interpretation of the facts, it becomes data torturing. The unfortunate result of torturing data is the dissemination of incorrect information to the research community and to patients.”

It must be emphasized that fallacies of evidence are not just about data and statistics. Of equal importance, fallacies involve errors in critical thinking resulting in either accepting a falsehood or rejecting some truth. To appreciate and detect these fallacies, consumers need to approach everything they read and hear with the keen sensibilities of a Sherlock Holmes; a shrewd investigator who susses out how the evidence at hand may or may not lead to reasonably logical conclusions.

According to Wikipedia [here], nearly 100 logical fallacies have been described and studied — many with Latin names — but there are even more if subtle variations are included. A number of fallacies commonly found in research reports and other presentations in the pain field are discussed below.

Non Causa Pro Causa Fallacies — No Cause for Cause

Non Causa fallacies present arguments that confuse correlation or association with causality [Thompson 2009]. For example, one event (or series of events) might be suggested as causing another. While there indeed might be some sort of connection between the events, even a statistically significant association, the argument makes erroneous assumptions about cause and effect, or treats selected events as having interdependent relationships that actually may be due to something else entirely.

The umbrellas-causing-rain statement noted above is a type of Non Causa fallacy — the observations are mutually linked but there is no causation. In pain research reports, erroneous conclusions are usually less obvious, and could result from faulty reasoning by the authors, subtle expressions of bias, or an intentional deception [Curtis 2012].

Authors are usually careful to avoid overt statements of cause and effect, and rarely use the terms “cause” or “causation.” Sometimes they even specifically mention that their reported statistical associations should not be used to infer causation. Yet, authors also realize that readers often will interpret the conclusions as “suggesting” causation, and this is how the outcomes will be described in the literature by future authors. This is sophistry in action.

As we described in Part 11 of this Series, true cause-effect relationships are quite difficult to establish scientifically and within acceptable limits of doubt. So, it is disconcerting that Non Causa fallacies are so common in pain research, and several types are worth examining more closely.

Cum Hoc, Ergo Propter Hoc — With This, Therefore Because of This
This type of Non Causa fallacy is committed when inferences about causation are based primarily on an association between two or more event trends or outcomes that occur together in time. Statistically, the correlation may be significant; however, correlation is not the same as causation, and to avoid a Cum Hoc, Ergo Propter Hoc fallacy researchers must consider and rule out other logical explanations for the association [Greenhalgh 1997].
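
Correlation without causation is easy to manufacture. As a minimal illustration (invented data; NumPy assumed), the following Python sketch generates two series that merely share an upward drift over time, yet correlate almost perfectly:

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1999, 2011)

    # Two independent outcomes that each happen to rise over the same period;
    # neither has any influence on the other.
    series_a = 10 + 2.0 * (years - 1999) + rng.normal(0, 1.5, years.size)
    series_b = 5 + 1.2 * (years - 1999) + rng.normal(0, 1.0, years.size)

    r = np.corrcoef(series_a, series_b)[0, 1]
    print(f"Pearson r = {r:.2f}")  # typically well above 0.9

Any two outcomes that happen to trend in the same direction over the same period will correlate strongly in this way, which is why trend data alone cannot establish cause and effect.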

A prototypical example of the Cum Hoc, Ergo Propter Hoc fallacy was demonstrated in fall 2011 when the U.S. Centers for Disease Control and Prevention (CDC) released several reports proclaiming an “epidemic” of overdose deaths and addiction related to prescription opioid pain relievers (OPR). This was discussed in an UPDATE [here].

A vital evidentiary component of the reports was a graph depicting simultaneously increasing data trends of opioid sales (dashed line), OPR deaths (solid line), and admissions to addiction treatment (dotted line). The CDC’s implication was that OPR deaths and addiction were directly related to escalating opioid prescribing through the years as represented by sales. To the naïve reader the nearly parallel trend lines were persuasive evidence of a cause-effect relationship, as the CDC had likely intended.

The fallacy of this is that the reports did not present reasonable alternative explanations for the correlations [as discussed in a “Guest Author” UPDATE here], not the least of which was that the recording of OPR deaths was probably exaggerated, as conceded by the CDC. And, they ignored escalating rates of chronic pain in America as an important factor. Yet, a purported “epidemic” of deaths and addiction “caused by” opioid prescribing was sensationalized by the news media and widely referenced in academic papers.

There are almost always several possible explanations to be considered, and ruled out, to avoid being misled by Cum Hoc, Ergo Propter Hoc fallacies [Curtis 2012]:

  • The correlations, no matter how strong, may simply be a coincidence. Statistical lore is filled with examples of coincidental or random correlations, particularly in observational or epidemiological data that encompasses uncontrolled and/or unknown confounding variables.

  • An additional, possibly unknown, event may be the major contributor or cause of the association. In the CDC example above, an increased recognition of severe chronic pain conditions might have accounted for both escalating distribution of opioids and deaths, as sicker patients were prescribed multiple medications with overdose potential, including opioids, for their disorders.

  • The direction of causation may be the reverse of that in the conclusion. When events or outcomes are occurring simultaneously it sometimes can be impossible to know which happens first; that is, which is cause and which is effect.

For example, various research investigations have demonstrated significant associations between fibromyalgia syndrome (FMS) and obesity in women [discussed in UPDATE here]. In most research reports, the biased implication is that the women develop FMS because of being overweight, which is most likely a Cum Hoc fallacy. None of the investigators tried to answer the question of which came first, obesity or FMS, and it is just as likely that FMS, with its disabling symptoms, is more a risk factor for excess weight than the reverse. Another possibility is that medications used to treat the disorder may predispose patients to gain weight.

Post Hoc, Ergo Propter Hoc — After This, Therefore Because of This
This is a logical fallacy of the Non Causa variety that simply states, “Since that event followed this one, this event must have caused that one.” It also is referred to as “false cause” or “coincidental correlation.”

A Post Hoc fallacy is a particularly deceptive error in reasoning because the temporal sequence of events, one coming directly after the other, appears to support an interpretation of causality. However, the fallacy comes from leaping to a conclusion based solely on the order of events, rather than taking into account other factors that might have influenced the association. This fallacy can be particularly prominent in observational research studies that are inadequately controlled to eliminate the potential influences of unmeasured or unknown factors.

As an example, European researchers conducted a prospective, observational, open-label investigation of single-dose intrathecal administration of midazolam in patients with chronic low-back pain [discussed in UPDATE here]. The treatment demonstrated significant pain relief with a very low incidence of adverse effects, so the researchers concluded in their journal article that single-dose intrathecal midazolam is a useful supplement to standard analgesic therapy.

However, in a surprising display of candor, one of the journal’s own editors complained in an accompanying commentary that the study represented a Post Hoc, Ergo Propter Hoc fallacy. That is, just because patients improved AFTER treatment did NOT mean that they improved BECAUSE OF the treatment. Among other limitations, the study methodology did not account for placebo effects. Plus, there were uncontrolled confounders; eg, the intrathecal drug was a supplement to vaguely specified oral analgesics.

An additional possibility was a Regression Fallacy [Curtis 2012]. This refers to the statistical phenomenon of “regression to the mean,” which, in this case, would portray a tendency for severe pain to decline toward lower average levels during the natural course of back disorders. Mistaking treatment effects as causative of this statistical trend can be fallacious. In sum, intrathecal midazolam therapy might be a viable approach, but this research study did not provide valid evidence of that.
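
Regression to the mean itself is easy to demonstrate. In this minimal Python sketch (hypothetical data; NumPy assumed), patients are enrolled only if their measured baseline pain is severe, receive no treatment at all, and still “improve” on re-measurement, because extreme noisy scores drift back toward each patient’s true average:

    import numpy as np

    rng = np.random.default_rng(1)
    true_pain = rng.normal(5, 1, 10_000)             # each patient's stable pain level (0-10 scale)
    baseline = true_pain + rng.normal(0, 1, 10_000)  # noisy measurement at enrollment

    enrolled = baseline >= 7                         # study enrolls only severe cases
    followup = true_pain[enrolled] + rng.normal(0, 1, enrolled.sum())  # no treatment given

    print(f"baseline mean:  {baseline[enrolled].mean():.2f}")
    print(f"follow-up mean: {followup.mean():.2f}")  # lower, despite zero treatment effect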

Texas Sharpshooter Fallacy
The story behind this fallacy is that a fabled marksman from the Lone Star state fired his gun haphazardly at the side of a barn and then, as proof of his shooting prowess, he painted a Bullseye around an area where the most bullet holes clustered [Curtis 2012]. In medical research, this fallacy occurs when investigators select certain variables from a wide array of data to demonstrate a close association depicting cause-effect relationships. However, there could be other explanations:

  1. It is possible that the cluster of data could merely be the result of chance; either not caused by anything in particular or due to multiple influences acting randomly.

  2. Even if the data cluster is nonrandom, there could be many possible reasons for its occurrence other than the reason or cause chosen by the researchers as most likely.

This fallacy has been detected in the field of epidemiology and it also may creep into data-mining studies that are so common these days in pain research [concerns about this were discussed in an UPDATE here]. A common strategy in data-mining research is to cull computerized records for factors that appear to change in unison; that is, are correlated. However, within large databases — which are increasingly accessible via electronic medical records and administrative data from government agencies or private institutions — there may be hundreds of variables to choose among.

Computer programs make it easy to ferret out a handful of variables that demonstrate strong, statistically significant relationships, but this is fraught with potential error. For example:

As discussed in an UPDATE [here], researchers at a large managed care organization examined the association between regularly taking oral nonsteroidal anti-inflammatory drugs (NSAIDs) and erectile dysfunction (ED). Retrospectively culling their database of medical records, they found that NSAID use increased the odds of self-reported ED by about 70%. After adjusting the data to account for other conditions that might influence ED — age, smoking, diabetes, hypertension, etc. — the odds were still statistically significant, but the increase in ED was only 20% (a very small clinical effect size). Despite this, the authors concluded that regular NSAID use was a risk factor for developing ED, beyond what would be expected due to age and other comorbidities.

At first glance, NSAIDs and ED fell neatly within the “Bullseye,” but less neatly so when other variables were accounted for in the targeted mix. Still, data were not gathered (or, at least not reported) for resolving possible errors in the conclusion, such as: Which came first, NSAID use or ED? Might pain conditions being treated with NSAIDs have affected ED? Did discontinuation of NSAIDs help to resolve ED in these patients? Therefore, based on this study, it could be fallacious to assume that NSAID use is causative of or even a significant risk factor for ED.
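
For illustration only, a small simulation shows how a confounder by itself can manufacture a crude association of this kind. The numbers below are invented, not taken from the study; here, age drives both NSAID use and ED, while NSAIDs themselves have no effect whatsoever:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000
    older = rng.random(n) < 0.4                           # confounder: age group
    nsaid = rng.random(n) < np.where(older, 0.50, 0.15)   # older patients use more NSAIDs...
    ed = rng.random(n) < np.where(older, 0.30, 0.05)      # ...and have more ED, independent of NSAIDs

    def odds_ratio(exposed, outcome):
        a = (exposed & outcome).sum()    # exposed cases
        b = (exposed & ~outcome).sum()   # exposed non-cases
        c = (~exposed & outcome).sum()   # unexposed cases
        d = (~exposed & ~outcome).sum()  # unexposed non-cases
        return (a * d) / (b * c)

    print(f"crude OR:         {odds_ratio(nsaid, ed):.2f}")                  # inflated, near 2
    print(f"OR among younger: {odds_ratio(nsaid[~older], ed[~older]):.2f}")  # about 1.0
    print(f"OR among older:   {odds_ratio(nsaid[older], ed[older]):.2f}")    # about 1.0

Adjustment (here, simple stratification by age) strips away the spurious signal; in the published study, adjustment shrank but did not eliminate the association, leaving the causal question open.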

Another possible, but more flagrantly deceptive, aspect of the Texas Sharpshooter fallacy is what psychologist Norbert Kerr described as “HARKing,” or Hypothesizing After the Results are Known [also described in Part 4 of this Series here]. That is, researchers present significant findings based on results that serendipitously emerged during their investigation, as if the outcomes were the original object of their study all along. Like the Texas Sharpshooter, the researchers shoot first — they run data analyses — and then draw a Bullseye around those items that seem to fit together.

Of course, researchers are not eager to admit to HARKing, and there is no way of detecting this since such investigations are not registered in advance, as are most prospective clinical trials. Yet, retrospective research on large data sets facilitates the fortuitous discovery of many outcomes that are worthy of reporting, even though the probability of fallacious conclusions is quite high. And, with so many variables and outcomes to choose among, the databases can be “milked” repeatedly to produce a steady flow of publishable articles.
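
The arithmetic behind such “milking” is simple: at the conventional p < 0.05 threshold, roughly 1 in 20 completely random associations will appear statistically significant. A minimal sketch (pure noise data; NumPy and SciPy assumed):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_patients, n_variables = 500, 200
    outcome = rng.normal(size=n_patients)
    exposures = rng.normal(size=(n_variables, n_patients))  # 200 variables of pure noise

    p_values = [stats.pearsonr(x, outcome)[1] for x in exposures]
    hits = sum(p < 0.05 for p in p_values)
    print(f"{hits} of {n_variables} noise variables are 'significant' at p < 0.05")

Drawing a Bullseye around those roughly 10 chance “hits” yields publishable-looking findings from data containing no real relationships at all.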

Experts in research methodology agree that database-mining studies can be useful for generating hypotheses (questions) for further testing in higher quality, better controlled clinical trials. Whether or not those exploratory data-mining exercises should be published as if they, themselves, arrive at worthwhile conclusions to guide public opinion or clinical practice, as is often the case, is debatable [Altman 1994].

The Art of Argumenta

There are many ways in which authors and speakers may appeal to powers, events, or knowledge beyond themselves to spin their webs of persuasion. These various types of appeals — or, Argumenta (plural), Argumentum (singular) — become fallacies when they are used as rhetorical techniques to distort the truth or conceal the lack of valid scientific evidence.

To a large extent, Argumenta are often the grist for narrative fallacies (discussed below). They may be based on bits of fact, but most often reflect personal biases, hearsay, folklore, superstition, or the like. Following are several common Argumenta that sometimes appear in the pain research literature.

Argumentum ad Ignorantiam (Appeal to Ignorance)
This appeal may propose that insufficient or missing evidence — such as, for certain treatment effects — is itself evidence for a lack of such effects. In other words, this fallacious Argumentum proposes that lack of knowledge, or ignorance, is a form of valid evidence.

As David Katz, director of the Yale Prevention Research Center, has observed [here]: “Science is advanced by an open mind that seeks knowledge, while acknowledging its current limits. Science does not make assertions about what cannot be true, simply because evidence that it is true has not yet been generated. Science does not mistake absence of evidence for evidence of absence.” [See also, Altman and Bland, 1995.]

In the pain management literature, Argumentum ad Ignorantiam tactics sometimes attempt to bolster an unsubstantiated assertion by shifting the burden of proof onto those who might take a different or opposing viewpoint. The challenge is, “If you cannot disprove this claim then it must be true.” The absurdity of this argument is like saying that if you cannot prove that the Tooth Fairy does not exist, then she must exist.

This also may present the impossible task of proving a negative — that there is no possibility of some claim being true (or untrue, as the case may be). When it comes to medical science a negative cannot be proven absolutely, since there is always some probability of practically anything occurring in nature even if merely due to chance or random effects.

For example, as discussed in an UPDATE [here], opponents of long-term opioid therapy for chronic noncancer pain (CNCP) often claim as evidence against such practice a lack of evidence to support its effectiveness and safety. They disregard the fact that substantial high-quality research has not been done to establish the facts one way or the other; that long-term opioids are either safe or unsafe, effective or ineffective for CNCP. However, to completely discredit the Argumentum ad Ignorantiam, advocates favoring the practice must impossibly prove a negative; that is, that long-term use of opioids for CNCP incurs no safety risks and no loss of effectiveness.

In some cases, this Argumentum may take the reverse form; eg, “If you cannot prove the Tooth Fairy exists, then she must not exist.” Either way, in medical science, proof should rely on positive evidence in support of a claim, not lack of evidence for or against some claim [Shermer 1997].

Argumentum ad Verecundiam (Appeal to Authority)
Appeals to authority are not always fallacious; however, if the legitimacy of the alleged authority or information source, and/or its relevance to the point at issue, could be contested, then the veracity of the Argumentum is doubtful. Users of this fallacy often call upon the published works of others or institutional data (eg, government reports) to bolster their arguments, without questioning the accuracy, reliability, or validity of those sources. As often happens in such cases, weak evidence based on dubious authority is used to make strong statements for or against some position, policy, or clinical practice, which may perilously mislead an otherwise uninformed audience.

Argumentum ad Antiquitatem (Appeal to Tradition or History)
Perpetrators of this fallacy invoke the ancient roots or long history of some practice as evidence of its efficacy and/or safety. In many cases, however, this may be the only evidence, and there is no comparable history of scientific investigation and convincing research evidence to serve as proof. While appeals to history would never serve as justification for rejuvenation of the archaic practice of trepanation (making holes in the skull to remedy brain disorders), they are sometimes called upon as support for certain pain treatment modalities that have so far eluded scientific validation.

Argumentum ad Populum (Appeal to the People or Popularity)
Much like appeals to tradition, ad Populum fallacies refer to widespread use and seeming acceptance of a practice as evidence of its validity. Certain homeopathic remedies for pain are often supported, at least in part, by appeals to their worldwide popularity; essentially claiming that, if they were not effective treatments people would abandon them — which, thereby, circumvents a need for scientific evidence. One need only remember that, for centuries, bloodletting was de rigueur therapy for a wide variety of physical and psychological ills, before patients and their care providers finally realized that, in most cases, it was doing more harm than good.

Narrative Fallacies — Weaving Illusory Realities

Narrative fallacy is often described as a distinct type of logical argument, but is considered here as an umbrella concept for a number of fallacies that bias reasoning, short-circuit doubt, and subvert true understanding. As author Nassim Taleb observed in his book, The Black Swan: The Impact of the Highly Improbable:

“The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding” [Taleb 2007, p. 64].

In this context, of course, the “impression of understanding” is not an accurate or true depiction of data, facts, or other evidence. Narrative fallacies serve the human need to fit a story or pattern to a series of seemingly connected or disconnected observations or events, even though this can be fertile ground for errors of interpretation. The following narrative fallacies also often incorporate the creative use of the tactics described above as support.

Illusory Correlation
In narrative fallacies, illusory correlation is a form of bias in which one finds the interrelationships of data, observations, or events that one expects to find, even when no such relationships truly exist. As the 19th Century French neurologist Jean-Martin Charcot pointed out, “In the final analysis, we see only what we are ready to see, what we have been taught to see. We eliminate everything that is not part of our prejudices.”

This fallacy may be influenced by, among other things, relatively few events or observations that stand out as unique and memorable. For example, a particular group of patients misusing a particular medication, or responding adversely in some way to the therapy, may be accepted as a stereotype for all persons who are prescribed the specific drug.

Illusory correlations also may engender stereotypes of patients with certain pain disorders. For example, some biased literature characterizes all patients with fibromyalgia as being overweight, sedentary, and depressed. In other instances, even the phrase “chronic pain syndrome” may depict a stereotype of co-occurring symptoms and behaviors that is more of an illusion than scientific reality. It is important that readers question whether such portrayals are based on valid evidence or on biased misconceptions.

Essentialism & Reductionism
The Essentialism Fallacy has been traced at least as far back as Plato. Whether designing a research study, or presenting some argument in print or spoken word, some “essential feature” is proposed as a defining characteristic of an otherwise complex issue or larger problem. This may overly simplify the situation and also improperly exclude anything that is deemed inconvenient or unfavorable to some viewpoint or agenda.

Essentialism is depicted in the fable of 6 blind men who came across an elephant. The first, grabbing the tail, proclaimed the elephant was a rope. The second, seizing a leg, said the beast was a tree. The third, running his hand along the sturdy side, insisted the animal was a wall. The fourth, who chanced to touch an ear, declared the elephant was just a fan. The fifth, feeling a pointed tusk, exclaimed in fear that the monster was a spear. Finally, the sixth blind man grasping the squirming trunk, argued the creature must be a snake. And so, the 6 blind men disputed loud and long, each in his own opinion; though each was partly right, they all were in the wrong.

Essentialism is almost an inherent part of many research designs, whereby subjects are selected on the basis of certain characteristics that are considered to be definitive of the disease or condition under study. As a result, subjects who might be more typical of everyday patients may be excluded. This helps to reduce variability in the data and enhances statistical power, but it also may cast a shadow of doubt over the external validity of the research outcomes.

A related fallacy is Reductionism, which seeks to oversimplify the nature of larger, more complex phenomena by reducing them to smaller, simpler, or more fundamental components. In research, reductionist fallacies also may come about when data from select groups of individuals — whether from epidemiological or observational studies, controlled trials, or merely anecdotal cases — are used to characterize an entire population of patients.

In a broad sense, reductionist fallacies might be evidenced in the prolific number of studies coming from large managed healthcare organizations, which function like “research mills” with their extensive databases of electronic medical records that can be dredged for interesting outcomes. Results of these studies actually only depict pain management practices and results in patient populations at the respective institutions; yet, over time and due to the sheer number of publications, those outcomes may come to be accepted as representative of a much larger constellation of patients. While this might be true, it could equally be fallacious, and readers need to be aware of this.

Another related argument is the “No True Scotsman” fallacy. This error in logic is an ad hoc rescue of either an essentialist or reductionist argument that comes under criticism; it re-characterizes the situation to escape reproach. For example:

  • Jones says, “All Scotsmen are loyal and brave.”
  • Smith observes, “But, McDougal is a Scotsman and he was convicted of being a cowardly traitor.”
  • Jones counters, “It just shows that McDougal is not a TRUE Scotsman.”

Along those lines, when a presumably beneficial treatment fails, researchers might conduct a post hoc “responder analysis,” examining outcomes only in patients who did achieve some level of favorable response. They end up claiming that patients who were TRUE candidates for the treatment experienced significant benefits, while others were outliers in some way that made them less amenable to the effects. This may seem like a reasonable observation; however, it also calls into question the external validity of the treatment for everyday practice.

Unfortunately, the No True Scotsman fallacy also may even creep into arguments in support of persons with pain. Well-intended statements like, “No true pain patient would abuse their medication” or, more subtly, “No patient receiving adequate pain relief would be noncompliant with therapy,” reflect biased opinions unless valid evidence is available to support such claims.

False Dichotomy
This is also called a false dilemma, an either-or fallacy, fallacy of false choice, or black-and-white thinking. A false dichotomy forces simple answers to complex questions, involving an argument in which only two choices are offered. However, in truth, there might be many additional options worth considering or shades of grey between the extremes.

By its nature, research in pain management can be riddled with dichotomies that, while not always false, may encourage incomplete understandings. For example, effectiveness of a therapy may be assessed by the percentage of patients experiencing ≥50% pain relief; either they achieve that level or they do not. Meanwhile, it could be an extremely efficacious treatment, with almost all patients experiencing 90% pain relief, or one providing at best 51% pain relief in most patients; distinctions are lost in the either-or, dichotomous analysis.
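
A small worked example makes the information loss concrete. In this hypothetical Python sketch (invented drugs and numbers; NumPy assumed), two treatments post identical responder rates at the ≥50% cutoff while delivering very different amounts of relief:

    import numpy as np

    rng = np.random.default_rng(4)
    relief_a = rng.normal(90, 5, 1_000).clip(0, 100)  # near-total relief for most patients
    relief_b = rng.normal(56, 2, 1_000).clip(0, 100)  # modest relief, just above the cutoff

    for name, relief in [("Drug A", relief_a), ("Drug B", relief_b)]:
        pct = (relief >= 50).mean() * 100
        print(f"{name}: {pct:.0f}% responders, mean relief {relief.mean():.0f}%")

Both drugs report essentially 100% responders, yet one relieves nearly all pain and the other barely clears the threshold; the dichotomous endpoint cannot tell them apart.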

The same occurs with adverse events, which are almost always recorded as dichotomous data. For example, either patients are reported as experiencing nausea with a new drug or they do not. The nausea might be transient, lasting hours, or it might be persistently troublesome, lasting days; it is still recorded as a yes-or-no event.

False dichotomies can present dilemmas for consumers of pain literature. Conclusions may seem logical, but represent extremes; eg, a therapy is proposed as either effective or ineffective, safe or harmful. More nuanced explanations may be presented in the discussion sections of articles, but readily dismissed as being irrelevant or inapplicable to the researcher’s conclusions. The reader is left to ponder whether other explanations could/should be considered that are not so easily rejected and might be more representative of the truth.

Myths of Beneficence
We first described the “Myth of Beneficence” in an UPDATE [here]. This comes about when arguments, actions, programs, or policies are proposed as beneficial to the public health and patient welfare, but may be driven by hidden agendas and recruit fallacious tactics, such as those described above, for evidentiary support. The objectives may be well-intended; however, prospects for unintended consequences — often negative and ultimately harmful — due to deficient or biased reasoning are not taken into account.

For example, it has become increasingly recognized that the “Just Say ‘No’ to Drugs” and the so-called “War on Drugs” campaigns in the United States were failures, and it seems today that they might have been founded on myths of beneficence. Yet, current efforts by various federal and state agencies, and private groups, to curtail analgesic-use problems similarly may represent such a fallacy.

The protagonists claim something like, “Our actions (restrictions, regulations, programs, etc.) will ensure access for patients who need the pain-relieving medication(s) while ameliorating the problems (of over-prescribing, misuse, abuse, diversion, overdose, addiction, etc.).” At the same time, as unintended consequences of the “helpful” actions, countless numbers of patients actually may be harmed by more limited access to those pain relievers in direct or indirect ways.

With myths of beneficence there is usually a semblance of truth and altruism at their core. And, the presumed good intentions behind these myths garner wide acceptance by audiences who do not fully understand or question the potentially harmful consequences. However, as French author and philosopher Albert Camus observed: “The evil that is in the world almost always comes of ignorance, and good intentions may do as much harm as malevolence if they lack understanding.”

Doubt Facilitates Understanding

Again, it must be emphasized that not all research results or conclusions, arguments or opinions, actions or policies in the pain management field are founded on fallacies. But at least some are; perhaps, more than we might imagine — or want to believe.

This returns us to the dictum: Ubi Dubium, Ibi Intellectum. Where there is doubt, and questioning, and open-minded inquiry, there can be better understanding. Awareness of logical fallacies should serve as a tool to build a greater appreciation of evidence and its many nuances in serving as proof.

This is not intended as a weapon of crass destruction to rip apart the hard work of research teams or the presentations of writers and speakers in the pain management field. Critical thinkers should take no pleasure or pride in debunking the fallacies in others’ reasoning. That is not the point. Rather, having reasonable doubt should summon an approach to intellectual inquiry and scientific analysis that leads to best practices based on valid evidence for improving the care of patients with pain.

> Altman DG. The scandal of poor medical research. BMJ. 1994(Jan 29);308:283-284 [article PDF here].
> Altman DG, Bland JM. Absence of evidence is not evidence of absence. BMJ. 1995;311(7003):485 [PDF here].
> Curtis GN. The Fallacy Files [online]. 2012 [access here].
> Dowden B. Fallacies. Internet Encyclopedia of Philosophy. California State University, Sacramento. 2010 [available here].
> Greenhalgh T. How to read a paper: Statistics for the non-statistician, II – “Significant” relations and their pitfalls. BMJ. 1997;315(7105):422 [abstract here].
> Mills JL. Data torturing. NEJM. 1993;329:1196-1199 [abstract here].
> Prasad V, Cifu A, Ioannidis JPA. Reversals of established medical practices: Evidence to abandon ship. JAMA. 2012;307(1):37-38 [extract here].
> Sheldon T. Survey of Dutch doctors finds evidence of widespread research misconduct. BMJ. 2012;344:e2898 [extract here].
> Shermer M. Why People Believe Weird Things. New York, NY: Holt & Co.; 1997.
> Taleb NN. The Black Swan: The Impact of the Highly Improbable. New York, NY: Random House; 2007.
> Thompson B. Fallacy Page [online]. California State University, San Marcos. 2009 [access here].
