**Part 7 – Beware of Odds Posing as Risks**

While the presentation of data as odds and Odds Ratios is favored by gamblers and some statisticians, many authorities on evidence-based medicine disparage their use in pain research reports as being unhelpful and potentially misleading for clinical decision-making purposes. Knowing the odds may be essential for successful betting on horse races but an understanding of how odds and Odds Ratios relate to risks and Risk Ratios is more useful for making sense of pain research. This article tells why.

The previous *Pain-Topics UPDATE* in this series [*Part 6 here*] discussed “risk statistics” — RR, RRR, ARR, and NNT — which are estimates of effect that help to put pain research into perspective for everyday practice. With a bit of study, most readers can understand and intuitively interpret risk-effect measures to decide whether a therapy or intervention might be helpful for patients with pain.

On the other hand, odds and Odds Ratios, or ORs, are somewhat like an “evil cousin”; they share a common lineage with risk-effect statistics but they are calculated differently, are not the same thing, and can be deceptive. Yet, ORs are often presented in pain research reports and confused with or wrongly interpreted as Risk Ratios.

**What Are Statistical Odds?**

To begin, the term “odds” is not used in biostatistics in the colloquial sense of denoting “chance” or “likelihood.” Rather, the *odds of an event* — eg, pain relief in pain research — are the frequency of the event *occurring* divided by the frequency of the event *not occurring*. The odds often can be calculated from the same data used to compute “event rates” to derive risk-effect statistics.

For example, **Figure 1** at left (reproduced from *Part 6* in this series) represents a typical pain research study design in which two groups are examined: 1) an Experimental group exposed to a therapy or intervention of interest, and 2) a Control group that instead receives an alternate therapy/intervention or placebo. For each group, the outcome event of interest either *occurs* (“Yes”) or does *not occur* (“No”). The alphabetical letters symbolize the numbers of research subjects in each cell of the table.

From *Figure 1*, the odds of an event in the Experimental group are calculated by **a/b**. Similarly, event odds in the Control group are computed by **c/d**. The values for odds can range from 0 to infinitely large.

In comparison, the risk, or rate probability, of the event happening is the number of subjects experiencing the event divided by the total number of subjects at risk of experiencing that event. In *Figure 1* the Experimental event rate (EER) is computed as *a/(a+b)*, and the Control event rate (CER) is *c/(c+d)*. This is the concept more familiar to healthcare professionals; it describes the probability, usually expressed as a percentage, with which an outcome event occurs in each group. The EER and CER can each range in value from 0 to 1.0, or 100%.

A simple example is to consider the difference between risks and odds with dice, such as having the side with 6 dots facing up after the throw of a single die. Referring to *Figure 1* above, there is 1 chance of the 6-dot side facing up (a=1) and 5 chances of that side *not* facing up (b=5). Therefore, the likelihood (ie, risk) of throwing a 6 is calculated as 1/(1+5) = 1/6 = 0.17 or about 17%; whereas, the odds are calculated as 1/5 = 0.20. It can be seen that the odds are both different from and greater than the likelihood/risk of the same event happening.
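
The arithmetic above can be sketched in a few lines of Python; the cell letters follow Figure 1, and the helper functions are illustrative, not from any statistics package:

```python
# Risk vs. odds from the Figure 1 cell counts, applied to the
# single-die example: a = 1 way to throw a 6, b = 5 ways not to.

def odds(events, non_events):
    """Odds of an event: occurrences divided by non-occurrences (a/b)."""
    return events / non_events

def risk(events, non_events):
    """Risk (event rate): occurrences divided by all subjects (a/(a+b))."""
    return events / (events + non_events)

a, b = 1, 5  # 6-dot side up vs. not up
print(f"risk of a 6: {risk(a, b):.2f}")  # 0.17, or about 17%
print(f"odds of a 6: {odds(a, b):.2f}")  # 0.20
```

As the output shows, the odds (0.20) exceed the risk (0.17) for the very same event.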

By way of further example, **Figure 2**, below, shows the comparable odds (bottom row) associated with various values of increasing risk (top row) [*from* Davies et al. 1998].

An interesting point to note is that there can be very large disparities in size between risks and odds; however, at less than 20% risk the sizes of odds and risks become somewhat similar, and they become almost identical at ≤10%. This occurs when relatively few events occur compared with the total size of the population being studied; for example, a trial finding only 200 cases of nausea with a new analgesic among 2,000 persons taking the drug (risk = 200/2,000 = 0.10 or 10%; odds = 200/1,800 = 0.11).
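
The pattern in Figure 2 can be reproduced with the identity odds = risk/(1 − risk); the sample risk levels below are arbitrary:

```python
# Odds corresponding to increasing levels of risk, in the spirit of
# Figure 2. Note how the two diverge as risk grows but nearly
# coincide at roughly 10% risk and below.
for risk in (0.05, 0.10, 0.20, 0.50, 0.80):
    odds = risk / (1 - risk)
    print(f"risk = {risk:.0%}   odds = {odds:.2f}")
```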

As noted above, risks are fairly straightforward to visualize as a percentage of persons experiencing the event; in contrast, when subject response is presented as odds it becomes difficult to conceptually imagine what is happening. Hence, for the most part, odds are unhelpful to the reader wanting to translate statistics into clinically meaningful terms.

**What Are Odds Ratios?**

Both Odds Ratios and Risk Ratios compare two groups of dichotomous data and tell something about events occurring in one group relative to the other…

> The *Odds Ratio* is the odds of an event in the Experimental group divided by the odds in the Control group. Therefore, this can be calculated as: **OR (Odds Ratio) = Experimental Group Odds / Control Group Odds**, or **(a/b) / (c/d)** from Figure 1 above.

> The *Risk Ratio* is the probability, or event rate, occurring in the Experimental group divided by the event rate in the Control group. This is calculated as: **RR (Risk Ratio) = EER/CER**, or **[a/(a+b)] / [c/(c+d)]** from Figure 1.
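
Using the Figure 1 cell letters, both ratios can be computed side by side; the counts below are hypothetical, chosen only to show how the OR outruns the RR:

```python
def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d), per Figure 1."""
    return (a / b) / (c / d)

def risk_ratio(a, b, c, d):
    """RR = EER / CER = [a/(a+b)] / [c/(c+d)], per Figure 1."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical trial: 60/100 responders on treatment, 40/100 on control
a, b, c, d = 60, 40, 40, 60
print(round(odds_ratio(a, b, c, d), 2))  # 2.25 -- odds more than double
print(round(risk_ratio(a, b, c, d), 2))  # 1.5  -- risk rises by half
```

The same data yield an OR of 2.25 but an RR of only 1.5, previewing the exaggeration discussed next.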

As the calculation formulas suggest, it is essential to keep in mind that the OR and RR calculated from the same data are usually quite different. **Figure 3** (at left) shows calculated ORs and RRs based on the same set of *hypothetical data*. Both the OR and RR share the quality that if the value is “1” then the outcome is equally likely for both groups; that is, 1.0 is the “null value” of no difference between groups. If either ratio is greater than 1, then events in the Experimental group are more likely to occur than in the Control group; conversely, if the ratio is less than 1, events in the Control group are more likely to happen.

However, there is a very important difference demonstrated in *Figure 3*: in most cases, Odds Ratios *exaggerate the size* of the effect in comparison with the comparable Risk Ratio for the same data. If the OR is less than 1.0 then it is a smaller value than the comparable RR and, if the OR is greater than 1.0 it is a larger value than the respective RR.

For example, researchers examined the association between oral NSAID use and erectile dysfunction (ED) in a large, diverse group of men [as discussed in a *Pain-Topics UPDATE* here]. Compared with men who did not take NSAIDs regularly, the frequent use of NSAIDs increased the *odds* of ED by 72% (Odds Ratio = 1.72). However, based on data provided in the report, the Risk Ratio can be calculated to be 1.46, which is a medium-sized effect and suggests that the frequent use of NSAIDs incurs a 46% greater risk of developing ED than might occur in men not using NSAIDs. While some may not consider this disparity between odds and risks of great significance, since both suggest an important effect of NSAIDs on ED, presenting the data as odds, as the authors did, creates a potentially misleading impression of greater clinical importance.

Two further points about Odds Ratios are worth noting…

- Researchers can and should conduct tests of statistical significance on ORs and then report *P*-values and 95% Confidence Intervals for the data [*Confidence Intervals, or CIs, were previously discussed here*]. Just as with other measures, if the CI for an Odds Ratio spans the null value (1.0 in this case), then the respective OR is not statistically significant. For example, an OR=1.25 with a CI=0.85 to 1.50 would not be statistically significant at *P* ≤ 0.05.

- Researchers (or others) may be tempted to treat odds data like risk-effects and calculate Relative Odds Reductions (ie, ROR = 1−OR), Absolute Odds Reductions, or Numbers-Needed-to-Treat (NNT) based on odds. However, since these measures are based on *odds* rather than risks — with all of the noted size distortions and difficulties of interpretation that odds present — the result can be deceptive. For example, the Relative Odds Reduction will almost always be numerically larger than the comparable Relative Risk Reduction, or RRR, and it has been suggested that “authors seeking to inflate the apparent effectiveness of a study drug may be motivated to report the results as ROR instead of as RRR” [Prasad et al. 2008].
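
The significance check described in the first point above amounts to testing whether the confidence interval excludes the null value; a minimal sketch (illustrative function name, interval endpoints from the example in the text):

```python
def ci_excludes_null(ci_low, ci_high, null=1.0):
    """A ratio (OR or RR) is statistically significant at the CI's level
    only if the confidence interval does not span the null value (1.0)."""
    return not (ci_low <= null <= ci_high)

# The OR = 1.25 example: CI 0.85 to 1.50 spans 1.0 -> not significant
print(ci_excludes_null(0.85, 1.50))  # False
```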

**When Are Odds Ratios Similar to Risk Ratios?**

There is an important exception when ORs might be interpreted *as if* they were RRs. As noted above (*Figure 2*), when the risks are relatively small — that is, few events occurring in a large total population or group of subjects — the risks and odds grow similar in size. In such cases, the Odds Ratio and Risk Ratio also begin to approximate each other. This is exemplified by the following calculations (*referring to Figure 1 and the discussion above*).

> If ‘a’ is very small compared with ‘b’ then EER = [a/(a+b)] ≈ a/b

> If ‘c’ is very small compared with ‘d’ then CER = [c/(c+d)] ≈ c/d

> Thus, the Odds Ratio = (a/b) / (c/d) becomes about equal to EER/CER = Risk Ratio

Another way of stating this is that, if the numbers of events occurring in Treatment and Control groups are very small compared with those events *not* occurring in those groups, then OR ≈ RR. Here is an example…

In a case-control study from the U.S. Centers for Disease Control & Prevention reporting on birth defects among infants born to women who had taken opioid analgesics during early pregnancy, the authors reported outcomes as Odds Ratios [Broussard et al. 2011; also discussed in a *Pain-Topics UPDATE* here]. In this study, the probability of events occurring (eg, birth defects) was extremely low compared with those events not occurring in the large population studied; therefore, ORs were approximately equal to RRs and the authors used the two terms somewhat interchangeably. However, treating the ORs as RRs, without accurately converting the values, the authors gave an example of what they calculated as the absolute increase in odds for a particular birth defect. Their result was 28% greater than it would have been had they first properly converted ORs to RRs. The difference was extremely small — only 0.008% — which may seem like “splitting hairs”; yet, it is an example of how complex data might be skewed in one direction or another depending on its presentation.
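
The rare-event approximation is easy to verify numerically; the counts below are hypothetical (10 events among 1,000 treated subjects versus 5 among 1,000 controls):

```python
a, b = 10, 990  # events / non-events, Experimental group (hypothetical)
c, d = 5, 995   # events / non-events, Control group (hypothetical)

rr = (a / (a + b)) / (c / (c + d))  # Risk Ratio = EER / CER
or_ = (a / b) / (c / d)             # Odds Ratio

print(round(rr, 3))   # 2.0
print(round(or_, 3))  # 2.01 -- nearly identical when events are rare
```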

It is not uncommon in pain research for study reports to give an overall impression that there are multifold increases in events when outcomes are presented as odds or Odds Ratios. In fact, however, the *absolute increases in risk* or likelihood of the events in question are so small that they may be strongly affected by sampling errors, confounding influences, or methodological biases.

Some authors have argued that discrepancies between ORs and RRs that are of large size to begin with are of little consequence, since both values represent clinically meaningful effects [Davies et al. 1998]. For example, a doubling of outcome probability (RR=2.0) and a comparable 3.5-fold increase in odds (OR=3.5) are both large effects that could be important in a qualitative sense. However, *quantitatively*, readers will be led astray by ORs that create a false impression. This often happens with data reporting increases in adverse events (eg, analgesic overdose) in which presenting data only as ORs will exaggerate the alleged seriousness of the outcomes (and, perhaps, make otherwise benign research outcomes appear to be of much greater significance and urgency).

**Problems of Interpretation**

Odds Ratios are a measure of effect size preferred by statisticians and some researchers for analyzing data from case-control studies, cohort studies, some randomized controlled trials, in logistic regression equations, and in meta-analyses. However, almost every expert discussing the presentation of odds and Odds Ratios in research papers concedes that readers rarely understand how these measures apply in practice and that they are almost universally misinterpreted. For example…

“Probably no one (with the possible exception of certain statisticians) intuitively understands a ratio of odds.” The relationship of odds and risks “might be compared to the use of different scales such as Fahrenheit and Centigrade to report absolute values of and relationships between different temperatures”[Prasad et al. 2008].

“It is difficult to see any justification for the use of odds in what purports to be scientific study. It is just another example of the misleading effects of statistical computing packages when they are used without understanding or with disinterest”[Brignell, 2006].

There appears to be widespread agreement that if odds were used for statistical calculations, which is justified in some cases and required by some computerized programs, they should be converted to risks and Risk Ratios in the published reports to foster more accurate interpretation and better understanding among readers. Unfortunately, as was noted in *Part 1* of this series [here], most research articles are written for other researchers; not necessarily for consumption by healthcare providers or the public.

Due to inherent confusion, the terms odds, risks, Odds Ratios, Risk Ratios — along with chance, probability, frequency, incidence, and likelihood — are often used interchangeably as if describing the same thing. As demonstrated above, odds and risks are calculated differently from each other and have specific meanings, as do Odds Ratios compared with Risk Ratios. Discussing odds as if they were risks, in most cases, leads to erroneously inflated conclusions and this is particularly problematic in news stories or other presentations of research that report odds data using faulty understandings.

For example, take an outcome Odds Ratio of 3.5 and its comparable Risk Ratio of 2.0 (noted in *Figure 3* above). For the RR one can say, “The event (eg, pain relief) was twice as likely in the Experimental group as in the Control group.” Better yet, this can be converted to a Relative Risk Reduction (or, in this case, Increase; RRI = RR − 1) to say, “The outcome event was increased 100% by the experimental treatment compared with the control treatment.” On the other hand, for the Odds Ratio, one can only say, “There was a 3.5-fold greater *odds* [not greater frequency or likelihood] of the event occurring in the Experimental group.” Increased odds is not the same as increased probability or frequency of an outcome, and describing the Odds Ratio as if it were a Risk Ratio would in most cases grossly overstate the effect size and suggest that the clinical significance is greater than it actually is. The exact words that are used in describing the effect can make an important difference.

When they report odds or Odds Ratios in their articles, researchers often skirt around the linguistic challenges of describing clinical meaning for readers by merely reporting the OR values (and, hopefully, 95% Confidence Intervals and *P*-values). Readers are then left on their own to understand whether the ORs are large or small in terms of effect size and how the data might translate into clinically significant risks.

Also, since ORs are usually larger than the comparable RRs, research authors may choose to present the data as ORs if it is advantageous to portray a larger effect of a treatment. This is not to accuse these authors of malfeasance — the presentation of odds and ORs sometimes can be justified on statistical grounds — but not going to the extra trouble of putting the data into clinical context for average readers might be considered at the least discourteous.

**Summary Points & Guidance**

- Odds and Odds Ratios may be of use for statistical computations involved in certain data analyses, but they can be confusing, even misleading, when presented in published research papers. Therefore, these measures of effect are considered a poor choice for clinical decision-making purposes.

- Generally, if an Odds Ratio is interpreted as identical to the Risk Ratio, the size of the treatment effect will be overestimated, sometimes substantially. If the OR is less than 1.0 then it is smaller than the respective RR. If the OR is greater than 1.0 then it is bigger than the RR.

- An **exception** is when very few events occur relative to the total size of the population or group being studied or treated. In such cases, the probability or frequency — ie, risk — of the event occurring is small, and as the risk falls below 20%, odds and risks become more and more similar. When risk is ≤10%, odds numerically approximate the risk and the respective Odds Ratio can be interpreted as being roughly equivalent to a Risk Ratio.

- If raw data are provided in a research report that otherwise presents outcomes only as odds and ORs, readers can use those data to calculate the comparable EER, CER, RRs, RRRs, ARRs, and NNTs (see *Part 6* in this series).

- If only the odds are reported for each group, they can be converted to their respective risk-rate values — eg, EER or CER — by using the following equation: **group risk = group odds / (1 + group odds)**. Then, the usual equations can be used to calculate risk-effects; such as, RR = EER/CER.

- There also are formulas for directly converting Odds Ratios to Risk Ratios; however, these are complex, require access to some of the raw data, and require time and computing skills that go beyond what should be expected of readers.

- At the least, researcher-authors who present data as odds and ORs should, in their discussion of results, do the calculations and tell readers how these data relate to comparable risk effects, whether RR, RRR, ARR, and/or NNT. If this is not done, readers need to be concerned about the quality of the research and motivations behind it.

- Also beware of news reports or other presentations based on research reports that include odds or Odds Ratios; there is a good chance that the numbers are being misinterpreted. It is not unusual for journalists, or other presenters of medical information, to dramatically overstate the risk of something happening (good or bad) by interpreting an Odds Ratio as if it were a percentage change in risk.
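
Both conversions mentioned in the guidance above can be sketched in a few lines; the odds-to-risk formula is the one given in the bullet, and the OR-to-RR correction is the formula of Zhang and Yu [1998, in the references below], which also requires the Control event rate (CER):

```python
def odds_to_risk(odds):
    """Convert a group's odds to its risk: risk = odds / (1 + odds)."""
    return odds / (1 + odds)

def or_to_rr(odds_ratio, cer):
    """Zhang-Yu correction: approximate RR from an OR plus the
    Control event rate: RR = OR / (1 - CER + CER * OR)."""
    return odds_ratio / (1 - cer + cer * odds_ratio)

# Die example from earlier: odds of 0.20 correspond to a risk of 1/6
print(round(odds_to_risk(0.20), 3))   # 0.167
# Hypothetical check: an OR of 2.25 with CER = 0.40 yields RR = 1.5
print(round(or_to_rr(2.25, 0.40), 2))  # 1.5
```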

This was a long article to reach a simple conclusion: *Researchers should not report their data as odds or Odds Ratios in published reports; rather, those measures should be converted to and explained as risk-effects in ways that readers can better understand.* Since this is unlikely to happen, consumers of pain research need to take it upon themselves to understand how odds and risks relate to each other and how distortions of interpretation can occur.


**REFERENCES:**

> Brignell J. How do Relative Risk and Odds Ratio compare? Numberwatch. 2006 [available here].

> Broussard CS, Rasmussen SA, Reefhuis J, et al. Maternal treatment with opioid analgesics and risk for birth defects. AJOG. 2011(Feb 23); online ahead of print [abstract here].

> Davies HTO, Crombie IK, Tavakoli M. When Can Odds Ratios Mislead? BMJ. 1998(Mar 28);316:989 [article here].

> Goldin R. Odds Ratios: Stats Articles 2008. Stats.org [online]. 2007(Apr) [available here].

> Grimes DA, Schulz KF. Making Sense of Odds and Odds Ratios. Obstet Gynecol. 2008;111(2pt1):423-426 [PDF here].

> Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. Section 9.2.2 Effect measures for dichotomous outcomes. The Cochrane Collaboration, 2011. [available here].

> Prasad K, Jaeschke R, Wyer P, et al. Tips for Teachers of Evidence-Based Medicine: Understanding Odds Ratios and Their Relationship to Risk Ratios. J Gen Intern Med. 2008;23(5):635-640 [article here].

> Zhang J, Yu KF. What's the Relative Risk? A Method of Correcting the Odds Ratio in Cohort Studies of Common Outcomes. JAMA. 1998;280(19):1690-1691 [abstract here].