Wednesday, April 3, 2013

Pain Research: Prevalence, Incidence, & Survival

Making Sense of Pain Research
Part 14 – Challenges of Epidemiological Evidence

Prevalence, incidence, and survival data come under the umbrella of epidemiological evidence. Pain management practices, policy decisions, and organizational initiatives are often driven by such evidence, which is sometimes biased to favor particular viewpoints or agendas. Therefore, an understanding of these concepts is essential for critical consumers of pain research.

Epidemiological investigations of health-related events address vital concerns in the pain management field. Events of interest may involve harms, such as occurrences of disease or adverse effects of therapy. Or, events may relate to benefits, such as rates of pain relief associated with specific therapies, or successes of disease or harm prevention programs.

Important questions are explored by epidemiological data. How often do certain events occur during particular periods of time? How long does it take for those events to arise or resolve? Are disorders or problems of concern declining or increasing over time? Are specific interventions helpful or harmful when it comes to long-term patient outcomes?

Results of epidemiological research in the pain field reflect how many persons are affected in a group or a whole population of persons during specific periods of time. But, what do those numbers truly represent? How were they determined? And, most important, can they be trusted as being reliable and valid?
 

Degrees of Freedom Bias

Epidemiological researchers have many techniques for gathering data and statistical tools for framing the results of their investigations; foremost are prevalence, incidence, and survival analyses. Along with that, there are many decisions researchers must make about how data are collected, categorized, and analyzed that can affect the veracity of outcomes. In Part 13 of this series [here], this was described as “researcher degrees of freedom,” and these countless small choices, which often are subjective in nature, can result in biased or questionable outcomes.

Typically, outcomes of epidemiological studies are conveyed as proportions or probabilities. There is a numerator in the equation reflecting occurrences of events in a population of interest and a denominator representing the larger population at risk of experiencing those events during a specific period of time. Using their discretion, or researcher degrees of freedom, investigators can adjust the sizes of numerators or denominators to be larger or smaller and, thereby, influence the direction and dimensions of resulting outcomes.

Why might this occur? Judgments guiding the design and conduct of epidemiological studies and data interpretation may be influenced by public sentiment, political pressures, or organizational agendas. Truth may become distorted when probabilities of event occurrences are adjusted to accommodate preconceptions of the size, impact, or importance of the issue at hand. This is dogma, not science.

Consumers of the literature usually cannot know for certain if and when epidemiological data were manipulated in some fashion to create a biased impression — but it does happen in pain research. It is essential that readers understand the research process and question how data might have been differently collected, analyzed, and/or presented to arrive at alternately plausible — and perhaps more reliable — conclusions.
 

Defining Prevalence, Incidence, and Survival

The language of epidemiology and the statistical tools for computing results at first seem straightforward, but can become confusing, misleading, or both. To begin, here are some basic definitions of terms [Byrne undated; Coggon et al undated; Deng 2011; Israni 2007; Sedgwick 2010A; Spruance et al. 2004; Wu et al. 2003] along with clarifying examples of each:

1. Prevalence
This typically represents the number of persons in a population of interest affected by an event — such as a pain condition or a measured outcome of some sort — during a specified period of time divided by the number of persons in the total population under consideration and potentially at risk for the event during that time. Prevalence results are usually reported as proportions or percentages; such as, the percentage of persons in the United States afflicted with fibromyalgia during a particular year or years.

Prevalence is often reported simply as the prevalence rate; however, it may be measured at a very distinct point in time — eg, on a certain day or during a very brief time period — and is then called the “point prevalence.” Usually, the rate is measured over a longer time period, such as a year or more, and is most accurately described as the “period prevalence.”

Example 1: In a review of surveillance studies assessing headache in the U.S., one of the population surveys encompassed 6 years of data collection and included adjusted data from 13,414 adults ≥20 years of age [Smitherman et al. 2013]. Respondents were asked, “During the past 3 months did you have severe headache or migraine?” and 3,045 answered affirmatively. The overall prevalence rate for severe headache/migraine was simply reported as 22.7% (3,045/13,414 x 100).
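As a minimal sketch (the function name is ours, not from the study), the arithmetic behind this prevalence rate can be expressed in a few lines of Python:

```python
def prevalence_rate(cases, population, per=100):
    """Prevalence expressed as cases per `per` persons (per=100 gives a percentage)."""
    return cases / population * per

# Figures from the headache survey in example 1
pct = prevalence_rate(3_045, 13_414)                  # as a percentage
per_1000 = prevalence_rate(3_045, 13_414, per=1_000)  # as a rate per 1,000

print(round(pct, 1))      # 22.7
print(round(per_1000))    # 227
```

The `per` parameter simply rescales the same proportion, which is why a percentage and a rate per 1,000 persons convey identical information.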

Note: It could be important to consider in this prevalence rate that only events during the few months prior to each person being surveyed were captured. Severe headache or migraine remitting prior to or newly arising after the “past 3 months” period in each respondent would not be reflected in the data. In this sense, the prevalence rate might be considered as a type of point prevalence. The study also characterized “adults” as ≥ age 20, rather than 18 years of age, so the prevalence may be understated when considering a traditional adult population.

Prevalence is sometimes expressed as the rate per 1,000, or more, persons. In the above example, the prevalence rate could have been reported as 227 cases per 1,000 persons, or 2,270 cases per 10,000 persons, rather than as 22.7%. However, this should not be confused with an incidence rate (described next, below), which takes a specific timeframe into account and is usually reported as cases per 1,000 person-years.

Epidemiological studies of prevalence are often cross-sectional in design and the data are retrospective; that is, looking back over a prior time period, whether weeks, months, or years. Data-mining is often used to cull information from large repositories of medical records, but this may be problematic unless the records were designed in advance to answer the particular epidemiological questions of concern. Survey questionnaires of various types also are used, but these can be biased by how the questions are designed and asked, as well as by recall bias — ie, respondents not accurately remembering or reporting events from the past.

2. Incidence
This is a measure foremost of new, or de novo, events occurring during a specified period of time in a population at potential risk for experiencing the events. Although it is sometimes expressed simply as the number or percentage of new events or cases occurring during some time period, incidence is actually much more complex.

Incidence proportion — also called “cumulative incidence” or “incidence of events” — is the number of persons newly experiencing an event of interest during a defined period of time divided by the number of persons in a population observed and/or exposed to the risk during that time. It expresses a simple proportion, or percentage, of new cases per number of persons.

Example 2: If a population under observation comprises 1,000 persons initially without migraine headache, and 30 of them newly develop migraine during 2 years of observational followup, the incidence proportion is simply 30 cases per 1,000 persons, or 0.030 (3%). Note that this does not account for the duration of followup time necessary to observe all of the new events — 2 years in this example.
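The arithmetic of example 2 can be sketched as follows (a hypothetical helper, for illustration only):

```python
def incidence_proportion(new_cases, persons_at_risk):
    """Cumulative incidence: new cases per person at risk (no time dimension)."""
    return new_cases / persons_at_risk

# Example 2: 30 new migraine cases among 1,000 initially migraine-free persons
print(incidence_proportion(30, 1_000))  # 0.03, ie, 3%
```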
 

Incidence rate — also called “incidence density rate” or “person-time incidence rate” — expresses the number of new events occurring in a population of persons, taking into account the length of observation or followup. The followup times for individuals may differ substantially, but they are converted to a common measure — usually person-years.

Example 3: From example 2 above, the incidence rate would be 15 new cases of migraine per 1,000 person-years, or 0.015 (1.5%). That is, 30 new cases were observed during 2 years in 1,000 persons, so 15 cases would be expected among the 1,000 persons during 1 year (1,000 person-years). If, instead, half of the 1,000-person population was observed during 2 years and the remainder for only 1 year, the cumulative time would be 1,500 person-years and the incidence rate would be 0.020, or 20 cases per 1,000 person-years (30/1,500 x 1,000).

The general formula for incidence rate stated as per 1,000 person-years is: [(number of new events during the time period) / (total person-years of those exposed or at risk during the time period)] x 1,000.
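That formula can be sketched in Python, using the figures from example 3 (the function name is ours):

```python
def incidence_rate(new_events, person_years, per=1_000):
    """New events per `per` person-years of observation."""
    return new_events / person_years * per

# Example 3: 30 new cases over 2,000 person-years (1,000 persons x 2 years)
print(incidence_rate(30, 2_000))   # 15.0 per 1,000 person-years

# Uneven followup: 500 persons x 2 years + 500 persons x 1 year
print(incidence_rate(30, 500 * 2 + 500 * 1))   # 20.0 per 1,000 person-years
```

Note how the denominator is person-time, not a head count, which is what lets followup periods of different lengths be pooled.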

It is important to note that if the event of interest is something that might occur more than once in each individual during the time period of observation — such as emergency department visits, falls, overdoses, etc. — then the incidence rate rather than the incidence proportion should always be calculated. This would take into account the total number of events in the numerator; rather than merely the number of persons experiencing the events (as in the incidence proportion).

Example 4: In example 2 above, if the event of concern among the 1,000 persons was visits to the emergency department (ED) during the 2 years of observation among new migraine sufferers, and the 30 patients in question had 40 ED visits during that time, then the incidence rate for ED visits would be 0.020 (40/2,000). In this case, 40 new events were observed in 1,000 persons during the 2 years, so 20 events would be expected during 1 year, or 20 per 1,000 person-years.

Using person-time, as does the incidence rate denominator, rather than the number of persons in the population at risk, can accommodate situations where segments of the population at risk vary with time. However, there is an assumption here that the rate of new event occurrences is constant over different periods of time, such that for an incidence rate of 0.015, 15 new cases would be expected in 1,000 persons observed for 1 year or in 100 persons observed for 10 years. If it is important to account for new events that do not occur at a constant rate over time, then a survival analysis (described below) will help to assess cumulative incidences of an event during the time period.

Prospective or longitudinal studies — looking forward in time during a specific period — are often used for incidence investigations to more accurately detect and assess new cases/events. However, this type of study is often smaller in scope than a cross-sectional design, which may encompass large-scale population surveys that have more generalizability.

3. Survival Analysis
In its broadest sense, a survival analysis is a type of incidence rate study in which the time points at which events occur are themselves of importance. It is an often misunderstood form of prospective epidemiological study focusing on the time until some event occurs, such as the time elapsing from the start of a therapy until an end point is reached (survival time).

During the study, the frequency and timing of events may differ among subjects: some will experience the event of interest, while others may experience an alternate event or no event at all. The time to the event may vary among subjects who do experience it, as can the duration of followup across subjects, and some subjects may discontinue the study entirely or be lost to followup. Survival analyses take these many factors into account.

Most commonly, survival analyses compare 2 groups; such as a treatment/intervention group with a placebo or alternate treatment group in a clinical trial, or cases versus controls in a cohort study. The cumulative incidences of events occurring in each group over time are statistically compared to determine which group demonstrates the more favorable outcome.

Two common methods that account for variations in events occurring in a group at specific followup times are (A) Kaplan-Meier analysis and (B) Cox proportional hazards analysis, a statistical regression method [Israni 2007]. Both are prospective approaches assessing changing occurrences of events over time and, thereby, also help to determine how long it might take to reach an endpoint of interest.
 

Kaplan-Meier analysis measures the ratio of subjects without the event of interest (ie, survivors) to the total number of subjects at risk for the event in each group. The ratios are recalculated at specific time points to reflect changes in occurrence of the event. This also takes into account subjects who have dropped out of the study or otherwise failed to reach the study endpoint — called “censoring.”

The ratios are used to generate a curve — often displaying a stair-step pattern — that graphically depicts the probability of survival at the various time points. In studies with multiple groups, such as an intervention and a control arm, a Kaplan-Meier survival curve can be generated for each group. If the curves are close together or cross, a statistically significant difference between groups is unlikely to exist; statistical procedures, such as a log-rank test, can be used to derive P-values indicating the specific level of significance, if any.
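To make the mechanics concrete, here is a minimal Kaplan-Meier estimator in Python (a didactic sketch with invented followup data; real analyses would use a vetted statistical package):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates.

    times:  followup time for each subject
    events: 1 if the subject had the event at that time, 0 if censored
    Returns a list of (time, survival probability) at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            if data[i][1] == 1:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= deaths + censored  # all who exited leave the risk set
    return curve

# Five hypothetical subjects: events at months 1, 2, and 4; censoring at 3 and 5
for t, s in kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]):
    print(t, round(s, 2))
```

The key point the code illustrates is that censored subjects shrink the at-risk denominator without counting as events, which is how dropouts are accommodated without discarding their followup time.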

Example 5: Researchers investigated whether a live attenuated zoster-virus vaccine would decrease the incidence of herpes zoster (shingles) and postherpetic neuralgia (PHN) in older adults [Oxman et al. 2005]. In a time-to-event survival analysis spanning 5 years, the cumulative incidence of shingles was lower in the zoster vaccine group than in the placebo group at all time-points, and the figure at right depicts Kaplan-Meier survival curves for this effect.

In this case, cumulative incidence — expressed as a percentage of subjects in each group experiencing the event — is the probability of newly developing herpes zoster (HZ) during the period from 30 days postvaccination to each followup time. The proportions of subjects not represented on the curves remained HZ-free (survivors). The curves are widely separated, and a log-rank test found the difference between groups statistically significant, P < 0.001. Examining the survival data in the graph, it is apparent that the vaccine was beneficial compared with placebo at all time-points during followup.
 

Cox proportional hazards analysis is somewhat complex, but is another common statistical procedure for analyzing time-to-event data when comparing outcomes between groups. First, the hazard rate in each group is computed, which is the probability of an event occurring during a certain time interval.

Next, the hazard ratio, also called “relative hazard,” compares the hazard rate of an event in one group (eg, treatment arm in a parallel-group trial or exposed group in a cohort study) with the hazard rate in a control or comparator group. Even though hazard rates for each group may vary during the entire study period, there may be an assumption that the hazard ratio, or HR, is constant over time; so, for example, in a clinical trial where pain resolution is the endpoint, the HR may estimate the relative likelihood of pain resolution in treated versus control subjects at any given point in time. In some study reports, researchers may present separate HRs for differing periods of time (see example 6 below).
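The basic arithmetic of a hazard rate and hazard ratio can be sketched as follows; note that an actual Cox analysis estimates the HR by regression (adjusting for covariates), not by this simple division, and the figures here are invented:

```python
def hazard_rate(events, person_time):
    """Events per unit of person-time within an interval."""
    return events / person_time

def hazard_ratio(events_trt, pt_trt, events_ctl, pt_ctl):
    """Relative hazard, treatment vs control, assuming constant hazards."""
    return hazard_rate(events_trt, pt_trt) / hazard_rate(events_ctl, pt_ctl)

# Hypothetical interval: 24 events over 1,200 person-years (treated)
# versus 40 events over 1,000 person-years (controls)
print(round(hazard_ratio(24, 1_200, 40, 1_000), 2))  # 0.5
```

An HR of 0.5 here would mean the treated group experiences the event at half the rate of controls at any given point in time, under the proportional-hazards assumption.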

As a regression method, a Cox proportional hazards analysis also can be used to control for effects of potentially confounding factors (eg, age, sex, weight, comorbid disease, and the like). Variations in followup time across subjects also can be statistically controlled. The hazard ratio resulting from a Cox analysis can be interpreted much like a Relative Risk Ratio; eg, an HR of 5 means that the exposed or intervention group has 5 times the risk of having the event (or, reaching the end point) as the unexposed or control group. [Risk ratios were discussed in Part 6 of this series here.]

Example 6: Cardiovascular risk after a first heart attack (myocardial infarction, or MI) usually declines rapidly during the first year; however, researchers conducted a cohort study to assess whether using nonsteroidal anti-inflammatory drugs (NSAIDs) would alter that risk in the first year and thereafter, spanning 5 years [Olsen et al. 2012, and discussed in an UPDATE here]. Of the roughly 99,000 cardiac patients included, approximately 44,000 (44%) were prescribed NSAIDs after their first-recorded MI. In this total population, there were 29,000 coronary deaths or nonfatal recurrent MIs; although, incidence rates varied during each of the 5 years of followup.

The figure at right depicts the time-dependent Cox proportional hazards analysis for each of the 5 one-year periods (with 95% Confidence Intervals for each). Among patients prescribed any NSAIDs after a first heart attack — and statistically controlling for other risk factors, such as age, comorbidity, other medications, etc. — the risk of a second MI or of dying from coronary heart disease, compared with patients not taking NSAIDs, was significantly increased by 30% after 1 year (HR=1.30; 95% CI, 1.22–1.39) and 41% after 5 years (HR=1.41; 95% CI, 1.28–1.55). As the confidence intervals for the HRs suggest, since they do not include the point of no effect, or 1.0, the hazard ratios were all statistically significant — P<0.001 in this case. The researchers concluded that the use of NSAIDs is associated with persistently increased coronary risk regardless of time elapsed after a first-time MI.
 

Pitfalls of Numerators

The accuracy of numerators in epidemiological data calculations obviously can bias outcomes, and this depends on how events/cases are defined as well as the timing of those events. An event (eg, pain condition) occurring in a population during a given time period might be classified into 4 categories: 1) an event beginning and ending during the time period, 2) an event beginning during the time period and still existing at the end, 3) an event starting prior to the beginning of the time period and ending during the time period, and 4) an event starting prior to the beginning of the time period and still existing at the end of the period [Wu et al. 2003].
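These four categories can be sketched as a small classification helper (hypothetical code; the function name and day-number convention are ours):

```python
def classify_episode(start, end, window_start, window_end):
    """Classify a pain episode against an observation window.

    `end` is None if the episode is still ongoing at window_end.
    Returns the category number (1-4) described above.
    """
    began_in_window = start >= window_start
    ended_in_window = end is not None and end <= window_end
    if began_in_window and ended_in_window:
        return 1  # began and ended within the window
    if began_in_window:
        return 2  # began within the window, ongoing at its end
    if ended_in_window:
        return 3  # began before the window, ended within it
    return 4      # began before the window, ongoing at its end

# Observation window: days 100-200
print(classify_episode(120, 150, 100, 200))   # 1
print(classify_episode(50, None, 100, 200))   # 4
```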

  • Incidence measures encompass only categories 1 and 2 (new events/cases starting during the time period) in the numerator. It is important to consider that certain events may occur anew more than once during a time period (eg, acute pain episodes, emergency department visits, falls, etc.); however, the data may or may not account for these multiple occurrences and it could bias incidence reporting.

  • On the other hand, the period prevalence rate numerator encompasses all 4 categories throughout the time period. However, the point prevalence rate may be much smaller, since it only considers events/cases existing at a single or brief point in time and does not include events that had occurred but ended before that time point or those that started after it (as in example 1 above).

  • Studies that report “lifetime prevalence” — encompassing persons who ever experienced an event (eg, pain disorder, addiction, etc.) — can be problematic. The numerator could include persons currently with the disorder, those who once had the disorder and recovered, those who might be in temporary remission, and persons who had the disorder at the time of the study and died. It is an aggregate number that is usually quite large and not very informative as to what transpired or who is still affected, and it can falsely make prevalence of the event appear to be widespread and serious.

Defining the event itself — whether it is a disorder, a beneficial outcome, or an adverse effect of some type — also can be problematic and biased by researcher degrees of freedom. Often, by using differing criteria the same event can be widely or narrowly defined, which will make the numerator larger or smaller, respectively. In turn, this impacts, and potentially distorts, epidemiological reporting.

Example 7: A characteristic display of numerator confusion may be evident in data presented by the U.S. Centers for Disease Control and Prevention (CDC) depicting multiyear trends in rates of opioid pain-reliever (OPR) deaths compared with sales of OPR and admissions to substance abuse treatment programs [see figure; also discussed in an UPDATE here]. The researchers conceded that case definitions of OPR-related deaths (middle, solid line) were inaccurate, including both cases with opioid analgesics as a single agent and those where opioids were present but may not have been a primary cause. In some cases, postmortem reports made no distinctions between prescribed (or nonprescribed) OPR and illicit heroin. So, the numerator in this calculation was most likely overly inclusive and inflated.

The numerator for treatment admissions (upper, dotted line) also may have been distorted. The researchers were unclear about whether the events of concern related to opioid abuse only or also to other drugs. And, it is unspecified as to whether data include only first-time new admissions, persons still in treatment from the prior time period, and/or readmissions of some persons during the same year. Depending on definition and how admissions were counted, the numerator could have been made larger or smaller, but the reader does not know for certain.

Finally, the way in which numerator data are gathered can make a significant difference. Whether events/cases are determined by first-hand observation (eg, medical diagnosis), self-report questionnaires, interview surveys (telephone/in-person), medical records reviews (data-mining), or other means, each has its own limitations and potential sources of bias affecting data accuracy and reliability. Researchers should always discuss these concerns in their reports and how they were managed, and critical readers need to judge whether the numerator data can be trusted as being valid.
 

Difficulties of Denominators

The veracity of denominators in epidemiological equations that estimate prevalence, incidence, or survival is vital for portraying the size and composition of the larger population of concern. The size of the overall population at risk for an event can be adjusted in ways that will increase or decrease outcome results.

Various questions must be asked. How should the population at risk be defined and counted; narrowly or broadly? Who should be excluded as not being at risk in terms of demographics, or other factors? In most cases, only a relatively small segment of the at-risk population is observed or followed-up for occurrence of the event, so how will these data be extrapolated to the larger population at risk over time? This can be especially capricious considering that populations are constantly changing in terms of their size and composition, as well as in susceptibilities to specific risks in some cases.

When entire large populations are considered at risk — eg, nationwide, region- or state-wide — adjustments may be made by researchers to available census data to account for changes since the last census. This is particularly important when comparing prevalence or incidence data from one year to another, since the at-risk population may have changed in significant ways. The population changes and any adjustments to denominator data should be explained by researchers in ways that are transparent and understandable to readers.

Example 8: In the CDC data above in example 7, it is unclear who exactly is represented in the denominators used for calculating rate data. According to the researchers, rates were “age-adjusted to the 2000 U.S. Census population using bridged-race population figures [emphasis added].” While government epidemiologists might be versed in this methodology, it would be unfamiliar to average readers.

Using complex algorithms, bridged-race population estimates are generally used to adjust population estimates to account for changing birth and death rates over time among various racial groups [explained here]. Similarly, age adjustment [described here] removes population differences due to changes in age distributions over time. However, the specific adjustments were not explained by the CDC researchers. In any event, it has been conceded that such adjusted estimates are “a fiction,” and are most useful only for comparisons across population estimates that have been modified in exactly the same manner.

Despite those adjustments, the total size and the composition of the denominator populations are unspecified in the CDC’s report. Are the very young (eg, infants and children) included? This sizable group would likely not be at risk for the events depicted. Furthermore, in the graphic presentation, the denominator scaling was altered for OPR deaths, from per 10,000 persons to per 100,000 persons. This seems legitimate — otherwise the trend-line for deaths would barely appear at the bottom of the graph — but it visually conveys an impression that OPR death rates are proportionately large and directly related to the other two rate measures. Indeed, some news reporters, apparently confused by this optical artifice, falsely implied in their stories cause-effect relationships among the 3 rate measures in the CDC’s presentation.
 

Comparing Outcomes as Ratios

As noted above, epidemiological approaches are sometimes used in studies comparing one group of persons/patients with another, whether in clinical trials or population-based investigations. Obviously, the hazard ratio described above is an important way of expressing relationships between groups when time-to-event data are a factor. Ratios using prevalence and incidence data also can be used for comparing one group with another.
 

Prevalence Rate Ratio — In many epidemiological studies, prevalence rates are akin to event rates; that is, a prevalence rate expresses the probability (or percentage) of persons in a group experiencing the event. So, when the prevalence rate in one group is compared with that in another group, the resulting prevalence rate ratio may be interpreted as a Relative Risk Ratio, or Relative Risk [as discussed in Part 6 of this series here].

Example 9: Investigators conducted a case-control study to determine risk factors for prescription-opioid deaths in Utah during 2008-2009 [Lanier et al. 2012]. Cases were 254 decedents taking Rx-opioids during that timeframe. Controls were 1,308 living persons also taking Rx-opioids. One risk factor of interest was overuse of opioid medication. The prevalence rate of Rx-opioid overuse was 52.9% in decedents and 3.2% in controls; the prevalence ratio was 52.9/3.2, or 16.5 (95% CI, 9.3-23.7). Therefore, overuse of opioid medication was reported to increase the risk of overdose death significantly, by more than 16-fold.
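A sketch of the arithmetic (the helper name is ours):

```python
def prevalence_ratio(rate_cases, rate_controls):
    """Prevalence rate ratio, interpreted like a relative risk."""
    return rate_cases / rate_controls

# Example 9: Rx-opioid overuse in 52.9% of decedents vs 3.2% of controls
print(round(prevalence_ratio(52.9, 3.2), 1))  # 16.5
```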
 

Incidence Rate Ratio — A prospective epidemiological research design may be used to compare two groups in terms of the frequency of new events occurring in an intervention versus a control group during a specific period of time. The incidence rate for each group represents the average number of events per person-years of followup, and an incidence rate ratio can be calculated as the incidence rate in the intervention group divided by the incidence rate in the control group — the outcome is interpreted as if it were an odds ratio [Sedgwick 2010B; odds ratios were discussed in Part 7 of this series here].

Example 10: In the study of a new zoster-virus vaccine described in example 5 above, the overall incidence rate of shingles in the vaccinated group was reported as 5.42 per 1,000 person-years and 11.12 per 1,000 person-years in the placebo group. Therefore, the incidence rate ratio would be 0.49 (5.42/11.12); ie, the vaccine reduced the likelihood of developing shingles by roughly one-half.
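The same division, sketched in Python (the function name is ours):

```python
def incidence_rate_ratio(rate_intervention, rate_control):
    """Ratio of incidence rates (both per 1,000 person-years here)."""
    return rate_intervention / rate_control

# Example 10: shingles incidence, vaccine vs placebo groups
print(round(incidence_rate_ratio(5.42, 11.12), 2))  # 0.49
```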
 

NNT — The number-needed-to-treat, or NNT [first discussed in Part 6 of this series here], can be very helpful for deciding on the clinical advantage of a treatment or intervention for individual patients. It is calculated as the reciprocal of the absolute risk reduction (1/ARR) or, in epidemiological data, the reciprocal of the difference in incidence rates between groups.

Example 11: Again, using data from example 5 above, studying a new zoster-virus vaccine, the incidence rates for developing shingles were reported as 5.42 per 1,000 person-years (vaccine group) and 11.12 per 1,000 person-years (placebo group). The incidence rate difference between the 2 groups would be –0.0057 (P<0.001). This is obtained by converting each incidence rate to decimal format and subtracting the result in the placebo group from the vaccinated group; ie, 0.00542 – 0.01112 = –0.0057.

The NNT = 1/–0.0057 ≈ –175 (the minus sign is because there were more events in the placebo than the treatment group, which was a favorable outcome in this case). In other words, there would expectedly be 1 case of shingles prevented for every 175 persons per year receiving the new vaccine; or, as some authors alternately stated it, the NNT to prevent 1 case of herpes zoster over 3 years would be ≈58 [Fashner and Bell 2011].
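The NNT arithmetic can be sketched as follows; as a convention, the subtraction here is ordered control minus treatment so that a beneficial treatment yields a positive NNT, unlike the signed form above:

```python
def nnt(rate_control, rate_treated):
    """Number needed to treat: reciprocal of the absolute rate difference."""
    return 1 / (rate_control - rate_treated)

# Example 11: annual shingles incidence as decimal rates (per person-year)
print(round(nnt(0.01112, 0.00542)))  # 175
```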

The several zoster-virus vaccine examples, above, interestingly demonstrate how epidemiological data describing the same outcomes can be variously used to present different perspectives. In terms of the NNT of 175, individual patients may not perceive great benefit (ie, reduction in chances of developing shingles) by being vaccinated. Meanwhile, at the population level, the incidence rate ratio (example 10) suggests that vaccination affords roughly a 50% reduction in the likelihood of developing shingles, although the absolute percentages are small, and the Kaplan-Meier curves (example 5) demonstrate sustained decreases in the cumulative incidence of shingles over time. Thereby, society overall could benefit from significantly reduced suffering and healthcare costs.
 

Caveats & Conclusions

In concept, prevalence and incidence data, and even survival data, are rather straightforward; however, as with most statistical approaches in pain research, these become more complicated as one looks closer. The above explanations are in some cases simplifications, but they hopefully provide a basic understanding for becoming a more critical consumer of epidemiological evidence.

News-media stories, journal articles, proposed legislation, and other persuasive arguments almost always begin with epidemiological data to exemplify the size and scope of the issue(s) of concern. In most cases, authors have a range of data to choose from, and they usually select either the largest or the smallest numbers available, depending on the issue, to convey the importance and urgency of their contentions.

Rarely do they question how the data were derived or why there may be disparities among the various data — that is, which choices might be most accurate and valid. And, once a figure — eg, a prevalence or incidence rate — becomes widely accepted and embedded in public awareness, it is very difficult to overturn, no matter how biased or inaccurate it might be. Yet, educated critics of the pain literature have an obligation to examine the data for themselves and to raise doubts when alternate interpretations would lead to strikingly different conclusions.

In that regard, nuances of numerators and denominators that may bias epidemiological evidence were discussed above. Here are some further caveats to consider:

  • Comparisons of population-level prevalence or incidence trends reflect associations or correlations but not causation. That is, the data should not be used to explain why or how the events came about or influenced each other; although, causation may be falsely implied when trends for different events over time are presented as if they directly affect each other (see CDC data in examples 7 and 8 above). This also has been described as a cum hoc, ergo propter hoc (or, “with this, therefore because of this”) fallacy [see Part 12 of this series, discussing “Fallacies of Evidence in Pain Research” here].

  • Incidence conveys information about the risk of newly experiencing an event, while prevalence suggests the extent of event occurrence, or how widespread it is, in a population. Incidence is expressed in units of number of cases in a population during a time period (eg, 10 cases per 1,000 person-years), whereas prevalence has no units and is simply a proportion expressing the frequency of cases in a population during a given time. The values for incidence and prevalence are not directly comparable, although distinctions between the two may be unclear or confusing in some pain research reports.
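The units distinction above can be made concrete with a short sketch. The function names and all numbers here are hypothetical, for illustration only:

```python
def incidence_rate(new_cases, person_years, per=1000):
    """New cases per `per` person-years of follow-up — a rate with units."""
    return new_cases / person_years * per

def prevalence(existing_cases, population):
    """Unitless proportion of the population affected during a given time."""
    return existing_cases / population

# Hypothetical population of 5,000 persons, each followed for 2 years:
print(incidence_rate(new_cases=100, person_years=5000 * 2))  # 10.0 per 1,000 person-years
print(prevalence(existing_cases=250, population=5000))       # 0.05, ie, 5%
```

The key point the code makes explicit: the rate carries a time dimension in its denominator, while the proportion does not, so the two numbers cannot be compared directly.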

  • When considering the numerator of an incidence measure, there can be distinct differences between the number of new events and the number of persons experiencing new events. For events that occur only once in a lifetime (eg, death), this makes no difference; however, for events that may recur (eg, medication overdose, falls, etc.) during the time period of interest, the distinctions can be critical for how incidence is calculated, discussed, and understood [Deng 2011].

    The number of persons newly experiencing an event divided by the total number of persons in the population at risk is the “incidence proportion” or “cumulative incidence.” The number of new events in the population divided by the time-duration expressed as person-years is the “incidence rate” or “person-time incidence rate.” The latter accounts for the total of new events, rather than just the number of persons (who might each experience multiple new events during the period of study).

    The differences may seem subtle, but researcher-authors sometimes muddle the distinctions by reporting incidence rates as number of patients newly experiencing an event per person-years — a mismatch of numerator and denominator that may yield a value much smaller than if the number of new events had correctly been used in the numerator. Worse yet, it may be unclear whether numbers of persons or of events are being characterized in the numerator of the incidence calculation.
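To see how the two numerators diverge when events recur, here is a sketch using invented follow-up records (person IDs, event counts, and follow-up years are all hypothetical):

```python
# Each record: (person_id, number_of_events, years_of_followup) — all invented.
records = [
    ("A", 0, 2.0),
    ("B", 1, 2.0),
    ("C", 3, 1.5),  # one person with three recurrent events
    ("D", 0, 0.5),
]

persons_at_risk = len(records)
persons_with_event = sum(1 for _, n, _ in records if n > 0)
total_events = sum(n for _, n, _ in records)
person_years = sum(t for _, _, t in records)

# Incidence proportion (cumulative incidence): persons newly affected / persons at risk
incidence_proportion = persons_with_event / persons_at_risk  # 2 / 4 = 0.5

# Incidence rate (person-time rate): total new events / total person-time
incidence_rate = total_events / person_years                 # 4 / 6.0 ≈ 0.67 per person-year

# The muddled version described above — persons in the numerator over person-time —
# understates the event burden whenever events recur:
mismatched_rate = persons_with_event / person_years          # 2 / 6.0 ≈ 0.33
```

With even one recurrent case, the mismatched calculation is half the correct person-time rate here, which is exactly the kind of distortion the paragraph above warns about.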

  • Another subtle point is that, in trials observing subjects with varying followup times, calculating NNTs from incidence rates may be misleading. In such cases, the cumulative incidence of outcomes might best be estimated by means of time-to-event data; eg, Kaplan-Meier curves or hazard rates, accounting for differences variably occurring over time [Suissa 2009].
    Example 12: Considering data again from the zoster-virus vaccine study, noted above, the NNT favoring the vaccine was 175 using incidence-rate data in the report (example 11). Alternatively, using data extrapolated from the Kaplan-Meier curves in example 5, the cumulative incidence of shingles at the end of the observation period (5 years) in the vaccine group was ≈2.5% and it was ≈5.3% in the placebo group, yielding an NNT=36. From this perspective, benefits of vaccination may appear more appealing for individual patients.
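The arithmetic behind Example 12's NNT of 36 can be reproduced in a few lines. The NNT is the reciprocal of the absolute risk reduction, conventionally rounded up to a whole number of patients; the risk values below are the extrapolated 5-year cumulative incidences cited above:

```python
import math

def nnt(risk_control, risk_treated):
    """Number needed to treat: reciprocal of the absolute risk reduction,
    rounded up to a whole number of patients."""
    return math.ceil(1 / (risk_control - risk_treated))

# 5-year cumulative incidences of shingles from the Kaplan-Meier extrapolation:
print(nnt(risk_control=0.053, risk_treated=0.025))  # 36
```

Running the same formula on the much smaller per-year incidence rates in the report is what produces the far larger NNT of 175 — same trial, different framing of risk.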
  • Finally, in some research reports it may be unclear whether the authors are describing the prevalence rate or the incidence rate. And, even the population in question might not be clearly defined and understood.
    Example 13: There is probably no more profound example of how distinctions between prevalence and incidence can become muddled and confound conclusions than in the considerable body of evidence surrounding addiction developing in patients being treated for pain with opioid analgesics. This is characterized in a systematic review of the literature by Minozzi et al. [2012, discussed in UPDATE here]. The researchers selected 17 studies as being of adequate quality, including 3 systematic reviews, totaling 88,235 patients, and found that the reported incidence of addiction ranged from 0% to 24% (median 0.5%), while prevalence ranged from 0% to 31% (median 4.5%).

    However, it was unclear in many studies how addiction was being defined and whether those reporting “incidence” data — ie, new cases — might have actually included persons with prior substance-use problems. And, prevalence data are of little use for depicting de novo, or iatrogenic, addiction associated with opioid therapy for pain because they include persons with substance use disorders prior to and/or at the time of entering treatment as well as new cases developing during treatment.

    There was so much variation across the reviewed studies — in terms of design, definition of addiction, and data collection methods — that Minozzi et al. could not conduct a meta-analysis of the data. Often, the researchers had to merely discuss “frequencies” of reported addiction, since it was so unclear whether incidence or prevalence had been assessed. While they still concluded that the risk of iatrogenic addiction in opioid-treated patients is not a “major risk,” the true incidence of events (addiction) associated with opioid-analgesic therapy remains largely undetermined.

In sum, epidemiological data in pain research can be essential for understanding the scope and size of problems needing remediation as well as for assessing successes of programs designed to influence change of some sort. Yet, there are many ways that such data may be distorted — often quite subtly, and either unintentionally or to serve a preconceived agenda — so consumers of the pain literature need to become better educated and more skeptical regarding the veracity and reliability of what they read in journals, or hear at conferences or on the news, or are told by government agencies and other organizations.
 

REFERENCES:
> Byrne J. Statistics and Risk. Skeptical Medicine [online]. Undated [access here].
> Coggon D, Rose G, Barker DJP. Epidemiology for the Uninitiated, 4th ed. BMJ. Undated [available here].
> Deng D. Incidence Rate (IR) – How could this be wrongly calculated? [online]. 2011 [access here].
> Fashner J, Bell AL. Herpes zoster and postherpetic neuralgia: prevention and management. Am Fam Physician. 2011;83(12):1423-1437.
> Israni RK. Guide to Biostatistics. Medpage Today [online]. 2007 [access PDF here].
> Lanier WA, Johnson EM, Rolfs RT, et al. Risk Factors for Prescription Opioid-Related Death, Utah, 2008–2009. Pain Med. 2012;13:1580-1589 [abstract here].
> Minozzi S, Amato L, Davoli M. Development of dependence following treatment with opioid analgesics for pain relief: a systematic review. Addiction. 2012 (Oct 18); online ahead of print [abstract here].
> Olsen AMS, Fosbøl EL, Lindhardsen J, et al. Long-Term Cardiovascular Risk of NSAID Use According to Time Passed After First-Time Myocardial Infarction: A Nationwide Cohort Study. Circulation. 2012;126:1955-1963 [available here].
> Oxman MN, Levin MJ, Johnson GR, et al. A vaccine to prevent herpes zoster and postherpetic neuralgia in older adults. NEJM. 2005;352(22):2271-2284 [abstract here].
> Sedgwick P. Prevalence and Incidence. BMJ. 2010A;341:c4709.
> Sedgwick P. Incidence Rate Ratio. BMJ. 2010B;341:c4804.
> Smitherman TA, Burch R, Sheikh H, Loder E. The prevalence, impact, and treatment of migraine and severe headaches in the United States: A review of statistics from national surveillance studies. Headache. 2013;53(3):427-436 [abstract here].
> Spruance SL, Reid JE, Grace M, Samore M. Hazard ratio in clinical trials. Antimicrob Agents Chemother. 2004;48(8):2787-2792 [available here].
> Suissa S. Calculation of number needed to treat [letter]. NEJM. 2009;361:424-425.
> Wu L-T, Korper SP, Marsden ME, et al. Use of incidence and prevalence in the substance abuse literature: A review. Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies; 2003 [PDF here].
