When it comes to determining the efficacy of cancer therapies, observational, real-world studies should not replace randomized clinical trials (RCTs), according to an analysis published in JAMA Network Open, in which researchers found little concordance or correlation between survival outcomes from RCTs and comparative effectiveness research (CER).
“Randomized clinical trials in oncology represent the highest standard level of evidence from which we establish efficacy of different therapeutic approaches. Despite the importance of randomized clinical trials, this study design has limitations associated with cost, timeliness, and generalizability of results in a real-world oncology population. In addition, several clinical scenarios encountered within oncology lack data from randomized settings to support clinical decision-making. To help fill these evidence gaps, investigators will often rely on research using nonrandomized observational data,” wrote researchers, led by Abhishek Kumar, MD, MAS, of the University of California, San Diego, in La Jolla.
In this study, Kumar and colleagues sought to assess how consistent survival outcomes from analyses using observational cancer registry data (nonrandomized CER) are with those from RCTs. To do so, they compared 141 randomized clinical trials referenced in the National Comprehensive Cancer Network Clinical Practice Guidelines for eight common solid tumor types (n=85,118) with analyses of data from 1,344,536 patients in the National Cancer Database (NCDB). Using Cox proportional hazards regression models, they calculated hazard ratios (HRs) for overall survival.
HRs for survival calculated from NCDB data were concordant with HRs from the randomized clinical trials in 79 univariable analyses (56%), 98 multivariable analyses (70%), and 90 propensity score models (64%).
Correlation between HRs from the clinical trials and the NCDB analyses was lowest in unadjusted analysis (r=0.17; 95% CI: 0.005-0.33; P=0.02), followed by propensity score analysis (r=0.25; 95% CI: 0.09-0.40; P=0.03) and multivariable analysis (r=0.26; 95% CI: 0.10-0.41; P=0.003).
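As a rough illustration of the correlation the investigators report, a Pearson coefficient between paired hazard ratios can be computed as below. The numbers are invented for illustration, and correlating log-transformed HRs is an assumption about convention, not something the article confirms.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired log hazard ratios from RCT and registry analyses
rct_log_hr = [-0.22, -0.11, 0.05, -0.36, 0.10]
ncdb_log_hr = [-0.45, 0.08, -0.20, -0.10, 0.31]
print(round(pearson_r(rct_log_hr, ncdb_log_hr), 2))
```

A coefficient near zero, as with the unadjusted r=0.17 above, indicates that the registry-based estimate carries little information about what the trial found.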
When they looked at concordance of statistical significance, researchers found that P values from the NCDB analyses corresponded to those from RCTs in 58 univariable analyses (41%), 65 multivariable analyses (46%), and 63 propensity score models (45%).
“Propensity-matched hazard ratios for overall survival from CER-based analyses fell outside the 95% CIs of their RCT counterparts 36% of the time (with 64% falling within). Furthermore, observational studies led to a different inference regarding therapeutic efficacy 55% of the time (ie, point estimates that were either in a different direction, nonsignificant in CER vs significant in RCT, or significant in CER but nonsignificant in RCT),” wrote Banerjee and Prasad in an accompanying commentary.
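The concordance criterion described in the quote (a CER hazard ratio counts as concordant when it falls inside its RCT counterpart's 95% CI) can be sketched in a few lines. The hazard ratios and intervals below are invented for illustration only.

```python
def is_concordant(cer_hr, rct_ci_low, rct_ci_high):
    """True if the CER hazard ratio lies inside the RCT's 95% CI."""
    return rct_ci_low <= cer_hr <= rct_ci_high

# Hypothetical trial pairs: (CER HR, RCT CI lower bound, RCT CI upper bound)
pairs = [(0.82, 0.70, 0.95), (1.10, 0.65, 0.90), (0.95, 0.80, 1.20)]
concordant = sum(is_concordant(*p) for p in pairs)
print(f"{concordant}/{len(pairs)} concordant "
      f"({100 * concordant / len(pairs):.0f}%)")
```

Note that this check captures only one of the two failure modes the commentary describes; a discordant inference about statistical significance can occur even when the point estimate sits inside the interval.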
Kumar et al also found that no characteristics of the clinical trials—including disease site, type of intervention, and severity of cancer—were associated with concordance between the randomized clinical trials and the NCDB analyses.
Study limitations include the inability to evaluate cancer-specific survival, a failure to account for all RCT inclusion/exclusion criteria and for differences in follow-up duration, and the structured approach to analysis.
Despite the significant lack of concordance in survival outcomes between CER using cancer registry data and RCTs, observational studies may still play a part in cancer research, Banerjee and Prasad conceded.
“Is there a role for retrospective CER studies in oncology in light of these findings? We believe the answer is yes. Observational studies may clarify issues of prognosis, patterns of real-world usage, rare adverse events, and glaring disparities in cancer care delivery. However, when it comes to establishing the fundamental efficacy of therapeutic interventions, caution is warranted, and propensity score matching is not a panacea. Ultimately, adding the real-world rhetorical flourish to the title of a CER abstract based on its patient population understates the problematic elements of observational studies: unmeasured confounders, problems defining time 0, and selection bias (confounding by indication).”
Ultimately, though, RCTs in oncology remain the highest level of evidence upon which clinicians can base treatment decisions.
“The holy grail of medicine is to develop a system where we can make reliable inferences regarding the effectiveness of therapies as fast as possible, as cheaply as possible, with the least number of patients exposed to less effective regimens. Although many believe observational, real-world data will someday fill this niche, the work of Kumar and colleagues reminds us that for the time being randomization remains the reference standard in cancer research,” Banerjee and Prasad concluded.
Researchers found a significant lack of agreement in survival outcomes between comparative effectiveness research and randomized clinical trials.
Randomized clinical trials are still the standard upon which therapeutic cancer choices should be based.
E.C. Meszaros, Contributing Writer, BreakingMED™
This study was supported by the National Institutes of Health.
Kumar reported no conflicts of interest.
Prasad has received research funding from Arnold Ventures, royalties from Johns Hopkins Press, honoraria from Medscape, universities, medical centers, nonprofits, and professional societies, consulting fees from United Healthcare, and speaking fees from Evicore. His podcast “Plenary Session” is supported by Patreon.