Research article · Open access

Comparative effectiveness and safety of pharmaceuticals assessed in observational studies compared with randomized controlled trials

Abstract

Background

There have been ongoing efforts to understand when and how data from observational studies can be applied to clinical and regulatory decision making. The objective of this review was to assess the comparability of relative treatment effects of pharmaceuticals from observational studies and randomized controlled trials (RCTs).

Methods

We searched PubMed and Embase for systematic literature reviews published between January 1, 1990, and January 31, 2020, that reported relative treatment effects of pharmaceuticals from both observational studies and RCTs. We extracted pooled relative effect estimates from observational studies and RCTs for each outcome, intervention-comparator, or indication assessed in the reviews. We calculated the ratio of the relative effect estimate from observational studies over that from RCTs, along with the corresponding 95% confidence interval (CI) for each pair of pooled RCT and observational study estimates, and we evaluated the consistency in relative treatment effects.

Results

Thirty systematic reviews across 7 therapeutic areas were identified from the literature. We analyzed 74 pairs of pooled relative effect estimates from RCTs and observational studies from 29 reviews. There was no statistically significant difference (based on the 95% CI) in relative effect estimates between RCTs and observational studies in 79.7% of pairs. There was an extreme difference (ratio < 0.7 or > 1.43) in 43.2% of pairs, and, in 17.6% of pairs, there was a significant difference and the estimates pointed in opposite directions.

Conclusions

Overall, our review shows that while there is no significant difference in the relative risk ratios for the majority of the RCT and observational study comparisons, there is significant variation in about 20% of comparisons. The source of this variation should be the subject of further inquiry to elucidate how much of it is due to differences in patient populations versus biased estimates arising from issues with study design or analytical/statistical methods.


Background

Health care decision makers, particularly regulators but also health technology assessment agencies, have depended upon evidence from randomized controlled trials (RCTs) to assess drug effectiveness and to make comparisons among treatment options. Widespread adoption of the RCT was the hallmark of progress in clinical research in the twentieth century and accelerated the development and approval of new therapeutics; confidence in RCTs derived from their experimental nature, designs that minimize bias, rigorous data quality, and analytic approaches that support causal inference.

In the last 30 years, we have witnessed an explosion of observational real-world data (RWD) and of evidence (RWE) derived from RWD, which has supplemented our understanding of the benefits and risks of treatments in broader patient populations. Regulators have largely leveraged RWE to assess the safety of marketed products and to support new drug approvals when RCTs are infeasible, such as in rare diseases, in oncology, or for long-term adverse effects. RCTs often lack the sample size to detect rare adverse events or the follow-up to detect long-term adverse effects; in such cases, regulatory decisions are often supplemented by RWE. However, RWE has been embraced much more slowly than RCTs were, for a variety of reasons. Causal inference is less certain in the absence of randomization, and RWD can be much sparser and often require extensive curation before they can be analyzed. Skepticism about the robustness of observational RWD studies has therefore made decision makers, particularly regulatory bodies, cautious about relying solely upon them to render judgments about the availability and appropriate use of new therapeutics.

Moreover, observational studies examining the effectiveness of treatments in similar populations have not always provided results consistent with RCTs. Despite many studies finding similar treatment effect estimates from RCTs and RWD analyses [1,2,3], other analyses have documented wide variation in results from RWD analyses within the same therapeutic areas [4], including analyses using propensity score-based methods [5]. Nonetheless, public interest has grown in the routine leveraging of RWD to promote the creation of a learning healthcare system, and regulatory bodies and other decision makers are exploring ways to expand their use of RWE. This is partly due to increasing acknowledgement of the value of RWE, such as its ability to better reflect actual environments in which the interventions are used.

One promising approach to understanding the sources of variability between RCT and observational study results is to compare estimates obtained from RWD analyses that attempt to emulate the eligibility criteria, endpoints, and other features of trials as closely as possible. A small number of RWD analyses have generated findings similar to previous RCTs [6, 7], and the findings of other RWD analyses have been consistent with subsequent RCTs [8]. In a small number of cases, RCTs and RWD studies have been published simultaneously [9]; simultaneous publication has the advantage that the RCT estimate is unknown while the RWD study is conducted. Disagreements between observational RWD analyses and RCTs have arisen from avoidable errors in the design of the RWD analysis [7, 10], which has led to a focus on the importance of research design in observational RWD analyses attempting to draw causal inferences regarding treatment effects [11,12,13]. Emulation studies can improve understanding of when observational studies may reliably generate results consistent with RCTs; however, not all RCTs can be feasibly emulated using RWD due to limitations in observational datasets. Existing sources of observational data, such as health insurance claims and electronic health records (EHRs), may not routinely capture the intervention, indication, inclusion and exclusion criteria, and/or endpoints used in RCTs [14].

The objective of this paper is to provide further evidence on the comparability of RCTs and observational studies when the latter use a range of study designs and were not designed to emulate RCTs. We aim to quantify the extent of the difference in treatment effect estimates between RCTs and observational studies. We go beyond previous comparisons of RCTs and observational studies, with a focus purely on pharmaceuticals, and provide a systematic landscape review of the (in)consistency between RCT and observational study treatment effect estimates. The reasons for the variation in relative treatment effects are not assessed in this review but should be the subject of further study.

Methods

Eligibility criteria

Inclusion criteria

  • Study design:

    • Published systematic literature reviews designed to compare relative treatment effects from observational studies with the corresponding effects from RCTs; or

    • Published systematic literature reviews that reported subgroup analyses stratified by RCT and observational study design; and

    • Observational studies included in these reviews had to be retrospective or prospective cohort studies or case-control studies

  • Population: Human subjects

  • Intervention(s) and comparator(s): Any active or placebo-controlled pharmaceutical or biopharmaceutical intervention

  • Outcome(s):

    • Efficacy/effectiveness or safety outcomes

    • Pooled relative treatment effect estimates for both observational studies and RCTs

Exclusion criteria

  • Systematic reviews that compared absolute outcomes, such as event rates, between non-comparative observational studies and RCTs

  • Non-pharmaceutical-based studies, e.g., surgical procedures, traditional medicine, vitamin/herbal supplements, etc.

  • Non-English language

  • Abstracts or conference proceedings

Search strategy

We searched PubMed and Embase to identify relevant systematic literature reviews published between January 1, 1990, and January 31, 2020. Anglemyer et al.’s search strategy [1] was used as a template to develop our search strategy, which included a wide range of MeSH terms and relevant keywords. We updated Anglemyer et al.’s systematic review hedge with the more recent CADTH systematic review/meta-analysis hedge, created in 2016, in both PubMed and Embase [15]. We restricted our search to pharmaceuticals only. PubMed and Embase were searched for the following concepts: pharmaceuticals, study methodology, and comparisons (filters: humans and English language). The PubMed search strategy, which was adapted for use in Embase, can be found in Additional File 1.

Study selection

After removing duplicate references, three authors (JG, YH, and LO) screened the titles and abstracts to identify relevant reviews. Once complete, LO verified the screening for accuracy. Following the title and abstract screen, full-text articles were obtained for all potentially relevant reviews and assessed to determine whether they met the selection criteria for final inclusion in the review.

Data extraction

A pilot extraction was first done by two authors (JG and YH) on a sample of three articles using a standardized extraction table. This was done to test the standardized extraction table and to ensure consistency between the authors performing the data extraction. JG and YH then independently extracted information from each review using the standardized extraction table. A third author (LO) verified the extraction for accuracy and identified any discrepancies. These discrepancies were discussed until resolved.

We focused on primary outcomes reported in the reviews and extracted information summarizing the scope of each of the identified systematic reviews. Extracted information included the following: review objective, population, disease/therapeutic area, interventions, outcome(s), number of included RCTs and observational studies, pooled relative treatment effect estimates for RCTs and observational studies along with the 95% confidence intervals (95% CI), and measures of heterogeneity.

Analysis

Based on the extracted information, we calculated the ratio of the relative treatment effect estimate from observational studies over the relative treatment effect estimate from RCTs (e.g., RRobs/RRrct), along with the corresponding 95% CI obtained via a Monte Carlo simulation for each pair of pooled RCT and observational study estimates. Outcomes for which the relative treatment effect was not expressed with a relative risk (RR), odds ratio (OR), or hazard ratio (HR) were excluded from our analysis.
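The paper does not publish the code for its Monte Carlo simulation, but the procedure can be sketched under a standard assumption: each pooled estimate (RR, OR, or HR) is treated as log-normal, with the standard deviation of its log implied by the reported 95% CI. The function name `ratio_ci` and the number of draws are illustrative choices, not the authors' implementation.

```python
import numpy as np

def ratio_ci(rr_obs, ci_obs, rr_rct, ci_rct, n_draws=100_000, seed=0):
    """95% CI for the ratio RRobs / RRrct via Monte Carlo simulation.

    Assumes each pooled estimate is log-normal, with the standard
    deviation of its log implied by the reported 95% CI:
    sd = (ln(upper) - ln(lower)) / (2 * 1.96).
    """
    rng = np.random.default_rng(seed)

    def log_draws(rr, ci):
        lo, hi = ci
        sd = (np.log(hi) - np.log(lo)) / (2 * 1.96)
        return rng.normal(np.log(rr), sd, n_draws)

    # ratio of relative effects on the log scale, then exponentiate
    log_ratio = log_draws(rr_obs, ci_obs) - log_draws(rr_rct, ci_rct)
    ci_lo, ci_hi = np.exp(np.percentile(log_ratio, [2.5, 97.5]))
    return rr_obs / rr_rct, ci_lo, ci_hi
```

For example, with the Gandhi et al. estimates discussed later (observational OR 3.02, 95% CI 1.91–4.76; RCT OR 0.98, 95% CI 0.46–2.11), the simulated 95% CI of the ratio excludes 1, i.e., a statistically significant difference.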

We expressed differences in pooled effect estimates with the following measures: ratios < 1, > 1, or = 1, and ratios indicating an “extreme difference” (< 0.70 or > 1.43) [16] or its absence. We evaluated (in)consistency between pooled RCT and observational study estimates with the following measures: opposite directions of effect; the RCT effect estimate lying outside the 95% CI of the observational study estimate, and vice versa; a statistically significant difference between RCT and observational study estimates; and a statistically significant difference together with opposite directions of effect. Statistical significance was determined by examining the 95% CI of the ratio of the relative treatment effect estimates from observational studies and RCTs derived from the Monte Carlo simulation. We also examined differences in relative effect measures from observational studies and RCTs by outcome type and therapeutic area.
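These difference and (in)consistency measures reduce to simple checks on each pair of pooled estimates. The helper below is an illustrative sketch (the function name and signature are ours, not the authors'); `ratio_ci95` is the Monte Carlo 95% CI of RRobs/RRrct described in the text.

```python
def classify_pair(obs, obs_ci, rct, rct_ci, ratio_ci95):
    """Apply the paper's difference and (in)consistency measures to one
    pair of pooled relative effect estimates (RR, OR, or HR scale)."""
    ratio = obs / rct
    return {
        # "extreme difference" thresholds from Dahabreh et al. [16]
        "extreme_difference": ratio < 0.70 or ratio > 1.43,
        # point estimates on opposite sides of the null value (1.0)
        "opposite_direction": (obs - 1.0) * (rct - 1.0) < 0,
        "rct_outside_obs_ci": not (obs_ci[0] <= rct <= obs_ci[1]),
        "obs_outside_rct_ci": not (rct_ci[0] <= obs <= rct_ci[1]),
        # significant difference: the ratio's 95% CI excludes 1
        "significant_difference": not (ratio_ci95[0] <= 1.0 <= ratio_ci95[1]),
    }
```

For the Gandhi et al. pair (observational OR 3.02, 95% CI 1.91–4.76; RCT OR 0.98, 95% CI 0.46–2.11), every flag is true, placing it among the 17.6% of pairs with a significant difference and opposite directions of effect.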

To test the robustness of our findings, we conducted two sensitivity analyses. As some reviews assessed more than one endpoint and contributed more than one pair of pooled relative treatment effects from RCTs and observational studies to our analysis, we repeated the analysis with one endpoint per review, i.e., a single pair of pooled relative treatment effects from RCTs and observational studies from each review, selecting the most frequently used endpoints whenever possible. Additionally, as some studies were included in more than one review, we repeated the analysis ensuring that there was no overlap of data between the included reviews, i.e., that each study contributed to only one review in our analysis. Details on the sensitivity analyses are included in Additional File 2. All analyses were conducted using RStudio, version 1.3.1073 (©2009-2020 RStudio, PBC).

Results

Literature search

Our search on PubMed and Embase yielded 3798 unique citations after removing duplicates. After screening titles and abstracts, we identified 93 full text articles for further review. Of these, 30 reviews met our inclusion criteria (Fig. 1).

Fig. 1 Diagram depicting the literature screening process

Included systematic reviews

The characteristics of the included reviews and the pairs of pooled relative treatment effects from RCTs and observational studies reported in the reviews are summarized in Table 1. Thirty systematic reviews across 7 therapeutic areas (cardiovascular disease [15/30], infectious disease [6/30], oncology [3/30], mental health [2/30], immune-inflammatory [1/30], metabolic disease [1/30], and other [2/30]) were identified from the literature. These reviews included 519 RCTs and observational studies and provided 79 pairs of pooled relative treatment effects from RCTs and observational studies across multiple interventions, comparators, and outcomes. Five pairs were excluded from our assessment because they concerned continuous outcomes (n = 1) or no pooled effect estimate was reported for observational studies (n = 4). As a result, 74 pairs of pooled relative treatment effects from RCTs and observational studies from 29 reviews were available for assessment of consistency.

Table 1 Characteristics of included reviews

Ratio of relative effect measures from observational studies and RCTs

Figure 2 presents the scatterplot of relative effect measures from observational studies and RCTs across the 74 pairs of pooled relative treatment effects, with 95% CI bars. The ratio of the relative effect measure from observational studies over the corresponding relative effect measure from RCTs ranged from 0.09 to 6.50 (median = 0.92, interquartile range = 0.69–1.27). The ratio was greater than 1, i.e., the relative effect was larger in observational studies, in 31 of the 74 pairs (41.9%). The ratio was less than 1, i.e., the relative effect was larger in RCTs, in 42 of the 74 pairs (56.8%), and the ratio was equal to 1 in one of the 74 pairs (1.4%). The ratio was greater than 1.43 in 12 of the 74 pairs (16.2%) and less than 0.7 in 20 of the 74 pairs (27.0%), indicating an extreme difference. There was no extreme difference (0.7 ≤ ratio ≤ 1.43) in 42 of the 74 pairs (56.8%; Table 2). Sensitivity analyses including only one endpoint from each review and ensuring no overlap of data between the included reviews yielded similar findings (Table 2). Scatterplots of relative effect measures from observational studies and RCTs by outcome type and therapeutic area can be found in Additional File 3: Figures S1 and S2.

Fig. 2 Relative effect measures (RR, OR, HR) from observational studies (y-axis) versus corresponding relative effect measures from randomized controlled trials (x-axis) across 74 pairs of pooled relative treatment effects

Table 2 Ratio of relative effect measures from observational studies and relative effect measures from RCTs (e.g., RRobs/RRrct): (a) among 74 pairs of pooled estimates, (b) with only one endpoint per review included, and (c) with studies reported in multiple reviews excluded

Consistency of relative effect measures from observational studies and RCTs

In 30 of the 74 pairs (40.5%), effect estimates from observational studies and RCTs pointed in opposite directions of effect. The RCT point estimate was outside the 95% CI of the observational study in 35 of the 74 pairs (47.3%) and the observational study point estimate was outside the 95% CI of the RCT in 27 of the 74 pairs (36.5%). There was a statistically significant difference between relative effect estimates from observational studies and RCTs in 15 of the 74 pairs (20.3%). In 13 of the 74 pairs (17.6%), there was a statistically significant difference and the effect estimates of observational studies and RCTs pointed in opposite directions (Table 3). The results remained fairly consistent when the sensitivity analyses were conducted (Table 3).

Table 3 Consistency of relative effect measures from observational studies and relative effect measures from RCTs: (a) among 74 pairs of pooled estimates, (b) with only one endpoint per review included, and (c) with studies reported in multiple reviews excluded

Discussion

Our analysis of 29 reviews comparing results of RCTs and observational studies of pharmaceuticals showed, on average, no significant differences in their relative risk ratios across all studies, but considerable study-by-study variability. The median ratio of the relative effect measure from observational studies to that from RCTs was 0.92, indicating slightly lower effectiveness/safety estimates in observational studies than in corresponding RCTs. This is in fact somewhat higher than the 0.80 ratio recently found in meta-research comparing effect estimates of randomized clinical trials that use routinely collected data (i.e., from traditional observational study sources such as registries, electronic health records, or administrative claims) for outcome ascertainment with traditional trials not using routinely collected data [47]. However, whether judged by the frequency of “extreme” differences (43.2%) or of statistically significant differences in opposite directions (17.6%), one could not claim that observational study results consistently replicated RCT results on a study-by-study basis in our sample.

There are a number of reasons that any given observational study result may not replicate an RCT comparing the same treatments. First, it may not have been the intent of the observational study researchers to match a specific clinical trial—they may have intentionally studied a different treatment population, setting, or protocol in order to complement or test the RCT findings. In such cases, there would be variation in effect estimates due to estimating a different causal effect. Even if the researcher does attempt to match a specific RCT, the data may not have been available to closely match it, since patient histories, test results, etc., used for RCT inclusion criteria may not be observed, or outcomes may not be captured the same way. Even given similar data, non-randomized studies have the potential for selection/channeling bias into treatment determined by factors unobservable in either type of study, and analytic attempts to correct for such confounding may have limited success. In some cases, treatment conditions may differ enough between the RCT and real-world practice that replication of results should not be expected, e.g., due to careful safety monitoring that affects subsequent treatment in RCTs. Finally, it is possible that other pharmacoepidemiologic principles, beyond the study design considerations we already mentioned, were violated in the individual RWD studies, which could have caused disagreement between their results and the RCTs. While variation in treatment effect estimates due to estimating a different causal effect in a different study population is expected and valid, biased estimates arising from issues with study design or analytical methods may be problematic.

The reviews typically provided insufficient detail to distinguish among these possible explanations without a detailed review of the individual studies, which we did not attempt here. However, some reviews did attempt to explain the differences they found. For example, in the review by Gandhi et al. (2015) [24], which compared dual-antiplatelet therapy (DAPT) to mono-antiplatelet therapy (MAPT) following transcatheter aortic valve implantation, there was a statistically significant difference between the pooled relative treatment effect estimates from observational studies and RCTs. The primary outcome was more likely to occur in the DAPT group than in the MAPT group in the observational studies (OR 3.02; 95% CI 1.91–4.76), whereas no statistically significant difference was found between DAPT and MAPT in the RCTs (OR 0.98; 95% CI 0.46–2.11). The authors explained that the RCTs (n = 2) and observational studies (n = 2) included in this review had variable patient inclusion/exclusion criteria and differed in the type of prosthetic aortic valve used, which may have introduced selection bias [24].

To allow individual observational studies to better inform decision-making, their ability to replicate RCT results needs to become more reliable, and the “target trial” approach seems to be a path forward. Several systematic efforts using sophisticated observational data research designs to emulate multiple RCTs are underway [48, 49]. These efforts are intended to provide regulatory bodies and other decision makers with empirical evidence to support the development of a framework for assessing when and under what circumstances observational RWE can be used to support a wider range of regulatory decisions. RCT DUPLICATE, a collaboration between the Food and Drug Administration (FDA) and the Division of Pharmacoepidemiology at Brigham and Women’s Hospital and Harvard Medical School, aims to replicate 30 completed Phase III or IV trials and to predict the results of seven ongoing Phase IV trials using Medicare and commercial claims data [50]. The RCT DUPLICATE team has recently reported results for its first 10 trials [51], with hazard ratio estimates within the 95% CI of the corresponding trial for 8 of 10 emulations.

The Multi-Regional Clinical Trials Center and OptumLabs are leading another effort called Observational Patient Evidence for Regulatory Approval and Understanding Disease (OPERAND) which extends the trial emulation activity and relaxes the inclusion/exclusion criteria of the trials to examine treatment effects in the broader patient population treated in routine care [52]. The FDA has also funded the Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation to predict the results of three to four ongoing safety trials using OptumLabs claims data [53].

It is important to understand that clinical trial emulation efforts are being conducted solely to improve understanding of when observational studies may be expected to produce robust results. Bartlett and colleagues [14] found, in a review of 220 clinical trials published in high-impact medical journals in 2017, that 15% could potentially be emulated using data available from medical claims or EHRs. For example, the inclusion/exclusion criteria for many oncology trials require data on genetic markers and progression-free survival that are unavailable in EHRs. The estimate by Bartlett and colleagues may prove to be an underestimate as the ability to link different types of observational data continues to improve. Nevertheless, it is reasonable to assume that most trials cannot be emulated with existing observational datasets.

These efforts are critical to advancing our understanding of the strengths and limitations of observational RWE, identifying issues with study design, endpoint definition, data quality, and analytical methodology that may affect the consistency of findings between RWE and RCTs. While much attention has focused on differences in study populations between observational studies and RCTs as the reason for the inconsistency in effect estimates, emerging evidence suggests that issues with study design (e.g., establishing time zero of exposure) may be equally if not more important [7]. Therefore, the results of these efforts will not provide definitive guidance to decision makers, but they emphasize how even subtle differences in study design and endpoint definition can impact absolute estimates of treatment effect. Moreover, RWE studies answer a different question than RCTs, i.e., “Does it work?” versus “Can it work?”, and the former is important to a variety of stakeholders beyond regulators. Hence, RWE studies should not be expected to provide results identical to RCTs.

Conclusions

In conclusion, although our review shows no significant difference on average in the relative risk ratios between published RCTs and observational studies, there is substantial study-to-study variation. It was impractical to review all individual observational study designs and examine their potential biases, but future work should elucidate how much of the variation is due to differences in study populations versus biased estimates arising from issues with study design or analytical methods. As more target trial replication attempts are conducted and published, more systematic evidence will emerge on the reliability of this approach and on the potential for observational studies to more routinely inform healthcare decisions.

Availability of data and materials

The data analyzed in this study are included in this published article.

Abbreviations

CI: Confidence interval

DAPT: Dual-antiplatelet therapy

EHR: Electronic health record

FDA: Food and Drug Administration

HR: Hazard ratio

MAPT: Mono-antiplatelet therapy

OPERAND: Observational Patient Evidence for Regulatory Approval and Understanding Disease

OR: Odds ratio

RCT: Randomized controlled trial

RR: Relative risk

RWD: Real-world data

RWE: Real-world evidence

References

  1. Anglemyer A, Horvath HT, Bero L. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev. 2014;4(4):MR000034. https://0-doi-org.brum.beds.ac.uk/10.1002/14651858.MR000034.pub2.


  2. Concato J, Shah N, Horwitz RI. Randomized controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342(25):1887–92. https://0-doi-org.brum.beds.ac.uk/10.1056/NEJM200006223422507.


  3. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342(25):1878–86. https://0-doi-org.brum.beds.ac.uk/10.1056/NEJM200006223422506.


  4. Madigan D, Ryan P, Schuemie M, et al. Evaluating the impact of database heterogeneity on observational study results. Am J Epidemiol. 2013;178(4):645–51. https://0-doi-org.brum.beds.ac.uk/10.1093/aje/kwt010.


  5. Forbes S, Dahabreh I. Benchmarking observational analyses against randomized trials: a review of studies assessing propensity score methods. J Gen Intern Med. 2020;35(5):1396–404. https://0-doi-org.brum.beds.ac.uk/10.1007/s11606-020-05713-5.


  6. Seeger J, Bykov K, Bartels D, Huybrechts K, Zint K, Schneeweiss S. Safety and effectiveness of dabigatran and warfarin in routine care of patients with atrial fibrillation. Thromb Haemost. 2015;114(6):1277–89. https://0-doi-org.brum.beds.ac.uk/10.1160/TH15-06-0497.


  7. Dickerman B, Garcia-Albeniz X, Logan R, et al. Avoidable flaws in observational analyses: an application to statins and cancer. Nat Med. 2019;25(10):1601–6. https://0-doi-org.brum.beds.ac.uk/10.1038/s41591-019-0597-x.


  8. Schneeweiss S, Seeger JD, Landon J, Walker AM. Aprotinin during coronary-artery bypass grafting and risk of death. N Engl J Med. 2008;358(8):771–83. https://0-doi-org.brum.beds.ac.uk/10.1056/NEJMoa0707571.


  9. Noseworthy P, Gersh B, Kent D, et al. Atrial fibrillation ablation in practice: assessing CABANA generalizability. Eur Heart J. 2019;40(16):1257–64. https://0-doi-org.brum.beds.ac.uk/10.1093/eurheartj/ehz085.


  10. Hernán MA, Alonso A, Logan R, Grodstein F, Michels KB, Willett WC, et al. Observational studies analyzed like randomized experiments: an application to postmenopausal hormone therapy and coronary heart disease. Epidemiology. 2008 Nov;19(6):766–79. https://0-doi-org.brum.beds.ac.uk/10.1097/EDE.0b013e3181875e61.


  11. Petersen M, van der Laan M. Causal models and learning from data: integrating causal modeling and statistical estimation. Epidemiology. 2014;25(3):418–26. https://0-doi-org.brum.beds.ac.uk/10.1097/EDE.0000000000000078.


  12. Goodman S, Schneeweiss S, Baiocchi M. Using design thinking to differentiate useful from misleading evidence in observational research. JAMA. 2017;317(7):705–7. https://0-doi-org.brum.beds.ac.uk/10.1001/jama.2016.19970.


  13. Franklin JM, Schneeweiss S. When and How can real world data analyses substitute for randomized controlled trials? Clin Pharmacol Ther. 2017 Dec;102(6):924–33. https://0-doi-org.brum.beds.ac.uk/10.1002/cpt.857.


  14. Bartlett V, Dhruva S, Shah N, Ryan P, Ross J. Feasibility of using real-world data to replicate clinical trial evidence. JAMA Netw Open. 2019;2(10):e1912869. https://0-doi-org.brum.beds.ac.uk/10.1001/jamanetworkopen.2019.12869.


  15. Strings attached: CADTH database search filters [Internet]. Ottawa: CADTH; 2016. Available from: https://www.cadth.ca/resources/finding-evidence

  16. Dahabreh IJ, Sheldrick RC, Paulus JK, Chung M, Varvarigou V, Jafri H, et al. Do observational studies using propensity score methods agree with randomized trials? A systematic comparison of studies on acute coronary syndromes. Eur Heart J. 2012;33(15):1893–901. https://0-doi-org.brum.beds.ac.uk/10.1093/eurheartj/ehs114.


  17. Abuzaid A, Ranjan P, Fabrizio C, et al. Single anti-platelet therapy versus dual anti-platelet therapy after transcatheter aortic valve replacement: a meta-analysis. Structural Heart. 2018;2(5):408–18. https://0-doi-org.brum.beds.ac.uk/10.1080/24748706.2018.1491082.


  18. Agarwal N, Mahmoud AN, Mojadidi MK, Golwala H, Elgendy IY. Dual versus triple antithrombotic therapy in patients undergoing percutaneous coronary intervention-meta-analysis and meta-regression. Cardiovasc Revasc Med. 2019;20(12):1134–9. https://0-doi-org.brum.beds.ac.uk/10.1016/j.carrev.2019.02.022.


  19. Agarwal N, Mahmoud AN, Patel NK, Jain A, Garg J, Mojadidi MK, et al. Meta-analysis of aspirin versus dual antiplatelet therapy following coronary artery bypass grafting. Am J Cardiol. 2018;121(1):32–40. https://0-doi-org.brum.beds.ac.uk/10.1016/j.amjcard.2017.09.022.


  20. An KR, Belley-Cote EP, Um KJ, Gupta S, McClure G, Jaffer IH, et al. Antiplatelet therapy versus anticoagulation after surgical bioprosthetic aortic valve replacement: a systematic review and meta-analysis. Thromb Haemost. 2019;119(2):328–39. https://0-doi-org.brum.beds.ac.uk/10.1055/s-0038-1676816.


  21. Chien HT, Lin YC, Sheu CC, Hsieh KP, Chang JS. Is colistin-associated acute kidney injury clinically important in adults? A systematic review and meta-analysis. Int J Antimicrob Agents. 2020;55(3):105889. https://doi.org/10.1016/j.ijantimicag.2020.105889.

  22. Chopra V, Rogers MA, Buist M, Govindan S, Lindenauer PK, Saint S, et al. Is statin use associated with reduced mortality after pneumonia? A systematic review and meta-analysis. Am J Med. 2012;125(11):1111–23. https://doi.org/10.1016/j.amjmed.2012.04.011.

  23. Desai RJ, Thaler KJ, Mahlknecht P, Gartlehner G, McDonagh MS, Mesgarpour B, et al. Comparative risk of harm associated with the use of targeted immunomodulators: a systematic review. Arthritis Care Res (Hoboken). 2016;68(8):1078–88. https://doi.org/10.1002/acr.22815.

  24. Gandhi S, Schwalm JD, Velianou JL, Natarajan MK, Farkouh ME. Comparison of dual-antiplatelet therapy to mono-antiplatelet therapy after transcatheter aortic valve implantation: systematic review and meta-analysis. Can J Cardiol. 2015;31(6):775–84. https://doi.org/10.1016/j.cjca.2015.01.014.

  25. Ge Z, Faggioni M, Baber U, Sartori S, Sorrentino S, Farhan S, et al. Safety and efficacy of nonvitamin K antagonist oral anticoagulants during catheter ablation of atrial fibrillation: a systematic review and meta-analysis. Cardiovasc Ther. 2018;36(5):e12457. https://doi.org/10.1111/1755-5922.12457.

  26. Heffernan AJ, Sime FB, Sun J, Lipman J, Kumar A, Andrews K, et al. Beta-lactam antibiotic versus combined beta-lactam antibiotics and single daily dosing regimens of aminoglycosides for treating serious infections: a meta-analysis. Int J Antimicrob Agents. 2020;55(3):105839. https://doi.org/10.1016/j.ijantimicag.2019.10.020.

  27. Ho ET, Wong G, Craig JC, Chapman JR. Once-daily extended-release versus twice-daily standard-release tacrolimus in kidney transplant recipients: a systematic review. Transplantation. 2013;95(9):1120–8. https://doi.org/10.1097/TP.0b013e318284c15b.

  28. Khan SU, Lone AN, Asad ZUA, Rahman H, Khan MS, Saleem MA, et al. Meta-analysis of efficacy and safety of proton pump inhibitors with dual antiplatelet therapy for coronary artery disease. Cardiovasc Revasc Med. 2019;20(12):1125–33. https://doi.org/10.1016/j.carrev.2019.02.002.

  29. Kirson NY, Weiden PJ, Yermakov S, Huang W, Samuelson T, Offord SJ, et al. Efficacy and effectiveness of depot versus oral antipsychotics in schizophrenia: synthesizing results across different research designs. J Clin Psychiatry. 2013;74(6):568–75. https://doi.org/10.4088/JCP.12r08167.

  30. Land R, Siskind D, McArdle P, Kisely S, Winckel K, Hollingworth SA. The impact of clozapine on hospital use: a systematic review and meta-analysis. Acta Psychiatr Scand. 2017;135(4):296–309. https://doi.org/10.1111/acps.12700.

  31. Li L, Li S, Deng K, Liu J, Vandvik PO, Zhao P, et al. Dipeptidyl peptidase-4 inhibitors and risk of heart failure in type 2 diabetes: systematic review and meta-analysis of randomised and observational studies. BMJ. 2016;352:i610. https://doi.org/10.1136/bmj.i610.

  32. Melloni C, Washam JB, Jones WS, Halim SA, Hasselblad V, Mayer SB, et al. Conflicting results between randomized trials and observational studies on the impact of proton pump inhibitors on cardiovascular events when coadministered with dual antiplatelet therapy: systematic review. Circ Cardiovasc Qual Outcomes. 2015;8(1):47–55. https://doi.org/10.1161/CIRCOUTCOMES.114.001177.

  33. Miles JA, Hanumanthu BK, Patel K, Chen M, Siegel RM, Kokkinidis DG. Torsemide versus furosemide and intermediate-term outcomes in patients with heart failure: an updated meta-analysis. J Cardiovasc Med (Hagerstown). 2019;20(6):379–88. https://doi.org/10.2459/JCM.0000000000000794.

  34. Mongkhon P, Naser AY, Fanning L, Tse G, Lau WCY, Wong ICK, et al. Oral anticoagulants and risk of dementia: a systematic review and meta-analysis of observational studies and randomized controlled trials. Neurosci Biobehav Rev. 2019;96:1–9. https://doi.org/10.1016/j.neubiorev.2018.10.025.

  35. Raheja H, Garg A, Goel S, Banerjee K, Hollander G, Shani J, et al. Comparison of single versus dual antiplatelet therapy after TAVR: a systematic review and meta-analysis. Catheter Cardiovasc Interv. 2018;92(4):783–91. https://doi.org/10.1002/ccd.27582.

  36. Ramjan R, Calmy A, Vitoria M, Mills EJ, Hill A, Cooke G, et al. Systematic review and meta-analysis: patient and programme impact of fixed-dose combination antiretroviral therapy. Trop Med Int Health. 2014;19(5):501–13. https://doi.org/10.1111/tmi.12297.

  37. Shi M, Zheng H, Nie B, Gong W, Cui X. Statin use and risk of liver cancer: an update meta-analysis. BMJ Open. 2014;4(9):e005399. https://doi.org/10.1136/bmjopen-2014-005399.

  38. Teo J, Liew Y, Lee W, Kwa AL. Prolonged infusion versus intermittent boluses of β-lactam antibiotics for treatment of acute infections: a meta-analysis. Int J Antimicrob Agents. 2014;43(5):403–11. https://doi.org/10.1016/j.ijantimicag.2014.01.027.

  39. Vinceti M, Filippini T, Del Giovane C, Dennert G, Zwahlen M, Brinkman M, et al. Selenium for preventing cancer. Cochrane Database Syst Rev. 2018;1:CD005195. https://doi.org/10.1002/14651858.CD005195.pub4.

  40. Wang CH, Li CH, Hsieh R, Fan CY, Hsu TC, Chang WC, et al. Proton pump inhibitors therapy and the risk of pneumonia: a systematic review and meta-analysis of randomized controlled trials and observational studies. Expert Opin Drug Saf. 2019;18(3):163–72. https://doi.org/10.1080/14740338.2019.1577820.

  41. Wat R, Mammi M, Paredes J, Haines J, Alasmari M, Liew A, et al. The effectiveness of antiepileptic medications as prophylaxis of early seizure in patients with traumatic brain injury compared with placebo or no treatment: a systematic review and meta-analysis. World Neurosurg. 2019;122:433–40. https://doi.org/10.1016/j.wneu.2018.11.076.

  42. Wong AYS, Chan EW, Anand S, Worsley AJ, Wong ICK. Managing cardiovascular risk of macrolides: systematic review and meta-analysis. Drug Saf. 2017;40(8):663–77. https://doi.org/10.1007/s40264-017-0533-2.

  43. Yang J, Yu S, Yang Z, Yan Y, Chen Y, Zeng H, et al. Efficacy and safety of supportive care biosimilars among cancer patients: a systematic review and meta-analysis. BioDrugs. 2019;33(4):373–89. https://doi.org/10.1007/s40259-019-00356-3.

  44. Yu W, Wang B, Zhan B, Li Q, Li Y, Zhu Z, et al. Statin therapy improved long-term prognosis in patients with major non-cardiac vascular surgeries: a systematic review and meta-analysis. Vascul Pharmacol. 2018;109:1–16. https://doi.org/10.1016/j.vph.2018.06.015.

  45. Zhang C, Gu ZC, Ding Z, Shen L, Pan MM, Zheng YL, et al. Decreased risk of renal impairment in atrial fibrillation patients receiving non-vitamin K antagonist oral anticoagulants: a pooled analysis of randomized controlled trials and real-world studies. Thromb Res. 2019;174:16–23. https://doi.org/10.1016/j.thromres.2018.12.010.

  46. Zhao Y, Peng H, Li X, Qin Y, Cao F, Peng D, et al. Dual antiplatelet therapy after coronary artery bypass surgery: is there an increase in bleeding risk? A meta-analysis. Interact Cardiovasc Thorac Surg. 2018;26(4):573–82. https://doi.org/10.1093/icvts/ivx374.

  47. McCord KA, Ewald H, Agarwal A. Treatment effects in randomised trials using routinely collected data for outcome assessment versus traditional trials: meta-research study. BMJ. 2021;372:n450. https://doi.org/10.1136/bmj.n450.

  48. Thompson D. Replication of randomized, controlled trials using real world data: what could go wrong? Value Health. 2021;24(1):112–5. https://doi.org/10.1016/j.jval.2020.09.015.

  49. Crown W, Bierer B. Real-world evidence: understanding sources of variability through empirical analysis. Value Health. 2021;24(1):116–7. https://doi.org/10.1016/j.jval.2020.11.003.

  50. FDA Prediction Project – RCT DUPLICATE [Internet]. Available from: www.rctduplicate.org

  51. Franklin J, Patorno E, Desai R, et al. Emulating randomized clinical trials with nonrandomized real-world evidence studies: first results from the RCT DUPLICATE initiative. Circulation. 2021;143(10):1002–13. https://doi.org/10.1161/CIRCULATIONAHA.120.051718.

  52. Evaluating RWE from observational studies in regulatory decision-making: lessons learned from trial replication analyses. Trial Emulation Studies and OPERAND. Duke-Margolis Center for Health Policy Virtual Meeting, February 16-17, 2021.

  53. Yale School of Medicine. Center for Outcomes Research and Evaluation (CORE). Current Projects. https://medicine.yale.edu/core/current_projects/cersi/research/.

Acknowledgements

Not applicable.

Funding

No funding was received for this study.

Author information

Authors and Affiliations

Authors

Contributions

YH contributed to the extraction, analysis, and interpretation of the data and to drafting and revising the manuscript. JJ contributed to the design of the study, was a major contributor to the analysis and interpretation of the data, and contributed to revising the manuscript. JG contributed to the coordination of the study, extraction of the data, and drafting of the manuscript. MB, WC, and RW contributed to the design of the study, interpretation of the results, drafting of the manuscript, and revision of the manuscript for important intellectual content. WG and CDM contributed to the design of the study, interpretation of the results, and revision of the manuscript for important intellectual content. LO coordinated the study and contributed to the data extraction, interpretation of the results, drafting of the manuscript, and revision of the manuscript for important intellectual content. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yoon Duk Hong.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Yoon Duk Hong, John Guerino, Marc L. Berger, William Crown, Richard J. Willke, Wim G. Goettsch, and Lucinda S. Orsini have no conflicts of interest to report. Jeroen P. Jansen is a part-time employee of Precision Medicine Group (PMG) (PRECISIONheor) and has stock options in Precision Medicine Group; PMG provides contracted research services to the pharmaceutical and biotech industries. C. Daniel Mullins has received consulting fees from AstraZeneca, Bayer, Incyte, Merck, Pfizer, and Takeda and has received support from Bayer and Pfizer for attending meetings and/or travel.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional File 1:

Search Strategy

Additional File 2:

Sensitivity Analyses

Additional File 3:

Figures S1 & S2. Figure S1. Relative effect measures from observational studies versus corresponding relative effect measures from randomized controlled trials by outcome type. Figure S2. Relative effect measures from observational studies versus corresponding relative effect measures from randomized controlled trials by therapeutic area.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Hong, Y.D., Jansen, J.P., Guerino, J. et al. Comparative effectiveness and safety of pharmaceuticals assessed in observational studies compared with randomized controlled trials. BMC Med 19, 307 (2021). https://doi.org/10.1186/s12916-021-02176-1

Keywords