
Health professions digital education on clinical practice guidelines: a systematic review by Digital Health Education collaboration

Abstract

Background

Clinical practice guidelines are an important source of information, designed to help clinicians integrate research evidence into their clinical practice. Digital education is increasingly used for clinical practice guideline dissemination and adoption. Our aim was to evaluate the effectiveness of digital education in improving the adoption of clinical practice guidelines.

Methods

We performed a systematic review and searched seven electronic databases from January 1990 to September 2018. Two reviewers independently screened studies, extracted data and assessed risk of bias. We included studies in any language evaluating the effectiveness of digital education on clinical practice guidelines compared to other forms of education or no intervention in healthcare professionals. We used the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach to assess the quality of the body of evidence.

Results

Seventeen trials involving 2382 participants were included. The included studies were diverse, with a largely unclear or high risk of bias. They mostly focused on physicians, evaluated computer-based interventions with limited interactivity and measured participants’ knowledge and behaviour. With regard to knowledge, studies comparing the effect of digital education with no intervention showed a large, statistically significant difference in favour of the digital education intervention (SMD = 0.85, 95% CI 0.16, 1.54; I² = 83%, n = 3, moderate quality of evidence). Studies comparing the effect of digital education with traditional learning on knowledge showed a small, statistically non-significant difference in favour of digital education (SMD = 0.23, 95% CI − 0.12, 0.59; I² = 34%, n = 3, moderate quality of evidence). Three studies measured participants’ skills and reported mixed results. Of four studies measuring satisfaction, three favoured digital education over traditional learning. Of nine studies evaluating healthcare professionals’ behaviour change, only one study, comparing an email-delivered, spaced education intervention to no intervention, reported improvement in the intervention group. Of three studies reporting patient outcomes, only one study, comparing email-delivered, spaced education games to non-interactive online resources, reported modest improvement in the intervention group. The quality of evidence for outcomes other than knowledge was mostly judged as low due to risk of bias, imprecision and/or inconsistency.

Conclusions

Health professions digital education on clinical practice guidelines is at least as effective as traditional learning and more effective than no intervention in terms of knowledge. Most studies report little or no difference in healthcare professionals’ behaviours and patient outcomes. The only intervention shown to improve healthcare professionals’ behaviour, and modestly improve patient outcomes, was email-delivered, spaced education. Future research should evaluate interactive, simulation-based and spaced forms of digital education and report on outcomes such as skills, behaviour, patient outcomes and cost.


Introduction

The translation of new research evidence into clinical practice can take up to 17 years [1]. A commonly used strategy that aims to bridge this divide is clinical practice guidelines [2]. Guidelines are evidence synthesis-based recommendations developed to support beneficial clinical practices, reduce unwanted variation and improve patient care outcomes [3, 4]. They are an important source of information for clinicians, designed to help them assimilate, evaluate and adopt evidence into their clinical practice [5]. However, their uptake remains low and depends on a range of factors relating to the guideline itself (i.e. its complexity, applicability and clarity), healthcare professionals, patients and the healthcare organisation [6,7,8]. Correspondingly, multifaceted interventions targeting these various factors have been shown to be most effective in promoting guideline uptake. While health professions education is an essential part of these multifaceted interventions, it is also still commonly employed as the sole guideline dissemination strategy [9]. Evidence to date has mostly focused on traditional learning, which was shown to produce only small improvements in desired clinical practices for guideline adoption, with more interactive and engaging interventions showing greater effectiveness [10, 11].

Traditional learning, especially the face-to-face type, can be time-consuming, costly and inaccessible [9, 12]. Printed, self-learning resources, on the other hand, are easily overlooked by busy healthcare professionals [13]. With the relentless growth in research evidence and healthcare complexity, traditional education seems unsustainable in the context of guideline dissemination and training. Digital education, increasingly employed in continuing medical education and professional development, may offer a more flexible, affordable and accessible alternative to traditional learning as it transcends geographical and time constraints. The use of diverse media and delivery devices allows for engaging and interactive learning resources which can be easily updated in line with new evidence and customised to the individual healthcare professional’s learning needs [14, 15]. By freeing up educators’ and healthcare professionals’ time, digital education may prove to be more cost-effective than traditional learning [16, 17]. Past reviews on guideline dissemination and adoption evaluated the effectiveness of traditional education, decision support systems, multifaceted interventions or digital interventions as a whole [9, 11, 18,19,20,21,22]. While digital education is increasingly used in continuing professional development, it is unclear how effective it is in promoting guideline adoption. To address this gap, we undertook a systematic review to investigate the effectiveness and economic impact of digital education for guideline adoption among healthcare professionals.

Methods

We followed Cochrane recommendations for the conduct of systematic reviews and reported according to the PRISMA guidance [23, 24].

Study selection

We included RCTs and cluster RCTs that compared digital education to usual education or to other forms of digital education for training pre- or post-registration healthcare professionals on clinical practice guidelines. We included healthcare professionals with qualifications listed in the Health Field of Education and Training (091) of the International Standard Classification of Education (ISCED-F). We excluded studies of students and/or practitioners of traditional, alternative and complementary medicine. Digital education could be delivered as the sole mode of the education intervention or blended with traditional learning (i.e. blended learning). We included offline and online computer-based digital education, digital game-based learning (DGBL), massive open online courses (MOOCs), virtual reality environments (VRE), virtual patient simulations (VPS) and mobile learning (mLearning) [25]. In studies comparing diverse forms of digital education, we differentiated the interventions based on their level of interactivity; interventions giving learners greater control over the learning environment were considered more interactive. We excluded studies on psychomotor skills trainers (PST), as this form of digital education may not be readily available to healthcare professionals. We also excluded studies on interventions that lacked explicit reference to a clinical practice guideline, had an optional digital education component or focused on digital tools for patient management or on computerised decision support systems. Computerised decision support systems are software that provides clinicians with decision support in the form of evidence-based, patient-specific recommendations at the point of care [26]. We excluded them because their underlying principle differs from that of digital education: they are available at the point of care, provide patient-specific recommendations and are integrated with patient data. No restrictions on outcomes were applied.

We extracted data on the following primary outcomes:

  • Learners’ knowledge, post-intervention. Knowledge is defined as learners’ factual or conceptual understanding measured using change between pre- and post-test scores.

  • Learners’ skills post-intervention. Skills are defined as learners’ ability to demonstrate a procedure or technique in an educational setting.

  • Learners’ attitudes post-intervention towards new competencies, clinical practice or patients (e.g. recognition of moral and ethical responsibilities towards patients). Attitude is defined as the tendency to respond positively or negatively towards the intervention.

  • Learners’ satisfaction post-intervention with the learning intervention (e.g. retention rates, dropout rates, survey satisfaction scores). Satisfaction can be defined as the level of approval when comparing the perceived performance of digital education with one’s expectations.

  • Change in healthcare professional’s practice or behaviour.

We also extracted data on the following secondary outcomes:

  • Cost and cost-effectiveness of the intervention

  • Patient-related outcomes (e.g. heaviness of smoking index, number of smoking cessations)

  • Adverse/unintended effects of the intervention

Data sources, collection, analysis and risk of bias assessment

This review is part of a global evidence synthesis initiative on digital health professions education, for which a wider search strategy was developed (see Additional file 1). The following databases were searched from January 1990 to September 2018: MEDLINE (Ovid), Embase (Ovid), Cochrane Central Register of Controlled Trials (CENTRAL) (Cochrane Library), PsycINFO (EBSCO), Educational Resource Information Centre (ERIC) (EBSCO), CINAHL (EBSCO) and Web of Science Core Collection (Thomson Reuters). We used 1990 as the starting year because, before then, the use of computers was largely restricted to very basic functions. No language or publication restrictions were applied. We searched the reference lists of all included studies and relevant systematic reviews. We also searched the International Clinical Trials Registry Platform Search Portal and the Current Controlled Trials metaRegister of Controlled Trials to locate unpublished or ongoing trials, and we contacted the relevant investigators for missing information. Search results from the different sources were combined in a single library and duplicate records were removed. Two reviewers independently screened the titles and abstracts identified by the searches. Full texts of potentially relevant articles were obtained and assessed for inclusion independently by two reviewers. Where data was missing or incomplete, study authors were contacted for additional information. Any disagreements were settled through discussion between the two reviewers, with a third reviewer acting as an arbiter.
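Deduplication of combined search results is commonly scripted when a reference manager is not used. The following is a minimal illustrative sketch only, under the assumption that records are held as dictionaries with an optional DOI and a title; it is not the review’s actual pipeline, which relied on standard reference-management software.

```python
# Illustrative only: naive deduplication of combined database search results.
# Assumes each record is a dict with an optional 'doi' and a required 'title' key.
import re

def normalise_title(title: str) -> str:
    """Lower-case a title and strip punctuation/whitespace for fuzzy matching."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(records: list) -> list:
    seen = set()
    unique = []
    for rec in records:
        # Prefer the DOI as the key; fall back to a normalised title.
        key = rec.get("doi") or normalise_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

combined = [
    {"doi": "10.1/abc", "title": "Spaced education trial"},
    {"doi": "10.1/abc", "title": "Spaced education trial."},  # duplicate from a second database
    {"title": "Online module RCT"},
]
print(len(deduplicate(combined)))  # -> 2
```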

Two reviewers extracted the data independently using a standardised data extraction form which was piloted and amended based on feedback. Data was extracted on study design, participants’ demographics, type of digital education, intervention content and outcomes. We contacted study authors in the event of any ambiguous or missing information. Disagreements between reviewers were resolved by discussion. A third reviewer acted as an arbiter in cases where disagreements persisted.

The methodological quality of the included RCTs was independently assessed by two reviewers using the Cochrane Risk of Bias Tool, which covers the following domains: (1) random sequence generation, (2) allocation concealment, (3) blinding of participants to the intervention, (4) blinding of outcome assessment, (5) attrition, (6) selective reporting and (7) other sources of bias (i.e. baseline imbalances) [23]. The following five additional criteria were used for the assessment of cluster RCTs, as recommended by the Cochrane Handbook for Systematic Reviews of Interventions [23]: (1) recruitment bias, which can occur when individuals are recruited to the trial after the clusters have been randomised, (2) baseline imbalance, (3) loss of clusters, (4) incorrect analysis and (5) comparability with individually randomised trials, to ensure intervention effects are not overestimated (e.g. due to a ‘herd effect’).
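To make the tabulation of such domain-level judgements concrete, here is a minimal hypothetical sketch; in practice reviews record these judgements in RevMan or a spreadsheet, and the study names and judgements below are placeholders, not the review’s data.

```python
# Illustrative tally of Cochrane Risk of Bias judgements across studies.
from collections import Counter

DOMAINS = [
    "random sequence generation", "allocation concealment",
    "blinding of participants", "blinding of outcome assessment",
    "attrition", "selective reporting", "other bias",
]

# Hypothetical per-study judgements: "low", "high" or "unclear" per domain.
studies = {
    "Study A": {"random sequence generation": "low", "allocation concealment": "unclear"},
    "Study B": {"random sequence generation": "unclear", "allocation concealment": "unclear"},
}

for domain in DOMAINS[:2]:  # summarise the two selection-bias domains
    tally = Counter(s.get(domain, "unclear") for s in studies.values())
    print(domain, dict(tally))
```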

Data synthesis and analysis

We included post-intervention outcome data in our review for consistency, as this was the most commonly reported form of findings in the included studies. We also reported separately the change score data from the included studies. For continuous outcomes, we reported the standardised mean differences (SMDs) and associated 95% CIs across studies. The standardised mean difference was used as a summary statistic because the outcomes in the included studies were measured in different ways. We were unable to identify a clinically meaningful effect size from the literature specifically for digital education interventions. Therefore, in line with other evidence syntheses of educational research, we interpreted SMDs using Cohen’s rule of thumb: < 0.2 no effect, 0.2–0.5 small effect size, 0.5–0.8 medium effect size and > 0.8 large effect size [23, 27, 28]. For dichotomous outcomes, we summarised relative risks and associated 95% CIs across studies. Subgroup analyses were not feasible due to the limited number of studies within the respective comparisons and outcomes. We employed a random-effects model in our meta-analysis. The I² statistic was used to evaluate heterogeneity, with I² < 25%, 25–75% and > 75% representing low, moderate and high degrees of inconsistency, respectively [23]. The meta-analysis was performed using Review Manager 5.3 (Cochrane Library Software, Oxford, UK) [23]. We reported the findings in line with the PRISMA reporting standards [24].

We assessed and reported the quality of the evidence for each outcome using the following GRADE criteria: risk of bias, inconsistency, imprecision, indirectness and publication bias. Two authors independently assessed the quality of the evidence, rating the body of evidence for each outcome as ‘high’, ‘moderate’ or ‘low’. We prepared ‘Summary of findings’ tables for each comparison to present the findings and the quality of the evidence (Additional file 1) [29].

We were unable to pool the data statistically using meta-analysis for some outcomes (e.g. skills, behaviour) due to high heterogeneity in the types of participants, interventions, comparisons, outcomes, outcome measures and measurement instruments. We presented those findings as a narrative synthesis, organising the studies by comparison and outcome. We transformed data expressed in different ways into a common statistical format and tabulated the results to identify patterns across the included studies, focusing on both the direction and, where possible, the size of the effect. In addition, we displayed all the available behaviour change outcome data in a forest plot without a meta-analysis as a visual summary (see Additional file 1). In some studies, behaviour was measured in the same participants using different approaches and tools. Instead of selecting one outcome or producing a single estimate per study, we present all behaviour change outcome data from the included studies, as it captures different aspects of clinicians’ behaviour and practice [23].
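For reference, the SMD expresses each study’s between-group difference in pooled standard deviation units, which is what allows differently measured knowledge scores to be combined. A minimal sketch of the standard (Cohen’s d-type) computation is shown below; Review Manager additionally applies a small-sample correction (Hedges’ adjusted g):

$$
\mathrm{SMD} = \frac{\bar{x}_{\mathrm{intervention}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
$$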

Results

Our searches identified a total of 44,054 citations. After screening titles and abstracts, we retrieved the full text of 4072 studies focusing on different digital education interventions for health professions education. We identified 40 potentially eligible studies, of which 21 were excluded for not meeting our inclusion criteria. Seventeen studies from 19 reports, comprising 14 individually randomised studies and three cluster randomised studies with 2382 participants, were included (Fig. 1, Table 1) [16, 30,31,32,33,34,35,36,37,38,39,40,41,42,43, 46, 47]. One of the included cluster RCTs had three different reports [43]. All seventeen included studies were published in English. Three studies focused on students (nursing students, medical students and emergency medicine students), while the remaining studies targeted post-registration healthcare professionals, mostly primary care physicians [30, 41, 47]. Except for one study from an upper middle-income country [36], all studies were from high-income countries, with ten studies from the USA. Sample size ranged from 10 to 1054, with one third of the studies having fewer than 50 participants. Ten studies reported that the intervention was delivered as part of a continuing medical education programme [16, 31, 33,34,35,36, 40, 42, 43, 46].

Fig. 1 Study flow diagram

Table 1 Characteristics of included studies

Eight studies compared digital education to traditional learning (i.e. lectures, paper-based self-study materials and small-group tutorials) [34,35,36,37,38,39,40,41], four studies compared digital education to no intervention [30,31,32,33] and five studies compared more interactive to less interactive forms of digital education [16, 42, 43, 46, 47]. The digital interventions mostly consisted of educational materials with low or no interactivity. Nine studies focused on online modules (i.e. sequenced collections of subject-related, multimedia materials) with or without feedback and discussions [16, 31, 33,34,35, 40, 42, 43, 47]; two studies each focused on online simulation [30, 39], spaced education in the form of regular email-delivered surveys [32, 46] and PowerPoint presentations [36, 38]; and one study each focused on a computer-based offline video [41] and a CD-ROM-delivered intervention [37]. The educational content in the included studies spanned resuscitation, paediatric basic life support, diabetes, preventive care, labour management, myocardial infarction management, hypertension management, physical therapy, prostate cancer screening, dementia, urinary tract infection and heart failure. Four studies focused on more than one guideline, i.e. two, four, five and 1100 guidelines, respectively [31, 33, 37, 42]. The included studies measured knowledge, skills, satisfaction, behaviour and patient outcomes. None of the studies reported attitudes, cost-related outcomes or adverse/unintended effects. All studies measured outcomes immediately after the intervention. In addition, six studies measured long-term knowledge retention ranging from 1 to 9 months post-intervention [32,33,34,35, 38, 47] and two studies measured long-term behaviour change [32, 33].

Half of the studies had an unclear or high risk of bias for random sequence generation, and more than half had an unclear risk for allocation concealment due to missing information, resulting in an overall unclear risk of selection bias (see Fig. 2). Outcome assessment was mostly done by non-blinded assessors and without the use of validated instruments. For the cluster RCTs, the risk of bias was overall low. The quality of evidence ranged from low to moderate and was downgraded because of risk of bias, imprecision and/or inconsistency (see Additional file 1).

Fig. 2 Risk of bias summary

Digital education vs no intervention

Four studies compared the effects of digital education for clinical practice guideline adoption to no intervention (Table 1). Three of these four studies evaluated participants’ knowledge [31,32,33]. The pooled analysis of these studies showed a large beneficial effect of digital education interventions for clinical practice guideline adoption on knowledge scores (SMD = 0.85, 95% CI 0.16, 1.54; I² = 83%, moderate quality of evidence) (Fig. 3). The high observed heterogeneity was largely driven by a study on spaced education via emails showing a large improvement in the intervention group (SMD = 1.52, 95% CI 1.06, 1.97) [32], with CIs that poorly overlap with those of the other two studies in this analysis. The two remaining studies, which evaluated online modules and case-based discussion, reported mixed results [31, 33]. One study measuring long-term knowledge retention at 6 months post-intervention [33] reported a moderate beneficial effect of the digital education intervention compared to no intervention (SMD = 0.73, 95% CI 0.09, 1.38).

Fig. 3 Forest plot of knowledge outcome comparing digital education on clinical practice guidelines to no intervention or traditional learning
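To make the pooling behind a forest plot like Fig. 3 concrete, the following is a minimal sketch of the DerSimonian-Laird random-effects computation of a pooled SMD, its 95% CI and the I² statistic. The three (SMD, variance) pairs are hypothetical placeholders chosen only to yield numbers of a similar magnitude; they are not the included studies’ actual data.

```python
# Minimal DerSimonian-Laird random-effects meta-analysis of SMDs.
# Illustrative inputs only; Review Manager performs the equivalent computation.
import math

def random_effects(smds, variances):
    w = [1.0 / v for v in variances]                          # inverse-variance (fixed-effect) weights
    fixed = sum(wi * y for wi, y in zip(w, smds)) / sum(w)    # fixed-effect pooled estimate
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, smds))  # Cochran's Q (heterogeneity)
    df = len(smds) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance (DL estimator)
    w_star = [1.0 / (v + tau2) for v in variances]            # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, smds)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0     # I²: % variability beyond chance
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Three hypothetical (SMD, variance) pairs:
pooled, ci, i2 = random_effects([1.52, 0.45, 0.55], [0.054, 0.040, 0.060])
print(f"SMD = {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I² = {i2:.0f}%")
```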

Only one study (n = 31), evaluating the use of a simulation-based module, measured participants’ skills post-intervention and reported a large beneficial effect of digital education (SMD = 0.93, 95% CI 0.18, 1.68, low quality of evidence) [30]. The effect of digital education on healthcare professionals’ behaviour was reported in two studies with mixed findings [32, 33]. The study on the use of spaced education via emails reported an improvement in healthcare professionals’ behaviour (RR = 0.75, 95% CI 0.69, 0.828) [32]. Conversely, the study on the use of an online module and discussions reported no difference in healthcare professionals’ behaviour [33]. The same two studies also reported long-term data for the behaviour change outcome. The follow-up findings were consistent with those immediately post-intervention: the study evaluating an online module reported no difference between the groups at 6 months [33], while the study on spaced education still favoured the intervention group at 18 months post-intervention [32].

None of the studies reported on attitudes, adverse effects, patient outcomes or cost outcomes.

Digital education vs traditional learning

Eight studies compared the effects of digital education for clinical practice guideline adoption to traditional learning (Table 1) [34,35,36,37,38,39,40,41]. Five of these eight studies (n = 405) measured knowledge [34,35,36, 38, 40]. The pooled estimate from the three studies reporting post-intervention data showed a small, statistically non-significant effect on knowledge scores in the digital education group compared to traditional learning (SMD = 0.23, 95% CI − 0.12, 0.59; I² = 34%, moderate quality of evidence) (Fig. 3). The moderate heterogeneity was due to a small pilot study with very imprecise findings [38], as shown by its wide CIs that poorly overlap with those of the other two studies in this analysis. The remaining two studies, which did not report poolable post-intervention data, also found no difference between the groups immediately post-intervention, although one reported that the intervention group scored slightly higher than the control group when averaged across baseline, post-intervention and follow-up measurements [35]. Three studies also measured long-term knowledge retention 1 to 6 months post-intervention: two reported no difference between the groups [35, 38] and one reported a moderate improvement in the digital education group [34].

Of the four studies evaluating participants’ satisfaction with the intervention [34,35,36, 38], three reported a large beneficial effect of digital education compared to a lecture or printed resources [34, 36, 38]. One study, employing interactive small-group learning as the control, reported no difference [35].

Two studies (n = 133) reported post-intervention skills outcomes [39, 41]. One study (n = 45) evaluating a simulation-based learning module reported a large beneficial effect of digital education compared to printed guidelines (SMD = 1.13, 95% CI 0.50, 1.76, moderate quality of evidence) [39]. The other study assessed the effectiveness of a computer-based video demonstration compared to peer teaching and reported higher post-intervention skills scores in the control group (SMD = − 3.72, 95% CI − 4.42, − 3.02, low quality of evidence) [41]. Three studies analysed healthcare professionals’ behaviour change and reported no difference between the groups (Additional file 1) [35, 37, 40]. One study assessed patient outcomes and reported no difference between the groups [40]. None of the included studies reported on attitudes, adverse effects or cost outcomes.

Digital education (more interactive) vs digital education (less interactive)

Five studies compared different configurations of digital education interventions (Table 1) [16, 42, 43, 46, 47]. Four studies evaluated online modules with performance-based or knowledge-based feedback [16, 42, 43, 47], and one study evaluated an email-delivered, spaced education game [46]. The control interventions were either a less interactive form of the digital education intervention or non-interactive online resources. Four studies measured behaviour and largely reported no difference between the groups (Fig. 4, Table 1) [16, 42, 43, 46]. Of the three studies measuring knowledge [16, 46, 47], only one study, on a spaced education game, favoured the intervention (SMD = 0.81, 95% CI 0.43, 1.20, moderate quality of evidence) [46]. This study also reported a modest improvement in patient outcomes. One study reported knowledge growth rates and found no difference in mean change scores between the more interactive intervention groups and the less interactive control groups [16]; it also found no differences in satisfaction scores between the groups. One study reported a moderate improvement in knowledge retention at 30-day follow-up for the more interactive form of digital education compared to the less interactive one (SMD = 0.63, 95% CI 0.01, 1.24) [47]. The same study reported higher satisfaction in the more interactive group at follow-up. No studies reported attitudes, adverse effects or cost outcomes.

Discussion

We identified 17 studies evaluating the effectiveness of digital education for clinical practice guideline adoption among healthcare professionals. The studies mostly focused on primary care physicians, computer-based educational interventions with low interactivity, and measurement of participants’ knowledge and behaviour. With regard to knowledge, studies comparing digital education with no intervention showed a large, statistically significant difference in favour of digital education, while studies comparing digital education with traditional learning showed a small, statistically non-significant difference in favour of digital education. The digital education groups were more satisfied than the traditional learning groups. Overall, there was little or no change in healthcare professionals’ behaviour, except in one study favouring the email-delivered, spaced education intervention group. Of the three studies measuring patient outcomes, only one study, on a spaced education game intervention, reported a modest improvement in the intervention group. The quality of evidence ranged from low to moderate across outcomes due to risk of bias, inconsistency in the findings and/or imprecision of the outcome data.

The evaluated digital education interventions had diverse formats. The existing literature on the effectiveness of traditional learning for clinical practice guideline adoption shows that interactive approaches may be more effective than passive guideline dissemination [10, 11]. In our review, most digital education interventions focused on passive dissemination of resources with no or low interactivity, e.g. PowerPoint presentations or computer-based text [31, 34, 36,37,38]. Three studies compared more interactive digital education interventions, in the form of spaced education via email or online simulation, to no intervention or traditional learning and reported a large beneficial effect in the intervention group [30, 32, 39]. However, most studies comparing more interactive to less interactive forms of digital education reported no statistically significant difference between the groups. The interactive component of these interventions was mostly in the form of performance feedback. The only study favouring the more interactive form of digital education in terms of knowledge and patient outcomes compared spaced education via email to passive online resources. Based on these findings, future research should further explore the effectiveness of spaced digital education and simulation on clinical practice guidelines compared to other forms of education.

The included studies on interventions with limited interactivity reported various challenges relating to participant attrition and limited usage of the resources. Two studies evaluating the use of websites and online modules reported high attrition rates [16, 31]. In a study on a computer-based module with performance feedback, participants considered the intervention too time-demanding [42]. A study evaluating non-interactive, digitally presented clinical practice guidelines reported that more than half of the participants accessed the provided resources either once or never [31]. Another study, which provided computer-based resources customised to participants via hyperlinks, reported that less than 60% of the provided educational material was accessed [34]. Conversely, more interactive interventions reported lower attrition rates and higher participant engagement [39, 44].

The effectiveness of digital education for guideline adoption may also depend on the type of content it covers. Learners may already have high baseline knowledge on some topics, such as diabetes, which may lead to a ceiling effect in the knowledge score [33]. In addition, educational interventions covering several guidelines concurrently may increase cognitive load and therefore be less effective than those focusing on a single guideline. Yet with the growing prevalence of chronic illness, an ageing population and multi-morbidity, single guidelines are perceived as impractical and unhelpful [48]. Instead of passive dissemination of several distinct guidelines, digital education offers an opportunity for seamless, engaging education that blends diverse guidelines, using for example scenario-based learning and simulation. Notably, none of the included studies in our review evaluated novel educational modalities enabling simulated learning, such as virtual or augmented reality. Also, none of the studies used mobile devices to deliver the digital education interventions. Mobile-delivered education may be better suited to the needs of healthcare professionals by enabling easy, on-the-go access to training. These diverse digital education modalities may promote better participant engagement, may prove a more effective approach to health professions training on guidelines and should be evaluated in future studies.

Our review has several limitations. The randomised controlled trials included in this review mostly lacked information on the randomisation method, allocation concealment or blinding method. The included studies also largely reported only post-intervention data, so we could not calculate pre-post intervention change scores or ascertain whether the intervention groups were matched at baseline for key characteristics and outcome measure scores. In studies reporting pre-post intervention change data, we extracted post-intervention data to ensure consistency in the presentation of findings across the included studies. Studies with pre-post intervention change data reported an improvement from baseline, but the findings were in all cases consistent with the post-intervention data. Furthermore, we focused only on studies of clinical practice guidelines and may have missed studies that used other forms of evidence-based recommendations or did not explicitly cite a clinical practice guideline. In line with other systematic reviews of digital education interventions, we encountered substantial heterogeneity in terms of interventions, participants, outcomes and comparisons [49]. Given such heterogeneity, our findings have to be interpreted with caution. In addition, our review focuses only on digital education on guidelines, and its findings are not applicable to other digital interventions such as computerised decision support systems. Finally, some studies reported that the digital education interventions were delivered as part of a continuing medical education programme, which may have affected their findings.

The strengths of our review include a comprehensive and sensitive search as well as clear inclusion and exclusion criteria encompassing a broad range of participants, outcomes and interventions. We also performed parallel, independent and reproducible screening, data extraction and rigorous risk of bias assessment.

Conclusion

Digital education on clinical practice guidelines seems to be more effective than no intervention and at least as effective as traditional learning in terms of participants’ knowledge. Participants reported higher satisfaction with digital education than with traditional learning. Yet, digital education overall led to little or no difference in health professionals’ behaviour compared to the control interventions. Higher interactivity and engagement in digital education interventions for clinical practice guideline adoption may lead to larger educational gains. Future research should aim to determine the effectiveness of novel modalities (e.g. mobile learning and virtual reality), as well as spaced and interactive formats of digital education, and focus on outcomes such as skills, attitudes, cost, behaviour and patient outcomes. There is a need for high-quality, well-reported RCTs with a clear presentation of the random sequence generation and allocation concealment approach, as well as a detailed description of the intervention and the control. Future studies should report pre-post intervention change outcome data, use validated outcome measurement tools and assess learners’ retention as well as long-term outcomes.

Availability of data and materials

This systematic review included data extracted from the primary studies. The full set of data extraction sheets is available upon request.

Abbreviations

BP: Blood pressure

HR: Hazard ratio

LDL: Low-density lipoproteins

MCQ: Multiple choice questionnaire

RCT: Randomised controlled trial

RR: Risk ratio

SMD: Standardised mean difference

References

  1. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: The National Academies Press; 2001.

  2. Bighelli I, Ostuzzi G, Girlanda F, Cipriani A, Becker T, Koesters M, et al. Implementation of treatment guidelines for specialist mental health care. Cochrane Database Syst Rev. 2016;12:CD009780.

  3. Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet. 1993;342(8883):1317–22.

  4. Woolf SH, Grol R, Hutchinson A, Eccles M, Grimshaw J. Potential benefits, limitations, and harms of clinical guidelines. BMJ. 1999;318:527–30.

  5. Lobach DF, Hammond WE. Computerized decision support based on a clinical practice guideline improves compliance with care standards. Am J Med. 1997;102(1):89–98.

  6. Flodgren G, Hall AM, Goulding L, Eccles MP, Grimshaw JM, Leng GC, et al. Tools developed and disseminated by guideline producers to promote the uptake of their guidelines. Cochrane Database Syst Rev. 2016;8:CD010669.

  7. Katz DA. Barriers between guidelines and improved patient care: an analysis of AHCPR’s Unstable Angina Clinical Practice Guideline. Health Serv Res. 1999;34(1 Pt 2):377–89.

  8. Tierney WM, Overhage JM, Takesue BY, Harris LE, Murray MD, Vargo DL, et al. Computerizing guidelines to improve care and patient outcomes: the example of heart failure. J Am Med Inform Assoc. 1995;2(5):316–22.

  9. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii–v, 1–72.

  10. Bloom BS. Effects of continuing medical education on improving physician clinical care and patient health: a review of systematic reviews. Int J Technol Assess Health Care. 2005;21(3):380–5.

  11. Forsetlund L, Bjørndal A, Rashidian A, Jamtvedt G, O'Brien MA, Wolf FM, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009;2:CD003030.

  12. Reeves S, Fletcher S, McLoughlin C, Yim A, Patel KD. Interprofessional online learning for primary healthcare: findings from a scoping review. BMJ Open. 2017;7(8):e016872.

  13. Dasgupta N, Yadav AKS, Dasgupta S. Information-seeking behavior of medical professionals in the digital age in Kolkata, India. J Electron Resour Med Libr. 2017;14(1):1–16.

  14. Al-Shorbaji N, Atun R, Car J, Majeed A, Wheeler E. eLearning for undergraduate health professional education: a systematic review informing a radical transformation of health workforce development. Geneva: World Health Organization; 2015.

  15. Glicksman JT, Brandt MG, Moukarbel RV, Rotenberg B, Fung K. Computer-assisted teaching of epistaxis management: a randomized controlled trial. Laryngoscope. 2009;119(3):466–72.

  16. Schroter S, Jenkins RD, Playle RA, Walsh KM, Probert C, Kellner T, Arnhofer G, Owens DR. Evaluation of an online interactive Diabetes Needs Assessment Tool (DNAT) versus online self-directed learning: a randomised controlled trial. BMC Med Educ. 2011;11:35.

  17. Woo MK, Ng KH. A model for online interactive remote education for medical physics using the Internet. J Med Internet Res. 2003;5(1):e3.

  18. Akl EA, Sackett KM, Erdley WS, Mustafa RA, Fiander M, Gabriel C, Schunemann H. Educational games for health professionals. Cochrane Database Syst Rev. 2013;1:CD006411.

  19. Flodgren G, Conterno LO, Mayhew A, Omar O, Pereira CR, Shepperd S. Interventions to improve professional adherence to guidelines for prevention of device-related infections. Cochrane Database Syst Rev. 2013;3:CD006559.

  20. Giguère A, Légaré F, Grimshaw J, Turcotte S, Fiander M, Grudniewicz A, et al. Printed educational materials: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;10:CD004398.

  21. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.

  22. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009;3:CD001096.

  23. Higgins JPT, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savović J, Schulz KF, Weeks L, Sterne JAC. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928. https://doi.org/10.1136/bmj.d5928.

  24. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

  25. Gentry S, L'Estrade Ehrstrom B, Gauthier A, Alvarez J, Wortley D, van Rijswijk J, Car J, Lilienthal A, Tudor Car L, Nikolaou CK, et al. Serious gaming and gamification interventions for health professional education. Cochrane Database Syst Rev. 2016;6:CD012209.

  26. Payne TH. Computer decision support systems. Chest. 2000;118(2 Suppl):47S–52S.

  27. Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, Erwin PJ, Hamstra SJ. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306(9):978–88.

  28. Cochrane Effective Practice and Organisation of Care (EPOC). Reporting the effects of an intervention in EPOC reviews. EPOC resources for review authors; 2018.

  29. Guyatt GH, Oxman AD, Schünemann HJ, Tugwell P, Knottnerus A. GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology. J Clin Epidemiol. 2011;64(4):380–2.

  30. Attin M, Winslow K, Smith T. Animation shows promise in initiating timely cardiopulmonary resuscitation: results of a pilot study. Comput Inform Nurs. 2014;32(4):166–3.

  31. Butzlaff M, Vollmar HC, Floer B, Koneczny N, Isfort J, Lange S. Learning with computerized guidelines in general practice? A randomized controlled trial. Fam Pract. 2004;21(2):183–8.

  32. Kerfoot BP, Lawler EV, Sokolovskaya G, Gagnon D, Conlin PR. Durable improvements in prostate cancer screening from online spaced education: a randomized controlled trial. Am J Prev Med. 2010;39(5):472–8.

  33. Stewart M, Marshall JN, Ostbye T, Feightner JW, Brown JB, Harris S, Galajda J. Effectiveness of case-based on-line learning of evidence-based practice guidelines. Fam Med. 2005;37(2):131–8.

  34. Bell DS, Fonarow GC, Hays RD, Mangione CM. Self-study from web-based and printed guideline materials: a randomized, controlled trial among resident physicians. Ann Intern Med. 2000;132(12):938–46.

  35. Fordis M, King JE, Ballantyne CM, Jones PH, Schneider KH, Spann SJ, Greenberg SB, Greisinger AJ. Comparison of the instructional efficacy of Internet-based CME with live interactive CME workshops: a randomized controlled trial. JAMA. 2005;294(9):1043–51.

  36. Hemmati N, Omrani S, Hemmati N. A comparison of internet-based learning and traditional classroom lecture to learn CPR for continuing medical education. TOJDE. 2013;14(1):256–65.

  37. Jousimaa J, Makela M, Kunnamo I, MacLennan G, Grimshaw JM. Primary care guidelines on consultation practices: the effectiveness of computerized versus paper-based versions. A cluster randomized controlled trial among newly qualified primary care physicians. Int J Technol Assess Health Care. 2002;18(3):586–96.

  38. Nurse R. Computer-assisted versus traditional classroom instruction to promote change in the nursing management of the second stage of labor. Diss Abstr Int. 2010;71(3-B):1625.

  39. Schwid HA, Rooke GA, Ross BK, Sivarajan M. Use of a computerized advanced cardiac life support simulator improves retention of advanced cardiac life support guidelines better than a textbook review. Crit Care Med. 1999;27(4):821–4.

  40. Shenoy S. Cluster randomized controlled trial to evaluate the effectiveness of a multifaceted active strategy to implement low back pain practice guidelines: effect on competence, process of care and patient outcomes in physical therapy [dissertation]. University of Pittsburgh; 2013.

  41. Stephan F, Groetschel H, Buscher AK, Serdar D, Groes KA, Buscher R. Teaching paediatric basic life support in medical schools using peer teaching or video demonstration: a prospective randomised trial. J Paediatr Child Health. 2018; (no pagination).

  42. Bonevski B, Sanson-Fisher RW, Campbell E, Carruthers A, Reid AL, Ireland M. Randomized controlled trial of a computer strategy to increase general practitioner preventive care. Prev Med. 1999;29(6 Pt 1):478–86.

  43. Billue KL, Safford MM, Salanitro AH, Houston TK, Curry W, Kim Y, Allison JJ, Estrada CA. Medication intensification in diabetes in rural primary care: a cluster-randomised effectiveness trial. BMJ Open. 2012;2:e000959. https://doi.org/10.1136/bmjopen-2012-000959.

  44. Crenshaw K, Curry W, Salanitro AH, Safford MM, Houston TK, Allison JJ, Estrada CA. Is physician engagement with web-based CME associated with patients’ baseline hemoglobin A1c levels? The rural diabetes online care study. Acad Med. 2010;85(9):1511–7.

  45. Estrada CA, Safford MM, Salanitro AH, Houston TK, Curry W, Williams JH, et al. A web-based diabetes intervention for physician: a cluster-randomized effectiveness trial. Int J Qual Health Care. 2011;23(6):682–9.

  46. Kerfoot BP, Turchin A, Breydo E, Gagnon D, Conlin PR. An online spaced-education game among clinicians improves their patients' time to blood pressure control: a randomized controlled trial. Circ Cardiovasc Qual Outcomes. 2014;7(3):468–74.

  47. Leszczynski P, Charuta A, Laziuk B, Galazkowski R, Wejnarski A, Roszak M, Kolodziejczak B. Multimedia and interactivity in distance learning of resuscitation guidelines: a randomised controlled trial. Interact Learn Envir. 2018;26(2):151–62.

  48. Boyd CM, Kent DM. Evidence-based medicine and the hard problem of multimorbidity. J Gen Intern Med. 2014;29(4):552–3.

  49. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: a meta-analysis. JAMA. 2008;300(10):1181–96.


Acknowledgements

We thank Dr. Ram Bajpai for his statistical advice. We gratefully acknowledge funding support from the Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Singapore.

Funding

Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Singapore Start-Up Grant.

Author information

Authors and Affiliations

Authors

Contributions

LTC conceived the idea. AS and KL screened the articles and extracted the data. LTC and AS wrote the manuscript. BMK, KL, NLB and AM revised the manuscript critically. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lorainne Tudor Car.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Supplementary information including MEDLINE (Ovid) search strategy, Summary of findings tables and Forest plot of all behavioural change outcomes. (DOCX 155 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Tudor Car, L., Soong, A., Kyaw, B.M. et al. Health professions digital education on clinical practice guidelines: a systematic review by Digital Health Education collaboration. BMC Med 17, 139 (2019). https://doi.org/10.1186/s12916-019-1370-1
