
Table 2 Examples of applications of research impact assessment frameworks

From: Research impact: a narrative review

Columns (each entry below follows this order): Author/year (country) · Approach taken · Main findings · Comment

Payback Framework

Kwan et al., 2007 [67] (Hong Kong)

Surveyed 205 projects funded by the Health and Health Services Research fund; used main Payback categories and framework processes

Between a third and a half of principal investigators claimed impact on policy, practice and health service benefit; liaison with potential users and participation in policy committees were significantly associated with achieving wider impacts

Multivariate analysis of data enabled identification of factors associated with impact; however, study relied solely on self-reported data from researchers

Hanney et al., 2007 [7] (UK)

16 case studies randomly selected from wider survey of all projects funded by the NHS Health Technology Assessment (HTA) programme 1993–2003; survey data supplemented by documentary and bibliometric analysis and researcher interviews

Survey showed considerable impact in knowledge production (publications), changes in policy (73 % of projects) and behaviour (42 %); case studies showed diversity in levels and forms of impacts and ways in which they arose; studies commissioned for policy customers showed highest policy impact

All case studies were written up around stages of Payback, which facilitated cross-case analysis; affirmed the value of agenda setting to meet needs of healthcare system

Scott et al., 2011 [68] (USA) (methods) and Madrillon Group, 2011 [69] (findings)

Assessed impact of National Institutes of Health’s (NIH) Mind Body Interactions and Health programme; for centres and projects: documentary review, bibliometric and database analysis, interviews; impact of centres scored using Payback scales

Findings covered programme as a whole, centres, and research projects; study demonstrated that centres and projects had produced clear and positive impacts across all five Payback categories; for projects, 34 % claimed impact on policies, 48 % led to improved health

Payback was adaptable to meet needs of specific evaluation, covering different levels; assessment occurred too early to capture many of the ‘latent’ outcomes

Hanney et al., 2013 [70] (UK)

Assessed impact of Asthma UK’s portfolio of funding including projects, fellowships, professorial chairs and a new collaborative centre; surveys to 163 researchers, interviews, documentary analysis, 14 purposively selected case studies

Findings highlighted academic publications, and considerable leverage of follow-on funding; each of the wider impacts (informing guidelines, product development, improved health) achieved by only a small number of projects or fellowships – but some significant examples, especially from chairs

The charity used the findings to inform their research strategy, notably in relation to centres; many impacts were felt to be at an early stage

Donovan et al., 2014 [71] (Australia)

Assessed impact of research funded by National Breast Cancer Foundation; survey of 242 researchers, document analysis plus 16 purposively selected case studies; considered basic and applied research and infrastructure; cross-case analysis

Impacts included academic publications, research training, research capacity building, leveraged additional funding, changed policy (10 %, though 29 % expected to do so), new product development (11 %), changed clinical practice (14 %)

The charity considered that findings would help to inform their research strategy; many projects recently completed, hence emphasis on expected impacts

Wooding et al., 2014 [72] (Australia, Canada, UK)

29 case studies randomly selected from cardiovascular/stroke research funders, scored using Payback categories; compared impact scores with features of research processes

Wide range of impacts; some projects scored very high, others very low; basic research had higher academic impacts, clinical had more impact beyond academia; engagement with practitioners/patients linked to academic and wider impacts

Payback enabled collection of data about a wide range of impacts plus processes/features of each project; this facilitated innovative analysis of factors associated with impact

Research Impact Framework

Kuruvilla et al., 2007 [32] (UK)

Pilot study, 11 projects; used semi-structured interview and document analysis, leading to one-page ‘researcher narrative’ that was sent to the researcher for validation

Interviews with researchers allowed them to articulate and make sense of multiple impact channels and activities; the structured researcher narratives, which were objectively verifiable, facilitated comparison across projects

Applied a wider range of impact categories than the Payback Framework; the approach was adaptable and acceptable to researchers; however, it was only a small pilot conducted within the researchers’ own group

Canadian Academy of Health Sciences (CAHS) Framework

Montague and Valentim, 2010 [73] (Canada)

Applied the CAHS Framework to assess the impact of a large randomised trial of a new treatment for breast cancer; divided the impacts into proximate (e.g. changes in awareness) and more long-term (including changes in breast cancer mortality)

Numerous impacts were documented at different levels of the CAHS Framework; findings suggested a direct link between publication of the trial, change in clinical practice and subsequent reduction in morbidity and mortality

Published as an early worked example of how CAHS can inform the systematic documentation of impacts

Adam et al., 2012 [74] (Catalonia)

Applied the CAHS Framework to assess the impact of clinical and health services research funded by the main Catalan agency; included bibliometric analysis, surveys to 99 researchers with 70 responses, interviews with researchers and decision-makers, an in-depth case study of translation pathways, and a focus on intended impacts

In the CAHS category of informing decision-making (by policymakers, managers, professionals, patients, etc.), 40 of the 70 respondents claimed that decision-making changes were induced by research results: 29 reported changed clinical practice and 16 reported organisational/policy changes; interaction with healthcare and policy decision-makers during projects was crucial

The study provided both knowledge to inform the funding agency’s subsequent actions and a basis on which to advocate for targeted research to fill knowledge gaps; the team noted limitations in relation to attribution, time lags and the counterfactual

Graham et al., 2012 [75] (Canada)

Adapted and applied CAHS to assess impact of research funded by a not-for-profit research and innovation organisation in Alberta, Canada

After a formal adaptation phase, CAHS proved flexible and robust both retrospectively (to map pre-existing data) and prospectively (to track new programmes); some new categories were added

Had a particular focus on developing data capture approaches for the many indicators identified; also a focus on how the research funding organisation could measure its own contribution to achieving health system impacts

Cohen et al., 2015 [76] (Australia)

Adapted categories from Payback and CAHS; mixed method sequential methodology; surveys and interviews of lead researchers (final sample of 50); data from surveys, interviews and documents collated into case studies which were scored by an expert panel using criteria from the UK Research Excellence Framework (REF)

19 of 50 cases had policy and practice impacts with an even distribution of high, medium and low impact scores across the (REF-based) criteria of corroboration, attribution, reach and importance; showed that real world impacts can occur from single intervention studies

Innovative approach by blending existing frameworks; limitations included not always being able to obtain documentary evidence to corroborate researcher accounts

Monetisation Models

Johnston et al., 2006 [34] (USA)

Collated data on 28 Phase III clinical trials funded by the National Institute of Neurological Disorders and Stroke up to 2000; compared monetised health gains achieved by use of new healthcare interventions (measured in QALYs and valued at GDP per head) to investment in research, using cost-utility analyses and actual usage

$335 m research investment generated 470,000 QALYs 10 years post funding; return on investment was 46 % per year

Used a bottom-up approach to quantify health gains through individual healthcare interventions; assumed that all changes in usage were prompted by NIH phase III trials; no explicit time-lag; highlights data difficulties in bottom-up approach, as required data were only available for eight trials

Access Economics, 2008 [39] (Australia)

Quantified returns from all Australian health R&D funding between 1992/3 and 2004/5. Monetised health gains estimated as predicted DALYs averted in 2033–45 compared to 1993 (valued at willingness to pay for a statistical life-year)

Return on investment of 110 % from private and public R&D; assumed that 50 % of health gains were attributable to R&D, of which 3.04 % was Australian R&D

Top-down approach; results were highly uncertain and sensitive to the 50 % attribution assumption; forecasted future health gains

Buxton et al., 2008 [38] (UK)

Estimated returns from UK public and charitably funded cardiovascular research 1975–1988; data from cost-utility studies and individual intervention usage; health gains expressed as monetised QALYs (valued at healthcare service opportunity cost), net of delivery costs, for the years 1986–2005

Internal rate of return of 9 % a year, plus a component added for non-health economic ‘spill-over’ effects of 30 %; assumed a 17 year lag between investment and health gains (based on guideline analysis – knowledge cycle time), and 17 % of health gains attributable to UK research

Bottom-up approach; judgement on which interventions to include was required; explicit investigation of time-lag

Deloitte Access Economics, 2011 [35] (Australia)

Applied same methods as Access Economics (2008); quantified returns from National Health and Medical Research Council funding 2000–2010, focusing on five burdensome disease areas; monetised health gains estimated as predicted DALYs averted in 2040–50 compared to 2000, valued at willingness to pay for a statistical life-year

Return on investment ranged from 509 % in cardiovascular disease to –30 % for muscular dystrophy research; assumed that 50 % of health gains were attributable to R&D, of which 3.14 % was Australian R&D and 35 % of that was NHMRC-funded; assumed a time lag of 40 years between investment and benefit

Top-down approach; added layer in attribution problem (because it was a programme rather than totality of research funding)
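The monetisation models above share a common core calculation: monetise the health gains (QALYs or DALYs averted), attribute a fraction of those gains to the research under assessment, and relate the result to the research investment over an assumed time horizon. A minimal sketch of that logic follows; the function name and all input figures are illustrative placeholders chosen for the example, not the published estimates from any of the studies above:

```python
def annualised_return(qalys, value_per_qaly, attribution_fraction,
                      investment, years):
    """Annualised return on research investment, as used (in varying
    forms) by the monetisation models: monetise health gains, attribute
    a fraction to the research being assessed, then annualise over the
    assumed lag between investment and benefit."""
    monetised_benefit = qalys * value_per_qaly * attribution_fraction
    # Constant yearly rate that grows the investment to the monetised
    # benefit over `years` (a simplification; the studies differ in how
    # they discount and spread costs and benefits over time).
    return (monetised_benefit / investment) ** (1 / years) - 1

# Hypothetical inputs for illustration only:
r = annualised_return(qalys=470_000, value_per_qaly=40_000,
                      attribution_fraction=0.17,
                      investment=335e6, years=10)
print(f"{r:.1%}")
```

The sensitivity noted in the Comment column is easy to see here: halving the attribution fraction changes the annualised rate materially, which is why the 50 % and 3 % attribution assumptions dominate the uncertainty in the top-down estimates.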

Societal Impact Assessment and Related Approaches

Spaapen et al., 2007 [46] (Netherlands)

Mainly a methodological report on the Sci-Quest Framework with brief case examples including one in pharmaceutical sciences; proposed mixed-method case studies using qualitative methods, a quantitative instrument called contextual response analysis and quantitative assessment of financial interactions (grants, spin-outs, etc.). Produced a bespoke Research Embedment and Performance Profile (REPP) for each project

Productive interactions (direct, indirect, financial) must happen for impact to occur; there are three social domains: science/certified knowledge, industry/market and policy/societal; the REPP in the pharmaceutical sciences example developed 15 benchmarks (five for each domain), each scored on a 5-point scale

Illustrates ‘performative’ approach to impact (column 6 in Table 1); the ERiC (Evaluating Research in Context) programme focuses assessment on the context and is designed to overcome what were seen as the linear and deterministic assumptions of logic models, but is complex to apply

Molas-Gallart and Tang, 2011 [77] (UK)

Applied SIAMPI Framework to assess how social science research in a Welsh university supports local businesses; case study approach using two structured questionnaires – one for researchers and one for stakeholders

Authors found few, if any, examples of linear research-impact links but “a mesh of formal and informal collaborations in which academics are providing support for the development of specific business models in emerging areas, many of which have not yet yielded identifiable impacts”

Good example from outside the medical field of how SIAMPI Framework can map the processes of interaction between researchers and stakeholders

UK Research Excellence Framework (secondary analyses of REF impact case study database)

Hinrichs and Grant, 2015 [78] (UK)

Preliminary analysis of all 6679 non-redacted impact case studies in REF 2014, based mainly but not exclusively on automated text mining

Text mining identified 60 different kinds of impact and 3709 ‘pathways to impact’ through which these had (according to the authors) been achieved; researchers’ efforts to monetise health gains (e.g. as QALYs) appeared crude and speculative, though in some cases the evaluation team were able (with additional efforts) to produce monetised estimates of return on investment

Authors commented: “the information presented in the [REF impact] case studies was neither consistent nor standardised.” There is potential to improve data collection and reporting process for future exercises

Greenhalgh and Fahy, 2015 [79] (UK)

Manual content analysis of all 162 impact case studies submitted to a single sub-panel of the REF, with detailed interpretive analysis of four examples of good practice

REF impact case study format appeared broadly fit for purpose but most case studies described ‘surrogate’ and readily verifiable impacts, e.g. changing a guideline; models of good practice were characterised by proactive links with research users

Sample was drawn from a single sub-panel (public health/health services research), so findings may not be generalisable to other branches of medicine

Realist Evaluation

Rycroft-Malone et al., 2015 [56] (UK)

In the national evaluation of first-wave Collaborations for Leadership in Applied Health Research and Care (CLAHRCs), qualitative methods (chiefly, a series of stakeholder interviews undertaken as the studies unfolded) were used to tease out actors’ theories of change and explore how context shaped and constrained their efforts to both generate and apply research knowledge

Impact in the applied setting of CLAHRCs requires commitment to the principle of collaborative knowledge production, facilitative leadership and acknowledgement by all parties that knowledge comes in different forms; impacts are contingent and appear to depend heavily on how different partners view the co-production task

Illustrates realist model of research impact (column 4 in Table 1); the new framework developed for this high-profile national evaluation (Fig. 3) has yet to be applied in a new context

Participatory Research Impact Model

Cacari-Stone et al., 2014 [60] (USA)

In-depth case study of policy-oriented participatory action research in a deprived US industrial town to reduce environmental pollution; mixed methods including individual interviews, focus groups, policymaker phone interviews, archival media and document review, and participant observation

Policy change occurred and was attributed to strong, trusting pre-existing community-campus relationships; dedicated funding for the participatory activity; respect for ‘street science’ as well as academic research; creative and effective use of these data in civic engagement activities; diverse and effective networking with inter-sectoral partners including advocacy organisations

Illustrates ‘critical’ model of research impact (column 5 in Table 1)