
Target Product Profiles for medical tests: a systematic review of current methods

Abstract

Background

A Target Product Profile (TPP) outlines the necessary characteristics of an innovative product to address an unmet clinical need. TPPs could be used to better guide manufacturers in the development of ‘fit for purpose’ tests, thus increasing the likelihood that novel tests will progress from bench to bedside. However, there is currently no guidance on how to produce a TPP specifically for medical tests.

Methods

A systematic review was conducted to summarise the methods currently used to develop TPPs for medical tests, the sources used to inform these recommendations and the test characteristics for which targets are made. Database and website searches were conducted in November 2018. TPPs written in English for any medical test were included. Based on an existing framework, test characteristics were clustered into commonly recognised themes.

Results

Forty-four TPPs were identified, all of which focused on diagnostic tests for infectious diseases. Three core decision-making phases for developing TPPs were identified: scoping, drafting and consensus-building. Consultations with experts and the literature mostly informed the scoping and drafting of TPPs. All TPPs provided information on unmet clinical need and desirable analytical performance, and the majority specified clinical validity characteristics. Few TPPs described specifications for clinical utility, and none included cost-effectiveness.

Conclusions

We have identified a commonly used framework that could be beneficial for anyone interested in drafting a TPP for a medical test. Currently, however, key outcomes such as clinical utility and cost-effectiveness are largely overlooked within TPPs, and we foresee this as an area for further improvement.


Background

Despite significant advances and innovation in the field of disease detection, the majority of the resulting technologies fail to be translated into tests that are used in clinical practice [1, 2]. For instance, it is estimated that less than 1% of novel cancer biomarkers actually reach clinical practice [3]. One possible explanation is that the development of new tests is often driven by laboratory discoveries rather than by clinical needs [4]. Test manufacturers have to fulfil extensive evidence requirements demonstrating the fitness of their test for clinical practice [5]. A poor understanding of unmet clinical needs and of the clinical pathway within which a test will sit will often mean that the clinical and economic benefits cannot be convincingly demonstrated, and hence the test may fail to be adopted into clinical practice [1].

Under the ‘Quality by design’ framework, a new product is designed with the aim of meeting pre-identified quality objectives [6]. A Target Product Profile (TPP), also known as a Quality Target Product Profile (QTPP), is a strategic document which summarises the necessary characteristics of an innovative product to address an unmet clinical need [7]. TPPs exemplify the concept of ‘beginning with the goal in mind’, establishing key features and performance specifications in advance to ensure that the new product is developed to meet specific health-related goals [7, 8]. TPPs should also be seen as ‘living’ documents that can be refined and updated as additional relevant information becomes available [7, 9].

TPPs could therefore be particularly useful when designing ‘fit for purpose’ medical tests [10]; they could be used during the development and manufacturing phase to ensure that a new test meets pre-established operational and performance requirements, in line with unmet clinical need [7]. Generating the required evidence for a new test can take many years and significant investment [11]. TPPs therefore have great potential to be used as guiding documents for test developers to avoid late-stage development failures and reduce research waste.

Although we are aware of some examples where TPPs have been developed for medical tests, to our knowledge, there is no formal guidance as to best practice methods. In the USA, guidance for developing TPPs is available for new pharmaceutical drugs [7]. This guidance, issued by the US Food and Drug Administration (FDA), provides an overview of the purpose and attributes of TPPs, and which requirements for a new drug should be included [7]. In this context, TPPs are used as voluntary briefing documents to stimulate discussion between the manufacturer and the FDA throughout the drug development process [7]. The TPP itself outlines specific criteria that a new drug should meet [7]. However, medical tests differ from pharmaceuticals, both in terms of their characteristics and in the indirect way in which they impact on patient health [5], and therefore, this guidance is not directly transferable to the context of medical tests.

Here we report a systematic review of the methods currently used to develop TPPs for medical tests, allowing us to (1) describe a commonly adopted methodology framework, (2) outline the test characteristics for which targets are often set and (3) identify areas requiring further methodological development.

Methods

The protocol for this review was registered on the PROSPERO database (CRD42018115133) [12].

Search strategy

Details of the full search strategy can be found in Additional file 1. The following electronic databases were searched: MEDLINE, EMBASE, CAB Abstracts Online, CINAHL, Global Health, Scopus and Web of Science. The database search was performed in November 2018 and encompassed a combination of key terms such as ‘TPP’, ‘quality by design’, ‘QTPP’ and ‘test’.

The grey literature and websites were also searched using structured methods proposed by Godin et al. [13]. A customised Google search was conducted to identify relevant websites, each of which was then hand-searched: the website’s internal search engine was used, supplemented with manual browsing, to identify potentially relevant references. Duplicates across searches were removed. This search was also conducted in November 2018. For more details, please see Additional file 1: Table 1.3, Table 1.4 and 1.5.

All searches were conducted by PC and peer reviewed by an information specialist.

Screening

TPPs written in English for any type of medical test were included (e.g. imaging, in vitro and in vivo medical tests). There were no restrictions in terms of publication date. All publication formats were included except for newsletters and PowerPoint presentations, as these did not report the methods in sufficient detail to review them.

Endnote was used to manage references. Titles and abstracts of the retrieved references were fully screened by PC based on the inclusion criteria, of which a random 10% sample was independently screened by BS. TPPs that met the inclusion criteria at this stage or those for which it was not possible to determine eligibility based on title and abstract were then screened based on the full text. For those references where full text was not available, we contacted authors. All full texts of the eligible TPPs at this stage were screened independently by PC and BS based on the inclusion criteria. The inter-reviewer agreement rate was calculated with Cohen’s κ statistic. For more details, please see Additional file 3: Table 3.1 and 3.2. Where any disagreements occurred, a consensus-based discussion with the other authors (MM, RW) determined whether the reference was eligible or not.
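As a purely illustrative sketch (not the review’s own code), Cohen’s κ for two screeners’ include/exclude decisions compares the observed agreement with the agreement expected by chance from each rater’s marginal frequencies; the decision vectors below are hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement under independent marginal distributions
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions (1 = include, 0 = exclude)
a = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
b = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(a, b), 2))  # → 0.8
```

Values near 1 indicate near-perfect agreement beyond chance, which is why κ, rather than raw percentage agreement, is the conventional statistic for dual screening.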

Data extraction and analysis

A data extraction spreadsheet was developed including basic descriptive information relating to the TPP (e.g. publication format, disease of interest, targeted clinical setting, funder, time horizon). Further to this, we extracted data on the methodology used to develop the TPP, including details of the input sources (e.g. expert consultation, review of the literature), the reported decision-making process and the stakeholders involved at each stage of the TPP development. As a common decision-making framework was apparent across the TPPs, we summarised the input sources and stakeholders involved for each phase of this process. Where stakeholders and input sources for the drafting phase were not explicitly reported, we assessed the sources included in each TPP table and the longer descriptions of each test characteristic.

Each TPP was also assessed in terms of the transparency of reporting the adopted input sources, decision-making process and which stakeholder groups were consulted.

All data extraction was conducted independently by PC and AAS, and in case of disagreement, BS, MM and RW resolved any differences.

Test characteristic clustering

In addition to the information above, the test characteristics reported within each TPP were extracted and de-duplicated. Based on an existing evaluation framework for tests (the ACCE framework [14]), two reviewers (BS and MM) independently categorised each of the test characteristics under the following outcomes: (1) test definition, (2) analytical performance, (3) clinical validity, (4) clinical utility, (5) regulatory legitimacy and (6) economic acceptability. The category ‘test definition’ specifies the disorder of interest, target population and purpose of the test, and thus, it overlaps with the concept of ‘unmet clinical need’. Therefore, we renamed the outcome ‘test definition’ as ‘unmet clinical need’ to better represent the type of information TPPs provide.

Analytical performance describes the ability of a test to correctly detect and measure a particular analyte (e.g. precision, trueness, analytical sensitivity and specificity, limits of detection) [15, 16]. Clinical validity is defined as ‘the ability of a device to yield results that are correlated with a particular clinical condition or a physiological or pathological process or state’ [15], whilst clinical utility represents the ability of a test to affect relevant health-related outcomes for patients (e.g. improvement in quality of life, longer lifespan) [17].

Some characteristics did not fall within any of the pre-defined categories. Three additional categories were therefore identified to accommodate these additional characteristics: (7) human factors, (8) environmental impact and (9) infrastructural requirements. Human factors are concerned with the interaction between users and devices [18]. Environmental impact encompasses a change to the environment following an interaction with the product [19]. Infrastructural requirements entail ‘the stock of the basic facilities and equipment needed for realizing a product or providing a service’ [20].

Results

Literature search

Full details of the literature search results are reported in Fig. 1. Forty-four TPPs were deemed eligible for inclusion in the systematic review [8,9,10, 21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61]. Inter-reviewer agreement was high at title and abstract (κ = 0.96) and full-text screening (κ = 0.98). For more details, please see Additional file 3: Table 3.1 and 3.2.

Fig. 1

PRISMA flow diagram illustrating literature search results

Features of included TPPs

The 44 included TPPs consisted of 23 reports, 16 journal articles, 4 published TPP tables (a TPP without any background information or context, e.g. [34]) and one conference poster. All TPPs provided guidance on developing medical tests to detect infectious diseases. Fourteen of the 44 TPPs focused on neglected tropical diseases (32%) (e.g. soil-transmitted helminths, Chagas disease, human African trypanosomiasis, schistosomiasis, trachoma, taeniasis/cysticercosis), and a further 14 on tests for vector-borne infections (32%) (e.g. Zika virus, dengue fever, hepatitis C, malaria, E. coli). Other types of infection included sexually transmitted infections (16%, n = 7), respiratory infections (14%, n = 6) (e.g. lower respiratory tract infection, tuberculosis, pneumonia), Ebola virus [57], meningitis [61] and severe febrile illness [8].

Seven of the 44 TPPs were funded by the Bill and Melinda Gates Foundation (16%), and three TPPs received funding from WHO [8, 48, 49]. The healthcare setting of interest was mostly low- and middle-income countries. The majority of TPPs did not disclose funding sources (64%, n = 28).

In some TPPs, a time horizon was chosen to represent the timeframe within which achieving the specifications described in the TPP was considered feasible [22, 23, 60]. In one TPP, this was based on a landscape analysis [22]. In another, expected advancements in technologies and knowledge related to a certain field seemed to justify the time horizon considered for the TPP [27]. Of the 44 TPPs identified, 7 reported the time horizon during which the information included in the TPP will be relevant for manufacturers (16%). Of these, 6 TPPs stated a time horizon of 5 years [22, 23, 27, 28, 51, 60], whilst the remaining TPP considered a time horizon of 10 years [29].

Decision-making steps

A common decision-making framework, consisting of three distinct phases, was apparent across the included TPPs: scoping, drafting and consensus-building. Figure 2 presents the most commonly adopted activities, input sources and engaged stakeholder groups.

Fig. 2

Typical activities involved, input sources and stakeholders invited for each decision-making phase

Table 1 provides a summary of the stakeholders contributing to each phase. Some of the included TPPs are not included in Table 1 as they did not report any information related to input sources or stakeholder groups [33,34,35,36, 52, 53, 55, 61]. A summary of the input sources reported to have been used at scoping and drafting phase can be found in Additional file 3: Table 3.3.

Table 1 Stakeholders contributing to each phase

We will therefore describe the aim of each phase and break down the methodology (activities, input sources and stakeholders) used within the included TPPs, where reported. For specific details on each included TPP, see Additional file 3: Table 3.4.

Scoping phase methodology

Half of the TPPs provided some information on the scoping phase (n = 22). The aim of this phase was to provide an overview of the disease area and the limitations associated with existing technologies. The clinical problems and unmet needs were defined, in addition to identification of which test characteristics to include in the TPP.

Some of the key activities undertaken during the scoping phase included reviewing published literature (n = 6) or available data (n = 1), and introductory meetings with stakeholders (n = 4).

Some authors reported (n = 4) [22, 26, 37, 50] that they had conducted a ‘landscape analysis’, providing information on the disease area of interest, available diagnostic technologies and related characteristics and limitations. These were usually based on interviews with stakeholders and reviews of the literature. Only Toskin et al. [50] conducted a systematic literature review, reporting the databases searched and key words used.

Consultation with experts (68%, n = 15) and the literature (36%, n = 8) were the most commonly sought sources of information during the scoping phase (see Additional file 3: Table 3.3 for a full breakdown). Only one type of source was considered in 15 TPPs (of which 11 relied on expert consultation), whilst 7 TPPs considered more than one source.

Denkinger et al. [22] mapped the diagnostic ecosystem of interest and then performed a survey to gauge stakeholders’ preferences. Reipold et al. [48] identified the main characteristic categories (e.g. scope, performance, operational characteristics and pricing) to be included in the TPP.

Five TPPs involved a priority-setting exercise which entailed ranking each identified health need [23, 28, 32, 48, 60].

During the scoping phase, a variety of stakeholders were engaged (Table 1).

Drafting phase methodology

The first draft of each TPP was usually prepared by either an established working group comprising experts from different organisations [9, 26, 29, 32, 40, 50, 58] or authors of the published TPP. There were two cases where the TPP was drafted by a completely different organisation [51, 57]. The TPP was often revised several times, and in some cases, it was then shortened to ensure it could be easily communicated to different stakeholders [22, 23, 28, 29, 60].

Of the 44 included TPPs, 33 of them reported which input sources were considered during the drafting phase (75%) (Additional file 3: Table 3.3). Common input sources for populating test characteristics were expert consultations (n = 22) and reviews of the literature (n = 22). Some also referred to mathematical models (n = 9), available data (n = 7), guidelines (n = 6) and ‘field observations’ (n = 5). Only one TPP was informed by pooled data from a systematic review [50].

Twenty-six of the 44 TPPs considered more than one type of source at the drafting phase, as opposed to 7 TPPs which adopted only one (Additional file 3: Table 3.3). Among these 7, meeting inputs were the most common single source (n = 3, 43%).

The stakeholders engaged in the drafting phase are reported in Table 1.

Consensus-building phase methodology

Initial agreement with the TPP was often obtained using a survey of the stakeholders (n = 14). The survey either included general questions regarding stakeholders’ views on the TPP (n = 4) [22, 25, 27, 51] or adopted a Delphi-like approach to provide an initial consensus on various aspects of the TPP (n = 10). A consensus meeting with stakeholders and experts was typically held (n = 11) and a revised TPP generally agreed upon. In some cases, an additional survey was sent to stakeholders on trade-offs between test attributes [48], or on rating key parameters [51, 53]. For 2 TPPs, the final TPP draft was presented to a broader stakeholder base to validate it.

The number of participants invited to the consensus-building meetings varied (< 20 participants: n = 5; between 20 and 50 participants: n = 7). One meeting included 100 participants [27]. For a few of the TPPs, the authors also took part in the consensus meetings [29, 38, 58, 60].

Less than half of the included TPPs reported information on the activities and stakeholders invited to the consensus-building phase (n = 19). The stakeholders engaged in the consensus-building phase are reported in Table 1.

Transparency in reporting methods

We also assessed the transparency of the TPPs in terms of reporting their methodology (see Additional file 3: Table 3.5). The decision-making process behind the TPP was not reported in over a third of the included TPPs (n = 16). Further to this, many failed to report which information sources were used to populate the TPP (n = 11), and just under half did not report which stakeholders were involved in the development of the TPP (n = 20). Specifically, the names of the organisations to which stakeholders belonged were reported in only 11 TPPs, whilst 9 TPPs gave personal details of each stakeholder (20%) and only 4 explained why certain stakeholders were invited [26, 38, 46, 58]. Sixteen TPPs reported the source of funding (36%).

There were some TPPs where the methodology was very clearly reported [26, 28, 38, 60].

Test characteristics included in TPPs

After removing duplicates, 140 different test characteristics were reported across the included TPPs. Some features which did not represent test characteristics have been excluded, such as factors relating specifically to the disease in question rather than the test. For more information, please see Test Characteristics Overview Excel spreadsheet (Availability of data and materials). Figure 3 shows the test characteristics most frequently reported (a full list is available in Additional file 2: Table 2.1).

Fig. 3

Test characteristics frequently reported in all TPPs (n = 44) sorted by categories

Figure 4 depicts which characteristic categories were reported in the included TPPs. Details on unmet clinical need, analytical performance and clinical validity appeared to be consistently reported; however, regulatory requirements, environmental footprint and clinical utility were less frequently considered.

Fig. 4

Test characteristic categories in absolute number (n) of included TPPs

Discussion

We report a systematic review of the methods currently used to develop Target Product Profiles for medical tests. Although we searched for TPPs for any type of medical test, all of those identified focused on diagnostic tests for infection.

There was generally a lack of transparency and consistency in reporting the methods underlying TPPs. This makes it difficult to appraise the recommendations within the TPPs, to ascertain whether they are generalisable to other settings, and to reproduce the methods used.

Relevancy of TPPs for test manufacturers

The purpose of a TPP is to identify, upfront, the essential characteristics of a test for it to fulfil a pre-specified, unmet clinical need. This should, in turn, increase the likelihood that the test will be adopted into clinical practice and reimbursed [1]. A TPP should also account for contextual aspects that might affect the test’s real-world performance [10], defining infrastructural and technical constraints that impact on the implementation into clinical practice. This review shows that TPPs to date have primarily been developed for global health applications, as the main funding organisations are WHO, UNICEF and the Bill and Melinda Gates Foundation. The primary focus on infectious diseases may be explained by the remit of the global organisations who fund TPPs. WHO included HIV/AIDS, neglected tropical diseases, tuberculosis and malaria as priority diseases [62]. Further to this, WHO established the ‘R&D Blueprint’, which aims to promote R&D activities (tests, vaccines, medicines) during epidemics [63]. Once a high-priority pathogen has been identified, TPPs are usually commissioned to guide the development of new healthcare products addressing it [63].

However, the development of TPPs should not be limited to one specific disease area or clinical setting; the concept of ‘beginning with an end in mind’ embodied by TPPs could support both international and national health decision-makers. This activity should, in turn, stimulate innovation of new tests driven by clinical needs rather than solely by laboratory discoveries. It would also provide manufacturers with greater clarity around test requirements and confidence in the market for developing innovative tests.

Identified limitations in current TPP methodology

In reviewing current methodology for developing TPPs for medical tests, we have identified three key areas where current TPP methodology could be improved: (1) oversight of clinical utility, (2) a focus on price rather than cost-effectiveness and (3) subjectivity of information sources. Here we discuss each limitation and the implications.

Oversight of clinical utility

Very few of the TPPs reported desirable characteristics relating to the clinical utility of the test. This is not surprising given that the majority of research efforts have focused on generating evidence on the analytical performance and diagnostic accuracy of a new test [11]. A highly accurate test does not necessarily mean that the test will improve patient health, as factors relating to decision-making and the effectiveness of patient management strategies could fall short [64].

Assessing the clinical utility of a new test is extremely challenging. Measuring the impact of a test on patient health outcomes is difficult as tests tend to guide patient management decisions, rather than directly impacting on patient health outcomes [5]. Therefore, estimating the clinical utility of a test requires evidence of how the information from a test is incorporated into decision-making and the downstream effectiveness of those decisions [65]. In the case of a new test, this is particularly complicated given the uncertainty around the mechanisms by which the test will impact on patient outcomes [65].

Focus on price rather than cost-effectiveness

Although minimum and optimal test prices featured in many of the TPPs, none of these targets was driven by the trade-off between the overall cost implications of implementing the test and the associated patient benefits. Cost-effectiveness analysis provides a framework to compare the costs and benefits of an intervention against relevant comparators, including current practice. Specifically, cost-effectiveness analysis establishes whether the intervention being evaluated represents good value for money.

It is important to consider the cost of the new test in the context of the benefits that the test may provide. For example, a new test may be relatively expensive but may also improve patient health to the extent that the additional cost is justified. Conversely, a new test may be relatively cheap but offer no improvements in patient health, in which case even the marginal increase in cost is not justified.
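The trade-off above is conventionally summarised by an incremental cost-effectiveness ratio (ICER): extra cost divided by extra health benefit, compared against a willingness-to-pay threshold. The figures below are hypothetical and purely illustrative:

```python
def icer(cost_new, effect_new, cost_current, effect_current):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of health benefit (e.g. per QALY gained)."""
    return (cost_new - cost_current) / (effect_new - effect_current)

# Hypothetical figures: the new test strategy costs £500 more per
# patient and yields 0.05 additional QALYs versus current practice.
ratio = icer(cost_new=2000, effect_new=0.80,
             cost_current=1500, effect_current=0.75)
print(round(ratio))  # → 10000, i.e. £10,000 per QALY gained
```

Under a willingness-to-pay threshold of, say, £20,000 per QALY, this hypothetical test would be considered cost-effective despite its higher price, which is precisely the judgement a price target alone cannot capture.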

Conducting cost-effectiveness analysis at early R&D stages of new tests can therefore help manufacturers to avoid significant investments in tests that do not have the potential to be cost-effective [65].

These first two limitations are particularly relevant since decision-makers increasingly demand evidence that a new test improves patient health and is cost-effective rather than solely evidence of its analytical and clinical validity [5]. Specifically, many Health Technology Assessment bodies in Europe, Australia and North America consider clinical utility, cost and cost-effectiveness in relation to the target population, in addition to analytical performance and clinical validity when assessing new molecular diagnostic tests [66].

Subjectivity of input sources

Expert judgement and evidence identified in published literature were the main sources of information for defining desirable characteristics. Systematic reviews of the literature, in which database searches are reproducible and the quality of relevant studies is appraised, were not conducted to identify relevant evidence at the scoping and drafting phases. This is likely to introduce bias and subjectivity in terms of the evidence used to underpin test characteristic recommendations.

Although expert judgement is undoubtedly useful, relying solely upon this information source has some limitations, particularly for quantitative estimates. How humans make probability judgements is highly affected by heuristics and systematic biases (e.g. anchoring, availability, overconfidence and hindsight bias) [67]. Specifically, previous literature has found a poor understanding of test accuracy among healthcare professionals, as sensitivity and specificity are often misinterpreted and mistaken for predictive values [68].
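The confusion between sensitivity/specificity and predictive values can be made concrete with Bayes’ theorem; the sketch below (illustrative numbers, not drawn from the review) shows how a seemingly excellent test yields a modest positive predictive value at low prevalence:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' theorem."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# A test with 95% sensitivity and 95% specificity at 1% prevalence:
ppv, npv = predictive_values(0.95, 0.95, 0.01)
print(round(ppv, 3))  # → 0.161, far below the 95% a naive reading suggests
```

An expert who conflates sensitivity with the probability that a positive result indicates disease would therefore overestimate a test’s real-world value by a wide margin, which is one reason purely elicitation-based TPP targets warrant caution.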

Additionally, the quality of expert elicitation relies heavily on expert selection, as it is important to choose experts with good subject knowledge. Only 4 TPPs described how the selection process took place, and therefore the quality of expert judgements might be questioned. Furthermore, many TPPs reported the literature as an input source; however, fewer than half cited the references considered. This lack of transparency might undermine the quality and credibility of the sources on which TPPs are based.

Study limitations

Since this study is a systematic review of publicly available literature, a key study limitation is that we have inevitably missed any confidential or unpublished TPPs developed in-house by test manufacturers. Although the results of our online searches did not identify any companies stating that they have developed TPPs for medical tests, we would not expect to find such information on company websites. Anecdotally, however, we have not encountered any formal TPP development activity (by this we mean definition of desirable test characteristics) within the industry network of the National Institute for Health Research Leeds In Vitro Diagnostics Co-operative (NIHR Leeds MIC).

As there are no guidelines on how TPPs for medical tests should be developed, we did not formally assess risk of bias. We did however appraise the transparency with which the methodology underpinning each TPP had been reported. Unfortunately, it was not possible to fully evaluate the TPP developed by PATH [40] as the online appendices were not accessible. Additionally, due to poor methodological transparency, it was difficult to assess with certainty authorship of TPPs and whether authors of TPPs took part themselves in the consensus-building meetings.

Future research

Although there is evidence of a common development framework, this review highlights that there is considerable variability in the methods employed to draft TPPs and inconsistencies in which test characteristics are described. A key issue in reviewing the methods implemented was the lack of transparency in methodology reporting.

Guidance on best practice methods for developing TPPs for medical tests would be highly beneficial. Similarly to the US FDA guidance on TPPs for drugs, a guidance document could be developed for TPPs for medical tests summarising the purpose, attributes of TPPs and which test characteristics should be included.

However, to inform the development of such guidance, future research should focus firstly on how to systematically identify unmet clinical needs underpinning a certain disease area. Monaghan et al. [69] developed a valuable checklist for identifying biomarkers based on literature findings and consultations with experts. We believe that this checklist could be pertinent for the scoping phase underlying TPP development; however, this would need further validation in this specific context.

More research is also required to understand how to better incorporate the assessment of desirable clinical utility and cost-effectiveness of innovative tests into TPPs. One possible way forward could be to explore whether and how care pathway analysis and early economic modelling could be integrated into the development of TPPs. Care pathway analysis would provide clarity on the mechanisms by which a test could impact on downstream patient outcomes. Early economic modelling could be used to define desirable values for certain test characteristics (e.g. test price, diagnostic sensitivity and specificity) based on cost-effectiveness [67]. Therefore, integrating care pathway analysis and early economic modelling into TPP development might provide more evidence-based information to test developers.

Outside of the actual methodology for developing TPPs, it would be useful to better understand whether manufacturers develop tests strictly in line with TPPs, or whether there are any factors which make this infeasible or challenging. Additionally, we would be interested in which methods manufacturers usually adopt to develop TPPs to assess if there are any differences with the methodological framework we highlighted here. To this end, interviewing test manufacturers might provide interesting insights on the intrinsic value of TPPs for the industry.

Most importantly, to ensure that the development of TPPs becomes widespread practice, it would be valuable to explore how TPPs could be integrated into existing regulatory paths for innovation such as the European Union Regulation for In Vitro Diagnostics (Regulation 2017/746) and the US FDA Drug Development and Approval Process. It might then be possible to align the test characteristics featured in TPPs with evidence requirements relevant for market approval decisions of new medical tests. This, in turn, might increase the applicability of TPPs for the industry.

Conclusions

Based on this review, we summarised current methodological practice into a framework of value to those interested in developing TPPs for medical tests.

We also identified some key weaknesses, including concerns about the quality of the information sources underpinning TPPs and the failure to consider test characteristics relating to clinical utility and cost-effectiveness.

This review thus provides some recommendations for further methodological research on the development of TPPs for medical tests. This work will also help to inform the development of a formal guideline on how to draft TPPs for medical tests.

Availability of data and materials

The datasets generated and analysed during the current study are available in the University of Leeds repository https://0-doi-org.brum.beds.ac.uk/10.5518/781. These datasets comprise an Excel spreadsheet with the data extraction and an overview of the test characteristics included in the reviewed TPPs.

Abbreviations

ACCE:

Analytic validity, Clinical validity, Clinical utility, and Ethical, legal and social implications

AIDS:

Acquired immune deficiency syndrome

CEA:

Cost-effectiveness analysis

FDA:

Food and Drug Administration

FIND:

Foundation for Innovative New Diagnostics

HIV:

Human immunodeficiency virus

IDC:

International Diagnostics Center

NIHR:

National Institute for Health Research

PRISMA:

Preferred Reporting Items for Systematic Reviews and Meta-Analysis

QTPP:

Quality Target Product Profile

R&D:

Research and Development

TPP:

Target Product Profile

UNICEF:

United Nations International Children’s Emergency Fund

WHO:

World Health Organization

References

1. Wang P, Kricka L. Current and emerging trends in point-of-care technology and strategies for clinical validation and implementation. Clin Chem. 2018. https://0-doi-org.brum.beds.ac.uk/10.1373/clinchem.2018.287052.

2. Drucker E, Krapfenbauer K. Pitfalls and limitations in translation from biomarker discovery to clinical utility in predictive and personalised medicine. EPMA J. 2013;4(1):4–7.

3. Kern S. Why your new cancer biomarker may never work: recurrent patterns and remarkable diversity in biomarker failures. Cancer Res. 2012. https://0-doi-org.brum.beds.ac.uk/10.1158/0008-5472.CAN-12-3232.

4. Engel N, Wachter K, Pai M, Gallarda J, Boehme C, Celentano I, et al. Addressing the challenges of diagnostics demand and supply: insights from an online global health discussion platform. BMJ Glob Health. 2016. https://0-doi-org.brum.beds.ac.uk/10.1136/bmjgh-2016-000132.

5. Horvath A, Lord S, St John A, Sandberg S, Cobbaert C, Lorenz S, et al. From biomarkers to medical tests: the changing landscape of test evaluation. Clin Chim Acta. 2014. https://0-doi-org.brum.beds.ac.uk/10.1016/j.cca.2013.09.018.

6. US Food and Drug Administration. Guidance for industry: Q8(2) pharmaceutical development. 2009. http://academy.gmp-compliance.org/guidemgr/files/9041FNL%5b1%5d.PDF. Accessed 6 Nov 2018.

7. US Food and Drug Administration. Guidance for industry and review staff: Target Product Profile — a strategic development process tool. 2007. https://www.fda.gov/media/72566/download. Accessed 6 Nov 2018.

8. WHO, FIND, MSF. A multiplex multi-analyte diagnostic platform. 2017. https://www.who.int/medicines/TPP_intro_20171122_forDistribution.pdf. Accessed 8 Nov 2018.

9. PATH. Target Product Profile: HIV self-test version 4.1: a white paper on the evaluation of current HIV rapid tests and development of core specifications for next-generation HIV tests. 2014. https://path.azureedge.net/media/documents/TS_hiv_self_test_tpp.pdf. Accessed 8 Nov 2018.

10. Ebels K, Clerk C, Crudder C, McGray S, Magnuson K, Tietje K, et al. Incorporating user needs into product development for improved infection detection for malaria elimination programs. IEEE 2014 Global Humanitarian Technology Conference; 2014. https://0-doi-org.brum.beds.ac.uk/10.1109/GHTC.2014.6970338.

11. Verbakel J, Turner P, Thompson M, Plüddemann A, Price C, Shinkins B, et al. Common evidence gaps in point-of-care diagnostic test evaluation: a review of horizon scan reports. BMJ Open. 2017. https://doi.org/10.1136/bmjopen-2016-015760.

12. Cocco P, Ayaz-Shah A, Shinkins B, Messenger M, West R. Methods adopted to develop a Target Product Profile (TPP) in the field of medical tests: a systematic review. 2018. https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42018115133. Accessed 17 Jan 2019.

13. Godin K, Stapleton J, Kirkpatrick SI, Hanning RM, Leatherdale ST. Applying systematic review search methods to the grey literature: a case study examining guidelines for school-based breakfast programs in Canada. Syst Rev. 2015. https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-015-0125-0.

14. Burke W, Zimmern R. Moving beyond ACCE: an expanded framework for genetic test evaluation. 2007. http://www.phgfoundation.org/documents/369_1409657043.pdf. Accessed 18 Feb 2019.

15. EUR-Lex. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (Text with EEA relevance). 2017. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32017R0746&from=EN. Accessed 18 Feb 2019.

16. US Department of Health and Human Services, US Food and Drug Administration, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Bioanalytical method validation: guidance for industry. 2018. https://www.fda.gov/media/70858/download. Accessed 18 Feb 2019.

17. Bossuyt P, Reitsma J, Linnet K, Moons K. Beyond diagnostic accuracy: the clinical utility of diagnostic tests. Clin Chem. 2012. https://0-doi-org.brum.beds.ac.uk/10.1373/clinchem.2012.182576.

18. Association for the Advancement of Medical Instrumentation. Human factors engineering - design of medical devices. 2013. https://my.aami.org/aamiresources/previewfiles/he75_1311_preview.pdf. Accessed 18 Feb 2019.

19. International Organization for Standardization. Terms and definitions in ISO 14001:2015 - where did they originate from. 2015. https://committee.iso.org/files/live/sites/tc207sc1/files/Terms%20and%20definitions%20in%20ISO%2014001_2015%20-%20where%20did%20they%20originate%20from.pdf. Accessed 18 Feb 2019.

20. Abuhav I. ISO 13485:2016: a complete guide to quality management in the medical device industry. 2018.

21. Chua A, Prat I, Nuebling C, Wood D, Moussy F. Update on Zika diagnostic tests and WHO’s related activities. PLoS Negl Trop Dis. 2017. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pntd.0005269.

22. Denkinger C, Dolinger D, Schito M, Wells W, Cobelens F, Pai M, et al. Target product profile of a molecular drug-susceptibility test for use in microscopy centers. J Infect Dis. 2015. https://0-doi-org.brum.beds.ac.uk/10.1093/infdis/jiu682.

23. Denkinger C, Kik S, Cirillo D, Casenghi M, Shinnick T, Weyer K, et al. Defining the needs for next generation assays for tuberculosis. J Infect Dis. 2015;211:S29–38.

24. DIAMETER Project, PATH. Target Product Profile: point-of-care malaria infection detection test for rapid detection of low-density, subclinical malaria infections. 2014. https://path.azureedge.net/media/documents/Highly_Sensitive_HRP2_RDT_TPP_Nov15.pdf. Accessed 8 Nov 2018.

25. Ding X, Ade M, Baird J, Cheng Q, Cunningham J, Dhorda M, et al. Defining the next generation of Plasmodium vivax diagnostic tests for control and elimination: target product profiles. PLoS Negl Trop Dis. 2017. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pntd.0005516.

26. Dittrich S, Tadesse B, Moussy F, Chua A, Zorzet A, Tangden T, et al. Target Product Profile for a diagnostic assay to differentiate between bacterial and non-bacterial infections to guide antimicrobials use in resource-limited settings: an expert consensus. Am J Trop Med Hyg. 2017. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pone.0161721.

27. Donadeu M, Fahrion A, Olliaro P, Abela-Ridder B. Target product profiles for the diagnosis of Taenia solium taeniasis, neurocysticercosis and porcine cysticercosis. PLoS Negl Trop Dis. 2017. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pntd.0005875.

28. FIND, Forum for Collaborative HIV Research. High-priority target product profile for hepatitis C diagnosis in decentralized settings: report of a consensus meeting. 2015. https://www.finddx.org/wp-content/uploads/2019/03/HCV-TPP-Report_FIND-2015.pdf. Accessed 8 Nov 2018.

29. FIND. Target Product Profile for tests for recent HIV infection. 2017. https://www.finddx.org/wp-content/uploads/2019/03/HIV-Incidence-TPP-FIND-2017.pdf. Accessed 8 Nov 2018.

30. FIND. Target Product Profile: rapid test for diagnosis of malaria and screening for human African trypanosomiasis (HAT). 2017. https://www.finddx.org/wp-content/uploads/2019/03/TPP-5-HAT-malaria-combo-test_02FEB17.pdf. Accessed 8 Nov 2018.

31. FIND. Target Product Profile: screening test for human African trypanosomiasis (HAT). 2017. https://www.finddx.org/wp-content/uploads/2019/03/TPP-1-HAT-screening-test_02FEB17.pdf. Accessed 8 Nov 2018.

32. Gal M, Francis N, Hood K, Villacian J, Goossens H, Watkins A, et al. Matching diagnostics development to clinical need: Target Product Profile development for a point of care test for community-acquired lower respiratory tract infection. PLoS One. 2018. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pone.0200531.

33. International Diagnostics Centre. Target Product Profile: point-of-care HIV viral load test. 2013. https://idc-dx.org/resource/target-product-profile-tpp-point-of-care-hiv-viral-load-test-2013/. Accessed 8 Nov 2018.

34. International Diagnostics Centre. Target Product Profile (TPP) – point-of-care CD4 test. 2013. https://idc-dx.org/resource/target-product-profile-tpp-point-of-care-cd4-test-2013/. Accessed 8 Nov 2018.

35. International Diagnostics Centre. Target Product Profile (TPP) – point-of-care EID test. 2013. https://idc-dx.org/resource/target-product-profile-tpp-point-of-care-eid-test-2013/. Accessed 8 Nov 2018.

36. International Diagnostics Centre. Target Product Profile (TPP) – combined HIV/syphilis test. 2014. https://idc-dx.org/resource/target-product-profile-tpp-combined-hiv-syphilis-test-2014/. Accessed 8 Nov 2018.

37. Lim M, Brooker S, Belizario VJ, Gay-Andrieu F, Gilleard J, Levecke B, et al. Diagnostic tools for soil-transmitted helminths control and elimination programs: a pathway for diagnostic product development. PLoS Negl Trop Dis. 2018. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pntd.0006213.

38. Nsanzabana C, Ariey F, Beck H, Ding X, Kamau E, Krishna S, et al. Molecular assays for antimalarial drug resistance surveillance: a target product profile. PLoS One. 2018. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pone.0204347.

39. Pal S, Jasper L, Lawrence K, Walter M, Gilliland T, Dauner A, et al. Assessing the dengue diagnosis capability gap in the military health system. Mil Med. 2016. https://0-doi-org.brum.beds.ac.uk/10.7205/MILMED-D-15-00231.

40. PATH. Diagnostics for neglected tropical diseases: defining the best tools through target product profiles. 2015. https://path.azureedge.net/media/documents/DIAG_ntd_prod_profile_rpt.pdf. Accessed 8 Nov 2018.

41. PATH. Target Product Profile: schistosomiasis surveillance diagnostic. 2015. https://path.azureedge.net/media/documents/2015.01.15_BMGF_SCH_MDAreduction_NoTech.pdf. Accessed 8 Nov 2018.

42. PATH. Target Product Profile: schistosomiasis surveillance diagnostic - Schistosoma genus-specific antibody lateral flow test post-elimination surveillance. 2015. https://path.azureedge.net/media/documents/2015.01.15_BMGF_SCH_postMDA_Ab.pdf. Accessed Nov 2018.

43. PATH. Target Product Profile: schistosomiasis surveillance diagnostic lateral flow test CAA antigen. 2015. https://path.azureedge.net/media/documents/2015.01.15_BMGF_SCH_MDAreduction_Ag.pdf. Accessed 8 Nov 2018.

44. PATH. Target Product Profile: trachoma surveillance diagnostic - antigen lateral flow test. 2015. https://path.azureedge.net/media/documents/2015.01.15_BMGF_Trachoma_MDAreduction_Ag.pdf. Accessed 8 Nov 2018.

45. PATH. Target Product Profile: trachoma surveillance diagnostic – post-elimination surveillance antibody lateral flow test. 2015. https://path.azureedge.net/media/documents/2015.01.15_BMGF_Trachoma_postMDA_Ab.pdf. Accessed 8 Nov 2018.

46. Peck R, Wellhausen J, Boettcher M, Downing L. Defining product requirements for field deployable yellow fever tests. 2012. http://www.ajtmh.org/docserver/fulltext/14761645/89/5_Suppl_1/ASTMH_13_Abstracts_1001_1250.pdf?expires=1565877435&id=id&accname=guest&checksum=5DBD842D2A8FA98A39884EB2A0713FA3. Full text access permission from the author. Accessed 8 Nov 2018.

47. Porras A, Yadon Z, Altcheh J, Britto C, Chaves G, Flevaud L, et al. Target Product Profile (TPP) for Chagas disease point-of-care diagnosis and assessment of response to treatment. PLoS Negl Trop Dis. 2015. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pntd.0003697.

48. Reipold E, Easterbrook P, Trianni A, Panneer N, Krakower D, Ongarello S, et al. Optimising diagnosis of viraemic hepatitis C infection: the development of a target product profile. BMC Infect Dis. 2017. https://0-doi-org.brum.beds.ac.uk/10.1186/s12879-017-2770-5.

49. Solomon A, Engels D, Bailey R, Blake I, Brooker S, Chen J, et al. A diagnostics platform for the integrated mapping, monitoring, and surveillance of neglected tropical diseases: rationale and target product profiles. PLoS Negl Trop Dis. 2012. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pntd.0001746.

50. Toskin I, Murtagh M, Peeling R, Blondeel K, Cordero J, Kiarie J. Advancing prevention of sexually transmitted infections through point-of-care testing: target product profiles and landscape analysis. Sex Transm Infect. 2017. https://0-doi-org.brum.beds.ac.uk/10.1136/sextrans-2016-053071.

51. UNICEF. Pneumonia acute respiratory infection diagnostic aid - Target Product Profile introduction. 2014. https://www.unicef.org/supply/media/1356/file/Target%20product%20profile%20-%20Acute%20Respiratory%20Infection%20Diagnostic%20Aid%20-%20ARIDA.pdf. Accessed 8 Nov 2018.

52. UNICEF. UNICEF Target Product Profile: real time E. coli detection (version 1.4). 2016. https://www.unicef.org/supply/documents/target-product-profilerapid-e-coli-detection-tests. Accessed 8 Nov 2018.

53. UNICEF. UNICEF Target Product Profile: rapid E. coli detection (version 2.0). 2017. https://www.unicef.org/supply/documents/target-product-profile-rapid-e-coli-detection-tests. Accessed 8 Nov 2018.

54. UNICEF. Zika virus diagnostics: Target Product Profiles & supply update. 2017. https://www.unicef.org/supply/documents/target-product-profiles-zika-diagnostic-tests. Accessed 8 Nov 2018.

55. Utzinger J, Becker S, van Lieshout L, van Dam G, Knopp S. New diagnostic tools in schistosomiasis. Clin Microbiol Infect. 2015. https://0-doi-org.brum.beds.ac.uk/10.1016/j.cmi.2015.03.014.

56. WHO. Taenia solium taeniasis/cysticercosis diagnostic tool: report of a stakeholder meeting. 2015. https://apps.who.int/iris/bitstream/handle/10665/206543/9789241510516_eng.pdf?sequence=1&isAllowed=y. Accessed 8 Nov 2018.

57. WHO, FIND. Target Product Profile for Zaire ebolavirus rapid, simple test to be used in the control of the Ebola outbreak in West Africa. 2014. https://www.who.int/medicines/publications/target-product-profile.pdf?ua=1. Accessed 8 Nov 2018.

58. WHO, FIND. Development of a Target Product Profile (TPP) and a framework for evaluation for a test for predicting progression from tuberculosis infection to active disease. 2017. https://apps.who.int/iris/bitstream/handle/10665/259176/WHO-HTM-TB-2017.18-eng.pdf;jsessionid=BD8E426CA555DE9FEAB589FF2A9B2CCC?sequence=1. Accessed 8 Nov 2018.

59. WHO, FIND. Report of a WHO-FIND meeting on diagnostics for Buruli ulcer. 2018. https://apps.who.int/iris/bitstream/handle/10665/274607/WHO-CDS-NTD-IDM-2018.09-eng.pdf?sequence=1&isAllowed=y. Accessed 8 Nov 2018.

60. WHO. High-priority target product profiles for new tuberculosis diagnostics: report of a consensus meeting. 2014. https://apps.who.int/iris/bitstream/handle/10665/135617/WHO_HTM_TB_2014.18_eng.pdf?sequence=1. Accessed 8 Nov 2018.

61. WHO. Specifications for a rapid diagnostic test for meningitis in the African meningitis belt. 2016. https://www.who.int/csr/disease/meningococcal/TPP-Meningitis-RDT-Apr16.pdf?ua=1. Accessed 8 Nov 2018.

62. Kaplan W, Wirtz V, Mantel-Teeuwisse A, Stolk P, Duthey B, Laing R. Priority medicines for Europe and the world: 2013 update. 2013. https://www.who.int/medicines/areas/priority_medicines/MasterDocJune28_FINAL_Web.pdf?ua=1. Accessed 13 Jun 2019.

63. WHO. An R&D Blueprint for action to prevent epidemics. 2016. https://www.who.int/blueprint/about/r_d_blueprint_plan_of_action.pdf?ua=1. Accessed 13 Jun 2019.

64. Linnet K, Bossuyt P, Moons K, Reitsma J. Quantifying the accuracy of a diagnostic test or marker. Clin Chem. 2012. https://0-doi-org.brum.beds.ac.uk/10.1373/clinchem.2012.182543.

65. Abel L, Shinkins B, Smith A, Sutton A, Sagoo G, Uchegbu I, et al. Early economic evaluation of diagnostic technologies: experiences of the NIHR diagnostic evidence co-operatives. Med Decis Mak. 2019;39(7):857–66.

66. Garfield S, Polisena J, Spinner D, Postulka A, Lu C, Tiwana S, et al. Health Technology Assessment for molecular diagnostics: practices, challenges, and recommendations from the Medical Devices and Diagnostics Special Interest Group. Value Health. 2016. https://0-doi-org.brum.beds.ac.uk/10.1016/j.jval.2016.02.012.

67. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974. https://0-doi-org.brum.beds.ac.uk/10.1126/science.185.4157.1124.

68. Whiting P, Davenport C, Jameson C, Burke M, Sterne J, Hyde C, et al. How well do health professionals interpret diagnostic information? A systematic review. BMJ Open. 2015. https://0-doi-org.brum.beds.ac.uk/10.1136/bmjopen-2015-008155.

69. Monaghan P, Lord S, St John A, Sandberg S, Cobbaert C, Lennartz L, et al. Biomarker development targeting unmet clinical needs. Clin Chim Acta. 2016;460:211–9.

Acknowledgements

The authors would like to express their gratitude to Natalie King (University of Leeds, Information Specialist) for supporting the design of the search strategy for this systematic review.

Funding

This systematic review was carried out as a part of the full-time School of Medicine PhD Scholarship awarded to Paola Cocco by the University of Leeds. Additionally, this work is supported by the ‘Tackling AMR-A Cross Council Initiative’ (grant number MR/N029976/1), Funding Partners: The Biotechnology and Biological Sciences Research Council, the Engineering and Physical Sciences Research Council, and the Medical Research Council. This work is also supported by the Medical Research Foundation's National AMR Training Programme. Bethany Shinkins and Michael Messenger are funded by NIHR Leeds In Vitro Diagnostic Co-operative and Cancer Research UK via CanTest Collaborative. Anam Ayaz-Shah is funded by Cancer Research UK via CanTest Collaborative.

Author information

Contributions

All authors contributed to the conception and design of the work. PC ran the search strategy; BS contributed to reference sifting, whilst AAS conducted the data extraction. MM and RW resolved any conflicts. PC, BS, MM and RW interpreted the data and drafted and revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Paola Cocco.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Cocco, P., Ayaz-Shah, A., Messenger, M.P. et al. Target Product Profiles for medical tests: a systematic review of current methods. BMC Med 18, 119 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s12916-020-01582-1

Keywords

  • Medical test
  • Target Product Profile
  • TPP
  • Quality by design
  • Diagnostic
  • Test characteristic