| Methodological steps | Key challenges | Potential solutions |
|---|---|---|
| 0. Initial network meta-analysis | Resource intensive, but commonly a one-off investment | Set up a research community (preferably international) in charge of designing a high-quality, clinically relevant network meta-analysis and keeping it up to date for a given mandate (e.g., a 5- or 10-year period) |
| | Redundant meta-analyses are frequently commissioned by different groups | |
| | Need to consider all patient-important outcomes | |
| Perform iterations at regular intervals (e.g., every 3 months) through steps 1–5 | | |
| 1. Search for trials | Need to identify trials of novel drugs; for instance, six to nine new second-line therapies per year in advanced NSCLC | Expert monitoring by the community would identify pipeline therapies assessed in clinical trials and allow the search equations to be adapted |
| | Repeatedly querying a wide range of sources to identify trials with published and unpublished results is time consuming and labor intensive | A metasearch engine script designed for the question at hand would allow the multiple sources to be queried automatically and simultaneously [75] |
| | Need to identify multiple reports of the same trial; for instance, there were on average two reports per trial of second-line treatments in advanced NSCLC | The OpenTrials database would contain all openly available data and documents on all clinical trials, threaded together by trial ID [76] |
| | Need to update the list of treatments, of trials, and of multiple reports of the same trial | |
| 2. Screening of reports and selection of trials | Screening repeatedly may be resource intensive, depending on the clinical question. For second-line therapies of advanced NSCLC, we estimated that the workload would be manageable (about 50 new records to screen each month for CENTRAL, MEDLINE, and EMBASE, and around 600 conference abstracts per year) | Using crowdsourcing for screening would allow microtasks to be distributed to community experts and increasing amounts of evidence to be handled [77, 78] |
| | | Future automated technologies would help community experts in the screening process; for instance, natural language processing methods using the semantic features of the reports could help identify potentially relevant trial reports [49, 50, 79–82] |
| If required only (at least one trial with new results), continue with steps 3–5 | | |
| 3. Data extraction | Repeatedly extracting data and assessing the risk of bias may be resource intensive, depending on the number of trials with new results. For second-line therapies of advanced NSCLC, we estimated that the workload would be manageable (about 10 to 15 new trials per year) | Using crowdsourcing for data extraction would allow microtasks to be distributed to experts and increasing amounts of evidence to be handled [77, 83] |
| 4. Assessment of risk of bias | Need to check extracted data for consistency between multiple reports of the same trial; in cases of inconsistency, need to justify the choice of a specific source | Automatic data extraction is possible depending on the source; for instance, posted results can be abstracted automatically from ClinicalTrials.gov [84–86] |
| | | Future automated technologies could help experts extract data or assess the risk of bias within trials [49, 50, 62, 63] |
| 5. Updating of network meta-analysis | Need to develop online software for updating the network meta-analysis* | Online solutions in development for conventional meta-analysis could be extended to network meta-analysis [87, 88] |
| 6. Dissemination | Need to make the results publicly available after each iteration | A freely accessible website would allow the live cumulative network meta-analysis to be reported, including all details regarding methods and processes, graphs, and data |
| | Need for transparent reporting of the whole process | |
| | Need for peer review | Alternative forms of peer review (e.g., post-publication peer review) could be implemented |
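Steps 1–2 hinge on threading multiple reports (journal articles, conference abstracts, registry entries) of the same trial together by trial ID, as OpenTrials aims to do. A minimal sketch of that grouping step is shown below; the record fields and identifiers are illustrative, not an actual OpenTrials schema:

```python
from collections import defaultdict

# Hypothetical report records; each carries a registry ID (e.g., an NCT number).
reports = [
    {"report_id": "pub-001", "trial_id": "NCT00000101", "source": "MEDLINE"},
    {"report_id": "abs-017", "trial_id": "NCT00000101", "source": "conference abstract"},
    {"report_id": "reg-230", "trial_id": "NCT00000230", "source": "ClinicalTrials.gov"},
]

def thread_reports(reports):
    """Group all reports of the same trial under one trial ID."""
    trials = defaultdict(list)
    for r in reports:
        trials[r["trial_id"]].append(r["report_id"])
    return dict(trials)

threaded = thread_reports(reports)
# The journal article and conference abstract for NCT00000101 end up threaded together
```

In practice, the hard part is record linkage when a report does not state its registry ID; the grouping itself, once IDs are resolved, is the simple keyed aggregation above.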
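Step 5, updating the synthesis at each iteration, can be sketched with a deliberately simplified model: fixed-effect inverse-variance pooling of a single pairwise comparison, kept as running sums so that each quarterly iteration only adds the newly found trials. This is an illustration of the cumulative-updating idea only; a real network meta-analysis would jointly model all treatment comparisons, typically with dedicated software [87, 88]:

```python
import math

class CumulativePooling:
    """Fixed-effect inverse-variance pooling with incremental updates.

    Each trial contributes an effect estimate y (e.g., a log hazard ratio)
    and its standard error se; the trial weight is 1 / se**2.
    """

    def __init__(self):
        self.sum_w = 0.0   # running sum of weights
        self.sum_wy = 0.0  # running sum of weight * effect

    def add_trials(self, trials):
        """Fold in new (effect, standard error) pairs found at this iteration."""
        for y, se in trials:
            w = 1.0 / se ** 2
            self.sum_w += w
            self.sum_wy += w * y

    def pooled(self):
        """Return the current pooled estimate and its standard error."""
        return self.sum_wy / self.sum_w, math.sqrt(1.0 / self.sum_w)

ma = CumulativePooling()
ma.add_trials([(-0.20, 0.10), (-0.10, 0.20)])  # trials in the initial synthesis
ma.add_trials([(-0.30, 0.10)])                 # new trial found at a later iteration
est, se = ma.pooled()
```

Because only the two running sums are stored, an update costs the same whether the live synthesis already contains ten trials or a thousand.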