
Using flawed, uncertain, proximate and sparse (FUPS) data in the context of complexity: learning from the case of child mental health

Abstract

How routinely collected data that are flawed and limited can be used to inform service development in healthcare systems needs to be considered, both theoretically and practically, given that in many areas of healthcare only poor-quality data are available for use in complex adaptive systems. Data may be compromised in a range of ways. They may be flawed, due to missing or erroneously recorded entries; uncertain, due to differences in how data items are rated or conceptualised; proximate, in that data items are a proxy for key issues of concern; and sparse, in that a low volume of cases within key subgroups may limit the possibility of statistical inference. The term ‘FUPS’ is proposed to describe these flawed, uncertain, proximate and sparse datasets. Many of the systems that seek to use FUPS data may be characterised as dynamic and complex, involving a wide range of agents whose actions impact on each other in reverberating ways, leading to feedback and adaptation. The literature on the use of routinely collected data in healthcare is often implicitly premised on the availability of high-quality data to be used in complicated but not necessarily complex systems. This paper presents an example of the use of a FUPS dataset in the complex system of child mental healthcare. The dataset comprised routinely collected data from services that were part of a national service transformation initiative in child mental health from 2011 to 2015. The paper explores the use of this FUPS dataset to support meaningful dialogue between key stakeholders, including service providers, funders and users, in relation to outcomes of services. There is a particular focus on the potential for service improvement and learning. The issues raised and principles for practice suggested have relevance for other health communities that similarly face the dilemma of how to address the gap between the ideal of comprehensive, clear data used in complicated, but not complex, contexts, and the reality of FUPS data in the context of complexity.


Background

There is growing interest in the possibilities posed by the analysis of routinely collected administrative data for the improvement of healthcare, and in particular in the assessment of the impact of services (e.g. [1, 2]). Whilst there has been debate about the challenges of finding the best metrics to use or which analyses are the most appropriate (e.g. [3, 4]), the literature is generally premised on an assumption of the requirement to use high-quality, routinely collected data. However, in many areas of healthcare, the reality is that routinely collected datasets are often of low quality. Data may be flawed, due to missing or erroneously recorded data; uncertain, due to differences in how data items are rated or conceptualised; proximate, in that they are a proxy for the focus of interest; and sparse, in that there may be a particularly low volume of cases for key subgroups. Given the slow pace of advance in routine data capture across many parts of the health system in England (e.g. [5]), it is perhaps best to assume that these datasets may remain flawed, uncertain, proximate and sparse long enough to warrant coining the acronym ‘FUPS’ [6].

Much sophisticated attention has been given to the merits and demerits of different metrics for judging the impact of healthcare using routine data, alongside calls to support the uptake of new evidence [7,8,9,10,11]. However, less attention has been given to the properties of the systems in which such data will be used. The prevailing assumption appears to be that such data will be used in systems that may be highly complicated but not necessarily complex. The distinction in this context is that complex systems have key attributes that merely complicated systems lack: emergence (the system has properties greater than the sum of its parts, not directly predictable from the elements within), feedback (changes reinforce or offset further changes), and adaptation (agents adjust and adapt in response to other agents). Such complex systems have been characterised as “a collection of individual agents with freedom to act in ways that are not always totally predictable, and whose actions are interconnected so that one agent’s actions changes the context for other agents” [12]. There are increasing calls for a greater consideration of the implications of the complexity of the healthcare system in relation to both research and practice [12, 13].

This paper considers the use of FUPS data in the context of the complex system that is child mental health. Part case study and part vision for the future, we draw on learning from the use of a national dataset of child mental health outcomes following contact with specialist mental health services. The learning is relevant for any healthcare system that faces the gap between the ideal of comprehensive, clear data used in complicated but not complex contexts, and the reality of FUPS data used in contexts of complexity. In particular, we consider both how FUPS data can and should be used to help evaluate aspects of a complex system as well as to help influence behaviours in such a system.

In terms of using FUPS data to evaluate outcomes in complex systems, the paper explores how analysis of FUPS data can in part be treated like a historical investigation, in which partial remnants and sources are used to consider a complex reality that they relate to but cannot fully capture, in order to build narrative arguments and hypotheses that can be explicitly contested and debated within a system.

With regard to using FUPS data to help influence behaviours in complex systems, the paper examines the potential to draw on learning from a range of disciplines to consider some key factors that might influence the use of findings from data, whether FUPS or not; for example, findings from cognitive psychology that suggest people are most likely to reject data that challenge existing assumptions [14], and findings from sociology that suggest power elites may look to protect their vested interests [15]. The implications of these findings for the use of FUPS data are explored, and differentiated standards of proof derived from legal frameworks are considered as an aid to appropriate use.

The philosopher and urban planner Donald Schon famously drew the distinction between the “high ground, [where] manageable problems lend themselves to solution through the use of research-based theory and technique” and the “swampy lowlands, [where] problems are messy and confusing and incapable of technical solution…” [16]. This paper is premised on the belief that decision-making in complex healthcare systems occurs in the ‘swampy lowlands’ of practice, where decisions are necessarily made every day, whether or not there are good quality data to support them, and in the context of a complex network of existing beliefs, relationships and assumptions; it is in this context that we consider the use of FUPS.

Child mental healthcare: an example of a complex system

Emergent properties: structural

A diverse range of providers, including health providers (in primary, secondary and tertiary care), voluntary sector providers, social care and, increasingly, schools, works across multiple agencies to support child mental health [17]. A lack of quality data to support this system is perceived as a major issue, with services having been described as “working in a fog” [18]. The central flow of child-level data to NHS Digital was initiated only in mid-2016, following a decade-long process of implementing a national dataset for child mental health, and, to date, returns remain limited in quality and quantity. Whilst for more than a decade there has been a policy and practice commitment to integrated cross-agency collaboration and co-ordination and to appropriate use of data, fragmentation and confusion have been described as the emergent defining features of the system [18, 19].

Agents within the system

Agents within the system include (but are not limited to) children and young people with mental health issues, their parents and family members, school staff, primary care providers (including GPs and school nurses), specialist mental health providers (voluntary, statutory and independent), care providers, trainers of specialist mental health providers, professional bodies of specialist mental health providers, pharmaceutical companies, psychological treatment developers, policymakers, politicians, civil servants, data analysts, commissioners, researchers, service support and improvement organisations (e.g. NHS Improvement), service review and assessment organisations (e.g. Care Quality Commission), and guideline developers (e.g. National Institute for Health and Care Excellence (NICE)).

Emergent properties: cultural

Whilst there are debates between agents within the system about the causes and nature of mental health problems and the best approaches to take, a dominant and consensual narrative is the need for earlier intervention and, in particular, faster and easier access to specialist services (e.g. [17, 20]). There is also a shared emphasis on the need to address the stigma surrounding mental health issues, which is seen as blocking people from accessing help (e.g. the ‘Time to Talk’ campaign). Public discourse and guidance aimed at policymakers and the wider public tend to emphasise that many mental health problems in adulthood are reported to have originated in childhood; 75% of adult mental health problems start before the age of 18 [21], one in ten children has a mental health problem at any one time [22], and these mental health problems may have long-term negative impacts if not successfully addressed [23].

The existence of a range of evidence-based treatments, whose effectiveness is based on comparison of differences in group means in randomised controlled trials, is emphasised. Thus, websites offering information to the public stress the advantages of accessing specialist help and suggest that, without help, children and adolescents will fail to improve. For example, “Like other medical conditions, anxiety disorders tend to be chronic unless properly treated. Most kids find that they need professional guidance to successfully manage and overcome their anxiety” [24]. These data and statements are regularly mobilised as part of an argument for more services and to encourage children and their parents to seek help sooner rather than later.

Less quoted in both public and professional discourse is what is known about rates of spontaneous improvement (which might in fact be better defined as improvement that occurs without professional input, since it may include interventions and input from many other agents in the system). The rates of non-professionally mediated improvement for key difficulties may be as high as 60% for adolescent depression [25]. Whilst a number of systematic reviews have identified effective prevention interventions for mental ill health in children, with moderate effect sizes across diverse populations [26,27,28], there is still a need for further evidence of the best ways forward in terms of early intervention or prevention and how to ensure there are no unintended harms [29, 30]. There is also little discussion of likely rates of recovery or non-recovery following treatment [31]. Further, no NICE guideline for child mental health contains any reference to non-response or how children should be supported if they do not respond to treatment.

Current system challenges: implications for feedback and adaptation

The complex and dynamic child mental health system is currently beset by a range of challenges. The system faces a heady mix of increased prevalence rates [32, 33], cuts across many of its diverse services [34], and increased pressure on both schools and specialist services [35, 36]. There has also been more public scrutiny and concern; in the last 5 years, there have been over five national reviews, three health committee reports, several policy documents and a Green Paper currently out for consultation. The outputs of these reports tend to repeat the same messages of the need for better coordination, earlier intervention and more resources. Given this context, it is anticipated that many stakeholders will be particularly sensitive to, and wary of, any data that might undermine existing discourses on the benefits of increased provision and the need for more resources. In particular, there are likely to be concerns that any suggestion of poor performance on the part of current services may lead to further cuts and loss for children and families.

An attempt to ‘transform’ the system

A major initiative within the system was the Children and Young People’s Improving Access to Psychological Therapy (CYP IAPT) programme, led by the Department of Health and NHS England, and involving geographical partnerships between NHS providers, local authorities and voluntary sector providers across five areas (London and the South East, the North West, Oxford/Reading, Yorkshire, Humber and the North East, and the South West). The programme sought to embed best practice in child mental health provision by focussing on specific elements of participating services, namely helping them work effectively in partnership with children and young people so that they were active in shaping their local services; supporting services to develop a culture of reflective practice and accountability; improving the workforce through training in best evidence-based practice; developing mechanisms to deliver frequent outcome monitoring to help the therapist and service user work together in their session, and to help supervisors support therapists in improving outcomes; and supporting local areas in improving the infrastructure they use to collect and analyse data to assess whether children and young people are getting better.

The premise of CYP IAPT was to train a selection of practitioners, supervisors and managers, alongside providing additional resources for infrastructure and building regional and national collaborations to support best practice. In this way, the aim was to maximise limited resources and facilitate the embedding of sustainability. Specific training programmes were developed for both practice and supervision in cognitive behavioural therapy for anxiety, parent training for behavioural difficulties in children under the age of 9, systemic family therapy for eating disorders, conduct disorders and depression, interpersonal psychotherapy for adolescent depression, leadership, service development, supervision skills and service transformation skills, and enhanced evidence-based practice.

The programme was rolled out over 4 years (2011–2015) and sought to embed seven key principles in child mental health services, namely to support whole service transformation through leadership; to improve access through self-referral; to work in partnership with the young person and their parent/carer in service delivery and design; to deliver evidence-based psychological treatments; to deliver outcomes-focused psychological treatments; to work in partnership with the young person and their parent/carer throughout treatment; and to provide supervision to support the delivery of evidence-based, service user-informed and outcomes-informed practice. The programme involved directly training over 1000 clinicians and service managers in evidence-based approaches and leadership [6]. The vision was that these trained staff would lead service transformation and more effective practice within their organisations.

A key aspect of this initiative was the emphasis on the collection of child- and parent-reported questionnaire data throughout treatment, seeking to capture change in symptoms, wellbeing, functioning or achievement of goals over the course of treatment [6]. Between 2011 and 2015, the lead author (MW) and colleagues were commissioned first by the Department of Health and then by NHS England to agree which data to collect and then to collect and analyse routinely collected CYP IAPT data, with a particular focus on the child- and parent-reported outcome data [6].

An outcomes and evaluation group was convened, chaired by MW. This group oversaw measure selection and the approach to data collection; chose measures based on review of psychometric properties, feasibility, utility, compatibility and cost; advised on how to implement routine outcome measures and how to report findings; and consulted with wider networks and held regular public consultations on measures to include in the dataset [6]. Out of this process, 21 child-report scales and 15 parent-report scales were used to cover the range of problems seen in child mental health services. Since there was no national data flow, patient-level data from participating sites were submitted quarterly using an agreed data specification. Data were uploaded via secure data handling to a data storage provider and collated centrally. In the first year of a site’s involvement in the initiative, data were largely sent from those involved directly in the training; from the second year of involvement onwards, data were sent from all practitioners across the partnership. Data collected included demographic information and outcome and experience measures, with a particular emphasis on child and parent reports [6].

CYP IAPT data: an example of a FUPS dataset

The CYP IAPT data can be seen as an example of a FUPS dataset. Out of the approximately 23,000 cases of completed treatment (see Note 1) in this period (April 2011 – June 2015), approximately 8000 had paired child- and/or parent-reported data relevant to outcomes (~6000 had child-report data and ~4000 parent-report data) (Fig. 1). Based on these data, 52% of child-report data and 40% of parent-report data showed ‘reliable improvement’ (Note 2), 9% of child-report data and 9% of parent-report data showed ‘reliable deterioration’ (Note 3), and 36% of child-report data and 26% of parent-report data showed ‘recovery’ (Note 4). These findings were summarised in an infographic (Fig. 2), which was designed to share this information with children and families (see discussion on the use of FUPS data below).
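
To make the outcome categories concrete, the sketch below illustrates, in Python, how a single closed case might be classified against the definitions given in Notes 1–4. It is an illustrative reconstruction only, not the project's analysis code: the reliable change calculation shown (a Jacobson-Truax-style reliable change index) and all numeric values (standard deviations, reliabilities, clinical cut-offs, example scores) are assumptions for demonstration.

```python
import math


def reliable_change_threshold(sd_baseline: float, reliability: float) -> float:
    """Smallest change unlikely to be due to measurement error alone
    (a Jacobson-Truax-style reliable change criterion at ~95% confidence)."""
    se_diff = sd_baseline * math.sqrt(2) * math.sqrt(1 - reliability)
    return 1.96 * se_diff


def classify_case(first, last, sd, reliability, cutoff):
    """Classify one closed case from first and last subscale scores.

    All arguments are dicts keyed by subscale name; higher scores = more
    difficulty. sd, reliability and cutoff hold each subscale's baseline
    standard deviation, reliability and clinical threshold.
    """
    rci = {s: reliable_change_threshold(sd[s], reliability[s]) for s in first}
    improved = any(first[s] - last[s] > rci[s] for s in first)
    deteriorated = any(last[s] - first[s] > rci[s] for s in first)
    return {
        # Note 2: reliable improvement on at least one subscale AND no
        # reliable deterioration on any subscale.
        "reliable_improvement": improved and not deteriorated,
        # Note 3: reliable deterioration on at least one subscale,
        # regardless of improvement elsewhere.
        "reliable_deterioration": deteriorated,
        # Note 4: at least one subscale above threshold at outset and
        # all subscales below threshold at closure.
        "recovery": any(first[s] >= cutoff[s] for s in first)
                    and all(last[s] < cutoff[s] for s in first),
    }


# Entirely hypothetical subscale scores and measure properties.
print(classify_case(first={"anxiety": 14, "low_mood": 9},
                    last={"anxiety": 6, "low_mood": 8},
                    sd={"anxiety": 4.0, "low_mood": 3.5},
                    reliability={"anxiety": 0.80, "low_mood": 0.75},
                    cutoff={"anxiety": 10, "low_mood": 10}))
```

Aggregating such per-case classifications across a dataset yields proportions of the kind reported above.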

Fig. 1 Diagram to show data captured (and lost) in the project

Fig. 2 Infographic explaining findings to children and families

CYP IAPT data fit the criteria for FUPS

Flawed

There is a high degree of missing data. The sample of approximately 8000 cases with outcome data available for analysis represents just under half of the approximately 16,000 cases who had completed treatment. This represents less than a third of the over 23,000 people who completed treatment during this period. The missing data are unlikely to be missing entirely at random, although it is hard to know how the different skews might operate at different points. Taking the paired data subsample, for example, it could be that those who feel most positive about treatment are more likely to complete a second questionnaire, leading to a potential inflation of positive outcomes, or it may be that those who quickly improve stop coming and thus are less likely to complete a second questionnaire, leading to a potential underestimation of positive outcomes [6].
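
One way to make this potential skew concrete is to bound the improvement rate under extreme assumptions about the missing cases. The sketch below is illustrative only and is not part of the original analysis; the numbers fed into it are rounded, hypothetical figures loosely echoing those reported above.

```python
def improvement_bounds(n_completed, n_paired, n_improved_paired):
    """Bound the 'true' improvement rate under extreme missing-data assumptions.

    n_completed: cases that completed treatment
    n_paired: cases with paired (first and last) outcome data
    n_improved_paired: paired cases classified as improved
    """
    n_missing = n_completed - n_paired
    observed = n_improved_paired / n_paired
    lower = n_improved_paired / n_completed                # no missing case improved
    upper = (n_improved_paired + n_missing) / n_completed  # every missing case improved
    return observed, lower, upper


# Hypothetical round numbers for illustration only.
observed, lower, upper = improvement_bounds(n_completed=16000,
                                            n_paired=8000,
                                            n_improved_paired=4000)
print(f"observed {observed:.0%}; bounds {lower:.0%} to {upper:.0%}")
```

The width of such bounds is a simple, transparent way of conveying how much missingness alone could move a headline figure.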

Uncertain

A total of 14 questionnaires were used (10 child completed, 4 parent completed) covering anxiety, depression, trauma, behavioural problems, attention problems, general functioning and general distress [37]. Different questionnaires use different approaches to calculate ‘clinical’ thresholds, all involve a high degree of measurement error and there is known to be a low correlation between child, parent and teacher perspectives even when completing the same questionnaire [38,39,40,41].

Proximate

Without a biological marker, the field is largely reliant on changes in questionnaire scores over time. Even if change in scores is accepted as a suitable proxy for change in mental health status, the data remain proximate to the ultimate issue of interest, i.e. the impact of service provision. Child mental health problems follow a fluctuating course. To determine whether someone would have improved or worsened without the treatment being offered, a counterfactual is needed, and none is available in these data.

Sparse

For key subgroups, data are particularly sparse. For example, the vast majority of children in this sample were from white British ethnic communities, with only 18% of the full sample being from black, Asian and minority ethnic (BAME) groups. Whilst this may be an appropriate reflection of the populations covered, this means that, for the sample with paired outcomes, the numbers from BAME groups are too small to be able to undertake viable subgroup analyses, which may be important in the light of findings that BAME groups may differ in access and use of services [42].
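
The practical consequence of sparseness can be illustrated with a simple, hedged sketch: the confidence interval around an outcome proportion widens sharply as subgroup size falls. The sample sizes and the 50% rate below are assumed for illustration and are not subgroup figures from the dataset.

```python
import math


def wilson_interval(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half_width, centre + half_width


# Assumed sizes: a large group versus a small subgroup, both with an
# illustrative 50% improvement rate.
for n in (4000, 60):
    low, high = wilson_interval(successes=n // 2, n=n)
    print(f"n={n}: 95% CI {low:.0%} to {high:.0%}")
```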

Analysing FUPS data: the example of CYP IAPT

Given the FUPS nature of the data and the complexity of the system, careful thought was given to how the data could be analysed and used. The responsibility to steer between over- and under-interpretation of any findings was felt very keenly. It was recognised that these data were not being considered in a neutral space. The current discourse and challenges within the child mental health system meant that consideration of these data might have important implications for future service development, and it was clear that the data could not provide a comprehensive insight into the complex reality of the full system. In examining and using data such as these, it was felt relevant to draw on learning from other disciplines that need to use FUPS data.

FUPS data for evaluation: considering data as historical fragments

Historians draw on FUPS data all the time. As an historical record, Pepys’ diary [43] is flawed in that it is not representative of all people living in this period, uncertain in that the author is a highly unreliable narrator, proximate in that, even where Pepys is most honest, it is still refracted through his perception, and sparse on some of the key details we might like to know about. This does not stop it being a source much pored over for insights into seventeenth-century life. The scientific paradigm has developed, at least partly, to try to manage complexity in such a way as to make findings more generalisable, and this paper is not an attempt to undermine that paradigm. Rather, we are suggesting we can draw on FUPS data to generate and consider hypotheses and make arguments that can helpfully contribute to the discourse in a complex system.

This is in contradistinction to how FUPS data are currently treated within the child mental health field, where they are generally attacked from two standpoints. First, many agents in the system, such as clinical researchers, are influenced by the dominance of biomedical evidence in informing healthcare and have been trained to interrogate data using the highest standards of traditional scientific evidence. Second, and in distinction, other agents, such as many talking therapists, have been trained to contest data using a different but equally demanding set of criteria, pointing to limitations in the ability of such data to capture the complexities of human experience. In both instances, however, as noted above, criticism of the flaws of the data and a tendency to dismiss them may be particularly evident where such data challenge strongly held convictions or interests.

Transparency and triangulation

In order to treat the data more as a historical source than as a sacred source of truth, three key principles of analysis are suggested and were used with this dataset. (1) Treat the data as a fragment of the whole and be honest and upfront about their limitations – it is crucial to present data in such a way as to convey any limitations to their validity, reliability and generalisability stemming from their FUPS characteristics. (2) Be transparent in all analyses and avoid ‘black box’ statistics – it is important to use precise and neutral language and to keep the sophistication of the analysis commensurate with the flaws in the data; very simple and transparent statistical approaches are therefore recommended, to allow maximum opportunity for debate and consideration. (3) Triangulate the findings with other information – it is important to remember that these data are being considered within the context of the ‘swampy lowlands’ of practice, and in trying to make sense of and reflect on the information available, it is key to consider it alongside other information to see what supports or undermines the findings from these particular FUPS data.

These principles, and how they were employed in relation to the CYP IAPT dataset, are outlined in Table 1 below.

Table 1 Outline of key proposed principles for analysing flawed, uncertain, proximate or sparse (FUPS) data and how they were employed in CYP IAPT

Using FUPS data: the example of CYP IAPT

When we shared findings with a group of respected and experienced child health academics, they suggested that these analyses should not be shared at all as the data were too flawed and any analysis might lead to misleading conclusions. After much careful reflection and debate, the decision was made to go ahead with presenting the analysis, but to do so in the context of stressing the FUPS qualities of the data and considering how best to share it in the light of the specific challenges and issues facing the complex dynamic system of child mental health care and with due attention to likely feedback and adaptation processes. We looked to another discipline, that of the law, for guidance on the use of FUPS data for decision-making.

Standards of evidence – the legal perspective

The courts deal with FUPS data all the time. Many jurisdictions apply different standards of evidence depending on a range of factors, one of which may be the consequence that hangs on a decision. UK law, for example, lays out three standards of evidence for different decisions, ranging from the highest standard of proof, ‘beyond reasonable doubt’ (used, for example, in criminal cases), through ‘on the balance of probabilities’ (used in civil cases), down to lower standards applied where a decision needs to be made between competing accounts and the risks and benefits justify it, such as ‘a reasonable chance’, ‘substantial grounds for thinking’ and ‘a serious possibility’ [44]. These lower standards are frequently used, for example, in cases involving the resolution of competing claims over contracts.

In the medical academic literature, evidence is sometimes discussed as if there were only one standard, that of scientific experiments set up in search of a definitive answer (closest to ‘beyond reasonable doubt’). This may be appropriate for some decisions, such as the introduction of a new drug. However, it may be less appropriate for making a decision in the swampy lowlands, such as choosing between funding options when a decision has to be made one way or the other, and there is therefore a need to use the best available evidence, even if that evidence is of poor quality [45, 46].

‘Reasonable chance’ as a basis for change

Healthcare is a naturally conservative field, and the evidence alluded to earlier suggests that the system will adapt so as to continue practising along well-worn and traditional grooves, regardless of new emerging evidence, even when that evidence is of high quality [7].

Historical data and findings from cognitive psychology suggest that agents in the system are more likely to apply very high standards of evidence to new initiatives, and to those that challenge their beliefs and status, than to old or traditional practice, regardless of the fact that the latter may be based on historical precedent alone [7]. This is likely to result in healthcare professionals overestimating the risks of trying something new (including stopping doing something that has been found to be ineffective) and underestimating the risks of continuing to do what they have always done. A range of initiatives has been developed to try to address this tendency (for example, [47]).

With regard to using FUPS data, agents in the system should be encouraged to apply standards of evidence appropriate to the decision needing to be made. This may sometimes require only the standard of ‘a reasonable chance’, provided the risks and opportunities of action and inaction are carefully considered. Helping agents examine the risks and opportunities in an even-handed way, taking into account the likely existing biases in approaches to such data, is a key element of the appropriate use of such data to inform potential system change.

Opening up conversations

In order to try to open up conversations on the findings, and rather than inappropriately treating them as definitive, it was agreed not to issue a press release or seek to feed headlines. Instead, we decided to blog on the topic and invite comments from others in the field to elicit debate and to examine if the findings met the criteria for ‘reasonable chance’. A series of regional debates was convened involving a range of stakeholders (including children and young people with experience of service use, commissioners, policymakers and service providers). We invited a panel that included, as a minimum, a young person with experience of service use, a specialist mental health provider, and a commissioner to comment on the report. We also agreed to frame the conversations as safe spaces to be curious about what these findings might mean.

We were acutely aware that the data we were drawing on were ‘FUPS’. However, we were also aware that it is easy to dismiss uncomfortable findings or hypotheses arising from them due to the FUPSness of the data. Thus, we sought to facilitate conversations using a MINDFUL framework [6]. This involved the use of three principles in this context. (1) Encourage curiosity – it is vital to help stakeholders to maintain curiosity. This involves finding ways to help stakeholders to challenge their own and colleagues’ confirmatory biases, and to apply the same standards of scrutiny to analytic findings that support prior beliefs as to analytic findings that are uncomfortable or not wished for. This includes finding ways to help maintain this stance over time by the development of long-term safe space and relationships. (2) Apply the standard of ‘a reasonable chance’ rather than ‘beyond reasonable doubt’, drawing on how it meshes with existing narratives and how it triangulates with other information. (3) Encourage action – it may be important to help relevant stakeholders to consider possible initiatives that, even if not definitively indicated, may do more good than harm and challenge the assumption that change is always riskier than the status quo. Again, the focus needed to be on long-term change. These principles and how they were employed in relation to the CYP IAPT dataset are outlined in Table 2.

Table 2 Outline of key principles used for use of flawed, uncertain, proximate or sparse (FUPS) data in the Children and Young People’s Improving Access to Psychological Therapy (CYP IAPT) context

Impact in child mental health

As is to be expected with a complex dynamic system, it is hard to disentangle the impact of opening up conversations on the FUPS CYP IAPT data findings, and our perspectives (particularly those of MW, who led on some of these conversations) will themselves be partial, unreliable and flawed. With these caveats in mind, we offer the following reflections on emergent properties of the system, which are clearly multiply determined, influenced by a range of factors, and shaped by feedback and adaptation as cause and effect reverberate around the system. First, the findings from this FUPS dataset are now being used nationally to help services benchmark their outcomes (in the context of all the caveats above), which has been welcomed as a way for service providers and commissioners to consider and agree realistic standards for local services [48]. A debate has opened up about how to end specialist mental health treatment when a child has ongoing difficulties. Initiatives include long-term conditions clinics specifically for mental health issues, which allow people more flexible re-entry but also allow earlier case closure in recognition that greater improvement may be unlikely [49]. Some clinicians have started to talk more openly about likely improvement rates of treatment and to use this information with their clients [31]. We would contend that this suggests that, if the principles outlined above are applied, there is a possibility that a complex system will consider and respond to even challenging FUPS data, and that this may hold across a variety of healthcare contexts.

Conclusions

Datasets that can be considered FUPS are likely to exist in many domains of complex and dynamic healthcare systems. There are clearly dangers of over-interpretation of such data, but there may also be dangers of non-use, which allow stakeholders to use the FUPSness of the data to ignore potentially important but uncomfortable findings and hypotheses. This paper has presented some suggested principles for the use of FUPS data, drawing on both historical and legal disciplines to try to move beyond the biomedical model as the only model of evidence. The redoubtable historian EH Carr once noted that, rather than history being “a hard core of facts leading to a range of interpretations”, historical debate could be seen as “a hard core of interpretation surrounded by a pulp of disputable facts” [50]. Debates in complex healthcare systems take place within existing, highly charged discourses involving hard cores of interpretation formed over many decades. FUPS data are clearly disputable facts, but they can be drawn on as a form of evidence to aid decisions in the swampy lowlands of practice.

Notes

  1. Defined as closed cases with two or more contacts, at least one of which was not defined as assessment only.

  2. Reliable improvement = the amount of improvement between first and last collected score is greater than likely due to measurement error on at least one subscale of a questionnaire AND the amount of deterioration between first and last collected score is NOT greater than likely due to measurement error on any subscale.

  3. Reliable deterioration = amount of deterioration between first and last collected score is greater than likely due to measurement error on at least one subscale regardless of amount of improvement on any other subscale.

  4. Recovery = at least one subscale score above threshold at outset and, at closure, all scores below threshold.

Abbreviations

BAME:

Black, Asian and Minority Ethnic

CYP IAPT:

Children and Young People’s Improving Access to Psychological Therapy

FUPS:

Flawed, Uncertain, Proximate and Sparse

NHS:

National Health Service

NICE:

National Institute for Health and Care Excellence

References

  1. Raghupathi W, Raghupathi V. Big data analytics in healthcare: promise and potential. Health Inform Sci Syst. 2014;2:3.

  2. McIntosh AM, Stewart R, John A, Smith DJ, Davis K, Sudlow C, Hotopf M. Data science for mental health: a UK perspective on a global challenge. Lancet Psychiatry. 2016;3(10):993–8.

  3. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet. 2004;363(9415):1147–54.

  4. Black N. Assessing the quality of hospitals. BMJ. 2010;340:c2066.

  5. Greenhalgh T, Stramer K, Bratan T, Byrne E, Mohammad Y, Russell J. Introduction of shared electronic records: multi-site case study using diffusion of innovation theory. BMJ (Clinical Research Ed). 2008;337:a1786.

  6. Wolpert M, Jacob J, Napoleone E, Whale A, Calderon A, Edbrooke-Childs J. Child- and Parent-reported Outcomes and Experience from Child and Young People’s Mental Health Services 2011–2015. 2016. https://www.corc.uk.net/media/1172/201612child_and_parent-reported_outcomes_and_experience_from_child_and-young_peoples_mental_health_services_2011-2015.pdf. Accessed 5 Apr 2018.

  7. Muir Gray J. Evidence-Based Health Care. Edinburgh: Churchill Livingstone; 2001.

  8. Heitjan DF, Rubin DB. Ignorability and coarse data. Annals Statistics. 1991;19:2244–53.

  9. Ford E, Nicholson A, Koeling R, Tate AR, Carroll J, Axelrod L, Smith HE, Rait G, Davies KA, Petersen I, et al. Optimising the use of electronic health records to estimate the incidence of rheumatoid arthritis in primary care: what information is hidden in free text? BMC Med Res Methodol. 2013;13(1):105.

  10. Goldstein H, Spiegelhalter DJ. League tables and their limitations: statistical issues in comparisons of institutional performance. J Royal Stat Soc Series A (Statistics Soc). 1996;159:385–443.

  11. Leckie G, Goldstein H. The limitations of using school league tables to inform school choice. J Royal Stat Soc Series A (Statistics Soc). 2009;172(4):835–51.

  12. Plsek PE, Greenhalgh T. Complexity science: the challenge of complexity in health care. BMJ. 2001;323(7313):625.

  13. Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, Greaves F, Harper L, Hawe P, Moore L, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4.

  14. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–31.

  15. Currie G, Lockett A, Finn R, Martin G, Waring J. Institutional work to maintain professional power: Recreating the model of medical professionalism. Organ Stud. 2012;33(7):937–62.

  16. Schon D. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books; 1984.

  17. Department of Health, Department for Education, NHS England. Future in Mind: Promoting, Protecting and Improving our Children and young People’s Mental Health and Wellbeing. London: Department of Health; 2015.

  18. Hindley P. Written Evidence for the House of Commons Select Committee Inquiry into Child and Adolescent Mental Health Services from the Faculty of Child and Adolescent Psychiatrists. London: Royal College of Psychiatrists; 2014.

  19. Care Quality Commission. Review of Children and Young People’s Mental Health Services. 2017. https://www.cqc.org.uk/sites/default/files/20171103_cypmhphase1_report.pdf. Accessed 5 Apr 2018.

  20. Department of Health, Department of Education. Transforming Children and Young People's Mental Health Provision: A Green Paper. London. 2017. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/664855/Transforming_children_and_young_people_s_mental_health_provision.pdf. Accessed 5 Apr 2018.

  21. Kim-Cohen J, Caspi A, Moffitt TE, Harrington H, Milne BJ, Poulton R. Prior juvenile diagnoses in adults with mental disorder - developmental follow-back of a prospective longitudinal cohort. Arch General Psychiatry. 2003;60:709–17.

  22. Green H, McGinnity A, Meltzer H, Ford T, Goodman R. Mental Health of Children and Young People in Great Britain, 2004. London: Palgrave; 2005.

  23. Copeland WE, Wolke D, Shanahan L, Costello EJ. Adult functional outcomes of common childhood psychiatric problems: a prospective, longitudinal study. JAMA Psychiatry. 2015;72(9):892–9.

  24. Children and Teens: Treatment. https://adaa.org/living-with-anxiety/children/treatment. Accessed 5 Apr 2018.

  25. Whiteford HA, Harris MG, McKeon G, Baxter A, Pennell C, Barendregt JJ, Wang J. Estimating remission from untreated major depression: a systematic review and meta-analysis. Psychol Med. 2013;43(08):1569–85.

  26. Stockings EA, Degenhardt L, Dobbins T, Lee YY, Erskine HE, Whiteford HA, Patton G. Preventing depression and anxiety in young people: a review of the joint efficacy of universal, selective and indicated prevention. Psychol Med. 2016;46:11–26.

  27. Tennant R, Goens C, Barlow J, Day C, Stewart-Brown S. A systematic review of reviews of interventions to promote mental health and prevent mental health problems in children and young people. J Public Mental Health. 2007;6(1):25–32.

  28. Bayer J, Hiscock H, Scalzo K, Mathers M, McDonald M, Morris A, Birdseye J, Wake M. Systematic review of preventive interventions for children's mental health: what would work in Australian contexts? Aust N Z J Psychiatry. 2009;43(8):695–710.

  29. Merry S, McDowell H, Hetrick S, Bir J, Muller N. Psychological and/or educational interventions for the prevention of depression in children and adolescents. Cochrane Database Syst Rev. 2004;1:CD003380.

  30. Stallard P, Phillips R, Montgomery AA, Spears M, Anderson R, Taylor J, Araya R, Lewis G, Ukoumunne OC, Millings A, et al. A cluster randomised controlled trial to determine the clinical effectiveness and cost-effectiveness of classroom-based cognitive-behavioural therapy (CBT) in reducing symptoms of depression in high-risk adolescents. Health Technol Assess. 2013;17(vii–xvii):1–109.

  31. Rousmaniere T, Wolpert M. Talking Failure in Therapy and Beyond. 2017. https://thepsychologist.bps.org.uk/volume-30/december-2017/talking-failure-therapy-and-beyond. Accessed 5 Apr 2018.

  32. Fink E, Patalay P, Sharpe H, Holley S, Deighton J, Wolpert M. Mental health difficulties in early adolescence: a comparison of two cross-sectional studies in England from 2009 to 2014. J Adolesc Health. 2015;56(5):502–7.

  33. Deighton J, Humphrey N, Belsky J, Boehnke J, Vostanis P, Patalay P. Longitudinal pathways between mental health difficulties and academic performance during middle childhood and early adolescence. Br J Dev Psychol. 2018;36:110–26.

  34. Stop Cutting CAMHS Services. http://www.youngminds.org.uk/about/our_campaigns/cuts_to_camhs_services. Accessed 5 Apr 2018.

  35. House of Commons Education and Health Committees. Children and Young People’s Mental Health – the Role of Education. First Joint Report of the Education and Health Committees of Session 2016–2017. London. 2017. https://publications.parliament.uk/pa/cm201617/cmselect/cmhealth/849/849.pdf. Accessed 5 Apr 2018.

  36. Edbrooke-Childs J, Deighton J, Wolpert M. Changes in severity of psychosocial difficulties in adolescents accessing specialist mental healthcare in England (2009–2014). J Adolesc. 2017;60:47–52.

  37. Law D, Wolpert M, editors. Guide to Using Outcomes and Feedback Tools With Children, Young People and Families. 2nd ed. London: CAMHS Press; 2014.

  38. Deighton J, Tymms P, Vostanis P, Belsky J, Fonagy P, Brown A, Martin A, Patalay P, Wolpert M. The development of a school-based measure of child mental health. J Psychoeducational Assess. 2013;31(3):247–57.

  39. Deighton J, Croudace T, Fonagy P, Brown J, Patalay P, Wolpert M. Measuring mental health and wellbeing outcomes for children and adolescents to inform practice and policy: a review of child self-report measures. Child Adolescent Psychiatry Mental Health. 2014;8(1):14.

  40. Patalay P, Deighton J, Fonagy P, Vostanis P, Wolpert M. Clinical validity of the Me and My School questionnaire: a self-report mental health measure for children and adolescents. Child Adolesc Psychiatry Ment Health. 2014;8:17.

  41. Yeh M, Weisz JR. Why are we here at the clinic? Parent-child (dis)agreement on referral problems at out-patient treatment entry. J Consulting Clin Psychol. 2001;69:1018–25.

  42. Edbrooke-Childs J, Newman R, Fleming I, Deighton J, Wolpert M. The association between ethnicity and care pathway for children with emotional problems in routinely collected child and adolescent mental health services data. Eur Child Adolesc Psychiatry. 2016;25(5):539–46.

  43. Pepys S. The Diary of Samuel Pepys. London: Penguin; 2003.

  44. Fernandez v Government of Singapore [1971] 1 WLR 987 at 994. http://www.refworld.org/cases,GBR_HL,3ae6b707c.html. Accessed 10 May 2018.

  45. Fischer AJ, Threlfall A, Meah S, Cookson R, Rutter H, Kelly MP. The appraisal of public health interventions: an overview. J Public Health. 2013;35(4):488–94.

  46. Threlfall AG, Meah S, Fischer AJ, Cookson R, Rutter H, Kelly MP. The appraisal of public health interventions: the use of theory. J Public Health. 2015;37(1):166–71.

  47. Choosing Wisely. http://www.choosingwisely.org/. Accessed 5 Apr 2018.

  48. Child Outcomes Research Consortium. http://www.corc.uk.net/. Accessed 5 Apr 2018.

  49. The Long Term Conditions Clinic. http://www.implementingthrive.org/case-study-11/. Accessed 5 Apr 2018.

  50. Carr E. What is History? Cambridge: Cambridge University Press; 1961.

Acknowledgements

The authors would like to thank all members of CORC, its committee at the time of writing (including MW): Ashley Wyatt, Mick Atkinson, Kate Martin, Ann York, Duncan Law, Julie Elliot and Isobel Fleming, and the CORC team at the time of writing: Benjamin Ritchie, Kate Dalzell, Julian Edbrooke-Childs, Jenna Jacob, Marianne Promberger, Meera Patel, Sally Marriott, Lee Atkins, Danielle Antha and Rebecca Neale. Thank you to Judith Shipman for editing the paper.

The authors would like to thank Martin Utley for his role in developing the idea of FUPS data and principles for analysis of FUPS data – all flaws in the use of this concept are the authors’ alone.

The authors would also like to thank the following peer reviewers for their helpful comments on the paper: Chrysanthi Papoutsi, Kimberly Hoagwood and Kelly Kelleher.

Funding

MW and HR were supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Bart’s Health NHS Trust (NIHR CLAHRC North Thames). The views expressed in this article are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Authors

Contributions

Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Miranda Wolpert.

Ethics declarations

Ethics approval and consent to participate

The data used in this paper derive from a Department of Health commissioned audit that was undertaken in 2011–2015. Children and families who completed questionnaires as part of this audit consented for the information in the questionnaires to be linked to other information about them and for this to be used to underpin publications. This was agreed by the Caldicott Guardian at the Department of Health. No additional ethics approval was sought as this was judged to be an audit and not a research project.

Competing interests

MW was involved in the implementation of the CYP IAPT programme described here. The authors declare no other competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Wolpert, M., Rutter, H. Using flawed, uncertain, proximate and sparse (FUPS) data in the context of complexity: learning from the case of child mental health. BMC Med 16, 82 (2018). https://doi.org/10.1186/s12916-018-1079-6
