
Table 1 Summary of the measures of medical school differences. Measures in bold are included in the set of 29 measures in the path model of Fig. 5

From: Exploring UK medical school differences: the MedDifs study of selection, teaching, student and F1 perceptions, postgraduate outcomes and fitness to practise

Each entry below gives the measure name, a description of the measure, its reliability, and notes on how the reliability was calculated; entries are arranged under group headings.

Institutional history

Hist_Size. Historical size of medical school. Based on the GMC LRMP, with the average number of graduates entering the Register who qualified from 1990 to 2014. Note that since the University of London was in effect five medical schools, size is specified as an average number per London school. Note also that Oxford and Cambridge refer to the school of graduation, not the school of entry, some Oxbridge graduates qualifying elsewhere. Reliability: .925 (based on numbers of graduates in the years 1990–1994, 1995–1999, 2000–2004, 2005–2009 and 2010–2014).
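The table does not spell out how these multi-period reliabilities were calculated; a standard choice for internal consistency across the five period sub-measures (schools as cases, periods as items) would be Cronbach's alpha. A minimal sketch under that assumption, with invented numbers:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a cases x items matrix.

    Here rows would be medical schools and columns the five
    graduation-period sub-measures (1990-94 ... 2010-14).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (periods)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each period column
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of row totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented graduate counts per school in each five-year band
example = np.array([
    [310, 320, 330, 340, 355],
    [150, 160, 158, 170, 175],
    [220, 210, 230, 240, 250],
])
print(round(cronbach_alpha(example), 3))
```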

Hist_GP. Historical production of GPs by medical schools. Based on the proportion of 1990–2009 graduates on the LRMP who are on the GP Register. Reliability: .968 (based on rates for graduates in the years 1990–1994, 1995–1999, 2000–2004 and 2005–2009).

Hist_Female. Historical proportion of female graduates. Based on the GMC LRMP, with the average percentage of female graduates entering the Register from 1990 to 2014. Reliability: .831 (based on the percentage of female graduates in the years 1990–1994, 1995–1999, 2000–2004, 2005–2009 and 2010–2014).

Hist_Psyc. Historical production of psychiatrists by medical schools. Based on the proportion of 1990–2009 graduates on the LRMP who are on the Specialist Register for Psychiatry. Reliability: .736 (based on rates for graduates in the years 1990–1994, 1995–1999 and 2000–2004; Ns for 2005–2009 graduates were too low to be useful).

Hist_Anaes. Historical production of anaesthetists by medical schools. Based on the proportion of 1990–2009 graduates on the LRMP who are on the Specialist Register for Anaesthetics. Reliability: .716 (based on rates for graduates in the years 1990–1994, 1995–1999 and 2000–2004; Ns for 2005–2009 graduates were too low to be useful).

Hist_OG. Historical production of obstetricians and gynaecologists by medical schools. Based on the proportion of 1990–2009 graduates on the LRMP who are on the Specialist Register for O&G. Reliability: .584 (based on rates for graduates in the years 1990–1994, 1995–1999 and 2000–2004; Ns for 2005–2009 graduates were too low to be useful).

Hist_IntMed. Historical production of internal medicine physicians by medical schools. Based on the proportion of 1990–2009 graduates on the LRMP who are on the Specialist Register for Internal Medicine specialties. Reliability: .945 (based on rates for graduates in the years 1990–1994, 1995–1999 and 2000–2004; Ns for 2005–2009 graduates were too low to be useful).

Hist_Surgery. Historical production of surgeons by medical schools. Based on the proportion of 1990–2009 graduates on the LRMP who are on the Specialist Register for Surgical specialties. Reliability: .634 (based on rates for graduates in the years 1990–1994, 1995–1999 and 2000–2004; Ns for 2005–2009 graduates were too low to be useful).

Post2000. New medical school: a school that first took in medical students after 2000. The five London medical schools are not included, as they were originally part of the University of London. Reliability: n/a.

REF. Research Excellence Framework. Weighted average of overall scores for the 2008 Research Assessment Exercise (RAE), based on units of assessment 1 to 9, and the 2014 Research Excellence Framework (REF), based on units of assessment 1 and 2. Note that UoAs are not directly comparable across the 2008 and 2014 assessments. Combined results are expressed as a Z score. Reliability: .691 (based on the combined estimate from RAE2008 and REF2014).
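The table does not say exactly how the two exercises are combined into a Z score; one plausible reading is to standardise each exercise across schools and average the two standardised scores. A sketch under that assumption, with invented school-level values and an illustrative equal weighting:

```python
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Invented overall scores for four schools in each exercise
rae2008 = np.array([2.9, 3.1, 2.5, 3.3])
ref2014 = np.array([3.0, 3.2, 2.4, 3.4])

# Standardise each exercise across schools before combining, since the
# underlying units of assessment are not directly comparable
combined = (zscore(rae2008) + zscore(ref2014)) / 2
print(combined.round(2))
```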

Curricular influences

PBL_School. Problem-based learning school. A school classified in the BMA guide for medical school applicants in 2017 as using problem-based or case-based learning [35], with the addition of St George’s, which is also PBL. Reliability: n/a.

Spend_Student. Average spend per student. The amount of money spent on each student, given as a rating out of 10. Average of values from the Guardian guides for university applicants in 2010 [36], 2013 [37] and 2017 [38]. Reliability: .843 (based on values for 2010, 2013 and 2017).

Student_Staff. Student–staff ratio, expressed as the number of students per member of teaching staff. Average of values from the Guardian guides for medical school applicants in 2010 [36], 2013 [37] and 2017 [38]. Reliability: .835 (based on values for 2010, 2013 and 2017).

Entrants_N. Number of entrants to the medical school. An overall measure of the size of the school, based on MSC data for the number of medical students entering in 2012–2016. Reliability: .994 (based on numbers of entrants for 2012–2016).

Selection

Entrants_Female. Percentage of entrants who are female. Reliability: .903 (based on entrants for 2012–2016).

Entrants_NonHome. Percentage of entrants who are “non-home”: the percentage of all entrants for 2012–2017 with an overseas domicile who are not paying home fees, based on HESA data. Note that proportions are higher in Scotland (16.4%) and Northern Ireland (13.3%) than in England (7.5%) or Wales (5.9%), perhaps reflecting national policy differences, so this variable may be confounded to some extent with geography. For English schools alone, the reliability was only .537. Reliability: .958 (based on entrants for 2012–2017).

EntryGrades. Average entry grades. The average UCAS scores of students currently studying at the medical school, expressed as UCAS points. Average of values from the Guardian guides for medical school applicants in 2010 [36], 2013 [37] and 2017 [38]. Reliability: .907 (based on values for 2010, 2013 and 2017).

Teaching, learning and assessment

Teach_Factor1_Trad. Traditional vs PBL teaching. Scores on the first factor describing differences in medical school teaching, positive scores indicating more traditional rather than PBL teaching. From the AToMS study [22] for 2014–2015. Reliability: n/a.

Teach_Factor2_Struc. Structured vs unstructured teaching. Scores on the second factor describing differences in medical school teaching, positive scores indicating more structured rather than unstructured teaching. From the AToMS study [22] for 2014–2015. Reliability: n/a.

Teach_GP. Teaching in General Practice. Total timetabled hours of GP teaching, from the AToMS study [22] for 2014–2015. Reliability: n/a.

Teach_Psyc. Teaching in Psychiatry. Total timetabled hours of Psychiatry teaching, from the AToMS study [22] for 2014–2015. Reliability: n/a.

Teach_Anaes. Teaching in Anaesthetics. Total timetabled hours of Anaesthetics teaching, from the AToMS study [22] for 2014–2015. Reliability: n/a.

Teach_OG. Teaching in Obstetrics and Gynaecology. Total timetabled hours of O&G teaching, from the AToMS study [22] for 2014–2015. Reliability: n/a.

Teach_IntMed. Teaching in Internal Medicine. Total timetabled hours of Internal Medicine teaching, from the AToMS study [22] for 2014–2015. Reliability: n/a.

Teach_Surgery. Teaching in Surgery. Total timetabled hours of Surgery teaching, from the AToMS study [22] for 2014–2015. Reliability: n/a.

ExamTime. Total examination time. Total assessment time in minutes for all undergraduate examinations, from the AToMS study [23] for 2014–2015. Reliability: n/a.

SelfRegLearn. Self-regulated learning. Overall combined estimate of hours of self-regulated learning, from a survey of self-regulated learning [39] and HEPI data [22]. Reliability: n/a.

Student satisfaction measures

NSS_Satis’n. Course satisfaction in the NSS. The percentage of final-year students satisfied with overall quality, based on the National Student Survey (NSS). Average of values from the Guardian guides for medical school applicants in 2010 [36], 2013 [37] and 2017 [38]. Further data are available from the Office for Students [40], with the questionnaires also available [41]. Reliability: .817 (based on values for 2010, 2013 and 2017).

NSS_Feedback. Satisfaction with feedback in the NSS. The percentage of final-year students satisfied with feedback and assessment by lecturers, based on the National Student Survey (NSS). Average of values from the Guardian guides for medical school applicants in 2010 [36], 2013 [37] and 2017 [38]. Reliability: .820 (based on values for 2010, 2013 and 2017).

Foundation entry scores

UKFPO_EPM. Educational Performance Measure. The EPM consists of a within-medical-school decile measure, which cannot be compared across medical schools (“local outcomes” [42, 43]), along with additional points for additional degrees and for up to two peer-reviewed papers, which can be compared across medical schools and hence are “nationally comparable”. Data from the UK Foundation Programme Office [44] are summarised as the average of scores for 2012 to 2016. Reliability: .890 (based on values for the years 2013 to 2017).

UKFPO_SJT. Situational Judgement Test. The UKFPO-SJT is a nationally standardised test, so that results can be directly compared across medical schools. The UK Foundation Programme Office [44] provides total application scores (UKFPO-EPM + UKFPO-SJT) and UKFPO-EPM scores, so UKFPO-SJT scores are calculated by subtraction (see the sketch after this entry). UKFPO-SJT scores are the average of scores for the years 2012–2016. Reliability: .937 (based on values for the years 2013 to 2017).
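To make the EPM/SJT arithmetic concrete: deciles are computed within each school (hence not nationally comparable), and the SJT component is recovered by subtracting the EPM score from the total application score. A sketch on invented records; the column names are hypothetical:

```python
import pandas as pd

# Invented UKFPO-style records: one row per applicant
df = pd.DataFrame({
    "school":      ["A", "A", "A", "B", "B", "B"],
    "epm_score":   [38.0, 41.0, 43.0, 36.5, 40.0, 42.5],
    "total_score": [64.0, 70.5, 75.0, 61.0, 68.0, 74.0],
})

# Within-school decile points: ranked within each school, so the same
# decile means different things at different schools
df["epm_decile"] = df.groupby("school")["epm_score"].transform(
    lambda s: pd.qcut(s.rank(method="first"), 10, labels=False, duplicates="drop")
)

# Nationally comparable SJT score, recovered by subtraction
df["sjt_score"] = df["total_score"] - df["epm_score"]
print(df)
```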

F1 perception measures

F1_Preparedness. F1 preparedness. Preparedness for F1 training was assessed in the GMC’s National Training Survey (NTS) from 2012 to 2017 by a single question, albeit with minor changes in wording [1]. For 2013 and 2014 the question read “I was adequately prepared for my first Foundation post”, summarised as the percentage agreeing or definitely agreeing. The mean percentage agreement was used to summarise the data across years. Note that unlike the other F1 measures, F1_Preparedness is retrospective, looking back on undergraduate training. Reliability: .904 (based on values for 2012 to 2017).

F1_Satis’n. F1 overall satisfaction. The GMC’s NTS for 2012 to 2017 contained summary measures of overall satisfaction, adequate experience, curriculum coverage, supportive environment, induction, educational supervision, teamwork, feedback, access to educational resources, clinical supervision out of hours, educational governance, clinical supervision, regional teaching, workload, local teaching and handover, although not all measures were present in all years. Factor analysis of the 16 measures at the medical school level, averaged across years, suggested perhaps three factors (a sketch of such an analysis follows this group). The first factor, labelled overall satisfaction, accounted for 54% of the total variance, with overall satisfaction loading highest and 12 measures with loadings > .66. Reliability: .792. Reliabilities of the factor scores were not available, so they were estimated as the median of the reliabilities of the component scores: access to educational resources (alpha = .800, n = 5); adequate experience (alpha = .811, n = 6); clinical supervision (alpha = .711, n = 6); clinical supervision out of hours (alpha = .733, n = 3); educational supervision (alpha = .909, n = 6); feedback (alpha = .840, n = 6); induction (alpha = .741, n = 6); overall satisfaction (alpha = .883, n = 6); reporting systems (alpha = .846, n = 2); supportive environment (alpha = .669, n = 3); and workload (alpha = .773, n = 6).

F1_Workload. F1 workload. See F1_Satis’n for details. The second factor in the factor analysis accounted for 11% of the total variance, with positive loadings > .73 on workload, regional teaching and local teaching; the factor was labelled workload. Reliability: .792 (estimated as for F1_Satis’n).

F1_Superv’n. F1 clinical supervision. See F1_Satis’n for details. The third factor in the factor analysis accounted for 8% of the total variance, with a loading of .62 on clinical supervision and −.75 on handover; the factor was labelled clinical supervision. Reliability: .792 (estimated as for F1_Satis’n).
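The three F1 factors above come from a factor analysis of the 16 NTS summary measures at medical school level. A minimal sketch of such a school-level analysis with scikit-learn; random numbers stand in for the NTS measures, and the varimax rotation is an assumption, since the rotation used is not stated here:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Random stand-in for the data: rows = schools, columns = the 16 NTS
# summary measures averaged across years
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 16))

# Three factors, as suggested for the real data
fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(X)    # school-level factor scores (30 x 3)
loadings = fa.components_.T     # loadings of the 16 measures on 3 factors

# In the real analysis, measures loading > .66 on the first factor were
# read together as an "overall satisfaction" factor
print(np.round(loadings[:, 0], 2))
```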

Choice of specialty training

Trainee_GP. Appointed as trainee in General Practice. UKFPO has reported the percentage of graduates by medical school who were accepted for GP training and for Psychiatry training (but for no other specialties) in 2012 and 2014–2016. The measure is the average of acceptances for GP training across the four years. Reliability: .779 (based on rates for 2012, 2014, 2015 and 2016).

Trainee_Psyc. Appointed as trainee in Psychiatry. See Trainee_GP; the measure is the average of the percentages of graduates by medical school accepted for Psychiatry training in 2012 and 2014–2016. Reliability: .470 (based on rates for 2012, 2014, 2015 and 2016).

TraineeApp_Surgery. Applied for Core Surgical Training (CST). Percentage of applicants to CST for the years 2013–2015 [32]. Reliability: .794 (based on rates for 2013, 2014 and 2015).

TraineeApp_Ans. Applied for training in Anaesthetics. A single source for the number of applications in 2015 for training in anaesthetics by medical school is an analysis of UKMED data (Gale T, Lambe P, Roberts M: UKMED Project P30: demographic and educational factors associated with junior doctors' decisions to apply for general practice, psychiatry and anaesthesia training programmes in the UK, Plymouth, unpublished). Reliability: n/a.

Postgraduate examination performance

GMC_PGexams. Overall pass rate at postgraduate examinations. The GMC website has provided summaries of pass rates of graduates, at all attempts, at all UK postgraduate examinations taken between August 2013 and July 2016, broken down by medical school (https://www.gmc-uk.org/education/25496.asp). These data were downloaded on 18 January 2018 but were subsequently removed while the website was redeveloped; although now available again, they were unavailable for most of the time this paper was being prepared. Reliability: n/a.

MRCGP_AKT. Average mark at MRCGP AKT. MRCGP results at first attempt for the years 2010 to 2016 by medical school are available at http://www.rcgp.org.uk/training-exams/mrcgp-exams-overview/mrcgp-annual-reports.aspx. Marks are scaled relative to the pass mark, a just-passing candidate scoring zero, and averaged across years (see the sketch after this entry). The AKT is the Applied Knowledge Test, an MCQ assessment. Reliability: .970 (based on values for the years 2010 to 2016).
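The scaling used for MRCGP_AKT, and for the other postgraduate examination measures below, is simple: subtract each sitting's pass mark so that a just-passing candidate scores zero, then average across years. A tiny sketch with invented marks:

```python
import numpy as np

# Invented first-attempt marks and the pass mark for each sitting
marks      = np.array([71.0, 69.5, 74.0])
pass_marks = np.array([66.0, 67.0, 68.0])

# A just-passing candidate scores zero after scaling
scaled = marks - pass_marks
print(scaled, scaled.mean())   # per-year scaled marks and their average
```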

MRCGP_CSA. Average mark at MRCGP CSA. See MRCGP_AKT. Marks are scaled relative to the pass mark, a just-passing candidate scoring zero, and averaged across years. The CSA is the Clinical Skills Assessment, an OSCE-type assessment. Reliability: .919 (based on values for the years 2010 to 2016).

FRCA_Pt1. Average mark at FRCA Part 1. Based on results for the years 1999 to 2008 [16]. Marks are scaled relative to the pass mark, so that a just-passing candidate scores zero. Reliability: n/a.

MRCOG_Pt1. Average mark at MRCOG Part 1. Performance of doctors taking MRCOG between 1998 and 2008 [15]. Marks are scaled relative to the pass mark, so that a just-passing candidate scores zero. Part 1 is a computer-based assessment. Reliability: n/a.

MRCOG_Pt2. Average mark at MRCOG Part 2 written. Performance of doctors taking MRCOG between 1998 and 2008 [15]. Marks are scaled relative to the pass mark, so that a just-passing candidate scores zero. Part 2 consists of a computer-based assessment and an oral; consistent with the measure name, only the written (computer-based) assessment is included here. Reliability: n/a.

MRCP_Pt1. Average mark at MRCP (UK) Part 1. Marks were obtained for doctors taking MRCP (UK) examinations at the first attempt between 2008 and 2016. Marks are scaled relative to the pass mark, so that a just-passing candidate scores zero. Part 1 is an MCQ examination. Reliability: .977 (based on first attempts in the years 2010–2017).

MRCP_Pt2. Average mark at MRCP (UK) Part 2. Marks were obtained for doctors taking MRCP (UK) examinations at the first attempt between 2008 and 2016. Marks are scaled relative to the pass mark, so that a just-passing candidate scores zero. Part 2 is an MCQ examination. Reliability: .941 (based on first attempts in the years 2010–2017).

MRCP_PACES. Average mark at MRCP (UK) PACES. Marks were obtained for doctors taking MRCP (UK) examinations at the first attempt between 2008 and 2016. Marks are scaled relative to the pass mark, so that a just-passing candidate scores zero. PACES is a clinical assessment of physical examination and communication skills. Reliability: .857 (based on first attempts in the years 2010–2017).

Fitness to practise issues

GMC_Sanctions. GMC sanctions. Based on reported FtP problems (erasure, suspension, conditions, undertakings, warnings: ESCUW) from 2008 to 2016, for doctors qualifying since 1990. ESCUW events increase with time after graduation, and medical school differences were therefore obtained from a logistic regression that included year of graduation (see the sketch after this entry). Differences are expressed as the log(odds) of ESCUW relative to the University of London, the largest school. Schools with fewer than 3000 graduates were excluded. Note that although rates of GMC sanctions are regarded here as causally posterior to other events, because of low rates they mostly occur in doctors graduating earlier than those in the majority of other measures. They do, however, correlate highly with ARCP_NonExam rates, which occur in more recent graduates (see below). Reliability: .691 (based on separate ESCUW rates calculated for graduates in the years 1990–1994, 1995–1999, 2000–2004 and 2005–2009; ESCUW rates in graduates from 2010 onwards were too low to show meaningful differences).
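A minimal sketch of the kind of logistic regression described for GMC_Sanctions, fitted with statsmodels on invented doctor-level data; the schools, rates and effects here are all illustrative, not the paper's estimates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "grad_year": rng.integers(1990, 2010, size=n),
    "school": rng.choice(["London", "A", "B"], size=n),
})
# Invented outcome: sanction risk rises with time since graduation
p = 0.01 + 0.001 * (2016 - df["grad_year"])
df["escuw"] = (rng.random(n) < p).astype(int)

# Year of graduation is included as a covariate; the school coefficients
# are then log-odds of ESCUW relative to the London reference school
model = smf.logit(
    "escuw ~ grad_year + C(school, Treatment(reference='London'))", data=df
).fit(disp=0)
print(model.params)
```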

ARCP_NonExam. Non-exam problems at ARCP (Annual Review of Competence Progression) [45] (section 4.33). Based on ARCP and RITA assessments from 2010 to 2014 (Smith D: ARCP outcomes by medical school. London: General Medical Council, unpublished). Doctors have multiple assessments, and the analysis considers the worst of the assessments taken. Assessments can be problematic for exam or non-exam reasons, and only non-exam problems are included in the data. Medical specialties differ in their rates of ARCP problems, and these effects are removed in a multilevel multinomial model before effects are estimated for each medical school (see Table 4 of the same unpublished report). Results are expressed as the log(odds) of a poor outcome (a simplified sketch of this kind of model follows). Reliability: n/a.
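The ARCP analysis is described as a multilevel multinomial model; as a rough illustration of its shape only, the sketch below fits a single-level multinomial logit with specialty entered as a fixed effect (statsmodels does not provide a multilevel multinomial), again on invented data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4000
df = pd.DataFrame({
    "school":    rng.choice(["London", "A", "B"], size=n),
    "specialty": rng.choice(["medicine", "surgery", "gp"], size=n),
})
# Invented worst-ARCP-outcome categories:
# 0 = ok, 1 = exam problem, 2 = non-exam problem
df["outcome"] = rng.choice([0, 1, 2], size=n, p=[0.85, 0.07, 0.08])

# Specialty effects are adjusted for; the school coefficients are log-odds
# of each problem category relative to the "ok" baseline
model = smf.mnlogit("outcome ~ C(school) + C(specialty)", data=df).fit(disp=0)
print(model.params)
```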