Question 1b: Do you have any comments about how the proposed definitions of quality and standards set out in Table 1 of Annex A should be assessed for individual providers?
We note that Paragraph 39 sets out how sector-recognised standards will be used, notably the new sector-recognised standards adopted by UKSCQA in June 2019 for Level 6 programmes. Developing the capability of providers’ staff and external examiners to consistently and reliably meet these standards has been the focus of substantial work by a range of professional, subject and regulatory bodies (PSRBs) working in collaboration with Advance HE. This work is described on the UKSCQA website. The professional development of external examiners and the subject-specific calibration activities (across a range of subjects) developed through this work have relied upon the articulation of academic standards within QAA Subject Benchmark Statements (SBS).
SBSs are the external reference points with the greatest input from employers and non-HEI stakeholders, the very groups the sector is seeking to reassure about standards. We consider them essential benchmark references for the definition and assessment of ‘quality’ and ‘standards’ as articulated in Table 1 of Annex A, notably the requirements under “Course content, structure and delivery” and “Secure standards”.
In addition, SBSs play an integral role in many subject-specific and professional pathway degree accreditation schemes – themselves an independent assurance of quality and standards – and serve an important function in providing a common, subject-specific reference standard collectively agreed upon and accepted by the community. They may also fulfil an important regulatory function by specifying subject-specific standards required for future professional registration/recognition. We note with concern recent comments by QAA representatives about the apparently declining regulatory importance of SBSs (QAA PSRB Forum, December 2020) and a proposal to restrict access to QAA members only, which would reduce transparency around standards and limit the range of audiences able to use them.
Separately, we note a lack of student voice in this proposal, in particular a route for student views on whether courses are delivered “in a way that meets the needs of individual students” and whether “students are effectively engaged in the quality of their educational experience”. The challenges of using National Student Survey (NSS) data for assessments of teaching quality are well documented, but, used in combination with other data and evidence and/or until a replacement mechanism is developed, the NSS may provide an important opportunity for student input.
Question 2a: Do you agree or disagree with the proposed approach to assessing student outcomes set out in Annex B?
Disagree.
We disagree with elements of the proposed approach to assessing student outcomes, notably those relating to the nature of professional and managerial jobs (so-called 'graduate jobs'), the potential for income to be used as a proxy for assessing course/graduate quality, and the lack of attention the metrics give to regional or subject-specific contexts.
We are concerned that measures of 'good outcomes' are linked to measures for 'graduate jobs', a term which is not accurately and exclusively defined by roles with a 'professional and managerial' data classification. Data classification has not kept pace with the mix of skills and knowledge required by many roles, nor the emergence of new fields and job titles. The job market for geographers is complex and diverse, and will be increasingly so as the economy recovers from the impacts of the global pandemic and new labour market opportunities emerge, e.g. in response to climate change and a ‘green recovery’ and path to Net Zero, and in the increasing use of geospatial data.
Metricising the rate of student progression into 'managerial and professional employment' assumes a direct link between teaching quality and employment outcomes, failing to capture the myriad inter-dependent factors – location (and the local economy/labour market), institutional reputation, gender and socio-demographic background, institutional and government policies, and the disciplinary/subject context of a student's programme – that commonly influence employment choices and outcomes. We are concerned that setting a baseline without consideration of such factors could have a punitive effect for some institutions or subjects, one likely to be magnified by the unequal regional and economic impacts of the Covid-19 pandemic.
The acquisition of skills, knowledge and understanding over the course of a degree extends well beyond a snapshot of employment status. Providers should not be deterred from recruiting students onto programmes that enhance social and cultural value rather than future earning power. Positive outcomes for both individuals and broader society are much more extensive than those measured by paid employment (e.g. unpaid or voluntary work, time overseas) and far more difficult to capture in a quantitative measure.
In addition, employers are a heterogeneous group and their needs are diverse. Some students will pursue graduate careers ‘in’ their disciplines, others will draw on their transferable skills and find employment in a broad range of sectors and roles; many make substantial use of the knowledge and skills gained from their course to benefit society and the economy over a longer period of time than Graduate Outcomes surveys and Longitudinal Educational Outcomes analysis would allow for.
We welcome attention to provider student admissions arrangements and student cohort profiles. Over-valuing employment outcomes may cause providers to take a risk-averse approach to admissions, prioritising those applicants and courses which have the greatest likelihood of future success and reinforcing key structural issues and inequalities in both subject/course availability and employment outcomes.
Question 2b: Are there any other quantitative measures of student outcomes that we should consider in addition to continuation, completion and progression (see Annex B paragraph 18)?
We consider current quantitative measures of student outcomes and 'graduate job' destinations to be highly problematic. We believe the use of data on graduate salaries would further magnify these problems and object to its inclusion as an additional measure. We believe geography graduates contribute to civic society in a range of economic and non-economic ways, and we consider the wellbeing and happiness of graduates – with the outcomes they have achieved and the contribution they are making – to be equally important.
Question 2d: Do you have any comments about an appropriate balance between the volume and complexity of indicators and a method that allows us to identify ‘pockets’ of performance that are below a numerical baseline?
The proposed metrics draw upon well-established definitions within higher education datasets, making it likely that the baseline indicators will be appropriately sensitive to level of study and other student and course characteristics. However, we note from our own experience with data for geography that the need to dig deeper into the data for improved granularity may result in sample sizes that are too small to be statistically valid, or that prevent transparency in reporting and discussion of the findings. The subject-coding data structure does not currently accommodate truly interdisciplinary courses, for example, yet these are increasingly needed to develop the skills and knowledge required to resolve complex future challenges (see British Academy (2016) Crossing Paths: Interdisciplinary Institutions, Careers, Education and Applications).
Question 2e: Do you agree or disagree with the demographic characteristics we propose to use (see Annex B paragraph 36)? Are there further demographic characteristics which we should consider including in the list of ‘split indicators’?
Neither.
We agree with the demographic characteristics as proposed, as these are well established within higher education datasets. However, we question how useful they will be in aggregated analyses of individual providers against a sector-wide baseline without subject or regional benchmarking.
We also note that the proposed use of POLAR as the sole measure of participation rates may be problematic, particularly given the lack of granularity available for more densely populated areas which could lead to misleading assumptions regarding student characteristics. Use of POLAR as one of several measures may help to address this issue, with UCAS multiple equality measure data and TUNDRA potentially able to help provide a more accurate picture of participation and wider student characteristics. TUNDRA’s linked data from KS4 to HE may be especially useful in this regard.
Question 2f: Do you agree or disagree that the longitudinal educational outcomes dataset should be used to provide further indicators in relation to graduate outcomes (see Annex B paragraph 46)?
Disagree.
We strongly disagree with the use of the Longitudinal Educational Outcomes dataset. Using graduate earnings as an indicator of degree standards or quality (within a broader regulatory approach) assumes that the only benefits from a university education are economic, via achieved salary. It ignores the wider social, cultural and other benefits for both the individual, their communities and civic society. We refer to our comments above on this topic.
Past performance of graduates is not a good indicator of future graduate outcomes, and both salaries and the availability of roles fluctuate year on year, especially within different regional economies and labour markets. The LEO data is backward-looking: it does not control for differences in regional employment or between part-time and full-time employment, does not adequately cover those in self-employment, and does not include those who work abroad. Its use will especially disadvantage providers who have specialist courses that are specific to regional economic needs or small sectors.
(See Universities UK (2019) The uses and limits of Longitudinal Education Outcomes (LEO) data.)
Question 2g: Do you have any comments about how the range of sector-level performance should be taken into account in setting numerical baselines (see Annex B paragraph 57)?
We note that the Office for Students recognises that the use of single sector-wide baselines may lead providers to change their behaviour in relation to risk and regulatory monitoring. We consider this to be especially the case for providers who recruit more students from under-represented groups and/or offer specialist and small courses. We urge the Office for Students to give attention to providers who serve these groups of students and courses, to ensure that the vibrancy and diversity of the UK’s higher education offering is maintained. The current funding and regulatory environment, including that proposed here, would disincentivise providers from maintaining subjects which are strategically important but which do not attract high levels of student demand or deliver “professional and managerial” student outcomes. Moreover, the lack of a long-term strategic approach to monitoring course (and provider) closures could have a detrimental impact on UK HE overall, notably in subject-specific and geographical availability.
Subject-specific graduate outcomes may also be strongly linked to the relative economic health of specific sectors recruiting students and to the demand of regional economies and labour markets. This is also linked to the mobility of students, in terms of both choosing where they go to university, and then in choosing where they pursue employment (and the relative freedom and flexibility they have in making those choices which may be influenced by other characteristics such as age, caring responsibilities, socio-economic background, etc.). These are particularly subject to year-on-year fluctuations and may have a disproportionate impact on some providers’ graduate employment outcomes. We suggest that the selection of any single sector-wide baseline that attempts to address the bottom x% should allow for significant regional or subject/sector-specific variations.
Question 2h: Do you have any comments about the other contextual factors that should be taken into account and the weight that should be placed on them (see Annex B paragraph 68)?
We encourage the Office for Students to take a broad range of contextual factors into account, including regional economies and labour markets and subject/sector-specific fluctuations in demand, which might be short-term in nature and driven by government policies or other external factors. In addition, the overall economic situation and the specific (and often regionally or demographically uneven) impact of economic shocks – such as that from Covid-19 – will have consequences for a provider’s performance and student outcomes.
Question 8: Do you have any other comments?
We note that the UK Quality Code for Higher Education, which sets out many of the standards around academic course delivery, has been adopted by providers on a cross-UK basis, but this proposal around regulating quality and standards is specifically for providers in England. We ask the OfS to be attentive to the impact this may have for the ways in which sector-wide benchmarks and codes are used and maintained for a range of potentially diverging regulatory applications.