1. Do you agree with the criteria proposed in Figure 4?
No. We support the framework for assessment that recognises the diversity of student needs and provision and that enables various forms of teaching and learning excellence to be identified – one size does not fit all.
We also support the idea that assessment will be holistic, will be undertaken by expert peer review panels, and will also consider the learning environment and student outcomes.
Effective metrics must be valid, robust, comprehensive, reliable, and current. We do not believe there is a quantitative metric that can adequately capture teaching quality across the great diversity of teaching and learning approaches and environments found in universities. Metrics need to recognise differences in students - their backgrounds, experience, expectations and desired outcomes from higher education. Individual students will value different aspects of their degree experience – face-to-face contact hours, strong employability-focus, proximity to the best researchers, library facilities, etc. This may also vary between disciplines. The contextual/benchmark data for panels will be critical if metrics are used.
It is essential that the purpose is clear and that the metrics suit that purpose. Currently, it is not clear whether the metrics proposed have been selected to (a) inform student choice, (b) identify poor provision (or thresholds of provision), (c) allocate funds, or (d) enable the generation of ‘league tables’. Furthermore, metrics must relate to periods of time appropriate to the purpose.
In terms of measures of teaching quality, the core metrics proposed based on the NSS questions are not fit for purpose, for reasons of validity and reliability. Teaching quality and student satisfaction are different things, and at best tangentially associated. Furthermore, the metrics as they stand show little variation and do not differentiate between the vast majority of universities. We refer to the analyses of the NSS (by the ONS, by Marsh and Cheng, by HEFCE and by Surridge) cited by the Royal Statistical Society in their response.
There is a danger that measures such as student satisfaction will discourage innovation and drive behaviour, for example by inhibiting or limiting the provision of certain types of modules, especially those that challenge preconceptions or are in any way ‘non-standard’. In geography, for example, data skills and other methods training often, though not always, receive lower satisfaction ratings; however, they are of high value to employers and future career prospects.
Furthermore, a very important element of teaching and learning in higher education, particularly in the social sciences, is challenging students and exposing them to alternative perspectives and different ways of thinking about the world. This involves using a diverse range of teaching practices (seminars, labs, field courses etc.). This can unsettle students: there are no ‘right’ answers, and students are expected to be active participants in their learning. The learning experience may be as important as the learning outcome. Student satisfaction metrics collected soon after graduation do not always reflect the value of these experiences. Students do revise their understanding of the relevance and value of the content of their degrees, but only some time post-graduation/in employment. Capturing such perspectives would be helpful – thus reinforcing the point about the timing of data capture.
Contact hours per se are not a helpful indicator of teaching quality as defined by the criteria. We caution against this being used as a metric. However, if it is used, the time involved in fieldwork, independent research and study must be fully quantified and included. In programmes such as geography these intense experiences of learning are critical to learning outcomes. The time and role of all those supporting and facilitating the teaching and learning experience, regardless of contract type (e.g. teaching assistants who facilitate small group teaching in the lab, field tutorials etc.; technicians in the lab and field), must be considered and valued.
Student outcomes and learning gain. Changes in GPA over a programme of learning are not necessarily evidence of a student’s acquisition of skills, knowledge or understanding. While there is ongoing research on this, current approaches are not valid or reliable.
Paragraph 101, on page 28 is problematic and might have adverse consequences. Different disciplines have quite different approaches to teaching and learning and asking assessors to ‘avoid focusing on successful but localised practices’ may work against particularly effective practices related to, for example, fieldwork, data skills, independent research.
Professional bodies and learned societies have a key role to play in professional development, training and accreditation of those teaching in universities and of the courses delivered. This needs to be recognised.
Looking forward, careful attention needs to be directed to interdisciplinary, as well as disciplinary, teaching and learning, and these needs should be embedded into TEF. In this context we draw attention to the British Academy project (www.britac.ac.uk/interdisciplinarity).
We ask that a full assessment of the resources (institutional and central) required to deliver on TEF is built into the process from the outset and critically evaluated. This is particularly important in this period of significant and profound change in higher education as the UK transitions to a new relationship with the European Union.
2a. How should we include a highly skilled employment metric as part of the TEF?
2b. If included as a core metric, should we adopt employment in Standard Occupational Classification (SOC) groups 1-3 as a measure of graduates entering highly skilled jobs?
No. A highly skilled employment metric needs to be rethought. It assumes a direct link between teaching quality and employment outcomes, and does not capture the myriad interdependent factors (locational, institutional, socio-demographic, disciplinary etc.) that influence employment choices and outcomes.
HEIs should not be perversely deterred from recruiting students onto programmes with social value (as opposed to earning power). Positive outcomes are much broader than paid employment (e.g. unpaid or voluntary work, time overseas).
Employers are a heterogeneous group and their needs are diverse. The employment destinations of graduates are also diverse. For disciplines such as geography, given the variety of career paths and outcomes, identifying a highly skilled employment metric (or metrics) would be particularly difficult. Some students will pursue graduate careers ‘in’ their disciplines; others will draw on their transferable skills and find employment in a broad range of sectors and roles. Very careful attention needs to be given to engagement with employers (inclusive of large organisations and SMEs) and to the metrics used to document the quality and success of graduates from their perspectives.
Employment destinations within a short period of graduation are a poor guide to later career progress. If employment destination is to be pursued, research needs to inform an understanding of the time required post-graduation for students to enter such highly skilled employment, which will vary by discipline. The current DLHE survey measures employment outcomes too early and does not adequately recognise the preferences/choices of graduates.
3a. Do you agree with the proposed approach for setting benchmarks?
Benchmarking (paragraphs 75-77) is critical and needs to be better articulated. Very careful attention is needed to appropriately reflect the student body, institutions, disciplines and their contexts.