Methodology Behind The Guardian University Guide 2021

To create a league table for 54 subjects, we assess the performance of each provider using nine measures that cover all stages of the student life cycle. We treat each subject provider as a department and ask them to inform us which of their students count within each department. Our goal is to demonstrate how each department is likely to offer a positive overall experience to future students. To achieve this, we examine how past students have performed within the department.

Our assessment of each department’s performance involves quantifying the resources and staff contact that have been dedicated to past students. We also consider the standards of entry and the likelihood that students will receive support to continue their studies. We then evaluate student satisfaction levels, the extent to which they exceed expectations, and the likelihood of positive outcomes after completing the course. By combining these measures, we generate an overall score for each department and rank them accordingly.

To ensure comparability, we only use data from full-time first-degree students. For prospective undergraduates who are unsure which subject they want to study, we also generate an institution-level table by averaging the Guardian scores across all subjects.

Numerous changes have been introduced for the 2021 rankings. One of the most significant changes is the use of the Graduate Outcomes survey to derive our career prospects score. We still value graduates entering professional, managerial and technical occupations or entering HE or professional further study, but we now use a central agency to contact graduates 15 months after graduation instead of relying on the DLHE survey, which was conducted by institutions contacting their own students 6 months after graduation. Data protection rules mean that we only show positive graduate outcomes for departments with at least 22.5 respondents.

We have also used data from the 2020 National Student Survey to gauge student satisfaction levels. If the number of 2020 respondents was under 23 but the total across 2019 and 2020 was above 23, we typically used the aggregated results. If the results were inconsistent, we excluded information that gave an unreliable impression of future student satisfaction levels.
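
A minimal sketch of that rule is shown below in Python. The function name is made up for illustration, the decision to exclude a department when neither threshold is met is an assumption, and the exclusion of inconsistent results is not modelled.

```python
def choose_nss_results(resp_2020: int, resp_2019_2020: int,
                       results_2020: float, results_aggregated: float) -> float | None:
    """Pick which NSS results to use for a department, per the rule above."""
    if resp_2020 >= 23:
        return results_2020          # the 2020 survey alone has enough respondents
    if resp_2019_2020 > 23:
        return results_aggregated    # fall back to the 2019 + 2020 aggregate
    return None                      # too few respondents either way: exclude
```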

We standardize each metric in relation to the distribution of scores within the subject, but if the number of departments with a valid metric score has fallen below nine, we standardize the metric in relation to a broader subject group.
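
As a rough illustration, the sketch below standardizes a metric as a z-score against the subject's own distribution, falling back to a broader subject group when fewer than nine departments have a valid score. The z-score form and the function and variable names are assumptions; the published methodology does not specify the exact transformation.

```python
import statistics

def standardize(subject_scores: dict[str, float],
                broader_group_scores: dict[str, float]) -> dict[str, float]:
    """Standardize each department's metric score against the subject's own
    distribution, or against a broader subject group when fewer than nine
    departments have a valid score."""
    reference = subject_scores if len(subject_scores) >= 9 else broader_group_scores
    mean = statistics.mean(reference.values())
    sd = statistics.pstdev(reference.values()) or 1.0  # guard against zero spread
    return {dept: (score - mean) / sd for dept, score in subject_scores.items()}
```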

Student-staff ratios (SSR) carry a weight of 15% in a department's total score and are reported at the HESA cost centre level. Each cost centre is mapped to one or more subjects, and the metric estimates the amount of staff contact a student can expect. The ratio is calculated by dividing the number of students enrolled in a subject by the number of teaching staff available, so a low ratio indicates that students can expect more interaction with staff.

For the SSR calculation, both students and academic staff are counted on a full-time equivalent (FTE) basis. Research-only staff are excluded from the staff volume, and students on placement or enrolled in courses franchised to other institutions are discounted. The ratio is calculated only where there are at least 28 student FTE and three staff FTE in the 2018-19 data. For smaller departments with at least seven student FTE and two staff FTE in 2018-19, and a minimum of 30 student FTE in total across the last two years, a two-year average is calculated.
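
The sketch below applies those thresholds. Whether the two-year average pools FTE across years or averages the two ratios, and that the second year is 2017-18, are assumptions made for illustration.

```python
def ssr(student_fte: float, staff_fte: float) -> float | None:
    """Students per FTE member of teaching staff."""
    return student_fte / staff_fte if staff_fte > 0 else None

def department_ssr(students_2018_19: float, staff_2018_19: float,
                   students_2017_18: float, staff_2017_18: float) -> float | None:
    """Apply the size thresholds described above before reporting an SSR."""
    if students_2018_19 >= 28 and staff_2018_19 >= 3:
        return ssr(students_2018_19, staff_2018_19)
    # Smaller departments: pool two years of FTE if the minimum sizes are met.
    if (students_2018_19 >= 7 and staff_2018_19 >= 2
            and students_2018_19 + students_2017_18 >= 30):
        return ssr(students_2018_19 + students_2017_18,
                   staff_2018_19 + staff_2017_18)
    return None
```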

Another critical parameter in assessing departments’ performance is expenditure per student. To gauge whether students receive adequate resources, the level of expenditure in each subject area is divided by the number of students taking the subject. Academics’ salary costs are excluded, since staffing is already captured in the SSR metric. To this figure is added the amount spent per student, over the last two years, on academic services such as libraries and computing facilities. The metric is expressed as points out of 10 and contributes 5% to the department’s total score.
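
A hedged sketch of the calculation follows, assuming the academic-services figure is already expressed per student and averaged over two years; how the result is then banded into points out of 10 is not specified in the text, so that step is omitted.

```python
def expenditure_per_student(subject_spend: float, student_fte: float,
                            academic_services_spend_per_student: float) -> float:
    """Subject spend per student plus per-student spend on academic services
    (libraries, computing); academics' salary costs are excluded upstream."""
    return subject_spend / student_fte + academic_services_spend_per_student
```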

Continuation attempts to assess how well each department supports its students and promotes engagement with education. It measures the proportion of students who continue their studies beyond their first year, taking entry qualifications into account. The metric is calculated by identifying all first-year students on full-time courses longer than one year and observing their activity on 1 December of the following academic year. Only students who are no longer active anywhere within the UK higher education system are counted negatively.

An index score, designed to take entry qualifications into account, is then created, giving credit for each student who has a positive outcome, and is capped at 97%. For non-medical departments, a minimum of 35 entrants is required in the most recent cohort and 65 across two to three years. The index score, averaged over two to three years, carries a weight of 10%, while the percentage of students continuing is the figure displayed.
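
The sketch below illustrates one way such an entry-qualification-adjusted index could be formed: each entrant carries a hypothetical expected probability of continuing (capped at 97%) and a flag for whether they actually continued, and the index compares the actual count with the expected count. The field names and the exact formula are assumptions, not the Guardian's published calculation.

```python
def continuation_index(entrants: list[dict]) -> float | None:
    """Entry-qualification-adjusted continuation index (illustrative only).

    Each entrant dict is assumed to hold:
      "expected"  - probability of continuing given entry qualifications
      "continued" - whether the student was still active a year later
    """
    if len(entrants) < 35:                         # minimum recent cohort (non-medical)
        return None
    expected = sum(min(e["expected"], 0.97) for e in entrants)
    actual = sum(1 for e in entrants if e["continued"])
    return 100 * actual / expected if expected else None
```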

The National Student Survey seeks to determine how students feel about their academic experience, the support they received, and other aspects of their course. Two statistics are produced from its 5-point Likert scale, which runs from strongly disagree to strongly agree: the satisfaction rate and the average response. To assess the quality of teaching, we aggregate responses to questions about staff ability to explain concepts, make the subject interesting, provide intellectual stimulation, and challenge students. To gauge satisfaction with feedback, we aggregate responses to questions covering, among other things, the criteria used in marking and the clarity of marking. The teaching and feedback measures each carry a weight of 10%.

For example, students are asked whether feedback on their work has been timely and whether the comments they have received on it have been helpful.

To evaluate overall satisfaction with courses, we combine responses from the 2019 and 2020 NSS surveys to the statement “Overall, I am satisfied with the quality of the course”. The average response to this statement is displayed for each provider and carries a weight of 5%.
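
As an illustration of the two headline statistics, the sketch below computes a satisfaction rate and an average response from 5-point Likert answers, assuming that “satisfied” means a response of agree or strongly agree (4 or 5); that reading of the threshold is an assumption.

```python
def nss_statistics(responses: list[int]) -> tuple[float, float]:
    """Satisfaction rate (%) and average response from 5-point Likert answers,
    where 1 = strongly disagree and 5 = strongly agree."""
    satisfied = sum(1 for r in responses if r >= 4)   # agree or strongly agree
    rate = 100 * satisfied / len(responses)
    average = sum(responses) / len(responses)
    return rate, average
```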

In order to obtain value-added scores for each department, we track the progress of each student from the time of enrolment to graduation. These scores take into account the qualifications that the students start with and report on how well they have exceeded expectations.

We give each full-time student a probability of earning a first or a 2:1, based either on their entry qualifications or on the total percentage of good degrees expected in their department. Students who achieve a good degree score points that reflect how difficult it was expected to be, and this contributes 15% to a department’s total score. A meaningful value-added score is calculated only when there are at least 30 students in a subject.
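
The sketch below captures the idea, under the assumption that a student's points are inversely related to their probability of a good degree; the real scoring scale is not published here, so the 1/probability form and the field names are purely illustrative.

```python
def value_added(students: list[dict]) -> float | None:
    """Illustrative value-added score.

    Each student dict is assumed to hold:
      "prob_good" - probability of a first or 2:1 given entry qualifications
      "got_good"  - whether the student actually achieved one
    """
    if len(students) < 30:                  # minimum cohort for a meaningful score
        return None
    points = [1 / s["prob_good"] if s["got_good"] else 0.0 for s in students]
    return sum(points) / len(points)
```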

The Graduate Outcomes survey for the 2017-18 graduating cohort is used to assess career prospects. Graduates who enter graduate-level occupations, or who go on to further study at a professional or HE level, whether they have completed that course or are still undertaking it, are counted as positive outcomes. This metric is worth 15% of the total score for non-medical subjects.

We only include departments with enough data to support a ranking: a minimum of 35 full-time first-degree students and at least 25 students counted in the relevant cost centre. The combined weighting of any missing indicators must not exceed 40%. We do not mix results across years, to avoid conflating the effects of different national economic environments on employment.
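
Those thresholds read naturally as a simple test; the sketch below expresses them in Python, with the function and argument names assumed for illustration.

```python
def include_in_subject_table(first_degree_students: int,
                             cost_centre_students: int,
                             missing_indicator_weight: float) -> bool:
    """Apply the minimum-data thresholds described above."""
    return (first_degree_students >= 35
            and cost_centre_students >= 25
            and missing_indicator_weight <= 0.40)
```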

Although nothing is displayed for a missing indicator, we still need to fill the gap it leaves in the overall score. To do this, we use a substitution method that first looks for the corresponding standardized score from the previous year. If this is not available, we then analyze whether the missing metric is correlated with general performance within that particular subject. If it is, we assume that the department would have performed as well in this missing metric as it did in everything else. If it is not correlated, we use the average score achieved by other providers of the subject.
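
That substitution order can be sketched as a simple fallback chain; the names below are hypothetical placeholders rather than the Guardian's own terms.

```python
def substitute_missing_score(dept: str, metric: str,
                             previous_year: dict[tuple[str, str], float],
                             metric_tracks_overall_performance: bool,
                             dept_average_other_metrics: float,
                             subject_average: float) -> float:
    """Fill a missing standardized score using the fallback order above."""
    if (dept, metric) in previous_year:
        return previous_year[(dept, metric)]      # 1. last year's standardized score
    if metric_tracks_overall_performance:
        return dept_average_other_metrics         # 2. assume typical of the department
    return subject_average                        # 3. average of the subject's other providers
```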

Using the weighting attached to each metric, the standardized scores are weighted and totalled to give an overall departmental score, which is then rescaled to a maximum of 100. This score is used to rank the departments.
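
Below is a minimal sketch of the weighting-and-rescaling step, assuming a simple min-max rescale to 0-100; the text says only that scores are rescaled to 100, so the exact rescaling is an assumption.

```python
def department_totals(standardized: dict[str, dict[str, float]],
                      weights: dict[str, float]) -> dict[str, float]:
    """Weight and total each department's standardized metric scores,
    then rescale so results run from 0 to 100."""
    totals = {dept: sum(weights[m] * v for m, v in metrics.items())
              for dept, metrics in standardized.items()}
    lo, hi = min(totals.values()), max(totals.values())
    span = (hi - lo) or 1.0                       # guard against identical totals
    return {dept: 100 * (t - lo) / span for dept, t in totals.items()}
```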

The institutional ranking takes two other factors into account. First, the number of students in a department influences the extent to which that department’s total standardized score contributes to the institution’s overall score. Second, the number of institutions included in the subject table determines the degree to which a department can affect the institutional table.

Each institution has an overall version of each performance indicator displayed next to its overall score out of 100. These indicators are crude institutional averages and are not connected to the tables or subject mix, and cannot be used to calculate the overall score or ranking position.

The value-added and expenditure-per-student indicators are treated differently, because they need to be converted into points out of 10 before being displayed. These indicators are read from the subject-level tables and combined into a weighted average using student numbers.
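
A sketch of that student-weighted averaging follows, assuming each subject contributes a (points out of 10, student count) pair.

```python
def institutional_indicator(subject_values: list[tuple[float, int]]) -> float:
    """Student-weighted average of a subject-level indicator (points out of 10)
    across all of an institution's departments."""
    total_students = sum(n for _, n in subject_values)
    return sum(points * n for points, n in subject_values) / total_students
```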

Institutions that appear in fewer than eight subject tables are not included in the main university ranking.

The courses listed under each department in each subject group come from the KIS database of courses, to which institutions provide regular updates describing the courses that students can apply for in future years. We associate each full-time course with one or more subject groups based on the subject data attached to the courses. Institutions are free to adjust these associations and to change details of the courses. We include courses that are not at degree level, though such provision is excluded from the data used to generate scores and rankings. Because of the timing of publication, this data will be updated in September.

Author

Owen Griffiths, 35, is a blogger and teacher. He has written about education for over 10 years and has a passion for helping others learn.