Ben Williamson
Higher education increasingly depends on complex data systems to process and visualize performance metrics. Image by Dennis van Zuijlekom
Contemporary culture is increasingly defined by metrics. Measures of quantitative assessment, evaluation, performance, and comparison infuse public services, commercial companies, social media, sport, entertainment, and even human bodies as people increasingly quantify themselves with wearable biometrics. Higher education is no stranger to metrics, though they are set to intensify further in coming years under the Higher Education and Research Act. The measurement, comparison, and evaluation of the performance of institutions, staff, students, and the sector as a whole is expanding rapidly with the emergence and evolution of ‘the data-intensive university’.
This post continues a series on the expanding data infrastructure of HE in the UK, part of ongoing research charting the actors, policies, technologies, funding arrangements, discourses, and metrological practices involved in contemporary HE reforms. Current reforms of HE are the result of ‘fast policy’ processes involving ‘sprawling networks of human and nonhuman actors’, networks that here comprise human data analytics experts and complex nonhuman systems of measurement. Only by identifying and understanding these mobile policy networks and the ‘metrological machinery’ of their HE data projects is it possible to apprehend adequately how macro-forces of governmental reform are being operationalized, enacted, and accomplished in practice.
Metrological machinery
The collection and use of UK university performance data has expanded and mutated dramatically in scope over the last two decades. The metrification of HE through the ‘evaluation machinery’ of research assessment exercises, teaching evaluation frameworks, impact measurements, student satisfaction ratings, and so on, is frequently viewed as part of an ongoing process of neoliberalization and marketization of the sector. One particularly polemical critique describes a ‘pathological organizational dysfunction’ whereby neoliberal priorities and corporate models of marketization, competition, audit culture, and metrification have combined to produce ‘the toxic university’.
The narrative is that HE has been made to resemble a market in which institutions, staff and students are all positioned competitively, with measurement techniques required to assess, compare and rank their various performances. It is a compelling if unsettling narrative. But if we really want to understand the metrification, marketization, and neoliberalization of HE, then we need to train the analytical gaze more closely on the specific and ever-mutating metrological mechanisms by which these changes are being made to happen.
In previous posts I examined the market-making role of the edu-business Pearson, and the ways the Office for Students (OfS), the HE market regulator, and the Higher Education Statistics Agency (HESA), its designated data body, intend to use student data to compare sector performance. Together, these organizations and their networks are building a complex and evolving data infrastructure that will cement metrics ever more firmly into HE, while opening up the sector to a new marketplace of technical providers of data analysis, performance measurement, comparison and evaluation.
Political demands to make HE more data-driven have opened up a marketplace for providers of digital technologies. Image by Eduventures
In this update I continue unpacking this data infrastructure by focusing on the Quality Assurance Agency for Higher Education (QAA) and the Joint Information Systems Committee (Jisc). Both, along with HESA, are engaged in significant metrological work in HE. In fact, HESA, QAA and Jisc together constitute the M5 Group of agencies—‘UK higher education’s data, quality and digital experts’—formed in 2016 and named after their collective proximity to the M5 motorway in southwest England. The three agencies also co-organize and run the annual Data Matters conference for HE data practitioners, quality professionals and digital service specialists.
To approach these organizations, the concept of ‘metric power’ from David Beer provides a useful framing. Drawing on key theorists of statistics (Desrosières, Espeland, Foucault, Hacking, Porter, Rose etc), metric power traces the intensification of measurement over the last two centuries through to the current mobilization of digital or ‘big’ data across diverse domains of society. Central to metric power is the close alignment of metrics with neoliberal governance. Following the lead of Foucault and others in defining neoliberalism as the ‘generalization of competition’ and the extension of the ‘model of the market’ to diverse social domains, Beer argues that ‘put simply, competition and markets require metrics’ because ‘measurement is needed for the differentiations required by competition’.
The concept of metric power, then, is potentially a useful way to approach the metrification of higher education and to explore how far it represents processes of neoliberalization and marketization. By examining the recent projects and future aspirations of agencies such as Jisc and the QAA we can develop a better understanding of how a form of metric power is transforming the sector. To be clear at this point, there is nothing to suggest that either the QAA or Jisc is run by neoliberal ideologues–something more subtle is happening. The point is that both organizations, along with HESA and the OfS, are pursuing projects which potentially reinforce neoliberalizing processes by expanding the data infrastructures of HE measurement. They are ‘fast policy’ nodes in the mobile policy networks enacting the metrological machinery of HE reform.
QAA—sentimental evidence
The QAA is the sector agency ‘entrusted with monitoring and advising on standards and quality in UK higher education’. It maintains the UK Quality Code for Higher Education used for quality assessment reviews, as well as the Subject Benchmark Statements describing the academic standards expected of graduates in specific subject areas. QAA also undertakes in-house research and produces policy briefings.
One of its major strands of activity, via the QAA Scotland office, is the ‘Evidence for Enhancement’ Theme, focusing on ‘the information (or evidence) used to identify, prioritise, evaluate and report’ on student satisfaction. Its priorities are:
- Optimising the use of existing evidence: supporting staff and students to use and interpret data and identifying data that will help the sector to understand its strengths and challenges better
- Student engagement: understanding and using the student voice, and considering concepts where there is no readily available data, such as student community, identity and belonging
- Student demographics, retention and attainment: using learning analytics to support student success, and supporting institutions to understand the links between changing demographics, retention, progression and attainment including the ways these are reported
The Evidence for Enhancement program is unfolding collaboratively across all Scottish HE providers and is intended to result in sector-wide improvements in data use related to student satisfaction.
More experimentally, the QAA released a 2018 study into student satisfaction using data scraped from social media. The student sentiment scraping study, entitled The Wisdom of Students: Monitoring quality through student reviews, analysed a large sample of over 200,000 student reviews of higher education provision to produce a ‘collective-judgment’ score for each provider. These scores were then compared with other sources such as TEF and NSS results, and found to have a strong positive association. Crowdsourced big data from the web, the study suggested, could serve as a metric of student experience as reliable as large-scale student surveys and bureaucratic quality assessment exercises.
The QAA project is a clear example of how big data methodologies of sentiment analysis and discovery, using machine learning and web-scraping, are being explored for HE insights. For the QAA, such an approach is necessary because, as the sector has become more marketized and focused on the experience of the student in a ‘consumer-led system’ regulated by the ‘data-driven’ Office for Students, there has been ‘a gradual reduction in the remit of QAA in assessing and assuring teaching and learning quality in providers, and the rise in the perception of student experience and employment outcomes data as more accurately indicating excellence in higher education provision’. As such, measuring student experience in a timely, low-burden and cost-effective fashion has become a new policy priority, while existing instruments such as the TEF and NSS remain annual, retrospective, and potentially open to ‘gaming’ by institutions.
In contrast, collecting ‘unsolicited student feedback’ from reviews on social media platforms is seen by the QAA as a way of ‘securing timely, robust, low-burden and insightful data’ about student experience. In particular, the study involved collecting student reviews from Facebook, Whatuni.com and Studentcrowd.com, with Twitter data to be included in future research. The study authors found that 365 HE providers have Facebook pages with the ‘reviews’ function available, as well as many pages relating to departments, schools, institutes, faculties, students’ unions, and career services.
Perhaps most significantly, given the constraints of TEF and NSS, the scraping methodology allowed the QAA to produce collective-judgment scores for each provider on any given day. In other words, it allowed the student experience to be calculated as time-series data, opening up the possibility of ‘near real-time’ monitoring of provider performance in delivering a positive student experience, which providers could then use to identify where action is needed. The advantages of the approach, according to the QAA, are that it makes year-round real-time feedback feasible ‘based on what students deem to be important to them, rather than on what the creator of surveys or evaluation forms would like to know about’; reduces the data-collection burden; minimizes providers’ ‘opportunities to influence or sanitise the feedback’; and opens up ‘the ability to explore sector-wide issues, such as feedback relating to free speech, contact hours, or vice-chancellor pay’.
In sum, the report concludes, ‘the timely and reliable extraction of the student collective-judgement is an important method to facilitate quality improvement in higher education’. The QAA intends to pilot the methodology with ten HE providers late in 2018.
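The report does not disclose its technical pipeline, but the basic move it describes, from individual scraped review texts to an aggregate per-provider, per-day score, can be sketched in a few lines. The sketch below is illustrative only: it assumes the reviews have already been scraped into a table, and it stands in for the QAA’s unpublished sentiment model with the off-the-shelf VADER analyser (the vaderSentiment Python package).

```python
# Illustrative sketch: from scraped review texts to a daily
# 'collective-judgment' time series. Data and pipeline are invented;
# this is not the QAA's actual method.
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Assume reviews were already scraped into rows of (provider, date, text).
reviews = pd.DataFrame([
    {"provider": "University A", "date": "2018-09-01",
     "text": "Great lectures and very supportive staff."},
    {"provider": "University A", "date": "2018-09-01",
     "text": "Too few contact hours and slow feedback."},
    {"provider": "University B", "date": "2018-09-01",
     "text": "Brilliant campus, helpful tutors."},
])

# VADER's 'compound' score runs from -1 (most negative) to +1 (most positive).
reviews["sentiment"] = reviews["text"].apply(
    lambda t: analyzer.polarity_scores(t)["compound"]
)

# Averaging per provider per day yields the kind of time series that
# could, in principle, be monitored in near real-time.
daily_score = (
    reviews.groupby(["provider", "date"])["sentiment"]
    .mean()
    .rename("collective_judgment")
)
print(daily_score)
```

In a production pipeline the provider-day averages would presumably be weighted, smoothed and validated against instruments such as the NSS, but the aggregation logic, from individual texts to a rolling per-provider score, is the same.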
The QAA concern with sentiment analysis of student experience needs to be understood not just as an artefact of HE marketization and consumerization, but as part of a wider turn to ‘feelings’, ‘sensation’ and ‘emotion’ in contemporary metric cultures. As William Davies notes, ‘Emotions can now be captured and algorithmically analysed (“sentiment analysis”) thanks to the behavioural data that digital technologies collect’, and these data are increasingly of interest as sources of intelligence to be harnessed for political purposes by authorities such as government departments or agencies. Scraping student sentiment from social media replicates the logic of psychological and behavioural forms of governance within HE, and has the potential to make the sector ever more responsive to real-time traces of the student body’s emotional pulse.
The QAA-led Provider Healthcheck Dashboard allows institutions to monitor and compare their performance through data visualizations. Image from HESA
The medicalized metaphor of tracing pulses can be carried further in relation to another QAA project. In collaboration with its M5 Group partners Jisc and HESA, QAA led the production of a data visualization package called the ‘Provider Healthcheck Dashboard’. The purpose of the tool is to allow providers to perform ‘in-house healthchecks’ by comparing their institutional performance, across many metrics, against competitors. The metrics used in the Healthcheck dashboard include TEF ratings, QAA quality measurements, NSS scores, Guardian league table positions, percentages of students awarded 1st or 2:1 degrees, and graduate employment performance over five years.
These metrics are presented on the dashboard as if they constitute the ‘vital signs’ of a provider’s medical health, with comparisons against performance norms depicted visually as percentage differences from benchmarks. The provider healthcheck acts as a kind of medical read-out of the competitive health of an institution, demonstrating in a visual, easy-to-read format how an individual provider is situated in the wider market, and catalyzing relevant treatments to strengthen its performance.
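Computationally, the ‘vital signs’ display reduces to expressing each metric as a percentage difference from a sector benchmark. A minimal sketch, with invented metric names and values (the dashboard’s actual benchmarking rules are not published in this detail):

```python
# Minimal sketch of the benchmark comparison a 'healthcheck' dashboard
# visualizes. All figures are invented for illustration.
import pandas as pd

provider = pd.Series({
    "NSS satisfaction (%)": 83.0,
    "1st/2:1 degrees (%)": 76.0,
    "graduate employment (%)": 71.0,
})
benchmark = pd.Series({
    "NSS satisfaction (%)": 85.0,
    "1st/2:1 degrees (%)": 73.0,
    "graduate employment (%)": 74.0,
})

# Percentage difference from benchmark: the 'vital sign' read-out.
# Negative values flag metrics where the provider trails the sector norm.
pct_diff = (100 * (provider - benchmark) / benchmark).round(1)
print(pct_diff)
```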
Jisc—predicting performance
Jisc is a membership organization providing ‘digital solutions for UK education and research’. Its strategic ‘vision is for the UK to be the most digitally-advanced higher education, further education and research nation in the world’. Beyond articulating this vision, Jisc is the sector’s key driver of learning analytics—the measurement of student learning activities—which it circulates via its formal associations with the other M5 Group members, HESA and QAA.
As a key part of its vision Jisc has conducted significant work outlining a future data-intensive HE and how to accomplish it over the coming decade. It envisages an HE landscape dominated by learning analytics and even artificial intelligence, in which students will increasingly experience a personalized education based on their unique data profiles. Jisc’s chief executive has described ‘the potential of Education 4.0’ as a response to the ‘fourth industrial revolution’ of AI, big data, and the internet of things. Education 4.0 would involve lecturers being displaced by technologies that ‘can teach the knowledge better’, are ‘immersive’ and ‘adaptive’ to learners’ needs, and that include ‘virtual assistants’ to ‘support students to navigate this world of choice and work with them to make decisions that will lead to future success’.
Towards this vision of an ‘AI-led’ future of HE, Jisc collaborated with Universities UK on the 2016 report Analytics in Higher Education. A key observation of the report is that existing datasets such as TEF provide very limited information for universities, policymakers or regulators to act on:
External performance assessments, such as the TEF, don’t in themselves support institutions understanding and using their data. Advanced learning analytics can allow institutions to move beyond the instrumental requirements of these assessments to a more holistic data analytic profile. Predictive learning analytics are also increasingly being used to inform impact evaluations, via outcomes data as performance metrics. Ultimately, this allows institutions to assess the return on investment in interventions.
As this excerpt indicates, Jisc has key interests in learning analytics, predictive analytics, outcomes data, performance metrics, and measuring return on investment.
It is now seeking to realize these ambitions through its launch in September 2018 of a national learning analytics service for further and higher education. According to the press release, the learning analytics service ‘uses real time and existing data to track student performance and activities’:
From libraries to laboratories, learning analytics can monitor where, when and how students learn. This means that both students and their university or college can ensure they are making the most of their learning experience. … This AI approach brings existing data together in one place to support academic staff in their efforts to enhance student success, wellbeing and retention.
The service itself consists of a number of interrelated parts. It includes cloud-based storage through Amazon Web Services so individual providers do not need to invest in commercial or in-house solutions, and ‘data explorer’ functionality ‘that brings together the data from your various sources and provides quick, flexible visualisations of VLE usage, attendance and assessment – for cohorts and individual students. … The information will help you to plan effective personal interventions with students and to identify under-performing areas of the curriculum’. A third aspect of the service is the ‘learning analytics predictor’ that helps teaching and support staff to use ‘predictive data modelling to identify students who might have problems’ and ‘to plan interventions that support students’.
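Jisc has not published the predictor’s internals, but the generic technique it names, predictive modelling over engagement data, can be illustrated with a simple classifier trained on features such as VLE logins, attendance and assessment marks. Everything below (features, figures, threshold) is a hypothetical stand-in, not Jisc’s model:

```python
# Hypothetical sketch of 'predictive data modelling to identify students
# who might have problems': a logistic regression over invented
# engagement features. Not Jisc's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Feature columns: weekly VLE logins, attendance rate, mean assessment mark.
X_train = np.array([
    [12, 0.95, 68],
    [ 3, 0.40, 45],
    [ 8, 0.80, 60],
    [ 1, 0.25, 38],
    [10, 0.90, 72],
    [ 4, 0.55, 50],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = withdrew or failed

model = LogisticRegression().fit(X_train, y_train)

# Score current students and flag those above a chosen risk threshold
# as candidates for a 'personal intervention'.
X_current = np.array([[2, 0.30, 42], [11, 0.92, 70]])
risk = model.predict_proba(X_current)[:, 1]
for r in risk:
    print(f"predicted risk {r:.2f} -> flagged: {r > 0.5}")
```

Note that the consequential design choice here is less the algorithm than the threshold: whoever sets it determines which students become visible as ‘at risk’ and so become targets of intervention.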
The final part of the service is a student app called Study Goal, which is available for student download from major app stores. As it is described on the Google Play app store, ‘Study Goal borrows ideas from fitness apps, allowing students to see their learning activity, set targets, record their own activity amongst other things’. In addition, it encourages students to benchmark themselves against peers, and can be used to monitor attendance at lectures.
The Jisc Study Goal app is based on fitness devices enabling students to monitor their performance and benchmark themselves against peers. Image from Google Play
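Computationally, the peer benchmarking Study Goal offers reduces to locating one student’s logged activity within the cohort’s distribution. A toy sketch with invented figures (the app’s actual calculation is not documented here):

```python
# Toy sketch of fitness-style peer benchmarking. All figures invented;
# not Study Goal's actual calculation.
cohort_hours = [4, 6, 7, 9, 10, 11, 12, 14, 15, 18]  # peers' weekly study hours
my_hours = 11

# Share of peers whose logged hours do not exceed the student's own.
percentile = 100 * sum(h <= my_hours for h in cohort_hours) / len(cohort_hours)
print(f"You logged as much or more study time than {percentile:.0f}% of your cohort.")
```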
Study Goal is an especially interesting part of the Jisc learning analytics architecture because, like the provider healthcheck dashboard, it appeals to images of fitness and healthiness through self-quantification, personal analytics and self-monitoring. The logic of activity tracking and self-quantification has been translated from the biometrics of the body to a kind of health metrics of the institution and its students. University leaders and students alike are being responsibilized for their academic health, while their data are also made available to other parties for inspection, evaluation, prediction and potential intervention. Beyond the learning analytics service and Study Goal app, Jisc has also supported the Learnometer environmental sensing device, which ‘automatically samples your classroom environment, and makes suggestions through a unique algorithm as to what might be changed to allow students to learn and perform at their best’. Not only is student academic and emotional health understood to underpin their performance, but the environment needs to be healthy and amenable to performance-maximization too.
All of these developments indicate a significant reimagining of universities and all who inhabit them as being amenable to ever-more pervasive forms of performance sensing and medicalized inspection. Higher education is becoming a kind of experimental laboratory environment where subjects are exposed through metrics to a data-centric clinical gaze, and where everything from students’ feelings and teaching quality to classroom environment and graduate employment outcomes is a source of risk requiring quantitative anticipation, modelling, and real-time management. Positioned in this way, the political priority to make HE function as a ‘healthy market’ of self-improving competitors thoroughly depends on the metric machinery of agencies such as the QAA and Jisc, and on the expanding data infrastructure in which they are key nodes of experimentation, policy influence, and technical authority.
Metric authority
Jisc and the QAA are bringing new metric techniques into HE, such as sentiment analysis, predictive modelling, comparative data visualization and student benchmarking apps, in ways that do appear to reinforce the ongoing marketization of the sector. They are key nodal actors in the policy networks building a data infrastructure for student measurement–an infrastructure that remains unfinished and continues to evolve, mutate and expand in scope as new actors contribute to it, new analyses are made possible, and new demands are made for data, comparison and evaluation.
It is necessary to restate at this point that the QAA and Jisc are not necessarily uncritically pursuing a market-focused neoliberalizing agenda. The QAA’s sentiment analysis report appears somewhat critical of the market reform of HE under the Office for Students. The point is that these sector agencies are all now part of an expanding data infrastructure that appears almost to have its own volition and authority, and that is inseparable from political logics of competition, measurement, performance comparison, and evaluation that characterize the neoliberal style of governance. It is a data infrastructure of metric power in higher education.
David Beer rounds off his book with several key themes which he proposes as a framework for understanding metric power. These can be applied to the examples of the metrological machinery of HE being developed by the QAA and Jisc.
Limitation. According to Beer, metric power operates through setting limits on ‘what can be known’ and ‘what can be knowable’ by setting the ‘score’ and ‘the rules and norms of the game’. The QAA and Jisc have become key actors of limitation by constraining the assessment and evaluation of HE to what is measurable and calculable. Through learning analytics, Jisc is pushing a particular set of calculative techniques that make students knowable in quantitative terms, as sets of ‘scores’ which may then be compared with norms established from vast quantities of data in order to attach evaluations to their profiles. The QAA-led dashboard similarly sets constraints on how provider performance is defined, and cements the idea that performance comparison is the only game to be played.
Visibility. Metric power is based on what can be rendered visible or left invisible—a ‘politics of visibility’ that also translates into how certain practices, objects, behaviours and so on gain ‘value’, while others are not measured or valued. Through their data visualization projects, Jisc and the QAA are involved in rendering HE visible as graphical comparisons which can be used for making value-judgments—in terms of what is deemed to be valuable market information. But such data visualization projects also render invisible anything that remains un-countable or incalculable, and inevitably make quantitative data that can be translated into graphics appear more valuable than other qualitative evaluations and professional assessments. The Study Goal app reinforces to students that certain forms of quantifiable engagement are valued and prized more highly than other qualitative modes.
Classification. Metric power works by sorting, ordering, classifying and categorizing, with ‘the capacity to order and divide, to group or to individualise, to make-us-up and to sort-us-out’. Through learning analytics pushed by Jisc, students are sorted into clusters and groupings as calculated from their individual data profiles, which might then lead, in Jisc’s ideal, to personalized intervention. Likewise, the sorting of universities by comparative healthcheck dashboards and their ordering into hierarchical league tables serves to classify some as winners and others as fallers and failures in a competitive contest for performance ranking and advantage.
Prefiguration. Metric power ‘works by prefiguring judgements and setting desired aims and outcomes’ as ‘metrics can be used in setting out horizons … and imagined futures and then using them in current decision-making processes’—and this is especially the case with the imagining and pursuit of markets and the measurement of their performance. Here Beer appears to be pointing to the performativity or productivity of data to anticipate future possibilities in ways that catalyse pre-emptive action. Clearly, with its real-time sentiment analysis, the QAA’s student-scraping study seeks to mobilize data for purposes of prompting action and pre-emption by promoting the use of time-series data that indicate trends towards future outcomes in terms of student ratings. Institutions that can read student satisfaction in near real-time from social media sentiment might act to pre-empt their TEF and NSS ratings. Likewise, the Healthcheck Dashboard allows institutions to anticipate future challenges, while Jisc has specifically sought to embed predictive analytics in institutional decision-making.
Intensification. Metric power perpetuates the models of the world with which it sets out, with metrics satisfying the ‘desire for competition’, intensifying processes of neoliberalization, and expanding its models of the market into new areas. We can see with the QAA and Jisc how the market model of competitive evaluation and ranking has extended from research and teaching assessment to rating of institutions via social media scoring and user-reviews. Jisc’s Study Goal app also puts the market model under the very eyes and fingertips of students as it invites them to compare and benchmark themselves against their peers, thereby intensifying metric power through competitive peer relations and positioning students as responsible for their own market performance and prospects.
Authorization. Metric power works by ‘authenticating, verifying, legitimating, authorizing, and endorsing certain outcomes, people, actions, systems, and practices,’ with market-based models and metrics taken and trusted as sources of ‘truth production’. The dashboards and analytics advanced by QAA and Jisc are being propelled into the sector with promises of objectivity, impartiality and neutrality, free of human bias and subjective judgment. As such, these data and their visualization constitute a seemingly authoritative set of truths, yet are ultimately an artificial reality of higher education formed only from those aspects of the sector that are countable and measurable.
Automation. Metric power shapes human agency, decision-making, judgement and discretion as systems of computation and the ‘decisive outcomes of metrics’ are taken as objective, legitimate, fair, neutral and impartial, especially as ‘automated metric-based systems’ potentially take ‘decisions out of the hands of the human actors’ and ‘algorithms are making the decisions’ instead. Although QAA and Jisc are clearly not removing human judgment from the loop in HE decision-making, they are introducing limited forms of automation into the sector through algorithmic sentiment analysis, machine learning and data visualization dashboards that generate ‘decisive outcomes’ and thereby shape institutional or personal decisions.
Affective. Finally, metric power and systems of measurement induce affective responses and feelings—metrics have ‘affective capacities’ such as inducing anxiety or competitive motivation, and thereby ‘promote or produce actions, behaviours, and pre-emptive responses’, largely by prompting people to ‘perform’ in ways that can be valued, compared and judged in measurable terms. Jisc’s Study Goal is exemplary in this respect, as it is intended to incite students to benchmark themselves in order to prompt competitive action. The healthcheck dashboards, likewise, are designed to induce performance anxiety in university leaders and prompt them to take strategic action to ensure advantageous positioning in the variety of metrics by which the institution is assessed. In both examples, HE is framed in terms of ‘risk’, a highly affective state of uncertainty, as a way of catalyzing self-improvement.
As these points illustrate, through organizations such as the QAA and Jisc, HE is encompassed in the sprawling networks of actors and technologies of metric power. The data infrastructure of higher education is an accomplishment of a mobile policy network of sector agencies along with a whole host of other organizations and experts from the governmental, commercial and nonprofit sectors. A form of mobile, networked fast policy is propelling metrics across the sector, and increasingly prompting changes in organizational and individual behaviours that will lead the higher education sector to see and act upon itself as a market.