Learning from surveillance capitalism

Ben Williamson

Surveillance capitalism combines data analytics, business strategy, and human behavioural experimentation. Image: “Fraction collector” by proteinbiochemist

‘Surveillance capitalism’ has become a defining concept for the current era of smart machines and Silicon Valley expansionism. With educational institutions and practices increasingly focused on data collection and outsourcing to technology providers, key points from Shoshana Zuboff’s The Age of Surveillance Capitalism can help explore the consequences for the field of education. Mindful of the need for much more careful studies of the intersections of education with commercially-driven data-analytic strategies of ‘rendition’ and ‘behavioural modification’, here I simply outline a few implications of surveillance capitalism for how we think about education policy and about learning.

Data, science and surveillance
Zuboff’s core argument is that tech businesses such as Google, Microsoft, Facebook and so on have attained unprecedented power to monitor, predict, and control human behaviour through the mass-scale extraction and use of personal data. These aren’t especially novel insights—Evgeny Morozov has written a 16,000-word essay on the book’s analytical and stylistic shortcomings—but Zuboff’s strengths are in the careful conceptualization and documentation of some of the key dynamics that have made surveillance capitalism possible and practical. As James Bridle argued in his review of the book, ‘Zuboff has written what may prove to be the first definitive account of the economic – and thus social and political – condition of our age’.

Terms such as ‘behavioural surplus’, ‘prediction products’, ‘behavioural futures markets’, and ‘instrumentarian power’ provide a useful critical language for decoding what surveillance capitalism is, what it does, and at what cost. Some of the most interesting documentary material Zuboff presents includes precedents such as the radical behaviourism of BF Skinner and the ‘social physics’ of MIT Media Lab pioneer Sandy Pentland. For Pentland, quoted by Zuboff, ‘a mathematical, predictive science of society … has the potential to dramatically change the way government officials, industry managers, and citizens think and act’ (Zuboff, 2019, 433) through ‘tuning the network’ (435). Surveillance capitalism is not and was never simply a commercial and technical task, but is deeply rooted in human psychological research and in social experimentation and engineering. This combination of tech, science and business has enabled digital companies to create ‘new machine processes for the rendition of all aspects of human experience into behavioural data … and guarantee behavioural outcomes’ (339).

Zuboff has nothing to say about education specifically, but it’s tempting straight away to see a whole range of educational platforms and apps as condensed forms of surveillance capitalism (though we might just as easily invoke ‘platform capitalism’). The classroom behaviour monitoring app ClassDojo, for example, is a paradigmatic example of a successful Silicon Valley edtech business, with vast collections of student behavioural data that it is monetizing by selling premium features for use at home and offering behaviour reports to subscribing parents. With its emphasis on positive behavioural reinforcement through reward points, it represents a marriage of Silicon Valley design with Skinner’s aspiration to create ‘technologies of behaviour’. ClassDojo amply illustrates the combination of behavioural data extraction, behaviourist psychology and monetization strategies that underpin surveillance capitalism as Zuboff presents it.

Perhaps more pressingly from the perspective of education, however, Zuboff makes a number of interesting observations about ‘learning’ that are worth unpacking and exploring.

Learning divided
The first point is about the ‘division of learning in society’ (the subject of chapter 6, and drawing on her earlier work on the digital transformation of work practices). By this term Zuboff means to demarcate a shift in the ‘ordering principles’ of the workplace from the ‘division of labour’ to a ‘division of learning’ as workers are forced to adapt to an ‘information-rich environment’. Only those workers able to develop their intellectual skills are able to thrive in the new digitally-mediated workplace. Some workers are enabled (and are able) to learn to adapt to changing roles, tasks and responsibilities, while others are not. The division of learning, Zuboff argues, raises questions about (1) the distribution of knowledge and whether one is included in or excluded from the opportunity to learn; (2) which people, institutions or processes have the authority to determine who is included in learning, what they are able to learn, and how they are able to act on their knowledge; and (3) the source of power that undergirds the authority to share or withhold knowledge (181).

But this division of learning, according to Zuboff, has now spilled out of the workplace to society at large. The elite experts of surveillance capitalism have given themselves authority to know and learn about society through data. Because surveillance capitalism has access to both the ‘material infrastructure and expert brainpower’ (187) to transform human experience into data and wealth, it has created huge asymmetries in knowledge, learning and power. A narrow band of ‘privately employed computational specialists, their privately owned machines, and the economic interests for whose sake they learn’ (190) has ultimately been authorized as the key source of knowledge over human affairs, and empowered to learn from the data in order to intervene in society in new ways.

Sociology of education researchers have, of course, asked these kinds of questions for decades. They are ultimately questions about the reproduction of knowledge and power. But in the context of surveillance capitalism such questions may need readdressing, as authority over what constitutes valuable and worthwhile knowledge for learning passes to elite computational specialists, the commercial companies they work for, and even to smart machines. As data-driven knowledge about individuals grows in predictive power, decisions about what kinds of knowledge an individual learner should receive may even be largely decided by ‘personalized learning platforms’–as current developments in learning analytics and adaptive learning already illustrate. The prospect of smart machines as educational engines of social reproduction should be the subject of serious future interrogation.

Learning collectives
The second key point is about the ‘policies’ of smart machines as a model for human learning (detailed in chapter 14). Here Zuboff draws on a speech by a senior Microsoft executive talking about the power of combined cloud and Internet of Things technologies for advanced manufacturing and construction. In this context, Zuboff explains, ‘human and machine behaviours are tuned to pre-established parameters determined by superiors and referred to as “policies”’ (409). These ‘policies’ are algorithmic rules that

substitute for social functions such as supervision, negotiation, communication and problem solving. Each person and piece of equipment takes a place among an equivalence of objects, each one “recognizable” to the “system” through the AI devices distributed across the site. (409)

In this example, the ‘policy’ is then a set of algorithmic rules and a template for collective action between people and machines to operate in unison to achieve maximum efficiency and optimal outcomes. Those ‘superiors’ with the authority to determine the policies, of course, are those same computational experts and machines that have benefitted from the division of learning. This gives them unprecedented powers to ‘apply policies’ to people, objects, processes and activities alike, resulting in a ‘grand confluence in which machines and humans are united as objects in the cloud, all instrumented and orchestrated in accordance with the “policies” … that appear on the scene as guaranteed outcomes to be automatically imposed, monitored and maintained by the “system”’ (410). These new human-machine learning collectives represent the future for many forms of work and labour under surveillance capitalism, according to Zuboff.

Zuboff then goes beyond human-machine confluences in the workplace to consider the instrumentation and orchestration of other types of human behaviour. Drawing parallels with the behaviourism of Skinner, she argues that digitally-enforced forms of ‘behavioral modification’ can operate ‘just beyond the threshold of human awareness to induce, reward, goad, punish, and reinforce behaviour consistent with “correct policies”’, where ‘corporate objectives define the “policies” toward which confluent behaviour harmoniously streams’ (413). Under conditions of surveillance capitalism, Skinner’s behaviourism and Pentland’s social physics spill out of the lab into homes, workplaces, and all the public and private spaces of everyday life–ultimately turning the world into a gigantic data science lab for social and behavioural experimentation, tuning and engineering.

And the final point she makes here is that humans need to become more machine-like to maximize such confluences. This is because machines connected to the IoT and the cloud work through collective action, each learning what they all learn, sharing the same understanding and ‘operating in unison with maximum efficiency to achieve the same outcomes’ (413). This model of collective learning, according to surveillance capitalists, proceeds faster than human learning and can ‘empower us to better learn from the experiences of others’:

The machine world and the social world operate in harmony within and across ‘species’ as humans emulate the superior learning processes of the smart machines. … [H]uman interaction mirrors the relations of the smart machines as individuals learn to think and act by emulating one another…. In this way, the machine hive becomes the role model for a new human hive in which we march in peaceful unison toward the same direction based on the same ‘correct’ understanding in order to construct a world free of mistakes, accidents, and random messes. (414)

For surveillance capitalists human learning is inferior to machine learning, and urgently needs to be improved by gathering together humans and machines into symbiotic systems of behavioural control and management.

Learning in, from, or for surveillance capitalism?
These key points from The Age of Surveillance Capitalism offer some provocative starting places for further investigations into the future shape of education and learning amid the smart machines and their smart computational operatives. Three key points stand out.

1) Cultures of computational learning. One line of inquiry might be into the cultures of learning of those computational experts who have gained from the division of learning. And I mean this in two ways. How are they educated? How are they selected into the right programs? What kinds of ongoing training confer the privilege of learning about society through mass-scale behavioural data? These are questions about new and elite forms of workforce preparation and professional education. How, in short, are these experts educated, qualified and socialized to do data analytics and behaviour modification—if that is indeed what they do? In other words, how is one educated to become a surveillance capitalist?

The other way of approaching this concerns what is actually involved in ‘learning’ about society through its data. This is both a pedagogic and a curricular question. Pedagogically, education research would benefit from a much better understanding of the kinds of workplace education programmes underway inside the institutions of surveillance capitalism. From a curricular perspective, this would also require an engagement with the kinds of knowledge assumptions and practices that flow through such spaces. As mentioned earlier, sociology of education has long been concerned with how aspects of culture are ‘selected’ for reproduction by transmission through education. As tech companies and related academic labs become increasingly influential, they are producing new ‘social facts’ that might affect how people both within and outside those organizations come to understand the world. They are building new knowledge based on a computational, mathematical, and predictive style of thinking. What, then, are the dynamics of knowledge production that generate these new facts, and how do they circulate to affect what is taught and learnt within these organizations? As Zuboff notes, pioneers such as Sandy Pentland have built successful academic teaching programs at institutes like MIT Media Lab to reproduce knowledge practices such as ‘social physics’.

2) Human-machine learning confluences. The second key issue is what it means to be a learner working in unison with the Internet of Things. Which individuals are included in the kind of learning that is involved in becoming part of this ‘collective intelligence’? When smart machines and human workers are orchestrated together into ‘confluence’, and human learning is supposed to emulate machine learning, how do our existing theories and models of human learning hold up? Machine learning and human learning are not obviously comparable, and the tech firms surveyed by Zuboff appear to hold quite robotic notions of what constitutes learning. Yet if the logic of extreme instrumentation of working environments develops as Zuboff anticipates, this still raises significant questions about how one learns to adapt to work in unison with the smart machines, who gets included in this learning, who gets excluded, how those choices and decisions are made, and what kinds of knowledge and skills are gained from inclusion. Automation is likely to lead to both further divisions in learning and more collective learning at the same time–with some individuals able to exercise considerable autonomy over the networks they’re part of, and others performing the tasks that cannot yet be automated.

In the context of concerns about the role of education in relation to automation, intergovernmental organizations such as the OECD and World Economic Forum have begun encouraging governments to focus on ‘noncognitive skills’ and ‘social-emotional learning’ in order to pair human emotional intelligence with the artificial cognitive intelligence of smart machines. Those unique human qualities, so the argument goes, cannot be automated whereas routine cognitive tasks can. Classroom behaviour monitoring platforms such as ClassCraft have emerged to measure those noncognitive skills and offer ‘gamified’ positive reinforcement for the kind of ‘prosocial behaviours’ that may enable students to thrive in a future of increased automation. Being emotionally intelligent, by these accounts, would seem to allow students to enter into ‘confluent’ relations with smart machines. Rather than competing with automation, they would complement it as collective intelligence. ‘Human capital’ is no longer a sufficient economic goal to pursue through education—it needs to produce ‘human-computer capital’ too.

3) Programmable policies. A third line of inquiry would be into the idea of ‘policies’. Education policy studies have long engaged critically with the ways government policies circumscribe ‘correct’ forms of educational activity, progress, and behaviour. With the advance of AI-based technologies into schools and universities, policy researchers may need to start interrogating the policies encoded in the software as well as the policies inscribed in government texts. These new programmable policies potentially have a much more direct influence on ‘correct’ behaviours and maximum outcomes by instrumenting and orchestrating activities, tasks and behaviours in educational institutions.

Moreover, researchers might shift their attention to the kind of programmable policies that are enacted in the instrumented workplaces where, increasingly, much learning happens. Tech companies have long bemoaned the inadequacy of school curricula and university degrees for delivering the labour market skills they require. With the so-called ‘unbundling’ of the university in particular, higher education may be moving further towards ‘demand driven’ forms of professional learning and on-the-job industry training provided by private companies. When education moves into the smart workplace, learning becomes part of the confluence of humans and machines, where all are equally orchestrated by the policies encoded in the relevant systems. Platforms and apps using predictive analytics and talent matching algorithms are already emerging to link graduates to employers and job descriptions. The next step, if we accept the likelihood of the direction of travel of surveillance capitalism, might be to match students directly to smart machines on-demand as part of the collective human-machine intelligence required to achieve maximum efficiency and optimized outcomes for capital accumulation. In this scenario, the computer program would be the dominant policy framework for graduate employability, actively intervening in professional learning by sorting individuals into appropriate networks of collective learning and then tuning those networks to achieve best effects.

All of this raises one final question, and a caveat. First the caveat. It’s not clear that ‘surveillance capitalism’ will endure as an adequate explanation for the current trajectories of high-tech societies. Zuboff’s account is not uncontested, and it’s in danger of becoming an explanatory shortcut for deployment anywhere that data analytics and business interests intersect (as ‘neoliberalism’ is sometimes invoked as a shortcut for privatization and deregulation). The current direction of travel and future potential described by Zuboff are certainly not desirable, and should not be accepted as inevitable. If we do accept Zuboff’s account of surveillance capitalism, though, the remaining question is whether we should be addressing the challenges of learning in surveillance capitalism, or the potential for whole education systems to learn from surveillance capitalism and adapt to fit its template. Learning in surveillance capitalism at least assumes a formal separation of education from these technological, political and economic conditions. Learning from it, however, suggests a future where education has been reformatted to fit the model of surveillance capitalism–indeed, where a key purpose of education is for surveillance capitalism.

Zuboff, S. 2019. The Age of Surveillance Capitalism: The fight for a human future at the new frontier of power. London: Profile.

Education for the robot economy

Ben Williamson

Robotization is driving coding and emotional skills development in education. Image: “Robot” by Saundra Castaneda

Automation, coding, data and emotions are the new keywords of contemporary education in an emerging ‘robot economy’. Critical research on education technology and education policy over the last two decades has unpacked the connections of education to the so-called ‘knowledge economy’, particularly as it was projected globally into education policy agendas by international organizations including the OECD, World Economic Forum and World Bank. These organizations, and others, are now shifting the focus to artificial intelligence and the challenges of automation, and pushing for changes in education systems to maximize the new economic opportunities of robotization.

Humans & robots as sources of capital
In the language of the knowledge economy, the keywords were globalization, innovation, networks, creativity, flexibility, multitasking and multiskilling—what social theorists variously called ‘NewLiberalSpeak’ and the ‘new spirit of capitalism’. With knowledge a new source of capital, education in the knowledge economy was therefore oriented towards the socialization of students into the practices of ICT, communication, and teamwork that were seen as the necessary requirements of the new ‘knowledge worker’.

In the knowledge economy, learners were encouraged to see themselves as lifelong learners, constantly upskilling and upgrading themselves, and developing metacognitive capacities and the ability to learn how to learn in order to adapt to changing economic circumstances and occupations. Education policy became increasingly concerned with cultivating the human resources or ‘human capital’ necessary for national competitive advantage in the globalizing economy. Organizations such as the OECD provided the international large-scale assessment PISA to enable national systems to measure and compare their progress in the global knowledge economy, treating young people’s test scores as indicators of human capital development.

The steady shift of the knowledge economy into a robot economy, characterized by machine learning, artificial intelligence, automation and data analytics, is now bringing about changes in the ways that many influential organizations conceptualize education moving towards the 2020s. Although this is not an epochal or decisive shift in economic conditions, but rather a slow metamorphosis involving machine intelligence in the production of capital, it is bringing about fresh concerns with rethinking the purposes and aims of education as global competition is increasingly linked to robot capital rather than human capital alone.

According to many influential organizations, it is now inevitable that automated technologies, artificial intelligence, robotization and so on will pose a major threat to many occupations in coming years. Although the evidence of automation causing widespread technological unemployment is contested, many readings of this evidence adopt a particularly determinist perspective. The robots are coming, the threat of technology is real and unstoppable, and young people are going to be hit hardest because education is largely still socializing them for occupations that the robots will replace.

The OECD has produced findings reporting on the skill areas that automation could replace. A PricewaterhouseCoopers report concluded that ‘less well educated workers could be particularly exposed to automation, emphasising the importance of increased investment in lifelong learning and retraining’. Pearson and Nesta, too, collaborated on a project to map the ‘future skills’ that education needs to promote to prepare nations for further automation, globalization, population ageing and increased urbanization over the next 10 years. The think tank Brookings has explicitly stated, ‘To develop a workforce prepared for the changes that are coming, educational institutions must de-emphasize rote skills and stress education that helps humans to work better with machines—and do what machines can’t’.

For most of these organizations, the solution is not to challenge the encroachment of automation on jobs, livelihoods and professional communities. Instead, the robot economy can be even further optimized by enhancing human capabilities through reformed institutions and practices of education. As such, education is now being positioned to maximize the massive economic opportunities of robotization.

Two main conclusions flow from the assumption that young people’s future jobs and labour market prospects are under threat, and that the future prospects of the economy are therefore uncertain, unless education adapts to the new reality of automation. The first is that education needs to de-emphasize rote skills of the kind that are easy for computers to replace and stress instead more digital upskilling, coding and computer science. The second is that humans must be educated to do things that computerization cannot replace, particularly by upgrading their ‘social-emotional skills’.

Learning to code, programming and computer science have become the key focus for education policy and curriculum reform around the world. Major computing corporations such as Google and Oracle have invested in coding programs alongside venture capitalists and technology philanthropists, while governments have increasingly emphasized new computing curricula and encouraged the involvement of both ed-tech coding products and not-for-profit coding organizations in schools.

The logic of encouraging coding and computer science education in the robot economy is to maximize the productivity potential of the shift to automation and artificial intelligence. In the UK, for example, artificial intelligence development is at the centre of the government’s industrial strategy, which made computer programming in schools an area for major investment. Doing computer science in schools, it is argued, equips young people not just with technical coding skills, but also new forms of computational thinking and problem-solving that will allow them to program and instruct the machines to work on their behalf.

This emphasis on coding is also linked to wider ideas about digital citizenship and entrepreneurship, with the focus on preparing children to cope with uncertainty in an AI age. A recent OECD podcast on AI and education, for example, put coding, entrepreneurship and digital literacy together with concerns over well-being and ‘learning to learn’. Coding our way out of technological unemployment, by upskilling young people to program, work with, and problem-solve with machines, then, is only one of the proposed solutions for education in the robot economy.

The other solution is ‘social-emotional skills’. Social-emotional learning and skills development is a fast-growing policy agenda with significant buy-in by international organizations. The World Economic Forum has projected a future vision for education that includes the development and assessment of social-emotional learning through advanced technologies. Similarly, the World Bank has launched a program of international teacher assessment that measures the quality of instruction in socioemotional skills.

The OECD has perhaps invested the most in social-emotional learning and skills, as part of its long-term ‘Skills for Social Progress’ project and its Education 2030 framework. The OECD’s Andreas Schleicher is especially explicit about the perceived strategic importance of cultivating social-emotional skills to work with artificial intelligence, writing that ‘the kinds of things that are easy to teach have become easy to digitise and automate. The future is about pairing the artificial intelligence of computers with the cognitive, social and emotional skills, and values of human beings’.

Moreover, he casts this in clearly economic terms, noting that ‘humans are in danger of losing their economic value, as biological and computer engineering make many forms of human activity redundant and decouple intelligence from consciousness’. As such, human emotional intelligence is seen as complementary to computerized artificial intelligence, as both possess complementary economic value. Indeed, by pairing human and machine intelligence, economic potential would be maximized.

Intuitively, it makes sense for schools to focus on the social and emotional aspects of education, rather than wholly on academic performance. Yet this seemingly humanistic emphasis needs to be understood as part of the globalizing move by the OECD and others to yet again reshape the educational agenda to support economic goals.

The fourth keyword is data, and it refers primarily to how education must be ever more comprehensively measured to assess progress in relation to the economy. Just as the OECD’s PISA has become central to measuring progress in the knowledge economy, the OECD’s latest international survey, the Study of Social and Emotional Skills—a computer-based test for 10- and 15-year-olds that will report its first findings in 2020—will allow nations and cities to assess how well their ‘human capital’ is equipped to complement the ‘robot capital’ of automated intelligent machines.

If the knowledge economy demanded schools help produce measurable quantities of human capital, in the robot economy schools are made responsible for helping the production of ‘human-computer capital’–the value to be derived from hybridizing human emotional life with AI. The OECD has prepared the test to measure and compare data on how well countries and cities are progressing towards this goal.

While, then, automation does not immediately pose a threat to teachers–unless we see AI-based personalized learning software as a source of technological unemployment in the education sector–it is likely to affect the shape and direction of education systems in more subtle ways in years to come. The keywords of the knowledge economy have been replaced by the keywords of the robot economy. Even if robotization does not pose an immediate threat to the future jobs and labour market prospects of students today, education systems are being pressured to change in anticipation of this economic transformation.

The knowledge economy presented urgent challenges for research; its metamorphosis into an emergent robot economy, driving policy demands for upskilling students with coding skills and upgraded emotional competencies, demands much further research attention too.


Learning lessons from data controversies

Ben Williamson

This is a talk delivered at OEB2018 in Berlin on 7 December 2018, with links to key sources. A video recording is also available (from about the 51-minute mark).

Ten years ago ‘big data’ was going to change everything and solve every problem—in health, business, politics, and of course education. But, a decade later, we’re now learning some hard lessons from the rapid expansion of data analytics, algorithms, and AI across society.

Data controversies became the subject of international government attention in 2018

Data doesn’t seem quite so ‘cool’ now that it’s at the centre of some of society’s most controversial events. By ‘controversy’ here I mean those moments when science and technical innovation come into conflict with public or political concerns.

Internationally, politicians have already begun to ask hard questions, and are looking for answers to recent data controversies. The current level of concern about companies like Facebook, Google, Uber, Huawei, Amazon and so on is now so acute that some commentators say we’re witnessing a ‘tech-lash’—a backlash of public opinion and political sentiment to the technology sector.

The tech sector is taking this on board, such as the Centre for Humane Technology seeking to stop tech from ‘hijacking our minds and society’. Universities that nurture the main tech talent, such as MIT, have begun to recognize their wider social responsibility and are teaching their students about the power of future technologies, and their potentially controversial effects. The AI Now research institute just launched a new report on the risks of algorithms, AI and analytics, calling for tougher regulation.

Print article on AI & robotization in teaching, from Times Educational Supplement, 26 May 2017

We’re already seeing indications in the education media of a growing concern that AI and algorithms are ‘gonna get you’—as it said in the teachers’ magazine the Times Educational Supplement last year.

In the States, the FBI even issued a public service announcement warning that the collection of sensitive data by ‘edtech’ could result in ‘social engineering, bullying, tracking, identity theft, or other means for targeting children’. An ‘edtech-lash’ has begun.

The UK Children’s Commissioner has also warned of the risks of ‘datafying children’ both at home and at school. ‘We simply do not know what the consequences of all this information about our children will be,’ she argued, ‘so let’s take action now to understand and control who knows what about our children’.

And books like Weapons of Math Destruction and The Tyranny of Metrics have become surprise non-fiction successes, both drawing attention to the damaging effects of data use in schools and universities.

So, I want to share some lessons from data controversies in education in the last couple of years—things we can learn from to avoid damaging effects in the future.

Software can’t ‘solve’ educational ‘problems’ 
One recent moment of data controversy was the protest by US students against the Mark Zuckerberg-supported Summit Public Schools model of ‘personalized learning’. Summit is originally a charter school chain with an adaptive learning platform—partly built by Facebook engineers—that’s scaled up across many high school sites in the US.

But in November, students staged walkouts in protest at the educational limitations and data privacy implications of the personalized learning platform. Student protestors even wrote a letter to Mark Zuckerberg in The Washington Post, claiming assignments on the Summit Learning Platform required hours alone at a computer and didn’t prepare them for exams.

They also raised flags about the huge range of personal information the Summit program collected without their knowledge or consent.

‘Why weren’t we asked about this before you and Summit invaded our privacy in this way?’ they asked Zuckerberg. ‘Most importantly’, they wrote, ‘the entire program eliminates much of the human interaction, teacher support, and discussion and debate with our peers that we need in order to improve our critical thinking…. It’s severely damaged our education.’

So our first lesson is that education is not entirely reducible to a ‘math problem’, nor can it be ‘solved’ with software—it exceeds the data that can be captured from teaching and learning processes. For many educators and students alike, education is more than the numbers in an adaptive, personalized learning platform: it includes non-quantifiable relationships, interactions, discussion, and thinking.

Global edtech influence raises public concern
Google, too, has become a controversial data company in education. Earlier this year it launched its Be Internet Awesome resources for digital citizenship and online safety. But the New York Times questioned whether the public should accept Google as a ‘role model’ for digital citizenship and good online conduct when it is seriously embattled by major data controversies.

The New York Times questioned Google positioning itself as a trusted authority in schools

Through its education services, it’s also a major tracker of student data and is shaping its users as lifelong Google customers, said the Times. Being ‘Internet Awesome’ is also about buying into Google as a user and consumer.

In fact, Google was a key target of a whole series of Times articles last year revealing Silicon Valley influence in public education. Silicon Valley firms, it appears, have become new kinds of ‘global education ministries’—providing hardware and software infrastructure, online resources and apps, curricular materials and data analytics services to make public education more digital and data-driven.

This is what we might call ‘global policymaking by digital proxy’, as the tech sector influences public education at speeds, and on an international scale, that conventional policy approaches cannot achieve.

The lesson here is that students, the media and public may have ideas, perceptions and feelings about technology, and the companies behind it, that are different to companies’ aspirations—claims of social responsibility compete with feelings of ‘creepiness’ about commercial tracking and concern about private sector influence in public education.

Data leaks break public trust
Data security and privacy are perhaps the most obvious topics for a data controversy lesson—but they remain urgent ones as educational institutions and companies are increasingly threatened by cybersecurity attacks, hacks, and data breaches.

The K-12 Cyber Incident Map has catalogued hundreds of school data security incidents

The K-12 Cyber Incident Map is doing great work in the US to catalogue school hacks and attacks, importantly raising awareness in order to prompt better protection. And then there’s the alarming news of really huge data leaks from the likes of Edmodo and Schoolzilla—raising fears that this is surely only going to get worse as more data is collected and shared about students.

The key lesson here is that data breaches and student privacy leaks also break students’, parents’, and the public’s trust in education companies. This huge increase in data security threats risks exposing the ed-tech industry to media and government attack. We’re supposed to protect children, they might say, but we’re exposing their information to the dark web instead!

Algorithmic mistakes & encoded politics cause social consequences 
Then there’s the problem of educational algorithms being wrong. Earlier this year, the Educational Testing Service (ETS) revealed the results of a check into whether international students had cheated in an English language proficiency test. To discover how many students had cheated, ETS used voice biometrics to analyze tens of thousands of recorded oral tests, looking for repeated voices.

What did it find? According to reports, the algorithm got the voice matching wrong 20% of the time. That’s a huge error rate, with massive consequences.

Around 5000 international students in the UK wrongly had their visas revoked and were threatened with deportation, all related to the UK’s ‘hostile environment’ immigration policy. Many have subsequently launched legal challenges, and many have won.
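The scale of harm from a given error rate is simple arithmetic. Here is a minimal sketch; the 20% figure comes from press reports, but the batch size of analysed recordings is an assumption for illustration, not ETS’s actual figure:

```python
# Hypothetical illustration of how an error rate scales into wrongful
# accusations. The 20% error rate is from press reports; the number of
# analysed recordings is an invented example, not ETS's actual figure.
def expected_false_matches(tests_analysed: int, error_rate: float) -> int:
    """Expected number of incorrect voice matches in a batch of tests."""
    return round(tests_analysed * error_rate)

print(expected_false_matches(25_000, 0.20))  # 5000
```

Even a seemingly modest error rate, applied at scale and treated as authoritative, implicates thousands of people.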

Data lesson 4, then, is that poor quality algorithms and data can lead to life-changing outcomes and consequences for students—even raising the possibility of legal challenges to algorithmic decision-making. This example also shows the problem with ascribing too much objectivity and accuracy to data and algorithms—in reality, they’re the products of ‘humans in the room’ whose own assumptions, and potential biases and mistakes can be coded into the software that’s used to make life-changing decisions.

Let’s not forget, either, that the check wouldn’t even have happened had the UK government not been seeking to root out and deport unwanted immigrants—the algorithm was programmed with some nasty politics.

Transparency, not algorithmic opacity, is key to building trust with users
The next lesson is about secrecy and transparency. The UK government’s Nudge Unit, for example, revealed this time last year that it had piloted an algorithm for school inspection, which could identify from existing data where a school might be failing.

Many headteachers and staff are already fearful of the human school inspector. The automated school-inspecting algorithm secretly crawling around in their servers and spreadsheets, if not their corridors, offices and classrooms, hasn’t made them any less concerned. Especially as it can only rate their performance from the numbers, rather than qualitatively assessing the impact of local context on how they perform.

A spokesperson for the National Association of Headteachers said to BBC News, ‘We need to move away from a data-led approach to school inspection. It is important that the whole process is transparent and that schools can understand and learn from any assessment. Leaders and teachers need absolute confidence that the inspection system will treat teachers and leaders fairly’.

The lesson to take from the Nudge Unit experiment is that secrecy and lack of transparency in the use of data analytics and algorithms do not win trust in the education sector—teacher unions and the education press are likely to reject AI and algorithmic assistance that is not believed to be transparent, fair, or context-sensitive.

Psychological surveillance raises fears of emotional manipulation
My last three lessons focus on educational data controversies that are still emerging. These relate to the idea that the ‘Internet of Bodies’ has arrived, in the shape of devices for tracking the ‘intimate data’ of your body, emotions and brain.

For example, ‘emotion AI’ is emerging as a potential focus of educational innovation—such as biometric engagement sensors, emotion learning analytics, and facial vision algorithms that can determine students’ emotional response to teaching styles, materials, subjects, and different teachers.

Emotion AI is being developed for use in education, according to EdSurge

Among others, EdSurge and the World Economic Forum have endorsed systems to run facial analytics and wearable biometrics of students’ emotional engagement, legitimizing the idea that invisible signals of learning can be detected through skin.

Emotion AI is likely to be controversial because it prioritizes the idea of constant psychological surveillance—the monitoring of intimate feelings and perhaps intervention to modify those emotions. Remember when Facebook got in trouble for its ‘emotional contagion’ study? Fears of emotional manipulation inevitably follow from emotion AI—and the latest AI Now report highlighted this as a key area of concern.

Facial coding and engagement biometrics with emotion AI could even be seen to treat teaching and learning as ‘infotainment’—pressuring teachers to ‘entertain’ and students to appear ‘engaged’ when the camera is recording or the biometric patch is attached.

‘Reading the brain’ poses risks to human rights 
The penultimate lesson is about brain-scanning with neurotechnology. Educational neurotechnologies are already beginning to appear—for example, the BrainCo Focus One brainwave-sensing neuroheadset and application spun out of Harvard University.

Such educational neurotechnologies are based on the idea that the brain has become ‘readable’ through wearable headsets that can detect neural signals of brain activity, then convert those signals into digital data for storage, comparison, analysis and visualization via the teacher’s brain-data dashboard. It’s a way of seeing through the thick protective barrier of the skull to the most intimate interior of the individual.

The BrainCo Focus One neuroheadset reads EEG signals of learning & presents them on a dashboard

But ‘brain surveillance’ is just the first step as ambitions advance to not only read from the brain but to ‘write back’ into it or ‘stimulate’ its ‘plastic’ neural pathways for more optimal learning capacity.

Neurotechnology is going to be extraordinarily controversial, especially as it is applied to scanning and sculpting the plastic learning brain. ‘Reading’ the brain for signals, or seeking to ‘write back’ into the plastic learning brain, raises huge ethical and human rights challenges—‘brain leaks’, neural security, cognitive freedom, neural modification—with prominent neuroscientists, neurotechnologists and neuroethics councils already calling for new frameworks to protect the readable and writable brain.

Genetic datafication could lead to dangerous ‘Eugenics 2.0’
I’ve saved the biggest controversy for last: genetics, and the possibility of predicting a child’s educational achievement, attainment, cognitive ability, and even intelligence from DNA. Researchers of human genomics now have access to massive DNA datasets in the shape of ‘biobanks’ of genetic material and information collected from hundreds of thousands of individuals.

The clearest sign of the growing power of genetics in education was the recent publication of a huge, million-sample study of educational attainment which concluded the number of years you spend in education can be partly predicted genetically.

The study of the ‘new genetics of intelligence’, based on very large sample studies and incredibly advanced biotechnologies, is also already leading to ever-stronger claims of the associations between genes, achievement and intelligence. And these associations are already raising the possibility of new kinds of markets of genetic IQ testing of children’s mental abilities.

Many of you will also have heard the news last week that a scientist claimed to have created the first ever gene-edited babies, sparking a massive debate about re-programming human life itself.

Basically, it is becoming more and more possible to study digital biodata related to education, to develop genetic tests to measure students’ ‘mental rating’, and perhaps even to recode, edit or rewrite the instructions for human learning.

It doesn’t get more controversial than genetics in education. So what data lesson can we learn? Genetic biodata risks reproducing dangerous ideas about the biologically determined basis of achievement, while genetic ‘intelligence’ tests are a step towards genetic selection, brain-rating, and gene-editing for ‘smarter kids’—raising risks of genetic discrimination, or ‘Eugenics 2.0’.

Preventing data controversies 
So why are these data lessons important? They’re important because governments are increasingly anxious to sort out the messes that overenthusiastic data use and misuse have got societies into.

In the UK we have a new government centre for data ethics, and a current inquiry and call for evidence on data ethics in education. Politicians are now asking hard questions about algorithmic bias in edtech, accuracy of data models, risk of data breaches in analytics systems, and the ethics of surveillance of students.

Data and its controversies are under the microscope in 2018 for reasons that were unimaginable during the big data hype of 2008. Data in education is already proving controversial too.

In Edinburgh, we are trying to figure out how to build productive collaborations between social science researchers of data, learning scientists, education technology developers, and policymakers—in order to pre-empt the kind of controversies that are now prompting politicians to begin asking those hard questions.

By learning lessons from past controversies with data in education, and anticipating the controversies to come, we can ensure we have good answers to these hard questions. We can also ensure that good, ethical data practices are built in to educational technologies, hopefully preventing problems before they become full-blown public data controversies.


The app store for higher education

Ben Williamson

A government competition aims to make choosing a degree as easy as swiping a smartphone. Image by Garry Knight

App stores are among the most significant aspects of contemporary cultures. Commercial environments where consumers choose digital products, they are also important spaces where app producers and platform businesses first come into contact with users. As the shopping centres of platform capitalism, app stores enable users to become sources of data collection and value extraction.

Apps for higher education have become a key focus of government investment, and have the potential to become significant intermediaries bringing students, applicants and other publics into contact with HE data. This post continues ongoing research documenting the expanding data infrastructure of HE in the UK, which has already explored the policy context, data-led regulatory approach, data-centred sector agencies, and involvement of data-driven edu-businesses. New apps for shaping student choice bring small businesses, edtech startups, and the not-for-profit sector into the expanding infrastructure, and are introducing the idea that student choice can be shaped (or ‘nudged’) through the interactive presentation of data on apps, price-comparison websites, and social media-style services that indicate the quality of a provider’s performance.

An ‘information revolution’ in student choice
Universities Minister Sam Gyimah announced a competition in summer 2018 for small businesses to create new apps or online services to assist young people in making choices about going to university. Controversially to many in the sector, he claimed the competition would allow tech companies to use graduate earnings data—taken from the Longitudinal Educational Outcomes (LEO) dataset—to ‘create a MoneySuperMarket for students, giving them real power to make the right choice’.

A budget of £125,000 was allocated to support the winning entrants, which were expected to produce working prototypes during September and October. A few months later he announced five shortlisted companies, an additional £300,000 investment for two of the products, and the release of ‘half a million cells of data showing graduate outcomes for every university–more than has ever been published before’.

‘This is the start of an information transformation for students, which will revolutionise how students choose the right university for them’, said Gyimah. ‘I want this to pave the way for a greater use of technology in higher education, with more tools being made available to boost students’ choices and prospects’.

In other words, the competition is just a prototype of what is still to come–a government-backed marketplace of apps, platforms and other products and services enabling applicants, students and graduates to produce, interact with, and use HE data. Elsewhere, Gyimah was reported as saying there is ‘clearly a market opportunity’ for services like this, even for those not awarded part of the £300,000 funding from the Department for Education.

Although the competition at this stage has only generated prototypes–only two of which will be more fully developed–all of the companies have already developed a web presence for their apps and products. A Department for Education video tweeted from the official finalists’ event also offers some glimpses of these prototype products. This allows us to see how an expanding ‘app store’ for student choice might extend the data infrastructure in new ways.

MyEd UniPlaces app
MyEd is an existing provider of services designed to enhance choices in education institutions.

MyEd provides educational choice-enhancement services. Image from https://myed.com/

MyEd already runs services supporting parent choice in nurseries, schools, colleges and universities, in particular by aggregating key data and previous reviews to enable easy user comparison and shortlisting of providers. According to its website:

Our unique reviews process is an intelligence data analysis system that has been designed to provide our users with the most relevant and digestible information to help them make the best decisions on their investment in education.

For the competition, MyEd proposed a UniPlaces app, which it pitched as a ‘web-based compatibility checker’ to assist applicants in making HE choices. Driven by a questionnaire capturing students’ achievements and preferences, the app seeks to match them to HE options linked to particular job prospects.

As an established company, MyEd already compiles information from a range of sources, including institutions, government departments, published performance tables, and agencies such as HESA and the QAA. In these ways, it is emblematic of the shift toward marketized education and choice across all sectors–from early years to HE–in recent education policy.

Uni4U app
The unique aspect of the Uni4U proposal is that it was designed by students, though the organization was founded by an entrepreneur with support from the NatWest Business Accelerator.

Uni4U is gathering additional data by surveying students and school children online. Image from http://uni4u.co.uk/

Like the other apps, Uni4U supports HE choice through the graphical presentation of data about universities, including their location, campus facilities, and graduate earnings.

While in prototype phase, Uni4U produced a website featuring two online surveys to gather further data from future students and current students. It invites future students to identify what would most help them make university choices, and current students to rate the quality of their existing provider and the support they gained in making their initial choice.

Coursematch app
Coursematch presents itself on its website as a fully functioning app available via the Apple App Store and Google Play, with a claimed 25,000 users. It was upgraded to its current form in May 2018 and has been marketing itself on social media as ‘The #1 social network to help find your perfect university course and meet future friends!’

Coursematch is a social network for university choice, already available on app stores. Image from https://coursematch.io/

Perhaps the most notable aspect of Coursematch is its claim to use machine learning to make the most effective matches between students and courses, twinned with a ‘swipeable’ interface design adopted from dating apps.

‘Our new look app is going to make it easier than ever to browse University courses, and find your perfect course!’ read a recent promotional Coursematch tweet. ‘We are bringing in AI techniques to recommend a selection of courses right for you, to browse through with just a simple swipe’.

Potential students are provided with projected possible earnings based on the average lower quartile, median and upper quartile for particular courses, and can also interact through the app with existing students on those courses. Coursematch is already supported by Jisc, the HE digital learning agency, and Santander Universities.
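As a rough illustration of the calculation behind those earnings figures, here is a minimal sketch of deriving lower-quartile, median, and upper-quartile bands from a sample of graduate salaries. The salary numbers are invented, and Coursematch’s actual methodology is not public:

```python
import statistics

def earnings_bands(salaries):
    """Return (lower quartile, median, upper quartile) of a salary sample."""
    # statistics.quantiles with n=4 yields the three quartile cut points
    q1, median, q3 = statistics.quantiles(salaries, n=4)
    return q1, median, q3

# Invented example salaries for one course cohort
sample = [18_000, 21_000, 24_000, 26_000, 29_000, 33_000, 40_000]
print(earnings_bands(sample))
```

The point is less the arithmetic than its presentation: three summary numbers stand in for the whole distribution of graduate outcomes a prospective student might experience.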

AccessEd–ThinkUni app
The ThinkUni app comes from the not-for-profit sector, with AccessEd aiming to ‘increase access to university for young people from under-served backgrounds globally. We are creating a global network of partner organisations committed to this mission, sharing with them our expertise, resources and support’.

AccessEd supports access to university for young people from under-served backgrounds. Image from https://access-ed.ngo/

Pitched as a ‘personalized careers assistance’ service that is easy for students to use on their smartphones, ThinkUni builds on AccessEd’s previous university access work–including its ‘Brilliant Club’, the UK’s largest university access programme for 11-18 year olds.

According to the co-founder and executive chair of AccessEd, existing sources such as UCAS are huge databases and glorified spreadsheets that make decision-making difficult. With ThinkUni, students can instead access details such as which universities they could choose based on their school exam grades, and how long it would take to pay back their student loan based on a projected graduate salary.
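A repayment projection of this sort can be sketched very simply. The figures below loosely follow England’s ‘Plan 2’ rules (9% of income above a threshold), but ignore interest, salary growth, and the 30-year write-off; they are assumptions for illustration, not ThinkUni’s actual model:

```python
# Simplified sketch of a student loan repayment projection.
# Threshold and rate loosely follow England's 'Plan 2' rules; interest
# and salary growth are deliberately ignored.
def years_to_repay(debt, salary, threshold=25_000, rate=0.09):
    """Estimate years to clear `debt`, paying `rate` of income above `threshold`."""
    annual_repayment = (salary - threshold) * rate
    if annual_repayment <= 0:
        return None  # salary at or below threshold: no repayments due
    return round(debt / annual_repayment, 1)

# A £45,000 debt on a £32,000 salary means repaying 9% of £7,000 = £630/year
print(years_to_repay(45_000, 32_000))  # roughly 71 years
```

Even this crude model shows that many graduates would never repay in full before write-off—precisely the kind of consequence such an app could make visible to applicants.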

The Profs—That’s Life
That’s Life is the most distinctive of the competition finalists–it’s an education and careers simulator produced by The Profs, a successful private HE tutoring company.

The Profs is a successful HE private tutoring company. Image from https://www.theprofs.co.uk/

The idea for the service is that it provides a ‘gamified’ simulation of the outcomes of certain kinds of decisions, presenting projected data such as future levels of happiness, work-life balance and income, and showing students the impact of their life and course choices, including not going to university at all.

The gamification and simulation aspects of That’s Life demonstrate how the logics of video games could be employed to enhance student choice, notably by offering students opportunities to experiment with different pathways and problem-solving strategies. But the app’s origins in the private HE tutoring sector are also indicative of how private sector and alternative providers are being actively welcomed into public university service provision.

Scaling up the prototype
Whether apps such as those supported by government–or the earnings potential they present–actually influence student choice remains for now an empirical question. Another question is whether initial government investment will enable these app producers to scale their products. In a way, Sam Gyimah is acting like a Silicon Valley venture capitalist, seed-funding early-stage prototypes that bear a high risk of failure.

However, one existing example of a HE-facing app suggests that appetite for real venture capital investment in such products may be growing. Debut is a smartphone app for talent-matching graduates to corporate employers and labour markets. Graduate users create a profile—as with other social media platforms—and complete a psychometric personality test which can then be used for automated push notifications of appropriate jobs. Partnering corporate employers can even ‘talent spot’ and target individual users directly without requiring an application form or CV.

Debut is a machine-learning-based talent-matching app. Image from http://debut.careers/

But Debut is also a direct challenge to universities and the status of the academic degree.  ‘We want to unbundle that and turn our user base into a behaviour- and competency-based user base,’ its founder says. ‘The strength would be the person’s competency as opposed to academic success’. Instead, it emphasizes graduates’ ‘cognitive psychometric intelligence’, behavioural traits and competencies. ‘We have everything on students, from their cognitive background, social background, to how well they perform in a selection process’—data it is using to train machine learning algorithms ‘to make personalized recommendations and predictions’.

Debut therefore instantiates the entry of automated predictive talent analytics into UK HE, inciting students to cultivate their marketable personality and behavioural skills above their academic credentials. Users of the platform generate training data for its machine learning algorithm to tune and refine its subsequent job-matches and recommendations. In summer 2018 Debut also received £5 million in venture capital investment led by James Caan, the entrepreneur from the TV show Dragons’ Den, and already has 60 corporate clients, including Google, Apple and Barclays, that pay it an annual subscription to sort and organize the graduate data.

Student-powered & metrics-powered HE
As an established product, Debut is well positioned in the emerging app store of services and products to help shape students’ choices. As the DfE competition demonstrates, apps are emerging to match prospective applicants to courses based on graduate earnings data from LEO, while Debut can then link them to employers based on a training set of graduate competency profiles and successful labour market matches.

The finalists of the DfE competition represent the governmental recognition of the potential of data presented on apps to shape choices and decisions. The prototypical app store for HE choice is, therefore, a significant extension of ongoing upgrades to the data infrastructure of HE. It raises some key issues:

  • It exemplifies government ambitions to ‘unbundle’ and open up HE to new market providers of technologies, entrepreneurs, the private sector, and other business interests, with government itself acting as a market catalyst and seed-fund investor
  • It brings the logic of ‘swipeable’ apps and social media platforms into HE, importing the business model of platform capitalism and the extraction of value from student data into higher education
  • It utilizes persuasive design and behavioural science insights to design interfaces and visualizations that might ‘hook’ attention, ‘trigger’ behaviours, and ‘nudge’ decisions according to the ‘choice architecture’ provided
  • It continues to treat students as calculative consumers, investing in HE with the expectation of ROI in the shape of graduate outcomes and earnings, and puts pressure on institutions to focus on labour market outcomes as the main purpose of HE
  • It incites prospective and current students to see and think about HE in primarily quantitative and evaluative terms, as represented in metrics and market-like performance rankings and ratings
  • It anticipates potential long-term and real-time data monitoring of students in HE institutions, through a digital surveillance assemblage of apps, platforms and infrastructural connections, thereby making students into data transmitters of institutional qualities as well as consumers of institutional data
  • It instantiates the increasing role of algorithms, machine learning and automation into applicants’, students’ and graduates’ decision-making, with Debut even seeking to short-circuit the job application process and automatically talent-match graduate competency profiles to corporate job descriptions
  • It raises questions about the uses of student data to reinforce pre-existing governmental ideology, with the DfE recently reprimanded by statistical authorities for prioritising political messaging ahead of its statistical evidence–could student apps be designed otherwise, rather than to conform to market models of cost-benefit calculation?

By releasing a huge trove of LEO data, it also demonstrates how HE is being made increasingly measurable, computable, and comparable as a competitive, market-driven sector, with Gyimah noting that ‘these new digital tools will highlight which universities and courses will help people to reach the top of their field, and shine a light on ones lagging behind’.

The governmental focus on calculating which universities are ‘lagging’ or even ‘failing’ from their data is itself a huge sector concern, with Michael Barber, chair of the Office for Students, writing in The Telegraph that ‘While student choice should drive innovation, diversity and improvement, we recognise this won’t always be enough. So where market mechanisms are not sufficient, we will regulate’. The piece, entitled ‘We should allow bad universities to fail, as long as we protect their students’, followed another Telegraph article titled ‘If the higher education market is to succeed, bad universities must be allowed to go bust’.

In this highly conservative political and media context, further amplified by think tanks such as Reform, HE is being driven both by the supposed ‘empowerment’ of students and by metrics of market performance. The first perspective sees data as central to a ‘student-powered’ sector characterized by choice, value for money, and market competitiveness. The other takes a ‘metrics-powered’ perspective on universities as comparable market actors with winners and failures, as calculated by the choices of applicants to attend, indicator data on provider performance, and LEO or other student outcomes data on graduate outcomes and earnings.

These two perspectives are, however, binocular rather than oppositional. Barber’s emphasis on ‘bad universities’ and Gyimah’s enthusiasm for student-facing apps are part of the same project, with data from and about students  treated as key performance indicators for both policy officials and university applicants to assess. As Barber noted, ‘With more information at their disposal on the quality of courses and associated salary outcomes, [students] will rightly be thinking carefully about such choices. That places an onus on universities to plan realistically and respond quickly where demand is higher–or lower–than expected’.

The emerging, prototypical HE app store instantiates these demands in software. It reveals to students the best-performing universities in terms of degree awards and graduate earnings, but also reveals the ‘bad universities’ and discourages them from ‘investing’ in these institutions and their courses. In these ways, the HE app store threatens to exert dangerously performative effects. By presenting university providers as a market, these apps will shape students’ choices away from certain institutions, or prompt institutions to drop courses that don’t promise a high percentage of positive graduate outcomes, while privileging elite institutions with stronger existing performance records. The app store will speed up the ‘market failure’ of those providers presented in the data as ‘bad universities’.


The mutating metric machinery of higher education

Ben Williamson

Higher education increasingly depends on complex data systems to process and visualize performance metrics. Image by Dennis van Zuijlekom

Contemporary culture is increasingly defined by metrics. Measures of quantitative assessment, evaluation, performance, and comparison infuse public services, commercial companies, social media, sport, entertainment, and even human bodies as people increasingly quantify themselves with wearable biometrics. Higher education is no stranger to metrics, though they are set to intensify further in coming years under the Higher Education and Research Act. The measurement, comparison, and evaluation of the performance of institutions, staff, students, and the sector as a whole is expanding rapidly with the emergence and evolution of ‘the data-intensive university’.

This post continues a series on the expanding data infrastructure of HE in the UK, part of ongoing research charting the actors, policies, technologies, funding arrangements, discourses, and metrological practices involved in contemporary HE reforms. Current reforms of HE are the result of ‘fast policy’ processes involving ‘sprawling networks of human and nonhuman actors’, networks that specifically involve human data analytics experts and complex nonhuman systems of measurement. Only by identifying and understanding these mobile policy networks and the ‘metrological machinery’ of their HE data projects is it possible to adequately apprehend how macro-forces of governmental reform are being operationalized, enacted, and accomplished in practice.

Metrological machinery
The collection and use of UK university performance data has expanded and mutated dramatically in scope over the last two decades. The metrification of HE through the ‘evaluation machinery’ of research assessment exercises, teaching evaluation frameworks, impact measurements, student satisfaction ratings, and so on, is frequently viewed as part of an ongoing process of neoliberalization and marketization of the sector. One particularly polemical critique describes a ‘pathological organizational dysfunction’ whereby neoliberal priorities and corporate models of marketization, competition, audit culture, and metrification have combined to produce ‘the toxic university’.

The narrative is that HE has been made to resemble a market in which institutions, staff and students are all positioned competitively, with measurement techniques required to assess, compare and rank their various performances. It is a compelling if unsettling narrative. But if we really want to understand the metrification, marketization, and neoliberalization of HE, then we need to train the analytical gaze more closely on the specific and ever-mutating metrological mechanisms by which these changes are being made to happen.

In previous posts I examined the market-making role of the edu-business Pearson, and the ways the Office for Students (OfS), the HE market regulator, and the Higher Education Statistics Agency (HESA), its designated data body, intend to use student data to compare sector performance. Together, these organizations and their networks are building a complex and evolving data infrastructure that will cement metrics ever more firmly into HE, while opening up the sector to a new marketplace of technical providers of data analysis, performance measurement, comparison and evaluation.

Political demands to make HE more data-driven have opened up a marketplace for providers of digital technologies. Image by Eduventures

In this update I continue unpacking this data infrastructure by focusing on the Quality Assurance Agency for Higher Education (QAA) and the Joint Information Services Committee (Jisc). Both of them, along with HESA, are engaging in significant metrological work in HE. In fact, HESA, QAA and Jisc together constitute the M5 Group of agencies—‘UK higher education’s data, quality and digital experts’—formed in 2016 and named after their collective proximity to the M5 motorway in southwest England. Together, the QAA, HESA and Jisc also co-organize and run the annual Data Matters conference for HE data practitioners, quality professionals and digital service specialists.

To approach these organizations, David Beer’s concept of ‘metric power’ provides a useful framing. Drawing on key theorists of statistics (Desrosières, Espeland, Foucault, Hacking, Porter, Rose and others), metric power traces the long intensification of measurement over the last two centuries through to the current mobilization of digital or ‘big’ data across diverse domains of society. Central to metric power is the close alignment of metrics with neoliberal governance. Following the lead of Foucault and others in defining neoliberalism as the ‘generalization of competition’ and the extension of the ‘model of the market’ to diverse social domains, Beer argues that, ‘put simply, competition and markets require metrics’ because ‘measurement is needed for the differentiations required by competition’.

The concept of metric power, then, is potentially a useful way to approach the metrification of higher education and to explore how far this represents processes of neoliberalization and marketization. By examining the recent projects and future aspirations of agencies such as Jisc and QAA we can develop a better understanding of how a form of metric power is transforming the sector. To be clear at this point, there is nothing to suggest that either the QAA or Jisc are run by neoliberal ideologues–something more subtle is happening. The point is that both organizations, along with HESA and the OfS, are pursuing projects which potentially reinforce neoliberalizing processes by expanding the data infrastructures of HE measurement. They are ‘fast policy’ nodes in the mobile policy networks enacting the metrological machinery of HE reform.

QAA—sentimental evidence
The QAA is the sector agency ‘entrusted with monitoring and advising on standards and quality in UK higher education’. It maintains the UK Quality Code for Higher Education used for quality assessment reviews, as well as the Subject Benchmark Statements describing the academic standards expected of graduates in specific subject areas. QAA also undertakes in-house research and produces policy briefings.

One of its major strands of activity, via the QAA Scotland office, is an ‘Evidence Enhancement Theme’ focusing on ‘the information (or evidence) used to identify, prioritise, evaluate and report’ on student satisfaction. Its priorities are:

  • Optimising the use of existing evidence: supporting staff and students to use and interpret data and identifying data that will help the sector to understand its strengths and challenges better
  • Student engagement: understanding and using the student voice, and considering concepts where there is no readily available data, such as student community, identity and belonging
  • Student demographics, retention and attainment: using learning analytics to support student success, and supporting institutions to understand the links between changing demographics, retention, progression and attainment including the ways these are reported

The Evidence Enhancement program is unfolding collaboratively across all Scottish HE providers and is intended to result in sector-wide improvements in data use related to student satisfaction.

More experimentally, the QAA released a 2018 study into student satisfaction using data scraped from social media. The student sentiment scraping study, entitled The Wisdom of Students: Monitoring quality through student reviews, was based on a large sample of over 200,000 student reviews of higher education provision to produce a ‘collective-judgment’ score for each provider. These data were then compared with other sources such as TEF and NSS, and found to have a strong positive association. Crowdsourced big data from the web, it suggested, were as reliable as large-scale student surveys and bureaucratic quality assessment exercises as student experience metrics.

The QAA project is a clear example of how big data methodologies of sentiment analysis and discovery using machine learning and web-scraping are being explored for HE insights. For the QAA, taking such an approach is necessary because, as the sector has become more marketized and focused on the experience of the student in a ‘consumer-led system’ regulated by the ‘data-driven’ Office for Students, there has been ‘a gradual reduction in the remit of QAA in assessing and assuring teaching and learning quality in providers, and the rise in the perception of student experience and employment outcomes’ data as more accurately indicating excellence in higher education provision’. As such, measuring student experience in a timely, low-burden and cost-effective fashion has become a new policy priority, while existing instruments such as the TEF and NSS remain annual, retrospective, and potentially open to ‘gaming’ by institutions.

In contrast, collecting ‘unsolicited student feedback’ from reviews on social media platforms is seen by the QAA as a way of ‘securing timely, robust, low-burden and insightful data’ about student experience. In particular, the study involved collecting student reviews from Facebook, Whatuni.com and Studentcrowd.com, with Twitter data to be included in future research. The study authors found that 365 HE providers have Facebook pages with the ‘reviews’ function available, as well as many pages relating to departments, schools, institutes, faculties, students’ unions, and career services.

Perhaps most significantly, given the constraints of TEF and NSS, the scraping methodology allowed the QAA to come up with collective judgment scores for each provider on any given day. In other words, it allowed for the student experience to be calculated as time-series data, and opened up the possibility of ‘near real-time’ monitoring of provider performance in terms of delivering a positive student experience, which could then be used by providers to specify need for action. The advantages of the approach, according to the QAA, are that it makes year-round real-time feedback feasible ‘based on what students deem to be important to them, rather than on what the creator of surveys or evaluation forms would like to know about’; reduces the data-collection burden; minimizes providers’ ‘opportunities to influence or sanitise the feedback’; and opens up ‘the ability to explore sector-wide issues, such as feedback relating to free speech, contact hours, or vice-chancellor pay’.
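The report does not disclose its scoring method, but the move from individual scraped reviews to a per-provider, per-day ‘collective-judgement’ time series can be sketched roughly as follows. Everything here is illustrative: the word lists, the scoring function and the sample reviews are invented stand-ins, not the QAA’s actual model, which applied machine learning to far larger volumes of data.

```python
from collections import defaultdict
from datetime import date

# Tiny illustrative sentiment lexicon. A real study would use a trained
# sentiment model or a full lexicon; these word lists are placeholders.
POSITIVE = {"great", "excellent", "supportive", "helpful"}
NEGATIVE = {"poor", "disappointing", "unhelpful", "overcrowded"}

def review_sentiment(text: str) -> int:
    """Score one review: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_collective_judgement(reviews):
    """Aggregate (provider, date, text) reviews into a per-provider,
    per-day mean sentiment score: a time series of 'collective judgement'."""
    buckets = defaultdict(list)
    for provider, day, text in reviews:
        buckets[(provider, day)].append(review_sentiment(text))
    return {key: sum(scores) / len(scores) for key, scores in buckets.items()}

# Invented sample reviews for two hypothetical providers.
reviews = [
    ("Uni A", date(2018, 5, 1), "Great lectures and supportive staff"),
    ("Uni A", date(2018, 5, 1), "Overcrowded seminars, disappointing"),
    ("Uni B", date(2018, 5, 1), "Excellent library, helpful tutors"),
]
scores = daily_collective_judgement(reviews)
```

A real pipeline would swap the toy word lists for a trained model and ingest hundreds of thousands of scraped reviews, but the aggregation step, averaging review scores within each provider-day bucket, is what turns unsolicited feedback into a monitorable time series.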

In sum, the report concludes, ‘the timely and reliable extraction of the student collective-judgement is an important method to facilitate quality improvement in higher education’. The QAA intends to pilot the methodology with ten HE providers in late 2018.

The QAA concern with sentiment analysis of student experience needs to be understood not just as an artefact of HE marketization and consumerization, but as part of a wider turn to ‘feelings’, ‘sensation’ and ‘emotion’ in contemporary metric cultures. As William Davies notes, ‘Emotions can now be captured and algorithmically analysed (“sentiment analysis”) thanks to the behavioural data that digital technologies collect’, and these data are increasingly of interest as sources of intelligence to be harnessed for political purposes by authorities such as government departments or agencies. Scraping student sentiment from social media replicates the logic of psychological and behavioural forms of governance within HE, and has the potential to make the sector ever-more responsive to real-time traces of the student body’s emotional pulse.

The QAA-led Provider Healthcheck Dashboard allows institutions to monitor and compare their performance through data visualizations. Image from HESA

The medicalized metaphor of tracing pulses can be carried further in relation to another QAA project. In collaboration with its M5 Group partners Jisc and HESA, QAA led the production of a data visualization package called the ‘Provider Healthcheck Dashboard’. The purpose of the tool is to allow providers to perform ‘in-house healthchecks’ by comparing their institutional performances, on many metrics, against competitors. The metrics used in the Healthcheck dashboard include TEF ratings, QAA quality measurements, NSS scores, Guardian league tables, percentage of 1st or 2:1 degree rankings, and graduate employment performance over five years.

These metrics are presented on the dashboard as if they constitute the ‘vital signs’ of a provider’s medical health, compared against norms of performance and depicted visually as percentage differences from benchmarks. The provider healthcheck acts as a kind of medical read-out of the competitive health of an institution, demonstrating in a visual, easy-to-read format how an individual provider is situated in the wider market, and catalyzing relevant treatments to strengthen its performance.
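The arithmetic behind such a read-out is simple: each metric is expressed as a percentage deviation from a sector benchmark. A minimal sketch, with invented metric names and figures rather than real TEF, NSS or league-table data:

```python
def healthcheck(provider_metrics, benchmarks):
    """Express each provider metric as a percentage difference from the
    sector benchmark: the 'vital signs' a dashboard might plot."""
    return {
        metric: round(100 * (value - benchmarks[metric]) / benchmarks[metric], 1)
        for metric, value in provider_metrics.items()
    }

# Illustrative figures only, not real sector data.
benchmarks = {"nss_satisfaction": 84.0, "good_degrees_pct": 73.0, "grad_employment": 94.0}
provider = {"nss_satisfaction": 80.0, "good_degrees_pct": 78.0, "grad_employment": 94.0}

signs = healthcheck(provider, benchmarks)
```

The point of interest is less the calculation than its framing: once every metric is rendered as a signed deviation from a norm, any negative number reads as a symptom requiring treatment.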

Jisc—predicting performance
Jisc is a membership organization providing ‘digital solutions for UK education and research’. Its strategic ‘vision is for the UK to be the most digitally-advanced higher education, further education and research nation in the world’. Beyond articulating this vision, Jisc is the sector’s key driver of learning analytics—the measurement of student learning activities—which it is circulating via its formal associations with the other M5 Group members HESA and QAA.

As a key part of its vision Jisc has conducted significant work outlining a future data-intensive HE and how to accomplish it over the coming decade. It envisages a HE landscape dominated by learning analytics and even artificial intelligence, in which students will increasingly experience a personalized education based on their unique data profiles. Jisc’s chief executive has described ‘the potential of Education 4.0‘ as a response to the ‘fourth industrial revolution’ of AI, big data, and the internet of things. Education 4.0 would involve lecturers being displaced by technologies that ‘can teach the knowledge better’, are ‘immersive’ and ‘adaptive’ to learners’ needs, and that include ‘virtual assistants’ to ‘support students to navigate this world of choice and work with them to make decisions that will lead to future success’.

Towards this vision of an ‘AI-led’ future of HE, Jisc collaborated with Universities UK on the 2016 report Analytics in Higher Education. A key observation of the report is that existing datasets such as TEF provide very limited information for universities, policymakers or regulators to act on:

External performance assessments, such as the TEF, don’t in themselves support institutions understanding and using their data. Advanced learning analytics can allow institutions to move beyond the instrumental requirements of these assessments to a more holistic data analytic profile. Predictive learning analytics are also increasingly being used to inform impact evaluations, via outcomes data as performance metrics. Ultimately, this allows institutions to assess the return on investment in interventions.

As this excerpt indicates, Jisc has key interests in learning analytics, predictive analytics, outcomes data, performance metrics, and measuring return on investment.

It is now seeking to realize these ambitions through its launch in September 2018 of a national learning analytics service for further and higher education. According to the press release, the learning analytics service ‘uses real time and existing data to track student performance and activities’:

From libraries to laboratories, learning analytics can monitor where, when and how students learn. This means that both students and their university or college can ensure they are making the most of their learning experience. … This AI approach brings existing data together in one place to support academic staff in their efforts to enhance student success, wellbeing and retention.

The service itself consists of a number of interrelated parts. It includes cloud-based storage through Amazon Web Services so individual providers do not need to invest in commercial or in-house solutions, and ‘data explorer’ functionality ‘that brings together the data from your various sources and provides quick, flexible visualisations of VLE usage, attendance and assessment – for cohorts and individual students. … The information will help you to plan effective personal interventions with students and to identify under-performing areas of the curriculum’. A third aspect of the service is the ‘learning analytics predictor’ that helps teaching and support staff to use ‘predictive data modelling to identify students who might have problems’ and ‘to plan interventions that support students’.
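Jisc does not publish the predictor’s model, but the general shape of such ‘predictive data modelling’ can be sketched with a toy logistic model over engagement features. The feature names, weights and threshold below are all invented for illustration; a production service would learn its coefficients from historical student-outcome data rather than set them by hand.

```python
import math

# Hand-set weights standing in for coefficients a real service would learn
# from historical data; negative weights mean engagement lowers predicted risk.
WEIGHTS = {"vle_logins_per_week": -0.30, "attendance_rate": -3.0, "avg_assessment": -0.04}
BIAS = 5.0

def risk_probability(student):
    """Logistic model: low engagement pushes predicted risk towards 1."""
    z = BIAS + sum(WEIGHTS[k] * student[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def flag_at_risk(students, threshold=0.5):
    """Return IDs of students whose predicted risk exceeds the threshold,
    as candidates for a 'personal intervention'."""
    return [sid for sid, feats in students.items()
            if risk_probability(feats) > threshold]

# Invented engagement records for two hypothetical students.
students = {
    "s1": {"vle_logins_per_week": 12, "attendance_rate": 0.9, "avg_assessment": 68},
    "s2": {"vle_logins_per_week": 1, "attendance_rate": 0.3, "avg_assessment": 41},
}
flagged = flag_at_risk(students)
```

Flagged students become candidates for intervention; the critical point is that the model’s weights, whether hand-set or learned, encode a particular definition of what counts as risky engagement.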

The final part of the service is a student app called Study Goal, which is available for student download from major app stores. As it is described on the Google Play app store, ‘Study Goal borrows ideas from fitness apps, allowing students to see their learning activity, set targets, record their own activity amongst other things’. In addition, it encourages students to benchmark themselves against peers, and can be used to monitor attendance at lectures.

The Jisc Study Goal app is modelled on fitness apps, enabling students to monitor their performance and benchmark themselves against peers. Image from Google Play

Study Goal is an especially interesting part of the Jisc learning analytics architecture because, like the provider healthcheck dashboard, it appeals to images of fitness and healthiness through self-quantification, personal analytics and self-monitoring. The logic of activity tracking and self-quantification has been translated from the biometrics of the individual body into a kind of health metrics of the institution. University leaders and students alike are being responsibilized for their academic health, while their data are also made available to other parties for inspection, evaluation, prediction and potential intervention. Beyond the learning analytics service and Study Goal app, Jisc has also supported the Learnometer environmental sensing device, which ‘automatically samples your classroom environment, and makes suggestions through a unique algorithm as to what might be changed to allow students to learn and perform at their best’. Not only is student academic and emotional health understood to underpin their performance, but the environment needs to be healthy and amenable to performance-maximization too.

All of these developments indicate a significant reimagining of universities and all who inhabit them as being amenable to ever-more pervasive forms of performance sensing and medicalized inspection. Higher education is becoming a kind of experimental laboratory environment where subjects are exposed through metrics to a data-centric clinical gaze, and where everything from students’ feelings and teaching quality to classroom environment and graduate employment outcomes is a source of risk requiring quantitative anticipation, modelling, and real-time management. Positioned in this way, the political priority to make HE function as a ‘healthy market’ of self-improving competitors thoroughly depends on the metric machinery of agencies such as the QAA and Jisc, and on the expanding data infrastructure in which they are key nodes of experimentation, policy influence, and technical authority.

Metric authority
Jisc and the QAA are bringing new metric techniques into HE, such as sentiment analysis, predictive modelling, comparative data visualization and student benchmarking apps, in ways that do appear to reinforce the ongoing marketization of the sector. They are key nodal actors in the policy networks building a data infrastructure for student measurement–an infrastructure that remains unfinished and continues to evolve, mutate and expand in scope as new actors contribute to it, new analyses are made possible, and new demands are made for data, comparison and evaluation.

It is necessary to restate at this point that the QAA and Jisc are not necessarily uncritically pursuing a market-focused neoliberalizing agenda. The QAA’s sentiment analysis report appears somewhat critical of the market reform of HE under the Office for Students. The point is that these sector agencies are all now part of an expanding data infrastructure that appears almost to have its own volition and authority, and that is inseparable from political logics of competition, measurement, performance comparison, and evaluation that characterize the neoliberal style of governance. It is a data infrastructure of metric power in higher education.

David Beer rounds off his book with several key themes which he proposes as a framework for understanding metric power. These can be applied to the examples of the metrological machinery of HE being developed by the QAA and Jisc.

Limitation. According to Beer, metric power operates through setting limits on ‘what can be known’ and ‘what can be knowable’ by setting the ‘score’ and ‘the rules and norms of the game’. The QAA and Jisc have become key actors of limitation by constraining the assessment and evaluation of HE to what is measurable and calculable. Through learning analytics, Jisc is pushing a particular set of calculative techniques that make students knowable in quantitative terms, as sets of ‘scores’ which may then be compared with norms established from vast quantities of data in order to attach evaluations to their profiles. The QAA-led dashboard similarly sets constraints on how provider performance is defined, and cements the idea that performance comparison is the only game to be played.

Visibility. Metric power is based on what can be rendered visible or left invisible—a ‘politics of visibility’ that also translates into how certain practices, objects, behaviours and so on gain ‘value’, while others are not measured or valued. Through their data visualization projects, Jisc and the QAA are involved in rendering HE visible as graphical comparisons which can be used for making value-judgments—in terms of what is deemed to be valuable market information. But such data visualization projects also render invisible anything that remains un-countable or incalculable, and inevitably make quantitative data that can be translated into graphics appear more valuable than other qualitative evaluations and professional assessments. The Study Goal app reinforces to students that certain forms of quantifiable engagement are valued and prized more highly than other qualitative modes.

Classification. Metric power works by sorting, ordering, classifying and categorizing, with ‘the capacity to order and divide, to group or to individualise, to make-us-up and to sort-us-out’. Through learning analytics pushed by Jisc, students are sorted into clusters and groupings as calculated from their individual data profiles, which might then lead, in Jisc’s ideal, to personalized intervention. Likewise, the sorting of universities by comparative healthcheck dashboards and their ordering into hierarchical league tables serves to classify some as winners and others as fallers and failures in a competitive contest for performance ranking and advantage.

Prefiguration. Metric power ‘works by prefiguring judgements and setting desired aims and outcomes’ as ‘metrics can be used in setting out horizons … and imagined futures and then using them in current decision-making processes’—and this is especially the case with the imagining and pursuit of markets and the measurement of their performance. Here Beer appears to be pointing to the performativity or productivity of data to anticipate future possibilities in ways that catalyse pre-emptive action. Clearly, with its real-time sentiment analysis, the QAA’s student-scraping study is seeking to mobilize data for purposes of prompting action and pre-emption by promoting the use of time-series data that indicate trends towards future outcomes in terms of student ratings. Institutions that can read student satisfaction in near real-time from social media sentiment might act to pre-empt their TEF and NSS ratings. Likewise, the Healthcheck Dashboard allows institutions to anticipate future challenges, while Jisc has specifically sought to embed predictive analytics in institutional decision-making.

Intensification. Metric power perpetuates the models of the world with which it sets out, with metrics satisfying the ‘desire for competition’, intensifying processes of neoliberalization, and expanding its models of the market into new areas. We can see with the QAA and Jisc how the market model of competitive evaluation and ranking has extended from research and teaching assessment to rating of institutions via social media scoring and user-reviews. Jisc’s Study Goal app also puts the market model under the very eyes and fingertips of students as it invites them to compare and benchmark themselves against their peers, thereby intensifying metric power through competitive peer relations and positioning students as responsible for their own market performance and prospects.

Authorization. Metric power works by ‘authenticating, verifying, legitimating, authorizing, and endorsing certain outcomes, people, actions, systems, and practices,’ with market-based models and metrics taken and trusted as sources of ‘truth production’. The dashboards and analytics advanced by QAA and Jisc are being propelled into the sector with promises of objectivity, impartiality and neutrality, free of human bias and subjective judgment. As such, these data and their visualization constitute a seemingly authoritative set of truths, yet are ultimately an artificial reality of higher education formed only from those aspects of the sector that are countable and measurable.

Automation. Metric power shapes human agency, decision-making, judgement and discretion as systems of computation and the ‘decisive outcomes of metrics’ are taken as objective, legitimate, fair, neutral and impartial, especially as ‘automated metric-based systems’ potentially take ‘decisions out of the hands of the human actors’ and ‘algorithms are making the decisions’ instead. Although QAA and Jisc are clearly not removing human judgment from the loop in HE decision-making, they are introducing limited forms of automation into the sector through algorithmic sentiment analysis, machine learning and data visualization dashboards that generate ‘decisive outcomes’ and thereby shape institutional or personal decisions.

Affective. Finally, metric power and systems of measurement induce affective responses and feelings—metrics have ‘affective capacities’ such as inducing anxiety or competitive motivation, and thereby ‘promote or produce actions, behaviours, and pre-emptive responses’, largely by prompting people to ‘perform’ in ways that can be valued, compared and judged in measurable terms. Jisc’s Study Goal is exemplary in this respect, as it is intended to incite students to benchmark themselves in order to prompt competitive action. The healthcheck dashboards, likewise, are designed to induce performance anxiety in university leaders and prompt them to take strategic action to ensure advantageous positioning in the variety of metrics by which the institution is assessed. In both examples, HE is framed in terms of ‘risk’, a highly affective state of uncertainty, as a way of catalyzing self-improvement.

As these points illustrate, through organizations such as the QAA and Jisc, HE is encompassed in the sprawling networks of actors and technologies of metric power. The data infrastructure of higher education is an accomplishment of a mobile policy network of sector agencies along with a whole host of other organizations and experts from the governmental, commercial and nonprofit sectors. A form of mobile, networked fast policy is propelling metrics across the sector, and increasingly prompting changes in organizational and individual behaviours that will transform the higher education sector to see and act upon itself as a market.


The tech elite is making a power-grab for public education

Ben Williamson

Silicon Valley entrepreneurs are linking public education into their growing networks of activity and influence. Image by Steve Jurvetson


In the same week that Amazon founder Jeff Bezos announced a major move into education provision, the FBI issued a stark warning about the risks posed to children by education technologies. These two events illustrate clearly how ed-tech has become a significant site of controversy, a power struggle between hugely wealthy tech entrepreneurs and those concerned by their attempts to colonize the education sector with their imaginaries and technologies. Jeff Bezos, Mark Zuckerberg, Elon Musk, Peter Thiel, and other super-wealthy Silicon Valley actors, are forming alternative visions and approaches to education from pre-school through primary and high schooling to university. They’re the new power-elite of education and their influence is spreading.

I’ve previously written about the Silicon Valley entrepreneurs and venture capitalists making a power-grab for the education sector. Benjamin Doxtdator has also written brilliantly about their rewriting of the history of public education as a social problem requiring urgent correction for the future. Here I just want to compile some recent developments of Silicon Valley intervention at each stage of education, to illustrate the growing scale of their influence as they continue linking public education into their networks of technical development.

The Amazon pre-school network
Amazon’s Jeff Bezos announced via a letter on Twitter his plans to invest $2 billion in support for homeless families and a ‘network of new, non-profit, tier-one preschools’. The ‘Academies Fund’ will create ‘Montessori-inspired’ preschools through a new organization to ‘learn, invent and improve’ based on ‘the same set of principles that have driven Amazon’. Most notably, Bezos added, ‘the child will be the customer’ in these schools, with a ‘genuine, intense customer obsession’.

While many will admire the philanthropic efforts of the world’s richest man to support early years education, the idea of Amazon-style pre-schools that see children as customers problematically positions education as a commercialized service in ‘personalized learning’. Bezos is not the first tech sector entrepreneur to announce or invest in pre-schooling, and as Audrey Watters commented,

The assurance that ‘the child will be the customer’ underscores the belief – shared by many in and out of education reform and education technology – that education is simply a transaction: an individual’s decision-making in a ‘marketplace of ideas’. … This idea that ‘the child will be the customer’ is, of course, also a nod to ‘personalized learning’…. As the customer, the child will be tracked and analyzed, her preferences noted so as to make better recommendations to up-sell her on the most suitable products.

The image of data-intensive startup pre-schools with young children receiving ‘recommended for you’ content as infant customers of ed-tech products is troubling. It suggests that from their earliest years children will become targets of intensive datafication and consumer-style profiling. As Michelle Willson argues in her article on algorithmic profiling and prediction of children, they ‘portend a future for these children as citizens and consumers already captured, modelled, managed by and normalised to embrace algorithmic manipulation’.

Primary Spaces
Primary schooling has been a strong focus for Silicon Valley for several years. Notable examples include Mark Zuckerberg’s The Primary School and Max Ventilla’s AltSchool, two of the most high-profile startup schools to embed personalized learning technologies and approaches within the whole pedagogic apparatus. Less is known about Ad Astra, the hyper-exclusive private school project set up by Tesla boss Elon Musk within his SpaceX headquarters, although it too emphasizes students pursuing personal projects, problem-solving, and STEM subjects.

Elon Musk’s Ad Astra school is located in the HQ of SpaceX. Image by Steve Jurvetson

However, the globally-popular ed-tech company ClassDojo recently announced a partnership with Ad Astra to create new content for primary-school-age children. Building on the success of its previous content partnerships on ‘growth mindset’ and ‘empathy’, ClassDojo has worked with Ad Astra to create a set of resources focused on ‘conundrums’ that involve ‘open-ended critical thinking and ethics challenges’. The resources are not intended to be used at Ad Astra itself, but will be released to teachers and schools later in 2018.

The ClassDojo partnership means that Ad Astra’s focus on problem-solving and ethical challenges will be mobilized into classrooms at potentially huge scale. ClassDojo already claims millions of users, and is fast expanding as a major social media and content platform for primary schools in many countries. The conundrums ClassDojo and Ad Astra have created pose problems that are considered foundational to ‘building liberal society’. This suggests that the kind of ‘liberal society’ assumed by entrepreneurs such as Elon Musk is a vision to be pursued through the mass inculcation of children’s critical thinking and problem-solving.

Given that Musk, like Amazon’s Bezos, is also investing in space exploration, their efforts in young children’s education raise significant questions about what kind of future world and liberal society they are imagining and seeking to build. What kind of child are they trying to construct to take part in a future society that, for Bezos and Musk, may well be distributed into space?

Super High Schools
High schools are the focus for Laurene Powell Jobs’ XQ Super School project, which is a ‘community of people mobilizing America to reimagine public high school’. The project previously awarded philanthropic funding through a competition to 18 US high schools, including Summit School, one of a chain sponsored by Facebook’s Mark Zuckerberg.

XQ Super School is not just a competition though—it is seeking to produce a glossy blueprint for the future of public high school itself in its new guise as a ‘community’ or ‘network’ of reform. Its updated website features a variety of resources, videos, guidance, partnership opportunities and other materials to stimulate imaginative thought across the education sector. It also now features highly developed learner goals for schools to aspire to, including problem-solving, collaboration, invention, and the cultivation of ‘growth mindset’–mindset being the preferred success-psychology of Silicon Valley right now, developed and propagated from Stanford University, the original academic home of many of the valley’s most successful entrepreneurs.

XQ Super School marketing. Image from XQ Super School

It is easy to view XQ Super School as a commercial takeover of public education. Perhaps more subtly, though, what XQ and others are accomplishing is a reimagining of high school through the cultural lens of Silicon Valley. These entrepreneurs are pursuing a future vision based on their own politics, their own psychological theories, and their own discourse—of community, of problem-solving, of invention, of growth mindset—and propelling it into the remaking of public education at large.

Intelligent Universities
The contemporary university is also being reimagined by the tech power-elite. Peter Thiel—the co-founder of PayPal alongside Elon Musk—for example, established the Thiel Fellowship as an alternative to higher education for ambitious young technology entrepreneurs. Higher education itself has become the target for a massive growth in the educational technology market, part of what David Berry terms the new ‘data-intensive university’.

The social media platform LinkedIn has become one of the most significant players in the data-intensive HE market. Since its acquisition by Microsoft for more than $26bn in 2016, LinkedIn, Janja Komljenovic argues, has increasingly targeted the HE sector with particular features that are generated explicitly for students, graduates and universities. These features include student profiles, university branded pages, and the capacity for students to search universities based on graduate career outcomes.

According to Komljenovic, ‘LinkedIn moves beyond the passivity of advertising to its users towards actively structuring digital labour markets, in which it strategically includes universities and its constituents’. She argues that it is using its ‘qualification altmetrics’ to build ‘a global marketplace for skills to run in parallel to, or instead of university degrees’.

In this sense, LinkedIn is fundamentally transforming and challenging HE by making students and universities into ‘prosumers’ in ‘data markets’, where ‘the data they produce is monetised and repackaged to become governing devices for their own sector’, and by reframing ‘meanings in the HE sector about quality of universities and degrees; graduates and their diplomas; and skills in relation to employability’. As such, LinkedIn’s algorithms increasingly hold the potential to match individuals, skills and jobs as gaps are revealed in labour markets, pressing the project of higher education to become more outcomes- and skills-focused as a result.

The 2018 higher education technology landscape. Infographic by Eduventures

Amazon, too, is seeking a position in higher education. It recently announced that it was installing Amazon Echo Dot devices in all student dormitories at St Louis University as part of its Alexa for Business offering. The move, it was reported, is ‘among the largest smart speaker deployments at a university and could help Amazon to establish smart speakers and the voice interface as typical among younger users’.

Beyond its clear business goals, with the partnership Amazon is marking the entrance of AI into HE, with Alexa becoming an automated student experience assistant. It is hard to imagine that Alexa won’t have a place in Jeff Bezos’s preschool network too, not least as voice assistants may make a better interface than screens with children who have yet to learn to read or write. Amazon is entering public education at both preschool and postsecondary phases, with massive implications for institutions, staff and students of all ages.

The FBI and the ‘ed-techlash’
The tech elite now making a power-grab for public education probably has little to fear from FBI warnings about education technology. The FBI is primarily concerned with potentially malicious uses of sensitive student information by cybercriminals. There’s nothing criminal about creating Montessori-inspired preschool networks, using ClassDojo as a vehicle to build a liberal society, reimagining high school as personalized learning, or reshaping universities as AI-enhanced factories for producing labour market outcomes–unless you consider all of this a kind of theft of public education for private commercial advantage and influence.

The FBI intervention does, however, at least generate greater visibility for concerns about student data use. The tech power-elite of Zuckerberg, Musk, Thiel, Bezos, Powell Jobs, and the rest, is trying to reframe public education as part of the tech sector, and subject it to ever-greater precision in measurement, prediction and intervention. These entrepreneurs are already experiencing a ‘techlash’ as people realize how much they have affected politics, culture and social life. Maybe the FBI warning is the first indication of a growing ‘ed-techlash’, as the public becomes increasingly aware of how the tech power-elite is seeking to remake public education to serve its own private interests.


Genetics, big data science, and postgenomic education research

Ben Williamson

A diagram visualizing the genetic variants associated with educational attainment. Image by Emily Willoughby.

An international consortium of genetics researchers has established a link between genes and educational attainment from a study of over a million people. One of the largest genetics studies ever published in a science journal, it represents a significant step forward for the emerging field of educational genetics. The growth of genetics expertise in education also, however, raises substantial concerns about biological determinism and new forms of eugenics, and reanimates long-standing debates about the genetic inheritance of intelligence and cognitive ability.

In this post I outline some key findings of the study, but primarily focus on the significant implications and issues it raises for education research more widely. The implications of the study are that it: (1) establishes genetics as a powerful new front in educational knowledge production; (2) positions big data science as a methodological apparatus for future educational studies; (3) surfaces extreme political polarization regarding genetic factors in education that will be difficult to reconcile as genetics enters education policy debates; (4) potentially opens up a new market for commercial educational genetics products; and (5) reveals the need for new social scientific forms of engagement with, and critique of, genetics research and postgenomic science in the education field.

Gene discovery
Published in Nature Genetics at the end of July 2018 by the international Social Science Genetic Association Consortium (SSGAC) in collaboration with the consumer genetics company 23andMe, the paper ‘Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals’ reports findings showing that genetic patterns across a large population are associated with years spent in school. According to its 80 authors, ‘educational attainment is moderately heritable and an important correlate of many social, economic and health outcomes,’ and is therefore an important focus in a number of educational genetics studies.

Specifically, the scientists identified over a thousand genetic variants linked with educational attainment, particularly those involved in brain-development processes and the formation of neuronal connections in foetuses and newborns. These biological factors, the scientists claim, influence psychological development, which in turn affects how far and for how long people continue at school.

The SSGAC has been careful in reporting the results. They do not claim to have identified any single genes for education, and the data don’t predict educational attainment for individuals. The research also found that genetic variants have a far weaker effect than environmental influences on educational attainment, and was restricted to analysis of a homogeneous sample of people aged in their 40s and 50s of white European descent (the study failed with a sample of African-Americans). The authors produced a massive Q&A document—longer than the paper itself—to help explain and clarify the results, methods and conclusions, while downplaying the policy and practical implications of its findings. As such, the paper has been carefully published in acknowledgement of the potential controversy it could cause, and to anticipate misinterpretation and misreporting of its findings.

Nonetheless, the paper has catalysed significant media interest and social media commentary. Three days after publication, the paper had been Tweeted 1000 times, blogged multiple times, and reported in news media around the world—picking up an enormous Altmetric score in the process. There is useful coverage in the New York Times, Atlantic and MIT Technology Review reporting the key findings.

Clearly the paper is a massive advance for genetics science, in education and beyond. For those education researchers and social scientists outside of the genetics field, however, it has major implications in terms of knowledge production, methods, policy influence, and the commercialization of educational genetics.

Powerful genetic knowledge
Along with other recent advances in genetics in education, the SSGAC study instantiates the emergence of a powerful new field of knowledge production. Such research is only possible now owing to the complete sequencing of the human genome–the entire genetic structure of human DNA–over a decade ago, and since then studies in human genomics have expanded rapidly. As a result, science studies researchers claim we are now in a postgenomic age.

As a research field, educational genomics seeks to unpack the genetic factors involved in individual differences in learning ability, behavior, motivation, and achievement. Importantly, researchers of educational genomics do not assume either that there is any single genetic factor that determines learning ability, cognition or intelligence, or that genetic factors entirely explain the complexity of learning. Identifying an individual’s genotype—the full heritable genetic identity of a person—and its relationship to learning, intelligence or educational outcomes remains complex. Practitioners of educational genomics and behavioural genetics look for patterns in huge numbers of genetic factors that might explain behaviours and achievements in individuals, by studying the interaction of genotypes and environmental influences on phenotypical behaviours and traits (such as intelligence).

The SSGAC has positioned itself as a leading consortium for such postgenomic education science with the publication of their paper, but another key figure bringing genomics research into education is the behavioural geneticist Robert Plomin, co-author of the controversial G is for Genes: The Impact of Genetics on Education and Achievement. Plomin has extensively studied the links between genes and attainment using ‘genome-wide polygenic scoring’ (GPS), a method also employed in the SSGAC study. A polygenic score is produced by analysing a huge number of genetic markers, and their interactions with environmental factors, in order to predict a particular behavioural or psychological trait. As computer processing power, data storage capacity, and data analytics technologies have advanced in recent years, it has become possible to correlate huge quantities of genotypical data with a host of phenotypical traits.
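The arithmetic at the core of a polygenic score is simple even though the data behind it is vast: each variant’s estimated effect size is multiplied by the number of effect alleles an individual carries (0, 1 or 2), and the products are summed. A minimal sketch, with invented variant IDs and effect sizes purely for illustration (real effect sizes come from genome-wide association studies such as the SSGAC’s):

```python
def polygenic_score(genotype, effect_sizes):
    """Weighted sum: each variant's effect size times the individual's
    count of effect alleles (0, 1 or 2) at that variant."""
    return sum(
        effect_sizes[variant] * allele_count
        for variant, allele_count in genotype.items()
        if variant in effect_sizes
    )

# Hypothetical per-allele effect sizes for three variants.
effect_sizes = {"rs0001": 0.021, "rs0002": -0.013, "rs0003": 0.008}

# One individual's allele counts at those variants.
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

score = polygenic_score(genotype, effect_sizes)
print(round(score, 3))  # 2*0.021 + 1*(-0.013) + 0*0.008 = 0.029
```

In real studies the sum runs over hundreds of thousands of variants and the raw score is then standardized against a reference population; the predictive power of the result depends entirely on how well the effect sizes were estimated, and on how similar the individual is to the population in which they were estimated.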

Under the banner of a ‘new genetics of intelligence’, Plomin and colleagues have used polygenic scores to predict academic achievement in schools. The substantial increase in heritability they found ‘represents a turning point in the social and behavioural sciences because it makes it possible to predict educational achievement for individuals directly from their DNA,’ thereby ‘moving us closer to the possibility of early intervention and personalized learning.’

While the SSGAC avoids calling for interventions based on its data, the results open up possibilities for further studies and analyses. These include: studies that control for genetic influences in order to generate credible estimates of how changes in school policy influence health outcomes; studies of why specific genetic variants predict educational attainment; and studies of how the effects of genes on education differ across environmental contexts. As such, the research itself is a catalyst for further educational genomics studies.

Although educational genomics remains in its infancy, it seems likely to advance considerably in coming years, linking genotypes to phenotypical traits, behaviours and other outcomes. It will link more closely with psychology and neuroscience as associations are further established between genes and neurons, personality traits and so on. As more findings emerge, further support will grow for evidence-based scientific perspectives on learning. New forms of genetic and genomic expertise in educational matters are already emerging, challenging existing forms of social scientific and philosophical educational research, which have contested the biological determinism of genetics for decades.

Big data science
The methodological apparatus of the SSGAC study, and other research in educational genomics and behavioural genetics, is huge—it dwarfs the technical, methodological, financial and expert resources of other forms of educational research. The SSGAC study itself is the accomplishment of a well-funded international team of 80 scientists working in departments of psychology, sociology, behavioural genetics, behavioural science, neurogenomics, economics, biosciences, health sciences, and many others. A core part of the team included more than 20 scientists from the commercial organization 23andMe, the Silicon Valley company backed by Google. The research, then, was distributed across public universities and commercial labs at huge scale and significant cost.

Beyond the big size of the team and its funding, the study is also typical of the big data methods of genetic science. The data on its sample of over a million people was from two sources. One was the UK Biobank, a huge open access health resource based on a living population of over 500,000 volunteer participants, which was established by the Medical Research Council and the Wellcome Trust and opened up to scientists in 2012. One of many biobanking projects worldwide, it opens up unprecedented access to large samples of genetic data for analysis. The other data was sourced from 23andMe itself, the consumer genetics company offering health and ancestry services on a profit-making basis.

The methods described in the appendix to the SSGAC study demonstrate the quantitative and computational complexity of such large-scale genetics research. The study depends on a range of statistical methods, tests, mathematical formulae, algorithms, data visualizations, software platforms with names such as METAL and PLINK, and bioinformatics platforms called DEPICT, MTAG, PANTHER and MAGMA.

As such, the paper published in Nature Genetics is the end-result of the activities of a huge interdisciplinary science team, generous financial funding, enormous databanks from both the not-for-profit and private sectors, and highly sophisticated big data analytics methods, all powered by a vast infrastructure of bioinformatics technologies, statistical software analysis packages, data analytics and visualization. The scale of the scientific infrastructure of knowledge production is miles away from the norms of educational research.

Yet we may expect further education research to locate itself within such infrastructures of professional expertise, labs, databanks, analytics methods and software. Already, scientists are beginning to propose new multidisciplinary experimentation and intervention under the heading of ‘precision education’. Genetics and neuroscience are spectacular new fronts of big data-driven scientific research, and related subfields of educational genomics and educational neuroscience are growing fast, with the support of wealthy foundations and commercial partners. As a result, studies such as that by the SSGAC and other educational genetics teams position big data science as a new frontier of innovative and interdisciplinary education research.

Policy sciences
Researchers in the field of Science and Technology Studies (STS) have long maintained that science and politics are inseparable, and often focus their attention on scientific controversies. This is particularly the case when science enters into official policy, and is translated and manipulated to fit political agendas and policymakers’ requirements. The new genetics of education is an ideal illustration of an emerging scientific controversy in education.

The SSGAC research represents the potential for a significant shift in emphasis in education policy to embrace genetics expertise. Though the SSGAC reports no direct policy implications from its study, it is clear that policymakers seeking explanations for educational attainment would be interested in the results. As Kalervo Gulson and P. Taylor Webb have argued, new kinds of ‘bio-edu-policy-science actors’ may be emerging as authorities in educational policy, ‘not only experts on intervening on social bodies such as a school, but also in intervening in human bodies’. And science writer Antonio Regalado pointed out that one of the SSGAC authors had previously stated that once polygenic scores could be used to predict IQ, it would trigger a ‘serious policy debate’ about ‘personal eugenics’.

Commenting on the SSGAC study, John Warner cautions about how conservative economists might seek to translate the results into policy proposals. ‘How long before schools subject to performance funding as determined by graduation metrics begin to discriminate against students with low polygenic educational attainment scores?’ he asks. ‘When will automated human resources algorithms start weighing polygenic educational attainment scores when sorting through job applicants?’ These questions point to the possibility of students being grouped and clustered together by their polygenic scores, and the potential for enforcing new kinds of ‘biosocial collectivity’ within schools.

A significant problem with the potential translation of educational genomics into education policy is that genetics in education is extremely controversial and politicized. The publication in the mid-90s of The Bell Curve rekindled old debates about genetic determinism, eugenics and racialized discrimination in relation to IQ testing and the political uses of intelligence data. Concerns persist about this ‘new geneism’, and help account for the very careful, actively depoliticised packaging of the SSGAC study. A recent article in The New Statesman on the genetics of education identified deep polarization between right-wing advocates of genetics and left-wing critics, with the former preferring explanations based in biology and the latter seeking environmental explanations. A column reporting on the SSGAC study in the New York Times argued ‘progressives should embrace the genetics of education’, suggesting that ‘the power of the genomic revolution [can] be harnessed to create a more equal society’ while berating the ‘long tradition of left-wing thinkers who considered biological research inimical to the goal of social equality’.

Matters aren’t helped by the fact that some of the most outspoken advocates of genetic explanations for attainment, achievement and intelligence are divisive public figures such as Toby Young and Charles Murray (co-author of The Bell Curve). In a recent Spectator article titled ‘The left is heading for a reckoning with the new genetics’, Young attacked what he saw as liberal progressives’ ‘environmental determinism’ as ‘scientifically indefensible’. ‘Like Marx,’ he argued, ‘post-modernists believe that man’s true nature is reducible to the totality of social relations, that individuals are nothing more than the embodiments of particular class-relations and class-interests, and that everything comes down to the struggle for power. I wouldn’t expect an uncritical acceptance of the new genetics from that quarter’.

Drawing on an interview with Charles Murray, Young also speculated that left wing sociologists in particular would likely become irrelevant unless they embraced the new genetics by the mid-2020s. For Murray, this was itself a source of deep concern, since he thought ‘once left-wing intellectuals finally let go of environmental determinism they may veer too far in the opposite direction and embrace gene editing technologies like CRISPR-Cas9 to try to create the perfect socialist citizen’.

Given Young’s proximity to education policymakers and politicians under the current UK Conservative government, his comments on genetics have caused widespread alarm among academics and educators. Generating policy proposals based on educational genomics in this tense environment, then, is likely to be a continuing source of deep controversy and irreconcilable political suspicions. It appears that education policy in coming years will have to engage in significant debate about genetics and even personal eugenics, requiring informed participation by social scientists whose views on the matter are currently subject to attack and ridicule by conservative commentators. Education policy studies of this scientific and political controversy will be essential.

Genetic exploitation
With growing awareness of the increasing power of genetic science in education, it is highly likely that commercial organizations will seek to exploit the opportunity to build an educational genetics market of services and products.

Consumer companies such as Google-backed 23andMe have already exploited the opportunities made available by the sequencing of the human genome to launch genetic testing services as commercial products. As 23andMe makes up part of the team behind the SSGAC study, this commercial outfit has not only positioned itself as part of the apparatus of education research, but could also stand to gain from extending into the provision of further educational genetics products. In the same week the SSGAC study was released, 23andMe also released details of a deal with the big pharmaceutical company GlaxoSmithKline to use data from its 5 million customers of home genetics testing kits to design new drugs. The $300 million deal will see GSK and 23andMe applying artificial intelligence and machine learning to the medical discovery process, analysing genetic data from 23andMe and other sources such as UK Biobank. As a private company with vast genetic databanks, 23andMe is clearly positioning itself as a key part of the infrastructure of genetic science in pharmaceuticals and education.

Other companies are likely to see market potential in educational genetic testing products too. Already, concerns are emerging about startup companies seeking to exploit advances in human genomics research to produce genetic IQ tests. Cheap DNA kits for IQ testing in schools, in the shape of ‘intelligence apps’ or other genetic ed-tech products, may be feasible in the not-too-distant future, though considerable and understandable concern exists about their usefulness and ethics. Robert Plomin has proposed that DNA analysis devices such as ‘learning chips’ could make reliable genetic predictions of heritable differences in academic achievement, and it is easy to speculate how consumer-DNA companies could extend in this direction.

Major risks would emerge from the expansion of an educational genetics market. One is that as genetic predictions become accepted as forecasts of a child’s future ability, new approaches may emerge to ‘artificially select future generations’–a ‘eugenics 2.0’ for selecting ‘smarter kids’. While embryo screening programs probably remain unlikely in the West, large-scale efforts are already underway elsewhere to find the genetic code for high IQ. This raises the possibility that selecting for intelligence could become attractive to wealthy parents seeking genetic advantage for their children.

The merging of genetic science, big data and commercial speculation in education could lead to a new form of ‘platform scientism’, where the logics of capital accumulation and data analytics combine to push genetic testing and other profiling services in schools. The danger of such a scenario, as detailed in The Atlantic, is that obsession with these ‘slippery genetic predictions could turn people’s attention away from other things that influence how children do in school and beyond — things like their family’s wealth, the stress in their neighborhoods, the quality of the schools themselves’.

Critical postgenomic education research
The acceleration and expansion of educational genetics research as a big data science of attainment, achievement and even intelligence raises distinctive challenges for social scientific education research. Straightforward critique and rejection of genetics represents a possible form of resistance. However, within the wider field of sociology and STS research on postgenomics, researchers have begun to propose different forms of analysis and critique, with some educational researchers also working to get beyond simplistic critical reactions to new biological thinking in productive new ways.

Contemporary postgenomic science, with its emphasis on gene-environment interaction, offers an invitation for social scientists to explore how the biological and the social constitute each other. Biosocial studies, for example, acknowledge that the body, biology and brain are shaped by their social circumstances and environmental contexts. Commenting on contemporary postgenomic science, biosocial researchers argue that the social world gets ‘under the skin’ to impress upon the biological. They insist that bodies are influenced by power structures in society, becoming tangled with social, political and cultural structures and environments.

Biosocial work in education is just beginning to emerge. Developing a ‘biosocial education’ agenda, Deborah Youdell argues that learning may be best understood as the result of ‘social and biological entanglements.’ Biosocial education research therefore takes biology seriously, but also digs critically into the ways scientists have conceptualized the body and thereby made it amenable to experimentation and intervention.

A biosocial approach would seek to understand educational genetics in both biological and social scientific terms by appreciating that the social environments in which learning takes place do in fact inscribe themselves on bodies and brains. The genetic and neural data of contemporary postgenomics would have to be understood from a biosocial view as data about social processes, not only biological processes.

Since genetics is a highly data-intensive and software-saturated field of experimentation and knowledge production, a biosocial perspective would also address the implications of data processing of students’ genetic and neural details. Taking further cues from STS, it would acknowledge that data are always a partial selection, that their analysis through vast data infrastructures of methods and software packages matters a great deal to the results produced, and that the results can influence what happens in educational settings. Is the ‘quantified human’ held in a database and represented by a polygenic score really detailed enough to yield insights to intervene upon students? Additionally, biosocial research would be alive to the possible consequences of for-profit commercial companies building software platforms for collecting and analysing students’ genetic and neural information.

The million-sample SSGAC study is clearly a landmark in postgenomic education science. It is a field of experimentation and knowledge production requiring novel forms of social scientific and philosophical analysis. A biosocial approach may be one way forward, but it is clear that educationalists need to develop a range of concepts and methods in order to perform critical postgenomic education research as the genetic science of education expands and accelerates.


Edu-business as usual—market-making in higher education

Ben Williamson

The Nido Spitalfields Tower, the world’s tallest student accommodation, sits on the boundary of the financial district of London. Image by UggBoyUggGirl

The global education business Pearson has established itself as a major player in higher education around the world. With core business interests in digital online courses and alternative models of HE provision, Pearson is currently making significant inroads into British universities and the HE sector more widely. From a critical perspective, Pearson’s ongoing business activities appear symptomatic of the further marketization and privatization of contemporary HE under current government policy and regulation. A ‘neoliberal takeover of higher education’–the subtitle of a tight little book by Lawrence Busch–means universities are increasingly focused on achieving market value through competition, performance metric ranking, consumer demand, and return on investment. However, we need a better understanding of the specific role of edu-businesses such as Pearson in remaking higher education as a market.

Newspaper coverage in The Telegraph in June 2018 reported on a new law passed by government permitting the Office for Students—the HE regulatory body—to share student data with Pearson, HMRC, the Student Loans Company, and the Competition and Markets Authority. Headlined ‘University students’ data to be shared with private companies’, the article focused on the risk of student data being exploited for profit, threats to student privacy, and potential re-use or sale of the data for undeclared purposes by Pearson. Response to the news demonstrated considerable concern about Pearson’s role in furthering business interests in HE through its use of student data. Pearson is a major, multi-billion-dollar market actor with a huge global business, and is participating in the expanding data infrastructure of the UK’s higher education system too–but questions remain about exactly how Pearson participates in the marketization of HE itself. This work-in-progress is an attempt to think these issues through–a working paper rather than a blog post–and the third part of a series on key actors in the expanding data infrastructure of higher education (the first was on the Higher Education Statistics Agency, the second on the Office for Students).

Market-making micro-processes
To understand Pearson’s role, I adopt a framework from Susan Robertson and Janja Komljenovic to analyse marketization in HE. They have adapted it from Çalışkan and Callon, who define ‘marketization as the entirety of efforts aimed at describing, analysing and making intelligible the shape, constitution and dynamics of a market’. For Çalışkan and Callon, markets ‘organize the conception, production and circulation of goods’. Importantly, though, markets depend on a complex arrangement of rules and conventions, technical devices, metric systems, calculating equipment, logistical infrastructures, texts, technical and scientific knowledge, and human competencies and skills—all of which are engaged in power struggles over the definition and valuation of goods. Their definition and approach to marketization—as effort and coordination among people, institutions and things—applies across a diversity of markets.

Following this approach to study the effort of marketization in the HE context, Robertson and Komljenovic therefore argue ‘markets do not simply appear’ as the outcome of market ideology, but instead ‘are both made and remade, as new products and services, frontiers and spaces, are imagined, invented, implemented, inventoried, vetted and vetoed.’ In particular, they focus on how the formerly non-market space of higher education has been reframed and re-made as an ‘education services market’, and subsequently how these HE markets work. The market-making process in HE involves considerable ‘investment’ at the macro-level by policymakers, politicians, investment advisors, education firms, and universities to imagine higher education as a market to be opened up and exploited. At the micro-level it also involves the ‘nuts and bolts’ of creating higher education products and services that can be exchanged in a range of marketplaces. As such, understanding HE marketization requires not just macro analysis of neoliberal political ideology, but micro analysis of the practical, material, technical and discursive effort of market-making and maintenance.

To better understand the micro-processes involved in HE market-making, Robertson and Komljenovic—via Çalışkan and Callon—identify processes of (1) pacifying goods, (2) marketizing agencies, (3) market encounters, (4) price-setting, and (5) market design and maintenance. These analytical categories provide a useful way to think about the emerging role of edu-businesses such as Pearson in the everyday practices and processes of contemporary universities.

Pacifying goods
The first micro-process of pacifying goods refers to how things and services are represented as describable and predictable ‘packages’ with fixed qualities to which value and price can be attached. Robertson and Komljenovic offer the examples of a university being packaged as an object for investment, ‘student experience’ as a product with distinctive elements for students to consume, or ‘business intelligence’ as information software worth purchasing to assist strategic decision-making by university managers.

Pearson’s core business model within the higher education sector depends on the production of packages of goods and services in which it hopes universities will invest. Behind the recent Telegraph coverage of Pearson’s data-sharing agreement with the Office for Students is a longer history of Pearson involvement in producing services and products for HE. As an alternative HE provider, it established Pearson College London in 2012, making Pearson the only FTSE 100 company in the UK to design and deliver degrees (validated by the universities of Kent and Bradford). As a higher education provider, Pearson has legitimate reasons—like other providers—to require data access from the OfS for these purposes (as it did previously via HEFCE).

Pearson also offers online degree programs, with several UK universities entering into ten-year deals with the company to deliver courses (at present these are King’s College London, Leeds, Manchester Metropolitan, and Sussex, with others in the US too). Through its ‘full-service approach to creating online degree programs or individual learning solutions’, Pearson’s online learning services are presented as streamlined technical systems and standardized program management packages for universities to purchase in order to ‘help you expand access, reach each student, and improve achievement’.

The process of rendering its services and products as standardized packages within HE markets, however, has required significant company effort to justify its registration with government as an HE provider and the student fees it charges. Pearson College London advertises itself as ‘powered by industry experience’ and, through ‘work with industry giants from Unilever, L’Oreal, WPP and IBM, to Framestore, Double Negative, MPC and The Mill’, it has established itself as a distinctive market provider which is ‘transforming higher education’. As such, these industry partners have become part of the package of Pearson’s HE market offer to fee-paying students.

In order to further expand the model as a viable and marketable package, Pearson also released Demand Driven Education: Merging work and learning to develop the human skills that matter, a report predicting a shift in ‘future skills’ requirements for students (based on data from the Future Skills project collaboration between Pearson, Nesta and the Oxford Martin School). Its authors concluded that a transformation in HE would be needed to achieve these future skills. If earlier HE reforms had focused on widening access and improving academic success, ‘demand driven education’ would ‘focus more strongly than ever on ensuring graduates are job-ready and have access to rewarding careers over the course of their lifetime’.

The Future Skills landscape, mapped by Pearson, Nesta & Oxford Martin School

As these examples indicate, Pearson has sought to ‘pacify’ its goods and justify them for investment—by universities and prospective students—both by appealing discursively to the ‘widening access’ priority of the university sector, and by actively prompting a shift toward industry-led, future-skills-focused, and demand-driven higher education through the material circulation of glossy reports and websites. It has produced technical systems and logistical infrastructure for program management to ease universities into the online learning market too.

In these important ways, Pearson is participating in making an increasingly competitive HE market in which it is itself a competitor, with an alternative provision that sets it apart from the conventional degree provision of most established universities. At the same time, the model of flexible, technology-infused provision it offers is also increasingly the model pursued by existing HE institutions, indicating how the commercial online learning model is becoming the focus of market competition among universities themselves. Along with its competitors, Pearson has standardized, stabilized and packaged online learning to create a market within UK higher education. Along the way, students have been packaged in terms of marketable ‘future skills’ whose development universities need to invest in as human capital, and universities have been reframed as market providers of ‘valuable’ demand-driven education services.

Marketizing agencies
Marketizing agencies refer to the actors competing to define what is a valuable good or service, which takes place among people, technologies, laws and forms of calculation. As such, marketizing agencies within HE include human actors such as market analysts, data managers and business intelligence officers, but also computer software, business strategies, and private company support.

Pearson has established itself as a powerful marketizing agency in HE, carefully defining through its glossy reports and brochures such as Demand Driven Education what are valuable goods and services for contemporary universities to offer. Through its ‘full service’ online learning packages, it offers its expertise as a global ed-tech courseware and platform provider in ways that have produced conviction in its offerings among university leaders, the Department for Education and the Office for Students. Indeed, the law itself has been changed through the statutory instrument signed off by the DfE’s HE minister Sam Gyimah to enable Pearson and the OfS to share data.

Pearson is also bringing novel kinds of practical ‘know-how’ and expertise into HE—both human experts who know how to engage with complex digital technologies and data, and nonhuman technologies of expertise that can enhance universities’ engagements with their data. It has, for example, positioned itself as a leading centre of expertise in digital data analytics for education (including performance metrics and comparative methodologies) at a global scale, across both the schools and universities sectors. It has developed specific technologies such as data dashboard software packages to allow university leaders and administrators to measure institutional performance through metrics and indicators. The development of these technologies positions it both as a market provider with products, services and expertise to sell and share, and as a market-maker, seeking to prompt universities to see themselves in quantitative terms as performance rivals and competitors with other providers.

Pearson puts higher education under the microscope in Demand Driven Education

As a marketizing agency, what Pearson can do depends on its computer and mathematical equipment, as well as on the cognitive activities of its experts—its software developers, data analysts, education advisers, courseware designers and so on. This hybrid of human expertise and nonhuman equipment enables Pearson to function as a marketizing agency.

However, Pearson is of course in a struggle with other agencies to define what counts as a valuable service—indeed, to define the value of higher education itself. Universities themselves are marketizing agencies, as are the Department for Education and the Office for Students. A key actor among the marketizing agencies involved in HE market-making is Sir Michael Barber, Chief Education Adviser for Pearson from 2012 before taking up the post of Chair of the Office for Students in 2017. A former senior adviser in the Prime Minister’s Delivery Unit under Tony Blair and education adviser to David Blunkett, he was also a member of the review group for the Browne Review of university funding in 2009-10, and served as a partner at the consultancy McKinsey, heading its Global Education Practice. Barber physically embodies a meeting-point between agencies, rendering porous the boundaries between government agency, consultancy and private company. He effectively represents how the capacities of agencies across the private and public sectors, such as Pearson and the OfS, have begun to dominate HE institutions, imposing their market model of HE as a valuable consumer commodity upon the sector. This exercise of power is at the core of contemporary struggles by many university employees over the purposes and practices of the university.

Market encounters
Market encounters then refer to how agencies and goods meet one another, such as at higher education fairs, conferences, seminars and other events, as well as through social media, web pages and other online and material arrangements. One might say that the Pearson online learning environment is a key site of market encounter. It brings together the commercial provider, the university, students, and staff into a shared space where diverse investments are made in each other and value is produced for each agency through relations made possible by the software service.

More straightforwardly, Pearson invests considerable effort in staging market encounters with the HE sector. Barber himself, in his prior Pearson role, contributed to various events and material publications promoting a transformative model of HE. His co-authored report An Avalanche is Coming (published by the IPPR think tank) made the argument that:

University leaders need to take control of their own destiny and seize the opportunities open to them through technology – Massive Open Online Courses (MOOCs) for example – to provide broader, deeper and more exciting education. Leaders will need to have a keen eye toward creating value for their students. Each university needs to be clear which niches or market segments it wants to serve and how. The traditional multipurpose university with a combination of a range of degrees and a modestly effective research programme has had its day. The traditional university is being unbundled.

The report particularly emphasized competition between universities and online providers, ensuring education for employment, supporting alternative providers and the future of work, and recognition that the ‘new student consumer is king’. Universities not adapting to these challenges and opportunities risked being swept away by the avalanche of change brought by technology—or, in other terms, market failure. Five years on, the report reads like a template for HE market reform under the Higher Education and Research Act 2017 and the regulatory strategy of the OfS under Barber. As a result, consensus is growing among UK government departments and agencies for the model of HE promoted and offered by Pearson for consumption in the HE market, as its growing presence in the sector demonstrates.

This consensus and market-consolidation is also demonstrated by the Department for Education announcement of an open data competition allowing software developers access to longitudinal student employment and earnings outcomes data in order to create apps or online services to help prospective students choose courses and institutions. (On its launch, HE Minister Sam Gyimah tweeted: ‘We want students to be better informed about degree choices & the returns–today, we’re officially launching a competition for tech companies to take graduate data & create a MoneySuperMarket for students, giving them real power to make the right choice’). The logic of the competition is that student choice is best made on the basis of future earnings, in ways highly similar to Pearson’s own emphasis on career-readiness courses and demand-driven education. But an additional feature of the competition is that it forces prospective students to think of HE as a marketplace, and to see themselves as future ‘human capital’ whose choices about which universities to attend and courses to study are a form of self-investment which will affect their future prospects and value in labour markets.


The eventual products of the competition—whether they are apps or other types of MoneySuperMarket-style online price-comparison services—will themselves become mediating sites of market encounter between students and universities. They will act as sites where the value of a degree, as a pacified good, becomes a matter of calculation. Universities will have to calculate how best to present the value of their service, algorithms will calculate the data for national comparison and visualization, and students will have to calculate how to choose in their best interests. As such, these apps and services will be key market-making devices.

Price-setting
The price of a good or service is established through struggles between the different agencies that encounter each other, such as over how much to sell or buy a service or product for. Pearson is an important actor in price-setting in HE because it is offering alternative degree pathways and full-service online provision; as such, it is itself in a competitive market among other online HE providers for university customers. It is also interested in how students, seen by its CEO John Fallon as ‘the Spotify generation’, may themselves ‘pay for use. They don’t want to buy to own, and they only want to pay to use things that are directly relevant to their course and their outcomes’. So its price-setting model is adapting to the market logics of online streaming services, and treating students as a direct-to-consumer market.

Deciding how much to pay for a service is a key aspect of market-making in the university sector (this of course is at the heart of government disappointment in the failure of universities to differentiate fees in England). Within universities, though, as Robertson and Komljenovic observe, administrators routinely have to make budgetary decisions regarding the purchase of goods or services, but price is sometimes secondary to personal relations or trust in certain suppliers. Through high-profile partnerships with UK universities that speak the same language as Pearson—emphasizing ‘flexible online study’ and meeting ‘the demands of the evolving labour markets’—the company is establishing itself as a value-for-money provider in an emerging marketplace where traditional universities are hybridizing with alternative providers. It is not the only provider in this space, and is in a struggle with competitors through formal processes such as tendering and procurement.

A selection of recent Pearson reports focusing on digital transformations in education

The hybrid model of the traditional university partnering with a private company to offer online courses is an interesting example of a particular kind of market. Robertson and Komljenovic note that many universities are engaged in ‘inside-out and for-profit’ activities, participating in market exchanges by selling services to others for profit—such as selling ‘student experience’ in the shape of study programmes to overseas students. Universities are also involved in ‘outside-in and for-profit’ activities where they act as buyers and contribute to the profits of other actors—such as software vendors, data suppliers, and other outsourced providers of services. The hybridized partnership model that Pearson is establishing creates a market that is both inside-out and outside-in at the same time, with the university gaining advantage from investing in Pearson (in the shape of profit-turning fees from distance students), and Pearson gaining an advantage through returns on its investment in the shape of paying student customers.

Universities, driven by the imperative to increase revenue, are increasingly seeking ways of recruiting international students to their online offerings, thus opening up the market to multiple players and catalysing a price-setting competition between different online providers. At the same time, the price-setting is mirrored by the revenue generation promises of the online service for the university, and by the return on investment projections for the provider. Reputation for all the agencies involved is also a factor–Pearson gains reputational advantage from being embedded in elite institutions, while institutions gain reputational advantage from appearing innovative in digital delivery of future-focused, demand-driven services.

Market design and maintenance
The last micro-process of market-making—design, implementation, management, and maintenance—describes how various elements are brought into being and reproduced to enable ongoing stability, continued extraction of profits, and efficient value-for-money use of resources.

In order to maintain its own market position, Pearson has established a ten-year partnership model with a number of universities to provide online learning services, which is establishing its longevity in UK HE as well as scaling up its online learning services, one of the fastest growing parts of the business. It has built long-term relations of trust with its partner institutions. Through the provision of well-packaged products, providing expertise, staging encounters with the sector, and establishing agreements over price and value of its provision, Pearson is seeking to build and maintain the market for its products to ensure its long-term stability and profitability. In this sense—as Çalışkan and Callon observe in relation to markets more generally—Pearson and its partners don’t just trust each other, but invest considerable hope in the market relationship they are developing. There are market emotions at play in these efforts to implement, manage, and maintain a market of online learning services.

Through reports such as An Avalanche is Coming and Demand Driven Education, Pearson is also involved in market design. It is establishing a discourse and an imaginary of the reformed, transformative future of HE in ways which are closely aligned with the governmental objectives of the Department for Education and the Office for Students as a market regulator. For example, as one of the most powerful figures in British HE, Michael Barber is highly involved in shaping the sector, and is pursuing the same vision as chair of the OfS as he did in his role as chief education adviser at Pearson.

Here it is important to return to the data-sharing issue raised in The Telegraph. While Pearson may well have legitimate reasons to access OfS data as an HE provider, it also has ambitious plans around the use of digital data analytics in HE in ways that reinforce the data-led reforms represented by the OfS. The two organizations share an imaginary for the future design of HE. One obvious point of congruence is that Pearson’s online learning services will be able to provide the kind of fine-grained student data that conventional universities cannot. These data will be available on dashboards for university teachers and administrators to inspect in order to assess and evaluate the performance of courses, staff and students, in ways which reflect the OfS emphasis on performance metrics.

Pearson’s data-led ambitions go beyond performance dashboards however. Demand Driven Education, for example, highlights the potential of using AI and ‘predictive talent analytics’ to match students to career paths. This idea is highly congruent with the DfE’s software competition linking students and courses to earnings potential. Additionally, Pearson has invested considerably in data-driven digital technologies for use in the HE sector, including learning analytics and adaptive learning platforms that require access to huge quantities of past student data and real-time data from student activities on digital courses. It even has a partnership with IBM Watson to embed ‘AI tutors’ in digital courseware that can constantly track a student’s actions and progress, and then ‘interact’ to ‘improve student performance’.

Clearly, these kinds of data analytics and AI technologies will require access to vast databases of student information. Their rollout would create a new market of student data, valuable to AI systems in a market exchange where students surrender their information in return for personalized learning support. As such, a clear and shared imaginary of a technology-intensive, demand-driven, skills-focused HE infuses both governmental and commercial ambitions to design and maintain a highly marketized higher education sector.

Edu-business as usual
The marketization of contemporary higher education has been brought into being and sustained through a range of processes, many of which Pearson is involved in. Of course, Pearson is not alone in making HE into a market, but it is a significant actor as a private company and a provider of digital technologies required by universities to compete in the imagined HE landscape of the future. Contemporary universities are increasingly involved in different kinds of markets and market exchanges, all of which involve considerable social activity, technical involvement, and effort to make, manage and maintain. Pearson is moving its business considerably into the making of HE markets, and establishing ‘edu-business as usual’ as the reformatory model for the future of higher education in the UK and beyond.


Comments on ClassDojo controversy

Ben Williamson

Image: ClassDojo Class Story

The educational app ClassDojo has been the target of articles in several British newspapers. The Times reported on data privacy risks raised by the offshoring of UK student data to the US company–a story The Daily Mail re-reported. The Guardian then focused on ClassDojo promoting competition in classrooms. All three pieces have generated a stream of public comments. At the current time, there are 56 comments on the Mail piece, 78 at The Times, and 162 on The Guardian. I’ve been researching and writing about ClassDojo for a couple of years, on and off, and was asked some questions by The Times and The Guardian. So the content of the articles and the comments and tweets about them raise issues and questions worth their own commentary–a response to key points of controversy that also speak to wider issues with the current expansion of educational technology across public education, policy and practice. ClassDojo has also now released its own response and reaffirmation of its privacy policy.

ClassDojo is highly divisive. Online newspaper comments often degenerate into polarized hectoring, but it is apparent (from both the comments and Twitter reactions) that the expansion of ClassDojo has both enthused some teachers and appalled others. More subtly, some teachers dislike the reward app but like the social media aspects of it, which allow them to streamline messaging to parents and upload photos, videos and examples of student work. Other teachers appear to find the parent messaging a burden, as it makes them available to parents on-demand at all times. These tensions in themselves are reason for some caution regarding ClassDojo marketing claims that the product creates ‘happier classrooms’ and ‘connects teachers with students and parents to build amazing classroom communities.’ More pressingly, they point to real tensions over ed-tech apps among the teaching profession, and the potential of substantial non-use and resistance, as education becomes increasingly digitized.

Teachers’ views about ClassDojo have not been sought. Some comments pointed out that while the newspapers consulted experts and pundits (and ‘PC snowflakes’), none asked teachers about ClassDojo. As I pointed out to The Guardian, there simply is not a body of evidence of how ClassDojo is being used in practice (unless I’ve missed it). This is going to be a large research task since, as many comments pointed out, ClassDojo is used in very different ways as teachers adapt it to their own practices. It’s also in use around the world, in multiple languages. Nonetheless, detailed studies of the situated and contextualized uses of ClassDojo need to be undertaken to listen to teachers’ voices, observe how the app slips into classroom practices, and trace out the effects on children. While I would welcome more teachers’ voices about ClassDojo in the press, too, it’s important to be aware that ClassDojo recruits its own teacher ‘mentors’ and has a ‘Mentor Community’ of early adopters. The mentors act as advocates for the app, with support from the company, to spread the word to other teachers (as explained in this interview from 28 minutes in). Although it appears ClassDojo has benefited from grassroots momentum, it has choreographed its bottom-up growth too. So finding teacher voices that cut past its arm’s-length marketing community would be important.

Is adequate informed consent being sought and secured? As noted in The Times, the privacy policy for ClassDojo is 12,000 words long, raising concerns that neither teachers nor parents are likely to fully understand the implications of signing children up to it. With the introduction of GDPR, this could raise problems—probably not for ClassDojo, which has a dedicated team of privacy consultants to ensure its compliance, but for schools if found to be breaching data laws. Ultimately, it is schools and teachers that collect and use the data, that are responsible for gaining informed consent from parents, and that opt children in to ClassDojo or agree to parents’ opt-out wishes–again, we have too little evidence of school procedures to know the risks here. One comment at The Guardian expressed resentment at the suggestion that the app had spread into teachers’ hands before the risks it raised had been considered. But this point was not an attack on teachers. It reflects a concern that teachers are being positioned as data privacy, security and consent experts when it is highly unlikely these topics are part of their initial professional education or continuing development. Nor, really, should teachers be expected to shoulder such responsibilities, especially if they carry legal consequences. Nonetheless, I think the lack of clarity here should trigger efforts to define what kind of ‘data literacy’ teachers, school leaders and governors may need in order to decide whether to use a free online ed-tech app or service, and what paperwork needs to be completed to ensure its use is ethical and legal. ClassDojo isn’t alone in raising difficult issues about consent. Pearson came under fire recently for experimental uses of student data without seeking students’ consent either.

Data privacy and protection concerns remain. ClassDojo has been dealing with privacy concerns since its inception, and it has well-rehearsed responses. Its reply to The Times was: ‘No part of our mission requires the collection of sensitive information, so we don’t collect any. … We don’t ask for or receive any other information [such as] gender, no email, no phone number, no home address.’ But this possibly misses the point. The ‘sensitive information’ contained in ClassDojo is the behavioural record built up from teachers tapping reward points into the app. ClassDojo has a TrendSpotter feature to allow analysis of those points over time. School leaders can view it. The behavioural points can follow children from one class to the next. Parent email addresses are required and are stored. While there is currently no indication of any kind of leak or breach from ClassDojo, there has been a steady increase in school cybersecurity incidents which raise wider questions regarding the security of student data. Even the well-resourced education platform Edmodo was hacked recently, with the theft of 77 million users’ details. As reported in The Times, just like the commercial, financial and health sectors, ed-tech is not impervious to data security and privacy breaches.

Is ClassDojo monetizing student data? ClassDojo’s founders have stated clearly they will never sell student data for advertising. How it intends to make a profit and secure return on investment for its generous funders, however, remains unclear, giving rise to concerns about its monetization of student data. It has in the past suggested it could use those data to sell behavioural reports back to schools or even local authorities. It has also suggested it could sell ‘Education Bundles’ to parents (see from 51mins here). Its response to issues raised by the press confirmed it was seeking to produce saleable premium features. These are business proposals at present, and easily give rise to concerns about how the data may in future be used to make profit. As one commenter to The Guardian pointed out, ClassDojo needs to reassure teachers and parents by issuing clear and unambiguous statements about how it uses or intends to use the vast database of student behaviours it holds. It is not hard to imagine behaviourally-targeted premium content becoming feasible as it seeks to monetize the platform. Such fears may be unfounded. But it has to provide a return on investment for its investors at some point. It seems unlikely it will do so through sales of cuddly branded toys alone. Another way of securing a return on investment might be to sell the company, which would mean all ClassDojo data coming under its new owner’s privacy policy. Parents would be given 30 days to delete their child’s data in the event of a sale.

ClassDojo is Big Brother with a jolly green face. Not only does ClassDojo capture student behavioural information through the reward app; it also gathers photos, videos, digital portfolios of work, and permits messaging between teachers and parents. The company has slowly shifted from the behaviour app to become more like a social media platform for schools–even the rewards mechanism is similar to social media ‘liking.’ Just as Facebook presents itself as a platform for communities, ClassDojo’s founders and funders see it as the platform for building ‘amazing communities’ of children, teachers, schools and parents. The addition of ‘school-wide’ functionality makes it into the main communication mechanism for many schools, and a way for school leaders to have oversight of class data. Whether ClassDojo is really building ‘amazing communities’ is an empirical question. Researchers of social media have identified the commercial imperatives and surveillance mechanisms behind their ‘community’ ideals. ClassDojo has subtly worked its way into the central systems of schooling, shaping how teachers think about and monitor student behaviour, reconfiguring how teachers and parents communicate, giving headteachers new ways of observing behavioural trends, and giving parents ‘real-time’ ability to track and watch their children in the classroom. It is shaping what a school community should (ideally) be and how it can connect, with student behaviour metrics at its core. Many commenters on the newspaper stories raised fears about the effects of constantly monitoring and quantifying children. Studies of ClassDojo as a platform would help to reveal its community-building effects, and to interrogate the extent to which it extends surveillance in schools.

ClassDojo is offshoring student data. Both The Times and Mail reported that ClassDojo offshores sensitive student information to the US. My understanding from the ClassDojo website is that all the information it collects is stored by Amazon Web Services—so it could be in Dublin, somewhere in mainland Europe, or in the US. Amazon currently has no cloud storage facility in the UK. But AWS is now part of the backbone of the web (as well as government intelligence), so ClassDojo offshoring data is not unique. AWS has also made it extremely cheap to set up social media sites, as it drastically reduces the costs of data storage and access. In this sense, ClassDojo is part of the massive expansion of Amazon’s power across the internet and worldwide web, and emblematic of how individuals’ personal information is increasingly distributed, offshored and scattered across cloud computing centres. It does, though, raise the question of just how much influence and commercial gain Amazon may be accruing in public education.

Third party data use. AWS is just one of many third-party services employed to help run ClassDojo. The Times latched on to Datadog due to a data breach a couple of years ago, and noted Google and Facebook too. As I understand it, Google supplies web analytics—the kind of data that permits ClassDojo to monitor user numbers, visitors to the site, frequency of use of the service and so on. The newspaper coverage may have led readers to understand that sensitive student data was being shared with these third parties—or even sold to them. Some commenters immediately presumed the data was being sold to Facebook and Google for targeted advertising (the phrase ‘if the product is free, you’re the product’ was repeated in a lot of the more critical comments). ClassDojo has repeatedly stated that selling student data for advertising is not its business model, and The Guardian reported as much.

ClassDojo is just a digital ‘sticker chart’ or ‘house points’. There are, of course, continuities between ClassDojo and older practices of rewarding and disciplining students. The difference between sticker charts and ClassDojo is that the awarding or deduction of points can be viewed by parents, that the points accumulate into a persistent behavioural timeline that teachers and/or school leaders can inspect for trends, and that records can follow children as they move from one class to another. It is much stickier than sticker charts, which is why, as The Guardian reported, it raises concerns about labelling students in behavioural terms.

ClassDojo is behaviourist & promotes competition. As The Times and Mail reported, ClassDojo promotes ‘gamification’ by ranking students by number of points, which potentially incentivizes students to seek further points through actions they know the teacher will reward—rather than out of interest in the topic of study itself. The Guardian suggested this could make classrooms overly competitive. Of course, there are issues here of the reproduction of existing inequalities. Is the awarding of dojo points equally distributed across socio-economic, ethnic and gender categories? It also raises issues about the central behaviourist mechanism of ClassDojo, which is based on theories of positive reinforcement of ‘correct behaviours’ through issuing rewards and punishments. But who says what’s ‘correct’ behaviour, and on what basis? Apps like ClassDojo appear to be ‘nudging’ students to conform to the behavioural ideals that their designers have programmed into the software.

ClassDojo exemplifies the growth of positive psychology education. The ClassDojo company is quite clear what ‘correct’ behaviour looks like—it’s behaviour that indicates a student is developing a growth mindset, grit and character. Its founders always talk about these ideas in media interviews, and cite as their major influences the psychologists Carol Dweck (growth mindset) and Angela Duckworth (grit). ClassDojo even ran a ‘Big Ideas’ series of animations teaching children and teachers about growth mindset and how it can be observed in students’ behaviours. Growth mindset in particular is now a hugely popular idea in education, but it’s not uncontested. A recent meta-analysis of growth mindset studies showed very small effects on student achievement, which seems to suggest that claims about the benefits have been overblown and oversold. One Twitter comment likened ClassDojo to ‘corporate Buddhism.’

Ed-tech is taking over the classroom. The ClassDojo controversy exemplifies wider recognition of the influence and impact of the ed-tech industry in shaping what happens in schools, as some comments noted. The ed-tech industry has circulated the idea that public schooling is broken—too much one-size-fits-all teaching and high-stakes testing leads to disengaged and stressed kids—and that its apps and analytics can fix it by ‘personalizing’ learning, thereby supporting the development of students’ resilient growth mindsets. Such a view has helped the ed-tech industry promote itself as the solution to public problems, and to insert itself actively into the daily routines of schools. ClassDojo has expanded through social media network effects as a free app into the hands of teachers in schools all over the world, ultimately transmitting its company vision of what classroom behaviour should be like into the actions of teachers and students. In many ways this appears profoundly undemocratic, as responsibility for defining the aims and purposes of public education around the world is assumed by tech-sector entrepreneurs according to their own readings of popular psychological and behavioural theory.

ClassDojo promotes neoliberal, individualized responsibility. From an overtly sociological perspective, ClassDojo is part of a movement in education policy, technology and practice to hold individuals responsible for their behaviours while completely ignoring all the contextual, cultural, socio-economic and political factors that shape students’ behaviours. For sociologists of ‘character education,’ for example, the idealized student under contemporary neoliberal austerity is an entrepreneurial, resilient and self-transforming individual who can take personal responsibility for dealing with chronic hardship and worsening insecurity. As part of the movement to enhance student character and mindset, ClassDojo may be reproducing this ideal, inciting teachers to issue positive reinforcement rewards for behaviours that indicate the development of entrepreneurial characteristics and individual self-responsibility.

Data-danger is a new media genre. The risks of ‘data-danger’ for children reported in the articles about ClassDojo doubtless need to be viewed through the wider lens of media interest in social media data misuses following the Facebook/Cambridge Analytica scandal. This presents opportunities and challenges. It’s an opportunity to raise awareness and perhaps prompt efforts to tighten up student privacy and data protection, where necessary, as GDPR comes into force. ClassDojo’s response to the controversy raised by the press confirmed it was working on GDPR compliance and would update its privacy policy accordingly. Certainly 2018 is shaping up as a year of public awareness about uses and misuses of personal data. It’s a challenge too, though, as media coverage tends to stir up overblown fears that obscure the reality and that may then easily be dismissed as paranoid conspiracy theorizing. It’s important to approach ed-tech apps like ClassDojo–and all the rest–cautiously and critically, but to be careful not to get swept up in media-enhanced public outrage.

Image: ClassDojo

The Office for Students as the data scientist of the higher education sector

Ben Williamson


Data play a huge role in British higher education. The new regulator for the sector, the Office for Students (OfS), will escalate data collection and use in HE in the years to come. Improving student information and data to ‘help students make informed decisions’ is one of its four key strategic priorities, but its plans to use student data to increase competition and market pressure have also raised concerns. The Labour Party recently tried to block its establishment in a bid to prevent the further entrenchment of market-oriented higher education policy in the system. There remains a need, however, to focus in close detail on how the OfS will use data in its remit as a market regulator.

The cover of the recently published OfS regulatory framework for higher education in England gives some indication of how the new regulator sees itself as a data-centred site of sectoral expertise. It features a scientist peering into a microscope with apparent satisfaction about what she sees. The scientist, naturally, is the OfS, performing experiments and observing the results; the microscope is the technical and methodological apparatus that allows it to see the sector; and (out of shot) is the university, flattened on to glass for inspection–made legible as data to be zoomed in on, scrolled across, examined and compared with other samples from the sector.

The idea of the OfS as a scientist of the sector–or more specifically, as a data scientist of the sector–is intriguing. It smacks of assumptions of scientific rigour, objectivity, and innovation. This form of metric realism, which assumes data tell the truth, is the central epistemology of trends in datafication. The reality of ‘laboratory life’ inside the OfS, like all labs, is doubtless more fraught with disagreement, negotiation and compromise, as STS studies of science practices might note. Nonetheless, the OfS regulatory framework document is a key inscription device that, for the time being at least, gives us the best clues to its planned data activities over the coming years.

As part of ongoing work into the data infrastructure of higher education, I’ve spent some time with the regulatory document, trying to figure out how student data are likely to be used in future years (on uses of research data, see the Real-time REF Review project). The OfS is just one of many actors involved in a project to upgrade the core infrastructure for student data collection–a decade-long project that has already been going for seven years and is due for national rollout in 2019/2020.

In these notes, I lay out some of the key things the OfS says about ‘data’ in the document. There are 87 uses of the word in it, so through light-touch discourse analysis I’ve attempted to categorize the various ways the OfS approaches data. I’ve deliberately kept a lot of quotations intact, with the addition of a few annotations.
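The sort of light-touch keyword counting described here is easy to approximate in code. The sketch below is purely illustrative–it is not the method used for this post–and simply finds each occurrence of ‘data’ in a text and tallies the word that immediately follows it, a crude first pass at surfacing recurring collocations such as ‘data strategy’ or ‘data audit’.

```python
import re
from collections import Counter

def keyword_in_context(text, keyword="data", window=3):
    """Return each occurrence of `keyword` with up to `window` words of context either side."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    hits = []
    for i, word in enumerate(words):
        if word == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append((left, keyword, right))
    return hits

# Two sentences paraphrased from the regulatory framework, for illustration
sample = ("The OfS will develop a data strategy in 2018. "
          "The information and data the OfS requires will be wide-ranging.")

hits = keyword_in_context(sample)
# Tally the word immediately following each occurrence of the keyword
following = Counter(h[2].split()[0] for h in hits if h[2])
print(len(hits), dict(following))
```

Run over the whole document, a tally like this would recover the 87 occurrences and hint at the categories discussed below; the real analytical work of grouping them, of course, is interpretive rather than computational.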

Data as strategy
The first point is that ‘The OfS will develop a data strategy in 2018’ and ‘The information and data the OfS requires to fulfil its functions will be wide-ranging’ (20). This is both mundane and not. The fact that it is developing a data strategy at all–and a wide-ranging one at that–is indicative of how the OfS will make data into a central aspect of HE regulation. As Andy Youell of HESA (Higher Education Statistics Agency) has written, the framework represents a shift from ‘data informed to data led’ regulation with data analysts playing an increasingly influential role in HE policy.

The chair of the OfS, of course, is Sir Michael Barber, a long-standing advocate of metrics and performance delivery models in different aspects of government. His most recent role was as education adviser to the global educational company Pearson, where he oversaw its organizational pivot toward big data, predictive learning analytics and adaptive learning. Under his leadership, the OfS too approaches data as a core strategy for fulfilling its mandate.

Regulatory data
The OfS’s primary remit is to regulate HE, and it is positioning data as a core component of that work:

The use of information, including data and qualitative intelligence, will underpin how the OfS undertakes its regulatory functions. The OfS will take an information-led and proportionate approach to monitoring individual providers, ensuring that students can access reliable information to inform their decisions. (19)

Key terms here include ‘monitoring’, which confirms concerns that the OfS will possess powers of data-led performance measurement. As well as monitoring individual institutions, the OfS will ‘Monitor the sector as a whole, to understand trends and emerging risks at a sector level and work with the sector to address them’ (20). However:

This regulatory framework does not … set out numerical performance targets, or lists of detailed requirements for providers to meet. Instead it sets out the approach that the OfS will take as it makes judgements about individual providers on the basis of data and contextual evidence. (15)

From ‘monitoring’, then, comes ‘judgement’ based on data and other evidence. The OfS comes across as a suspicious-minded practitioner of evidence-based policy.

Improvement data
Another key use of data by the OfS is to ‘Target, evaluate and improve access and participation, and equality and diversity activities’ (20). As such, monitoring and judgement become the basis for targeted improvement plans, with HE institutions singled out if underperformance is detected from the data in particular areas. As is well known, the OfS will also ‘Operate the TEF’ (20) and take the outcomes of the 2018 statutory TEF review ‘into account as it considers the future scope and shape of the TEF’ (24). It will thus be the main data-led judge of teaching quality and improvement in the sector.

Student choice data
As the Office for Students, driven by the political rhetoric of ‘putting students at the heart of the system’, a key ambition is to put students themselves in touch with sector data. This includes efforts to ‘improve the quality of information available to students’ (25). Two key quotes from the framework stand out:

Prospective students will be equipped with the means, underpinned by innovative and meaningful datasets and high quality information, to enable them to make informed choices about the courses that are right for them. (10)

[OfS will] ensure students can access reliable and appropriate information to inform their decisions about whether to study for a higher education qualification and, if so, identify which provider and course is most likely to meet their needs and aspirations. (20)

Here the OfS mirrors recently-announced plans by Universities Minister Sam Gyimah to support software developers in building student-facing apps for price comparison of university courses. It’s a controversial idea, announced as part of the renewed Teaching Excellence Framework (TEF), which requires the use of Longitudinal Educational Outcomes (LEO) datasets linking courses to earnings. It’s also controversial because it makes student choice conform to the MoneySupermarket model of product comparison based on value-for-money calculations, and further solidifies the idea of students as consumers and courses as products in a marketplace of comparable choices.

Data alignment
Alongside choice comes constraint. The OfS will ‘publish student outcomes and current and future employer needs as a way of informing student choice’ (17). This short sentence carries two main messages: first, that access to outcomes data from institutions will help shape the choices of prospective students; and second, that those choices should also be made with reference to ‘employer needs’.

Indeed, the OfS is actively seeking to align HE outcomes to industry requirements, and will ‘Work with employers and with regional and national industry representatives to ensure that student choices are aligned with current and future needs for higher level skills’ (20). This is a very instrumentalized view of HE as part of the employment pipeline for high-skills jobs. Of course, students can choose to ignore this information. But presenting HE data in this way may itself shape the choice environment for students, with certain choices made more attractive than others.

Nudge data
Talking of choice environments, the OfS is ‘taking the latest thinking on behavioural science into account, to consider how best to present this data in a consistent and helpful way to ensure that students have access to an authoritative source of information about higher education’ (25).

Clearly, the idea of intervening in students’ choices through subtle behavioural means is not an accident; the OfS is actively engaging with the psychology of choice-making to shape and nudge how students decide on their courses. In this sense, the OfS is seeking to instantiate the experimental methods of behavioural public policy in HE, using data to prompt or even persuade students to make ‘desirable’ choices.

But over time it may extend the logic of behavioural science to sectoral nudging at scale. According to one commentary on the regulatory framework, ‘the OfS should be encouraged to further consider behavioural theory and its various insights, such as those contained in “nudge” theory, and thus design interventions that incentivise compliance from the outset.’

Government data
Though it is notionally an arm’s-length agency–geographically, it’s located in the south west, along with all the other HE agencies–the OfS appears to enjoy a remarkably close and mutually reinforcing relationship with government. Not only did it emerge from BIS (now BEIS), but it will also use its expertise in HE data to:

Support the Department for Education, given its overall responsibility for the policy and funding framework in which the sector operates, and other public bodies such as UKRI in the delivery of their prescribed functions.

In contrast to the ‘broker’ role between government and the sector performed by HEFCE, the OfS appears to have a much more hand-in-glove relationship with government–despite being at arm’s length, a government minister has the power to give it directions and demand advice or reports–while simultaneously strong-arming the sector into compliance.

Designated data body
In my longer project, I have focused on the work of HESA as a central agency for delivery of the new student data infrastructure for HE. HESA is part of the family of ‘official statistics’ agencies in the UK, and in 2017 applied for the position of ‘designated data body’ (DDB) to work with the OfS, a position conferred on it early in 2018 by central government ‘on the recommendation of the OfS’ (19). As such, the OfS will ‘Work with, and have oversight of, the designated data body (DDB) to coordinate, collect and disseminate information’ (17).

The DDB will collect, make available, and publish appropriate information on behalf of the OfS, and the OfS will be responsible for holding the DDB to account for the performance of those functions. (19)

As this makes clear, HESA is now subordinate to the OfS, acting on its behalf and held to account for its own performance in the statistical delivery of the data required by the OfS. As such, the work of HESA has shifted from statistical reporting to a much more politicized position, ‘play[ing] a key role in supporting and enhancing the competitive strength of the sector.’

Indicator data
Indicators are the principal power source in the OfS machinery. Through indicators, the OfS will ultimately receive regular signals of institutional performance which can then be used to assess risk or to identify need for intervention:

All providers will be monitored using lead indicators, reportable events and other intelligence…. These will be used to identify early, and close to real-time warnings that a provider risks not meeting each of its ongoing conditions of registration. (18)

The OfS will identify a small number of lead indicators that will provide signals of change in a provider’s circumstances or performance. Such change may signal that the OfS needs to consider whether the provider is at increased risk of a breach of one or more of its ongoing conditions of registration. These indicators will be based on regular flows of reliable data and information from providers and additional data sources. (49)

The mention of ‘close to real-time warnings’ is especially important, as it signals a significant acceleration in the temporality of HE data reporting, analysis and action. Under the OfS, universities are to be monitored for performance fluctuations and changes that, like economic spikes and dips, may be presented as informational flows on data dashboards to affect prompt and timely decision-making.

Longitudinal data
In addition to ‘close to real-time’ data, the OfS is seeking to expand and improve the use of longitudinal datasets and analyses:

The OfS will draw on the longitudinal education outcomes (LEO) dataset as an important source of information about graduate outcomes. Its further development will be a priority for the OfS, taking into account both its limitations and its significant potential. (25)

LEO consists of experimental statistics on the employment and earnings of higher education graduates, using matched data from different government departments; it has controversially been used to suggest that students can choose courses based on future earnings potential. It is also a significant methodological accomplishment, linking datasets about education, personal characteristics, employment, income and benefits gathered from the departments of education, work and pensions, HESA and HMRC.
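The record linkage at the heart of LEO can be illustrated with a toy example. The sketch below uses pandas with entirely invented identifiers and figures; the real matching process across education, employment and tax records is of course far more complex, but the principle is the same: join education records to earnings records on a shared identifier, then aggregate earnings by course.

```python
import pandas as pd

# Hypothetical, simplified records: the real LEO matching spans
# DfE, DWP, HMRC and HESA data and uses more elaborate identifiers
hesa = pd.DataFrame({
    "student_id": [1, 2, 3],
    "provider": ["Uni A", "Uni A", "Uni B"],
    "course": ["Physics", "History", "Physics"],
})
hmrc = pd.DataFrame({
    "student_id": [1, 2, 3],
    "earnings_5yr": [32000, 24000, 32000],  # earnings five years after graduation
})

# Linking the datasets on a shared identifier produces the kind of
# course-to-earnings table that drives LEO-style comparisons
linked = hesa.merge(hmrc, on="student_id", how="inner")
by_course = linked.groupby("course")["earnings_5yr"].median()
```

It is exactly this kind of joined table–course on one axis, earnings on the other–that makes value-for-money comparison apps conceivable, and that critics worry reduces the purposes of a degree to a single financial outcome.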

Comparative data
The data will also be used comparatively to assess different institutions against each other:

It is anticipated that this data will be largely quantitative and generated as a result of a provider’s existing management functions … allowing for greater consistency, comparability and objectivity when looking across a range of providers. (50-51)

Data-led comparison and benchmarking is of course at the heart of rows over HE marketization, as universities are incited to compete for prospective students and income. It drives institutions to showcase themselves as competitive, high-performing organizations, and is visible in all kinds of HE rankings such as UniStats and Complete University Guide tables.

Anticipatory data
Furthermore, the data used by the OfS will not be merely historical, real-time and comparative–it will be anticipatory too. The OfS will undertake ‘horizon scanning to understand and evaluate the health of the sector’ (17) and will use indicator data ‘to anticipate future events’ (161). In this sense, the OfS is simply mirroring the increasing use of predictive analytics in HE, with institutions in the UK already using data to forecast student progress or identify students at risk of drop-out or non-completion. The use of predictive data practices by the OfS, however, will be applied to institutions and the sector as a whole–to predict, for example, providers at risk of underperformance or financial difficulty.

Data burden
All this data collection and analysis activity sounds like it will be a heavy burden on institutions, and the OfS admits:

The implementation of the OfS’s data strategy may initially increase regulatory burden, but the long term aim is to use data to reduce regulatory burden. Such data requirements are not therefore intended as a regulatory burden on providers but to provide the information that allows the OfS to be an effective and proportionate regulator.

Perhaps, however, the heaviest burden will be the threat of punitive action based on constant OfS investigation of institutional data.

Data auditing & investigation
Regimes of audit and inspection are of course familiar across many sectors, and the OfS will make ‘data audits’ a part of the HE landscape:

The OfS will assess, as part of its routine monitoring activities, the quality, reliability and timeliness of information supplied by a provider including through scheduled or ad hoc data audit activity. If the OfS has reason to believe that information received is not reliable, it may choose to investigate the matter. (131)

It may even, in certain cases, ‘require information to be re-audited by a specified auditor, where the OfS has reasonable concern that the audit opinion does not provide the necessary assurance’ (56). It therefore appears that the OfS will demand new forms of meta-auditing of existing audit data.

Targeted action
Finally, the OfS proposes to use data as the basis for taking targeted action on institutions and the sector:

The OfS may also take targeted action if it needs to establish the facts before reaching a judgement about whether there is, or is likely to be, a breach of one or more ongoing conditions of registration.

May require the provider to take particular co-operative action by a specified deadline – these actions may include access to information (including data), records or people, to enable the OfS to investigate any concerns effectively and efficiently. (56)

All in all, the OfS will instantiate a new regime of data in HE, emphasizing an empiricist faith in the ‘truth-telling’ capacities of digitally generated information. It is positioning itself as a source of data scientific expertise in the sector, treating universities as samples to be observed, students as specimens to be nudged to make choices based on data, and the sector as a whole as a laboratory for its experiment in data-led regulation.

Image: Office for Students