Automating mistrust

Ben Williamson

Turnitin can now analyse students’ individual writing styles to tackle ‘contract cheating’. Image by Xavi

The acquisition of plagiarism detection company Turnitin for US$1.75 billion, due to be completed later this year, demonstrates how higher education has become a profitable market for education technology companies. As concern grows about student plagiarism and ‘contract cheating’, Turnitin is making ‘academic fraud’ into a market opportunity to extend its automated detection software further. It is monetizing students’ writing while manufacturing mistrust between universities and students, and is generating some perverse side effects.

Cheating software
Turnitin’s acquisition is one of the biggest deals ever signed in the edtech field. Its new owner, Advance Publications, is a global media conglomerate with a portfolio including the Condé Nast company. With traditional media forms losing audiences, the deal indicates how technology and media businesses have begun to view education as a potentially valuable investment market.

The profitability of Turnitin, and its attraction to Advance, derives from the assignments that students provide for free to its platform. Its plagiarism detection algorithm is constantly fine-tuned as millions of essays are added, analysed and cross-checked against each other and other sources. Billed as the ‘world’s largest comparison database’ of student writing, it comprises 600+ million student papers, 155,000+ published works and 60+ billion web pages. Similar to social media companies profiting from user-generated content, value for Turnitin comes from analysing students’ uploaded essays against that database, and securing purchases from universities based on the analysis.
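
Turnitin does not disclose the details of its matching algorithm, but the generic technique behind similarity scoring (fingerprinting overlapping word sequences and measuring their overlap with a corpus) can be sketched in a few lines. Everything below, from the shingle size to the toy corpus, is an illustrative assumption rather than Turnitin’s actual method:

```python
# A minimal sketch of n-gram fingerprint matching, assuming a generic
# shingling approach; Turnitin's real Similarity Score algorithm is
# proprietary and certainly far more sophisticated than this.

def shingles(text: str, n: int = 5) -> set[str]:
    """Break a text into overlapping n-word sequences ('shingles')."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(submission: str, source: str, n: int = 5) -> float:
    """Share of the submission's shingles that also appear in one source."""
    sub = shingles(submission, n)
    return len(sub & shingles(source, n)) / len(sub) if sub else 0.0

# Usage: score an essay against a (toy) comparison database and report
# the highest match, roughly what a similarity report summarizes per source.
database = ["the quick brown fox jumps over the lazy dog near the river"]
essay = "my essay notes that the quick brown fox jumps over the lazy dog"
print(max(similarity_score(essay, paper) for paper in database))  # ~0.56
```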

Students can even pay to upload their essays prior to submission to Turnitin’s WriteCheck service, in order to check for similar sentences and phrases, missing or inaccurate citations, and spelling or grammatical inaccuracies. WriteCheck uses the same techniques as common standardized English language tests, and offers an online Professional Tutor Service through a partnership with Pearson.

The company has had years to grow and finesse its services and its ‘Similarity Score’ algorithm. In the UK, the original version of Turnitin, then known as iParadigms, was first paid for on behalf of the HE sector by Jisc (the digital learning agency) from 2002 to 2005, giving it an inbuilt cost advantage over competitors. The arrangement also gave it an inbuilt data advantage, allowing it to train its plagiarism detection algorithm on a very large population of students’ assignments. Nonetheless, studies have repeatedly shown its plagiarism detection software is inaccurate. It mistakenly brands some students as cheats while completely missing other clear instances of plagiarism, with an error rate that suggests its automated plagiarism reports should be trusted less than its commercial valuation and market penetration indicate.

With the announcement of its acquisition by Advance, critics say the $1.75bn deal also amounts to the exploitation of students’ intellectual property. ‘This is a pretty common end game for tech companies, especially ones that traffic in human data’, commented Jesse Stommel of the University of Mary Washington. Turnitin’s business model, he added, is to ‘create a large base of users, collect their data, monetize that data in ways that help assess its value, [and] leverage that valuation in an acquisition deal’.

The tension between students’ intellectual property and Turnitin’s profit-making is not new. In many universities, it is compulsory for all student assignments to be submitted to Turnitin, with their intellectual effort then contributing to its growing commercial valuation without their informed consent. Ten years ago, four US college students tried to sue Turnitin for taking their assignments against their will and then profiting from them.

Manufacturing mistrust
Beyond its monetization strategy, Turnitin is also reshaping relationships between universities and students. Students are treated by default as potential essay cheats by its plagiarism detection algorithm. This is not a new concern. Ten years ago Sean Zwagerman argued that plagiarism detection software is a ‘surveillance technology’ that ‘treats writing as a product, grounds the student-teacher relationship in mistrust, and requires students to actively comply with a system that marks them as untrustworthy’. Turnitin’s continued profitability depends on manufacturing and maintaining mistrust between students and academic staff, while also foregrounding its automated algorithm over teachers’ professional expertise.

In the book Why They Can’t Write, John Warner argues that students’ writing abilities have been eroded by decades of standardized curriculum and assessment reforms. Turnitin is yet another technology that treats writing as a rule-based game. ‘It signals to students that the writing is a game meant to please an algorithm rather than an attempt to convey an idea to an interested audience’, Warner has noted. ‘It incentivizes assignments which can be checked by the algorithm, which harms motivation’.

Turnitin also changes how students practice academic writing. One of the leading critical researchers of Turnitin, Lucas Introna, argues it results in the ‘algorithmic governance’ of students’ academic writing practices. Moreover, he suggests that ‘what the algorithms often detect is the difference between skilful copiers and unskilful copiers’, and as a result that it privileges students ‘who conceive of “good” writing practice as the composition of undetectable texts’.

The new deal will open opportunities for Turnitin to develop and promote new features that will further intervene in students’ writing. One is its new service to scan essays to detect an individual’s unique writing style, launched to the HE market in March just a week after announcing its acquisition. This could then be used to identify ‘ghostwriting’—when students hire someone else to write their essays or purchase made-to-order assignments.

Turnitin has published expert guidance for universities to identify and combat contract cheating

The new Authorship Investigate service extends Turnitin from the analysis of plagiarism to the analysis of students’ writing ability, using students’ past assignments, document metadata, forensic linguistic analysis, machine learning algorithms and Natural Language Processing to identify whether a student has submitted work written by someone else. It reinforces the idea that the originality, value and quality of student writing should first be assessed according to the criteria of the detection algorithm, and treats all student writing as potential academic piracy. It is also likely to require students to submit extensive writing samples to train the algorithm to make reliable assessments of their writing style, thereby further enhancing Turnitin’s monopoly hold over data about student writing.
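
Turnitin has not published how Authorship Investigate works, but forensic stylometry conventionally reduces each document to a numerical ‘fingerprint’ of style markers (sentence length, word length, function-word rates) and measures how far a new submission sits from an author’s earlier work. The sketch below illustrates that generic idea only; the features, names and threshold logic are assumptions, not Turnitin’s pipeline:

```python
# A hedged sketch of stylometric authorship checking, assuming simple
# surface features; Turnitin's actual forensic analysis is not public.
import re
from statistics import mean

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "it"]

def style_fingerprint(text: str) -> list[float]:
    """Represent a text as a vector of style statistics."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    rates = [words.count(w) / len(words) for w in FUNCTION_WORDS]
    return [len(words) / len(sentences), mean(len(w) for w in words)] + rates

def style_distance(text_a: str, text_b: str) -> float:
    """Euclidean distance between fingerprints; lower means more alike."""
    a, b = style_fingerprint(text_a), style_fingerprint(text_b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Usage: compare a new submission against a student's past assignments and
# flag it for human review if the distance exceeds a calibrated threshold.
```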

Turnitin has bred suspicion and mistrust between students and academics, while affecting how students value and practice academic writing. Yet this mistrust is itself a market opportunity, as the company seeks to offer more services as solutions to the perceived problem of increased student plagiarism and contract cheating. As suspicions about student cheating have continued to grow since it was launched nearly 20 years ago, Turnitin has been able to capitalize on them to dramatically profitable effect. Its ghostwriter detection service, of course, is a solution to one of the very problems Turnitin created–because plagiarism has become so detectable, a huge essay mills industry has emerged to produce original on-demand content for students to order. As a result, Turnitin is automating mistrust as it erodes relationships between students and universities, devalues teacher judgment, and reduces student motivation.

Plagiarism police
However damaging and inaccurate its software may be, the Advance acquisition will enable Turnitin to further expand its market share and product portfolio. For Turnitin, the timing is ideal, as universities and HE policymakers are collectively beginning to address the rise of online ‘essay mills’ and their erosion of ‘academic integrity’. Government education departments in the UK and Australia have begun to tackle contract cheating more seriously, including by advocating increased use of plagiarism detection software.

In a speech to the Universities UK International higher education forum in March, universities minister Chris Skidmore identified essay mills as one of the issues that needed to be tackled to protect and improve the quality of higher education in England and ensure that it retained its reputation for excellence.

UK academic leaders, HE agencies and ministers have already asked PayPal to stop processing payments to essay mills, and Google and YouTube to block online ads, in an effort to close down the $1 billion annual market in made-to-order assignments. These moves to prevent contract cheating also affect university students and graduates in Kenya, a ‘hotspot’ for essay mill companies and writers, who rely on contract academic writing as a major source of income. So while Turnitin is set to profit from the detection of contract cheating in Global North contexts, it is disrupting a significant source of employment in specific Global South contexts. In Kenya, for example, where unemployment is high, ‘participants think of their jobs as providing a service of value, not as helping people to cheat. They see themselves as working as academic writers.’

Turnitin’s website now prominently markets its ghostwriter detection service along with a series of free-to-download ebooks to help universities identify contract cheating and develop strategies and tactics to combat it. It’s positioning itself not just as a technical solutions vendor, but as an expert source of insight and authority on ‘upholding academic integrity’. At the same time, Authorship Investigate will allow Turnitin to become the market leader in the fight against essay mills.

The launch of Authorship Investigate has coincided with a Times Higher Education report on the ‘surprising level of support’ among academics for contract cheating services to be made illegal and for ‘the criminalising of student use of these services’. This would appear to raise the prospect of algorithmic identification of students for criminal prosecution. Though there’s nothing to indicate quite such a hard punitive line being taken, the UK Department for Education has urged universities to address the problem, commenting to the THE, ‘universities should also be taking steps to tackle this issue, by investing in detection software and educating students on the severe consequences they face if caught cheating’.

Turnitin is the clear market leader to solve the essay mills problem that the department has now called on universities to tackle. Its technical solution, however, does not address the wider reasons—social, institutional, psychological, financial or pedagogic—for student cheating, or encourage universities to work proactively with students to resolve them. Instead, it acts as a kind of automated ‘plagiarism police force’ to enforce academic integrity. At the same time, it is set to further disadvantage young people in countries such as Kenya, where preparing academic texts for UK and US students is seen by students and graduates as a legitimate and lucrative service.

Robotizing higher education
Like many other technology organizations in education, Turnitin is increasing automation in the sector. Despite huge financial pressures, universities are investing in Turnitin to automate plagiarism and ghostwriting detection as a way of combating academic fraud. The problem of essay mills that politicians are now fixated upon is the ideal market opportunity for Turnitin to grow its business and its authority over student writing even further. In so doing, it also risks standardizing students’ writing practices to conform to the rules of the algorithm–ultimately contributing to the algorithmic governance, and even ‘robotization’, of academic writing.

The real problem is that universities are being motivated to invest in these robotized, data-crunching edtech products for multiple complex reasons. As universities have to seek larger student enrolments for their financial security, algorithmic services become efficient ways of handling huge numbers of student assignments. They satisfy government demands for action to be taken to raise standards, boost student performance, and preserve academic integrity. But automated software is a weak, robotic, and error-prone substitute for the long-term development of trusting pedagogic relationships between teachers and students.

A version of this post was previously published on Research Professional with the title ‘Manufacturing mistrust’ on 12 June 2019.

Learning from surveillance capitalism

Ben Williamson

Surveillance capitalism combines data analytics, business strategy, and human behavioural experimentation. Image: “Fraction collector” by proteinbiochemist

‘Surveillance capitalism’ has become a defining concept for the current era of smart machines and Silicon Valley expansionism. With educational institutions and practices increasingly focused on data collection and outsourcing to technology providers, key points from Shoshana Zuboff’s The Age of Surveillance Capitalism can help explore the consequences for the field of education. Mindful of the need for much more careful studies of the intersections of education with commercially-driven data-analytic strategies of ‘rendition’ and ‘behavioural modification’, here I simply outline a few implications of surveillance capitalism for how we think about education policy and about learning.

Data, science and surveillance
Zuboff’s core argument is that tech businesses such as Google, Microsoft, Facebook and so on have attained unprecedented power to monitor, predict, and control human behaviour through the mass-scale extraction and use of personal data. These aren’t especially novel insights—Evgeny Morozov has written a 16,000-word essay on the book’s analytical and stylistic shortcomings—but Zuboff’s strengths are in the careful conceptualization and documentation of some of the key dynamics that have made surveillance capitalism possible and practical. As James Bridle argued in his review of the book, ‘Zuboff has written what may prove to be the first definitive account of the economic – and thus social and political – condition of our age’.

Terms such as ‘behavioural surplus’, ‘prediction products’, ‘behavioural futures markets’, and ‘instrumentarian power’ provide a useful critical language for decoding what surveillance capitalism is, what it does, and at what cost. Some of the most interesting documentary material Zuboff presents includes precedents such as the radical behaviourism of BF Skinner and the ‘social physics’ of MIT Media Lab pioneer Sandy Pentland. For Pentland, quoted by Zuboff, ‘a mathematical, predictive science of society … has the potential to dramatically change the way government officials, industry managers, and citizens think and act’ (Zuboff, 2019, 433) through ‘tuning the network’ (435). Surveillance capitalism is not and was never simply a commercial and technical task, but deeply rooted in human psychological research and social experimentation and engineering. This combination of tech, science and business has enabled digital companies to create ‘new machine processes for the rendition of all aspects of human experience into behavioural data … and guarantee behavioural outcomes’ (339).

Zuboff has nothing to say about education specifically, but it’s tempting straight away to see a whole range of educational platforms and apps as condensed forms of surveillance capitalism (though we might just as easily invoke ‘platform capitalism’). The classroom behaviour monitoring app ClassDojo, for example, is a paradigmatic example of a successful Silicon Valley edtech business, with vast collections of student behavioural data that it is monetizing by selling premium features for use at home and offering behaviour reports to subscribing parents. With its emphasis on positive behavioural reinforcement through reward points, it represents a marriage of Silicon Valley design with Skinner’s aspiration to create ‘technologies of behaviour’. ClassDojo amply illustrates the combination of behavioural data extraction, behaviourist psychology and monetization strategies that underpin surveillance capitalism as Zuboff presents it.

Perhaps more pressingly from the perspective of education, however, Zuboff makes a number of interesting observations about ‘learning’ that are worth unpacking and exploring.

Learning divided
The first point is about the ‘division of learning in society’ (the subject of chapter 6, and drawing on her earlier work on the digital transformation of work practices). By this term Zuboff means to demarcate a shift in the ‘ordering principles’ of the workplace from the ‘division of labour’ to a ‘division of learning’ as workers are forced to adapt to an ‘information-rich environment’. Only those workers able to develop their intellectual skills are able to thrive in the new digitally-mediated workplace. Some workers are enabled (and are able) to learn to adapt to changing roles, tasks and responsibilities, while others are not. The division of learning, Zuboff argues, raises questions about (1) the distribution of knowledge and whether one is included or excluded from the opportunity to learn; (2) about which people, institutions or processes have the authority to determine who is included in learning, what they are able to learn, and how they are able to act on their knowledge; and (3) about what is the source of power that undergirds the authority to share or withhold knowledge (181).

But this division of learning, according to Zuboff, has now spilled out of the workplace to society at large. The elite experts of surveillance capitalism have given themselves authority to know and learn about society through data. Because surveillance capitalism has access to both the ‘material infrastructure and expert brainpower’ (187) to transform human experience into data and wealth, it has created huge asymmetries in knowledge, learning and power. A narrow band of ‘privately employed computational specialists, their privately owned machines, and the economic interests for whose sake they learn’ (190) has ultimately been authorized as the key source of knowledge over human affairs, and empowered to learn from the data in order to intervene in society in new ways.

Sociology of education researchers have, of course, asked these kinds of questions for decades. They are ultimately questions about the reproduction of knowledge and power. But in the context of surveillance capitalism such questions may need readdressing, as authority over what constitutes valuable and worthwhile knowledge for learning passes to elite computational specialists, the commercial companies they work for, and even to smart machines. As data-driven knowledge about individuals grows in predictive power, decisions about what kinds of knowledge an individual learner should receive may even be largely decided by ‘personalized learning platforms’–as current developments in learning analytics and adaptive learning already illustrate. The prospect of smart machines as educational engines of social reproduction should be the subject of serious future interrogation.

Learning collectives
The second key point is about the ‘policies’ of smart machines as a model for human learning (detailed in chapter 14). Here Zuboff draws on a speech by a senior Microsoft executive talking about the power of combined cloud and Internet of Things technologies for advanced manufacturing and construction. In this context, Zuboff explains, ‘human and machine behaviours are tuned to pre-established parameters determined by superiors and referred to as “policies”’ (409). These ‘policies’ are algorithmic rules that

substitute for social functions such as supervision, negotiation, communication and problem solving. Each person and piece of equipment takes a place among an equivalence of objects, each one “recognizable” to the “system” through the AI devices distributed across the site. (409)

In this example, the ‘policy’ is then a set of algorithmic rules and a template for collective action between people and machines to operate in unison to achieve maximum efficiency and optimal outcomes. Those ‘superiors’ with the authority to determine the policies, of course, are those same computational experts and machines that have benefitted from the division of learning. This gives them unprecedented powers to ‘apply policies’ to people, objects, processes and activities alike, resulting in a ‘grand confluence in which machines and humans are united as objects in the cloud, all instrumented and orchestrated in accordance with the “policies” … that appear on the scene as guaranteed outcomes to be automatically imposed, monitored and maintained by the “system”’ (410). These new human-machine learning collectives represent the future for many forms of work and labour under surveillance capitalism, according to Zuboff.

Zuboff then goes beyond human-machine confluences in the workplace to consider the instrumentation and orchestration of other types of human behaviour. Drawing parallels with the behaviourism of Skinner, she argues that digitally-enforced forms of ‘behavioral modification’ can operate ‘just beyond the threshold of human awareness to induce, reward, goad, punish, and reinforce behaviour consistent with “correct policies”’, where ‘corporate objectives define the “policies” toward which confluent behaviour harmoniously streams’ (413). Under conditions of surveillance capitalism, Skinner’s behaviourism and Pentland’s social physics spill out of the lab into homes, workplaces, and all the public and private space of everyday life–ultimately turning the world into a gigantic data science lab for social and behavioural experimentation, tuning and engineering.

And the final point she makes here is that humans need to become more machine-like to maximize such confluences. This is because machines connected to the IoT and the cloud work through collective action by each learning what they all learn, sharing the same understanding and ‘operating in unison with maximum efficiency to achieve the same outcomes’ (413). This model of collective learning, according to surveillance capitalists, can learn faster than people, and ‘empower us to better learn from the experiences of others’:

The machine world and the social world operate in harmony within and across ‘species’ as humans emulate the superior learning processes of the smart machines. … [H]uman interaction mirrors the relations of the smart machines as individuals learn to think and act by emulating one another…. In this way, the machine hive becomes the role model for a new human hive in which we march in peaceful unison toward the same direction based on the same ‘correct’ understanding in order to construct a world free of mistakes, accidents, and random messes. (414)

For surveillance capitalists human learning is inferior to machine learning, and urgently needs to be improved by gathering together humans and machines into symbiotic systems of behavioural control and management.

Learning in, from, or for surveillance capitalism?
These key points from The Age of Surveillance Capitalism offer some provocative starting places for further investigations into the future shape of education and learning amid the smart machines and their smart computational operatives. Three key points stand out.

1) Cultures of computational learning. One line of inquiry might be into the cultures of learning of those computational experts who have gained from the division of learning. And I mean this in two ways. How are they educated? How are they selected into the right programs? What kinds of ongoing training provide the privilege of learning about society through mass-scale behavioural data? These are questions about new and elite forms of workforce preparation and professional education. How, in short, are these experts educated, qualified and socialized to do data analytics and behaviour modification—if that is indeed what they do? In other words, how is one educated to become a surveillance capitalist?

The other way of approaching this concerns what is actually involved in ‘learning’ about society through its data. This is both a pedagogic and a curricular question. Pedagogically, education research would benefit from a much better understanding of the kinds of workplace education programmes underway inside the institutions of surveillance capitalism. From a curricular perspective, this would also require an engagement with the kinds of knowledge assumptions and practices that flow through such spaces. As mentioned earlier, sociology of education has long been concerned with how aspects of culture are ‘selected’ for reproduction by transmission through education. As tech companies and related academic labs become increasingly influential, they are producing new ‘social facts’ that might affect how people both within and outside those organizations come to understand the world. They are building new knowledge based on a computational, mathematical, and predictive style of thinking. What, then, are the dynamics of knowledge production that generate these new facts, and how do they circulate to affect what is taught and learnt within these organizations? As Zuboff notes, pioneers such as Sandy Pentland have built successful academic teaching programs at institutes like MIT Media Lab to reproduce knowledge practices such as ‘social physics’.

2) Human-machine learning confluences. The second key issue is what it means to be a learner working in unison with the Internet of Things. Which individuals are included in the kind of learning that is involved in becoming part of this ‘collective intelligence’? When smart machines and human workers are orchestrated together into ‘confluence’, and human learning is supposed to emulate machine learning, how do our existing theories and models of human learning hold up? Machine learning and human learning are not obviously comparable, and the tech firms surveyed by Zuboff appear to hold quite robotic notions of what constitutes learning. Yet if the logic of extreme instrumentation of working environments develops as Zuboff anticipates, this still raises significant questions about how one learns to adapt to work in unison with the smart machines, who gets included in this learning, who gets excluded, how those choices and decisions are made, and what kinds of knowledge and skills are gained from inclusion. Automation is likely to lead to both further divisions in learning and more collective learning at the same time–with some individuals able to exercise considerable autonomy over the networks they’re part of, and others performing the tasks that cannot yet be automated.

In the context of concerns about the role of education in relation to automation, intergovernmental organizations such as the OECD and World Economic Forum have begun encouraging governments to focus on ‘noncognitive skills’ and ‘social-emotional learning’ in order to pair human emotional intelligence with the artificial cognitive intelligence of smart machines. Those unique human qualities, so the argument goes, cannot be quantified whereas routine cognitive tasks can. Classroom behaviour monitoring platforms such as ClassCraft have emerged to measure those noncognitive skills and offer ‘gamified’ positive reinforcement for the kind of ‘prosocial behaviours’ that may enable students to thrive in a future of increased automation. Being emotionally intelligent, by these accounts, would seem to allow students to enter into ‘confluent’ relations with smart machines. Rather than competing with automation, they would complement it as collective intelligence. ‘Human capital’ is no longer a sufficient economic goal to pursue through education—it needs to produce ‘human-computer capital’ too.

3) Programmable policies. A third line of inquiry would be into the idea of ‘policies’. Education policy studies have long engaged critically with the ways government policies circumscribe ‘correct’ forms of educational activity, progress, and behaviour. With the advance of AI-based technologies into schools and universities, policy researchers may need to start interrogating the policies encoded in the software as well as the policies inscribed in government texts. These new programmable policies potentially have a much more direct influence on ‘correct’ behaviours and maximum outcomes by instrumenting and orchestrating activities, tasks and behaviours in educational institutions.

Moreover, researchers might shift their attention to the kind of programmable policies that are enacted in the instrumented workplaces where, increasingly, much learning happens. Tech companies have long bemoaned the inadequacy of school curricula and university degrees for delivering the labour market skills they require. With the so-called ‘unbundling’ of the university in particular, higher education may be moving further towards ‘demand driven’ forms of professional learning and on-the-job industry training provided by private companies. When education moves into the smart workplace, learning becomes part of the confluence of humans and machines, where all are equally orchestrated by the policies encoded in the relevant systems. Platforms and apps using predictive analytics and talent matching algorithms are already emerging to link graduates to employers and job descriptions. The next step, if we accept the likely direction of travel of surveillance capitalism, might be to match students directly to smart machines on-demand as part of the collective human-machine intelligence required to achieve maximum efficiency and optimized outcomes for capital accumulation. In this scenario, the computer program would be the dominant policy framework for graduate employability, actively intervening in professional learning by sorting individuals into appropriate networks of collective learning and then tuning those networks to achieve best effects.

All of this raises one final question, and a caveat. First the caveat. It’s not clear that ‘surveillance capitalism’ will endure as an adequate explanation for the current trajectories of high-tech societies. Zuboff’s account is not uncontested, and it’s in danger of becoming an explanatory shortcut for deployment anywhere that data analytics and business interests intersect (as ‘neoliberalism’ is sometimes evoked as a shortcut for privatization and deregulation). The current direction of travel and future potential described by Zuboff are certainly not desirable, and should not be accepted as inevitable. If we do accept Zuboff’s account of surveillance capitalism, though, the remaining question is whether we should be addressing the challenges of learning in surveillance capitalism, or the potential for whole education systems to learn from surveillance capitalism and adapt to fit its template. Learning in surveillance capitalism at least assumes a formal separation of education from these technological, political and economic conditions. Learning from it, however, suggests a future where education has been reformatted to fit the model of surveillance capitalism–indeed, where a key purpose of education is for surveillance capitalism.

Zuboff, S. 2019. The Age of Surveillance Capitalism: The fight for a human future at the new frontier of power. London: Profile.

Education for the robot economy

Ben Williamson

Robotization is driving coding and emotional skills development in education. Image by Saundra Castaneda

Automation, coding, data and emotions are the new keywords of contemporary education in an emerging ‘robot economy’. Critical research on education technology and education policy over the last two decades has unpacked the connections of education to the so-called ‘knowledge economy’, particularly as it was projected globally into education policy agendas by international organizations including the OECD, World Economic Forum and World Bank. These organizations, and others, are now shifting the focus to artificial intelligence and the challenges of automation, and pushing for changes in education systems to maximize the new economic opportunities of robotization.

Humans & robots as sources of capital
In the language of the knowledge economy, the keywords were globalization, innovation, networks, creativity, flexibility, multitasking and multiskilling—what social theorists variously called ‘NewLiberalSpeak’ and the ‘new spirit of capitalism’. With knowledge a new source of capital, education in the knowledge economy was therefore oriented towards the socialization of students into the practices of ICT, communication, and teamwork that were seen as the necessary requirements of the new ‘knowledge worker’.

In the knowledge economy, learners were encouraged to see themselves as lifelong learners, constantly upskilling and upgrading themselves, and developing metacognitive capacities and the ability to learn how to learn in order to adapt to changing economic circumstances and occupations. Education policy became increasingly concerned with cultivating the human resources or ‘human capital’ necessary for national competitive advantage in the globalizing economy. Organizations such as the OECD provided the international large-scale assessment PISA to enable national systems to measure and compare their progress in the global knowledge economy, treating young people’s test scores as indicators of human capital development.

The steady shift of the knowledge economy into a robot economy, characterized by machine learning, artificial intelligence, automation and data analytics, is now bringing about changes in the ways that many influential organizations conceptualize education moving towards the 2020s. Although this is not an epochal or decisive shift in economic conditions, but rather a slow metamorphosis involving machine intelligence in the production of capital, it is bringing about fresh concerns with rethinking the purposes and aims of education as global competition is increasingly linked to robot capital rather than human capital alone.

Automation
According to many influential organizations, it is now inevitable that automated technologies, artificial intelligence, robotization and so on will pose a major threat to many occupations in coming years. Although the evidence of automation causing widespread technological unemployment is contested, many readings of this evidence adopt a particularly determinist perspective. The robots are coming, the threat of technology is real and unstoppable, and young people are going to be hit hardest because education is largely still socializing them for occupations that the robots will replace.

The OECD has produced findings reporting on the skill areas that automation could replace. A PricewaterhouseCoopers report concluded that ‘less well educated workers could be particularly exposed to automation, emphasising the importance of increased investment in lifelong learning and retraining’. Pearson and Nesta, too, collaborated on a project to map the ‘future skills’ that education needs to promote to prepare nations for further automation, globalization, population ageing and increased urbanization over the next 10 years. The think tank Brookings has explicitly stated, ‘To develop a workforce prepared for the changes that are coming, educational institutions must de-emphasize rote skills and stress education that helps humans to work better with machines—and do what machines can’t’.

For most of these organizations, the solution is not to challenge the encroachment of automation on jobs, livelihoods and professional communities. Instead, the robot economy can be even further optimized by enhancing human capabilities through reformed institutions and practices of education. As such, education is now being positioned to maximize the massive economic opportunities of robotization.

Two main conclusions flow from the assumption that young people’s future jobs and labour market prospects are under threat, and that the future prospects of the economy are therefore uncertain, unless education adapts to the new reality of automation. The first is that education needs to de-emphasize rote skills of the kind that are easy for computers to replace and stress instead more digital upskilling, coding and computer science. The second is that humans must be educated to do things that computerization cannot replace, particularly by upgrading their ‘social-emotional skills’.

Coding
Learning to code, programming and computer science have become the key focus for education policy and curriculum reform around the world. Major computing corporations such as Google and Oracle have invested in coding programs alongside venture capitalists and technology philanthropists, while governments have increasingly emphasized new computing curricula and encouraged the involvement of both ed-tech coding products and not-for-profit coding organizations in schools.

The logic of encouraging coding and computer science education in the robot economy is to maximize the productivity potential of the shift to automation and artificial intelligence. In the UK, for example, artificial intelligence development is at the centre of the government’s industrial strategy, which made computer programming in schools an area for major investment. Doing computer science in schools, it is argued, equips young people not just with technical coding skills, but also new forms of computational thinking and problem-solving that will allow them to program and instruct the machines to work on their behalf.

This emphasis on coding is also linked to wider ideas about digital citizenship and entrepreneurship, with the focus on preparing children to cope with uncertainty in an AI age. A recent OECD podcast on AI and education, for example, put coding, entrepreneurship and digital literacy together with concerns over well-being and ‘learning to learn’. Coding our way out of technological unemployment, by upskilling young people to program, work with, and problem-solve with machines, then, is only one of the proposed solutions for education in the robot economy.

Emotions
The other solution is ‘social-emotional skills’. Social-emotional learning and skills development is a fast-growing policy agenda with significant buy-in by international organizations. The World Economic Forum has projected a future vision for education that includes the development and assessment of social-emotional learning through advanced technologies. Similarly, the World Bank has launched a program of international teacher assessment that measures the quality of instruction in socioemotional skills.

The OECD has perhaps invested the most in social-emotional learning and skills, as part of its long-term ‘Skills for Social Progress’ project and its Education 2030 framework. The OECD’s Andreas Schleicher is especially explicit about the perceived strategic importance of cultivating social-emotional skills to work with artificial intelligence, writing that ‘the kinds of things that are easy to teach have become easy to digitise and automate. The future is about pairing the artificial intelligence of computers with the cognitive, social and emotional skills, and values of human beings’.

Moreover, he casts this in clearly economic terms, noting that ‘humans are in danger of losing their economic value, as biological and computer engineering make many forms of human activity redundant and decouple intelligence from consciousness’. As such, human emotional intelligence is seen as complementary to computerized artificial intelligence, as both possess complementary economic value. Indeed, by pairing human and machine intelligence, economic potential would be maximized.

Intuitively, it makes sense for schools to focus on the social and emotional aspects of education, rather than wholly on academic performance. Yet this seemingly humanistic emphasis needs to be understood as part of the globalizing move by the OECD and others to yet again reshape the educational agenda to support economic goals.

Data
The fourth keyword is data, and it refers primarily to how education must be ever more comprehensively measured to assess progress in relation to the economy. Just as the OECD’s PISA has become central to measuring progress in the knowledge economy, the OECD’s latest international survey, the Study of Social and Emotional Skills—a computer-based test for 10- and 15-year-olds that will report its first findings in 2020—will allow nations and cities to assess how well their ‘human capital’ is equipped to complement the ‘robot capital’ of automated intelligent machines.

If the knowledge economy demanded schools help produce measurable quantities of human capital, in the robot economy schools are made responsible for helping the production of ‘human-computer capital’–the value to be derived from hybridizing human emotional life with AI. The OECD has prepared the test to measure and compare data on how well countries and cities are progressing towards this goal.

While, then, automation does not immediately pose a threat to teachers–unless we see AI-based personalized learning software as a source of technological unemployment in the education sector–it is likely to affect the shape and direction of education systems in more subtle ways in years to come. The keywords of the knowledge economy have been replaced by the keywords of the robot economy. Even if robotization does not pose an immediate threat to the future jobs and labour market prospects of students today, education systems are being pressured to change in anticipation of this economic transformation.

The knowledge economy presented urgent challenges for research; its metamorphosis into an emergent robot economy, driving policy demands for upskilling students with coding skills and upgraded emotional competencies, demands much further research attention too.


Learning lessons from data controversies

Ben Williamson

This is a talk delivered at OEB2018 in Berlin on 7 December 2018, with links to key sources. A video recording is also available (from about the 51-minute mark).

Ten years ago ‘big data’ was going to change everything and solve every problem—in health, business, politics, and of course education. But, a decade later, we’re now learning some hard lessons from the rapid expansion of data analytics, algorithms, and AI across society.

Data controversies became the subject of international government attention in 2018

Data doesn’t seem quite so ‘cool’ now that it’s at the centre of some of society’s most controversial events. By ‘controversy’ here I mean those moments when science and technical innovation come into conflict with public or political concerns.

Internationally, politicians have already begun to ask hard questions, and are looking for answers to recent data controversies. The current level of concern about companies like Facebook, Google, Uber, Huawei, Amazon and so on is now so acute that some commentators say we’re witnessing a ‘tech-lash’—a backlash of public opinion and political sentiment to the technology sector.

The tech sector is taking this on board too: the Center for Humane Technology, for example, seeks to stop tech from ‘hijacking our minds and society’. Universities that nurture the main tech talent, such as MIT, have begun to recognize their wider social responsibility and are teaching their students about the power of future technologies, and their potentially controversial effects. The AI Now research institute just launched a new report on the risks of algorithms, AI and analytics, calling for tougher regulation.

Print article on AI & robotization in teaching, from the Times Educational Supplement, 26 May 2017

We’re already seeing indications in the education media of a growing concern that AI and algorithms are ‘gonna get you’—as it was put in the teachers’ magazine the Times Educational Supplement last year.

In the US, the FBI even issued a public service announcement warning that the collection of sensitive data by edtech ‘could result in social engineering, bullying, tracking, identity theft, or other means for targeting children’. An ‘edtech-lash’ has begun.

The UK Children’s Commissioner has also warned of the risks of ‘datafying children’ both at home and at school. ‘We simply do not know what the consequences of all this information about our children will be,’ she argued, ‘so let’s take action now to understand and control who knows what about our children’.

And books like Weapons of Math Destruction and The Tyranny of Metrics have become surprise non-fiction successes, both drawing attention to the damaging effects of data use in schools and universities.

So, I want to share some lessons from data controversies in education in the last couple of years—things we can learn from to avoid damaging effects in the future.

Software can’t ‘solve’ educational ‘problems’ 
One recent moment of data controversy was the protest by US students against the Mark Zuckerberg-supported Summit Public Schools model of ‘personalized learning’. Summit began as a charter school chain with an adaptive learning platform—partly built by Facebook engineers—that has since been scaled up across many high school sites in the US.

But in November, students staged walkouts in protest at the educational limitations and data privacy implications of the personalized learning platform. Student protestors even wrote a letter to Mark Zuckerberg in The Washington Post, claiming assignments on the Summit Learning Platform required hours alone at a computer and didn’t prepare them for exams.

They also raised flags about the huge range of personal information the Summit program collected without their knowledge or consent.

‘Why weren’t we asked about this before you and Summit invaded our privacy in this way?’ they asked Zuckerberg. ‘Most importantly’, they wrote, ‘the entire program eliminates much of the human interaction, teacher support, and discussion and debate with our peers that we need in order to improve our critical thinking…. It’s severely damaged our education.’

So our first lesson is that education is not entirely reducible to a ‘math problem’, nor can it be ‘solved’ with software—it exceeds whatever data can be captured from teaching and learning processes. For many educators and students alike, education is more than the numbers in an adaptive, personalized learning platform, and includes non-quantifiable relationships, interactions, discussion, and thinking.

Global edtech influence raises public concern
Google, too, has become a controversial data company in education. Earlier this year it launched its Be Internet Awesome resources for digital citizenship and online safety. But the New York Times questioned whether the public should accept Google as a ‘role model’ for digital citizenship and good online conduct when it is seriously embattled by major data controversies.

The New York Times questioned Google positioning itself as a trusted authority in schools

Through its education services, it’s also a major tracker of student data and is shaping its users as lifelong Google customers, said the Times. Being ‘Internet Awesome’ is also about buying into Google as a user and consumer.

In fact, Google was a key target of a whole series of Times articles last year revealing Silicon Valley influence in public education. Silicon Valley firms, it appears, have become new kinds of ‘global education ministries’—providing hardware and software infrastructure, online resources and apps, curricular materials and data analytics services to make public education more digital and data-driven.

This is what we might call ‘global policymaking by digital proxy’, as the tech sector influences public education at speeds and international scale that conventional policy approaches cannot achieve.

The lesson here is that students, the media and public may have ideas, perceptions and feelings about technology, and the companies behind it, that are different to companies’ aspirations—claims of social responsibility compete with feelings of ‘creepiness’ about commercial tracking and concern about private sector influence in public education.

Data leaks break public trust
Data security and privacy is perhaps the most obvious topic for a data controversy lesson—but it remains an urgent one as educational institutions and companies are increasingly threatened by cybersecurity attacks, hacks, and data breaches.

The K-12 Cyber Incident Map has catalogued hundreds of school data security incidents

The K-12 Cyber Incident Map is doing great work in the US to catalogue school hacks and attacks, importantly raising awareness in order to prompt better protection. And then there’s the alarming news of really huge data leaks from the likes of Edmodo and Schoolzilla—raising fears this is surely only going to get worse as more data is collected and shared about students.

The key lesson here is that data breaches and student privacy leaks also break students’, parents’, and the public’s trust in education companies. This huge increase in data security threats risks exposing the ed-tech industry to media and government attack. Edtech is supposed to protect children, they might say, but it is exposing their information to the dark web instead!

Algorithmic mistakes & encoded politics cause social consequences 
Then there’s the problem of educational algorithms being wrong. Earlier this year, the Educational Testing Service (ETS) revealed results from a check of whether international students had cheated on an English language proficiency test. To discover how many students had cheated, ETS used voice biometrics to analyze tens of thousands of recorded oral tests, looking for repeated voices.

What did it find? According to reports, the algorithm got the voice matching wrong 20% of the time. That’s a huge error rate, with massive consequences.
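
Simple arithmetic shows how an error rate like that compounds at scale. The inputs below are hypothetical round numbers for illustration, not ETS’s actual totals:

```python
# Illustrative arithmetic only: the inputs are assumed round numbers,
# not the real (and disputed) figures from the ETS voice analysis.
tests_analysed = 50_000      # hypothetical number of recorded oral tests
share_flagged = 0.6          # hypothetical share flagged as suspicious
error_rate = 0.2             # reported share of wrong voice matches

wrongly_flagged = tests_analysed * share_flagged * error_rate
print(f"{wrongly_flagged:,.0f} test-takers wrongly flagged")  # 6,000
```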

Around 5000 international students in the UK wrongly had their visas revoked and were threatened with deportation, all related to the UK’s ‘hostile environment’ immigration policy. Many have subsequently launched legal challenges, and many have won.

Data lesson 4, then, is that poor quality algorithms and data can lead to life-changing outcomes and consequences for students—even raising the possibility of legal challenges to algorithmic decision-making. This example also shows the problem with ascribing too much objectivity and accuracy to data and algorithms—in reality, they’re the products of ‘humans in the room’ whose own assumptions, potential biases and mistakes can be coded into the software that’s used to make life-changing decisions.

Let’s not forget, either, that the check wouldn’t even have existed had the UK government not been seeking to root out and deport unwanted immigrants—the algorithm was programmed with some nasty politics.

Transparency, not algorithmic opacity, is key to building trust with users
The next lesson is about secrecy and transparency. The UK government’s Nudge Unit, for example, revealed this time last year that it had piloted a school-inspection algorithm that could identify from a school’s existing data where it might be failing.

Many headteachers and staff are already fearful of the human school inspector. The automated school-inspecting algorithm secretly crawling around in their servers and spreadsheets, if not their corridors, offices and classrooms, hasn’t made them any less concerned, especially as it can only rate their performance from the numbers rather than qualitatively assessing the impact of local context on how they perform.
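
The Nudge Unit has not published its model, but a data-led inspection trigger can be as crude as a statistical outlier rule over performance metrics, which is exactly the worry: the flag says nothing about local context. A minimal sketch, with invented schools and scores:

```python
# A minimal sketch of numbers-only risk flagging, assuming a simple
# outlier rule; the Nudge Unit's actual pilot model is unpublished.
from statistics import mean, stdev

attainment = {"School A": 62.0, "School B": 58.5, "School C": 61.0,
              "School D": 41.0, "School E": 60.5}  # invented % scores

mu, sigma = mean(attainment.values()), stdev(attainment.values())
flagged = [s for s, x in attainment.items() if (x - mu) / sigma < -1.5]
print(flagged)  # ['School D'], flagged with no sense of local context
```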

A spokesperson for the National Association of Headteachers said to BBC News, ‘We need to move away from a data-led approach to school inspection. It is important that the whole process is transparent and that schools can understand and learn from any assessment. Leaders and teachers need absolute confidence that the inspection system will treat teachers and leaders fairly’.

The lesson to take from the Nudge Unit experiment is that secrecy and lack of transparency in the use of data analytics and algorithms do not win trust in the education sector—teacher unions and the education press are likely to reject AI and algorithmic assistance if it is not believed to be transparent, fair, or context-sensitive.

Psychological surveillance raises fears of emotional manipulation
My last three lessons focus on educational data controversies that are still emerging. These relate to the idea that the ‘Internet of Bodies’ has arrived in the shape of devices for tracking the ‘intimate data’ of your body, emotions and brain.

For example, ‘emotion AI’ is emerging as a potential focus of educational innovation—such as biometric engagement sensors, emotion learning analytics, and facial analysis algorithms that can determine students’ emotional responses to teaching styles, materials, subjects, and different teachers.

Emotion AI is being developed for use in education, according to EdSurge

Among others, EdSurge and the World Economic Forum have endorsed systems to run facial analytics and wearable biometrics of students’ emotional engagement, legitimizing the idea that invisible signals of learning can be detected through the skin.

Emotion AI is likely to be controversial because it prioritizes the idea of constant psychological surveillance—the monitoring of intimate feelings and perhaps intervening to modify those emotions. Remember when Facebook got in trouble for its ‘emotional contagion’ study? Fears of emotional manipulation inevitably follow from emotion AI–and the latest AI Now report highlighted this as a key area of concern.

Facial coding and engagement biometrics with emotion AI could even be seen to treat teaching and learning as ‘infotainment’—pressuring teachers to ‘entertain’ and students to appear ‘engaged’ when the camera is recording or the biometric patch is attached.

‘Reading the brain’ poses risks to human rights 
The penultimate lesson is about brain-scanning with neurotechnology. Educational neurotechnologies are already beginning to appear—for example, the BrainCo Focus One brainwave-sensing neuroheadset and application spun out of Harvard University.

Such educational neurotechnologies are based on the idea that the brain has become ‘readable’ through wearable headsets that can detect neural signals of brain activity, then convert those signals into digital data for storage, comparison, analysis and visualization via the teacher’s brain-data dashboard. It’s a way of seeing through the thick protective barrier of the skull to the most intimate interior of the individual.
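
BrainCo’s signal processing is proprietary, but consumer neuroheadsets generically work by decomposing the raw EEG voltage trace into frequency bands and reporting some ratio of band powers as an ‘attention’ number. A sketch of that generic pipeline, with illustrative band choices and no claim to match the Focus One:

```python
# A generic sketch of how raw EEG might become a dashboard 'attention'
# score; band choices and the ratio are illustrative assumptions only.
import numpy as np

def band_power(signal: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Spectral power of the signal between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return float(power[(freqs >= lo) & (freqs < hi)].sum())

def attention_index(signal: np.ndarray, fs: int = 256) -> float:
    """Crude beta/theta power ratio, sometimes used as an engagement proxy."""
    beta = band_power(signal, fs, 13.0, 30.0)
    theta = band_power(signal, fs, 4.0, 8.0)
    return beta / (theta + 1e-9)

# One second of fake EEG; a dashboard would plot a number like this per student.
eeg = np.random.randn(256)
print(round(attention_index(eeg), 2))
```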

The BrainCo Focus One neuroheadset reads EEG signals of learning & presents them on a dashboard

But ‘brain surveillance’ is just the first step as ambitions advance to not only read from the brain but to ‘write back’ into it or ‘stimulate’ its ‘plastic’ neural pathways for more optimal learning capacity.

Neurotechnology is going to be extraordinarily controversial, especially as it is applied to scanning and sculpting the plastic learning brain. ‘Reading’ the brain for signals, or seeking to ‘write back’ into the plastic learning brain, raises huge ethical and human rights challenges—‘brain leaks’, neural security, cognitive freedom, neural modification—with prominent neuroscientists, neurotechnologists and neuroethics councils already calling for new frameworks to protect the readable and writable brain.

Genetic datafication could lead to dangerous ‘Eugenics 2.0’
I’ve saved the biggest controversy for last: genetics, and the possibility of predicting a child’s educational achievement, attainment, cognitive ability, and even intelligence from DNA. Researchers of human genomics now have access to massive DNA datasets in the shape of ‘biobanks’ of genetic material and information collected from hundreds of thousands of individuals.

The clearest sign of the growing power of genetics in education was the recent publication of a huge, million-sample study of educational attainment which concluded the number of years you spend in education can be partly predicted genetically.
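
The mechanism behind such predictions is the ‘polygenic score’: a genome-wide association study estimates a small effect size for each DNA variant, and an individual’s score is the weighted sum of their allele counts. The sketch below shows that arithmetic with invented numbers; the real calculation sums over many thousands of variants.

```python
import numpy as np

# Effect sizes (in years of education per allele) for three variants --
# hypothetical values for illustration, not taken from any study.
effect_sizes = np.array([0.02, -0.01, 0.015])

# One individual's genotype: the count (0, 1 or 2) of effect alleles
# carried at each variant.
genotype = np.array([2, 0, 1])

polygenic_score = float(effect_sizes @ genotype)
print(f"Polygenic score: {polygenic_score:+.3f} years of education")
```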

The study of the ‘new genetics of intelligence’, based on very large samples and advanced biotechnologies, is already leading to ever-stronger claims about the associations between genes, achievement and intelligence. These associations are already raising the possibility of new markets in genetic IQ testing of children’s mental abilities.

Many of you will also have heard the news last week that a scientist claimed to have created the first ever gene-edited babies, provoking a massive debate about re-programming human life itself.

Basically, it is becoming more and more possible to study digital biodata related to education, to develop genetic tests to measure students’ ‘mental rating’, and perhaps even to recode, edit or rewrite the instructions for human learning.

It doesn’t get more controversial than genetics in education. So what data lesson can we learn? Genetic biodata risks reproducing dangerous ideas about the biologically determined basis of achievement, while genetic ‘intelligence’ tests are a step towards genetic selection, brain-rating, and gene-editing for ‘smarter kids’—raising risks of genetic discrimination, or ‘Eugenics 2.0’.

Preventing data controversies 
So why are these data lessons important? They’re important because governments are increasingly anxious to sort out the messes that overenthusiastic data use and misuse have got societies into.

In the UK we have a new government centre for data ethics, and a current inquiry and call for evidence on data ethics in education. Politicians are now asking hard questions about algorithmic bias in edtech, accuracy of data models, risk of data breaches in analytics systems, and the ethics of surveillance of students.

Data and its controversies are under the microscope in 2018 for reasons that were unimaginable during the big data hype of 2008. Data in education is already proving controversial too.

In Edinburgh, we are trying to figure out how to build productive collaborations between social science researchers of data, learning scientists, education technology developers, and policymakers—in order to pre-empt the kind of controversies that are now prompting politicians to begin asking those hard questions.

By learning lessons from past controversies with data in education, and anticipating the controversies to come, we can ensure we have good answers to these hard questions. We can also ensure that good, ethical data practices are built in to educational technologies, hopefully preventing problems before they become full-blown public data controversies.


The app store for higher education

Ben Williamson

A government competition aims to make choosing a degree as easy as swiping a smartphone. Image by Garry Knight

App stores are among the most significant aspects of contemporary cultures. Commercial environments where consumers choose digital products, they are also important spaces where app producers and platform businesses first come into contact with users. As the shopping centres of platform capitalism, app stores enable users to become sources of data collection and value extraction.

Apps for higher education have become a key focus of government investment, and have the potential to become significant intermediaries bringing students, applicants and other publics into contact with HE data. This post continues ongoing research documenting the expanding data infrastructure of HE in the UK, which has already explored the policy context, data-led regulatory approach, data-centred sector agencies, and involvement of data-driven edu-businesses. New apps for shaping student choice bring small businesses, edtech startups, and the not-for-profit sector into the expanding infrastructure. They introduce the idea that student choice can be shaped (or ‘nudged’) through the interactive presentation of data on apps, price-comparison websites, and social media-style services that indicate the quality of a provider’s performance.

An ‘information revolution’ in student choice
Universities Minister Sam Gyimah announced a competition in summer 2018 for small businesses to create new apps or online services to assist young people in making choices about going to university. Controversially to many in the sector, he claimed the competition would allow tech companies to use graduate earnings data—taken from the Longitudinal Educational Outcomes (LEO) dataset—to ‘create a MoneySuperMarket for students, giving them real power to make the right choice’.

A budget of £125,000 was allocated to support the winning entrants, which were expected to produce working prototypes during September and October. A few months later he announced five shortlisted companies, an additional £300,000 investment for two of the products, and the release of ‘half a million cells of data showing graduate outcomes for every university–more than has ever been published before’.

‘This is the start of an information transformation for students, which will revolutionise how students choose the right university for them’, said Gyimah. ‘I want this to pave the way for a greater use of technology in higher education, with more tools being made available to boost students’ choices and prospects’.

In other words, the competition is just a prototype of what is still to come–a government-backed marketplace of apps, platforms and other products and services to enable applicants, students and graduates to produce, interact with, and use HE data. Elsewhere, Gyimah was reported saying there is ‘clearly a market opportunity’ for services like this, even for those not awarded part of the £300,000 funding from the Department for Education.

Although the competition at this stage has generated only prototypes–just two of which will be more fully developed–all of the companies have already developed a web presence for their apps and products. A Department for Education video tweeted from the official finalists’ event also offers some glimpses of these prototype products. This allows us to see how an expanding ‘app store’ for student choice might extend the data infrastructure in new ways.

MyEd UniPlaces app
MyEd is an existing provider of services designed to enhance choice of education institutions.

MyEd provides educational choice-enhancement services. Image from https://myed.com/

MyEd already runs services supporting parent choice in nurseries, schools, colleges and universities, in particular by aggregating key data and previous reviews to enable easy user comparison and shortlisting of providers. According to its website:

Our unique reviews process is an intelligence data analysis system that has been designed to provide our users with the most relevant and digestible information to help them make the best decisions on their investment in education.

For the competition, MyEd proposed a UniPlaces app, which it pitched as a ‘web-based compatibility checker’ to assist applicants in making HE choices. Driven by a questionnaire capturing students’ achievements and preferences, the app then seeks to match them to HE options that are linked to certain job prospects.

As an established company, MyEd already compiles information from a range of sources, including institutions, government departments, published performance tables, and agencies such as HESA and the QAA. In these ways, it is emblematic of the shift toward marketized education and choice across all sectors–from early years to HE–in recent education policy.

Uni4U
The unique aspect of the Uni4U proposal is that it was designed by students, though the organization was founded by an entrepreneur with support from the NatWest Business Accelerator.

Uni4U is gathering additional data by surveying students and school children online. Image from http://uni4u.co.uk/

Like the other apps, Uni4U supports HE choice through the graphical presentation of data about universities, including their location, campus facilities, and graduate earnings.

While in prototype phase, Uni4U produced a website featuring two online surveys to gather further data from future students and current students. It invites future students to identify what would most help them make university choices, and current students to rate the quality of their existing provider and the support they gained in making their initial choice.

Coursematch
Coursematch presents itself on its website as a fully functioning app available via the Apple App Store and Google Play, with a claimed 25,000 users. It was upgraded to its current form in May 2018 and has been marketing itself on social media as ‘The #1 social network to help find your perfect university course and meet future friends!’

Coursematch is a social network for university choice, already available on app stores. Image from https://coursematch.io/

Perhaps the most notable aspect of Coursematch is its claim to use machine learning to make the most effective matches between students and courses, twinned with a ‘swipeable’ interface design adopted from dating apps.

‘Our new look app is going to make it easier than ever to browse University courses, and find your perfect course!’ read a recent promotional Coursematch tweet. ‘We are bringing in AI techniques to recommend a selection of courses right for you, to browse through with just a simple swipe’.

Potential students are provided with projected possible earnings based on the average lower quartile, median and upper quartile for particular courses, and can also interact through the app with existing students on those courses. Coursematch is already supported by Jisc, the HE digital learning agency, and Santander Universities.

AccessEd–ThinkUni app
The ThinkUni app comes from the not-for-profit sector, with AccessEd aiming to ‘increase access to university for young people from under-served backgrounds globally. We are creating a global network of partner organisations committed to this mission, sharing with them our expertise, resources and support’.

AccessEd supports access to university for young people from under-served backgrounds. Image from https://access-ed.ngo/

Pitched as a ‘personalized careers assistance’ service that is easy for students to use on their smartphones, ThinkUni builds on AccessEd’s previous university access work–including its ‘Brilliant Club’, the UK’s largest university access programme for 11-18 year olds.

According to the co-founder and executive chair of AccessEd, existing sources such as UCAS are huge databases and glorified spreadsheets that make decision-making difficult. With ThinkUni, students can instead access details such as which universities they could choose based on their school exam grades, and how long it would take to pay back their student loan based on a projected graduate salary.

The Profs—That’s Life
That’s Life is the most distinctive of the competition finalists–it’s an education and careers simulator produced by The Profs, a successful private HE tutoring company.

The Profs is a successful private HE tutoring company. Image from https://www.theprofs.co.uk/

The idea for the service is that it provides a ‘gamified’ simulation of the outcomes of making certain kinds of decisions, presenting projected data such as students’ future levels of happiness, work-life balance and income, and showing them the impact of their life and course choices, including not going to university at all.

The gamification and simulation aspects of That’s Life demonstrate how the logics of video games could be employed to enhance student choice, notably by offering students opportunities to experiment with different pathways and problem solving strategies. But the app’s origins in the private HE tutoring sector are also indicative of how private sector and alternative providers are being actively welcomed into public university service provision.

Scaling up the prototype
Whether apps such as those supported by government–or the earnings potential they present–actually influence student choice remains for now an empirical question. Another question is whether initial government investment will enable these app producers to scale their products. In a way, Sam Gyimah is acting like a Silicon Valley venture capitalist, seed-funding early-stage prototypes that bear a high risk of failure.

However, one existing example of a HE-facing app suggests that appetite for real venture capital investment in such products may be growing. Debut is a smartphone app for talent-matching graduates to corporate employers and labour markets. Graduate users create a profile—as with other social media platforms—and complete a psychometric personality test which can then be used for automated push notifications of appropriate jobs. Partnering corporate employers can even ‘talent spot’ and target individual users directly without requiring an application form or CV.

Debut is a machine learning-based talent-matching app. Image from http://debut.careers/

But Debut is also a direct challenge to universities and the status of the academic degree. ‘We want to unbundle that and turn our user base into a behaviour- and competency-based user base,’ its founder says. ‘The strength would be the person’s competency as opposed to academic success’. Instead of degrees, the app emphasizes graduates’ ‘cognitive psychometric intelligence’, behavioural traits and competencies. ‘We have everything on students, from their cognitive background, social background, to how well they perform in a selection process’—data it is using to train machine learning algorithms ‘to make personalized recommendations and predictions’.

Debut therefore instantiates the entry of automated predictive talent analytics into UK HE, inciting students to cultivate their marketable personality and behavioural skills above their academic credentials. Users of the platform generate training data for its machine learning algorithm to tune and refine its subsequent job-matches and recommendations. In summer 2018 Debut also received £5 million venture capital investment led by James Caan, the entrepreneur from the TV show Dragons’ Den, and already has 60 corporate clients, including Google, Apple and Barclays, that pay it an annual subscription to sort and organize the graduate data.
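
Debut’s matching models are proprietary, so the following is only a sketch of the generic technique its descriptions point to: representing graduates and jobs as vectors of competency scores and ranking jobs by similarity. All traits, jobs and figures are invented.

```python
import numpy as np

TRAITS = ["numeracy", "teamwork", "resilience", "communication"]

# A graduate's competency profile, e.g. from a psychometric test.
graduate = np.array([0.9, 0.4, 0.7, 0.5])

# Hypothetical competency requirements for two roles.
jobs = {
    "data analyst":    np.array([0.95, 0.3, 0.5, 0.4]),
    "account manager": np.array([0.4, 0.8, 0.6, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two competency vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank jobs by similarity -- the ordering that would drive an
# automated 'talent-match' push notification.
ranked = sorted(jobs, key=lambda job: cosine(graduate, jobs[job]), reverse=True)
for job in ranked:
    print(f"{job}: {cosine(graduate, jobs[job]):.2f}")
```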

Student-powered & metrics-powered HE
As an established product, Debut is well positioned in the emerging app store of services and products that help shape students’ choices. As the DfE competition demonstrates, apps are emerging to match prospective applicants to courses based on graduate earnings data from LEO, while Debut can then link them to employers based on a training set of graduate competency profiles and successful labour market matches.

The finalists of the DfE competition represent the governmental recognition of the potential of data presented on apps to shape choices and decisions. The prototypical app store for HE choice is, therefore, a significant extension of ongoing upgrades to the data infrastructure of HE. It raises some key issues:

  • It exemplifies government ambitions to ‘unbundle’ and open up HE to new market entrants (technology providers, entrepreneurs, the private sector, and other business interests), with government itself acting as a market catalyst and seed-fund investor
  • It brings the logic of ‘swipeable’ apps and social media platforms into HE, importing the business model of platform capitalism and the extraction of value from student data into higher education
  • It utilizes persuasive design and behavioural science insights to design interfaces and visualizations that might ‘hook’ attention, ‘trigger’ behaviours, and ‘nudge’ decisions according to the ‘choice architecture’ provided
  • It continues to treat students as calculative consumers, investing in HE with the expectation of ROI in the shape of graduate outcomes and earnings, and puts pressure on institutions to focus on labour market outcomes as the main purpose of HE
  • It incites prospective and current students to see and think about HE in primarily quantitative and evaluative terms, as represented in metrics and market-like performance rankings and ratings
  • It anticipates potential long-term and real-time data monitoring of students in HE institutions, through a digital surveillance assemblage of apps, platforms and infrastructural connections, thereby making students into data transmitters of institutional qualities as well as consumers of institutional data
  • It instantiates the increasing entry of algorithms, machine learning and automation into applicants’, students’ and graduates’ decision-making, with Debut even seeking to short-circuit the job application process and automatically talent-match graduate competency profiles to corporate job descriptions
  • It raises questions about the uses of student data to reinforce pre-existing governmental ideology, with the DfE recently reprimanded by statistical authorities for prioritising political messaging ahead of its statistical evidence–could student apps be designed otherwise, rather than to conform to market models of cost-benefit calculation?

The competition’s release of a huge trove of LEO data also demonstrates how HE is being made increasingly measurable, computable, and comparable as a competitive, market-driven sector, with Gyimah noting that ‘these new digital tools will highlight which universities and courses will help people to reach the top of their field, and shine a light on ones lagging behind’.

The governmental focus on calculating which universities are ‘lagging’ or even ‘failing’ from their data is itself a huge sector concern, with Michael Barber, chair of the Office for Students, writing in The Telegraph that ‘While student choice should drive innovation, diversity and improvement, we recognise this won’t always be enough. So where market mechanisms are not sufficient, we will regulate’. The piece, entitled ‘We should allow bad universities to fail, as long as we protect their students’, followed another Telegraph article titled ‘If the higher education market is to succeed, bad universities must be allowed to go bust’.

In this highly conservative political and media context, further amplified by think tanks such as Reform, HE is being driven both by the supposed ‘empowerment’ of students and by metrics of market performance. The first perspective sees data as central to a ‘student-powered’ sector characterized by choice, value for money, and market competitiveness. The other takes a ‘metrics-powered’ perspective on universities as comparable market actors with winners and failures, as calculated by the choices of applicants to attend, indicator data on provider performance, and LEO or other data on graduate outcomes and earnings.

These two perspectives are, however, binocular rather than oppositional. Barber’s emphasis on ‘bad universities’ and Gyimah’s enthusiasm for student-facing apps are part of the same project, with data from and about students treated as key performance indicators for both policy officials and university applicants to assess. As Barber noted, ‘With more information at their disposal on the quality of courses and associated salary outcomes, [students] will rightly be thinking carefully about such choices. That places an onus on universities to plan realistically and respond quickly where demand is higher–or lower–than expected’.

The emerging, prototypical HE app store instantiates these demands in software. It reveals to students the best-performing universities in terms of degree awards and graduate earnings, but also reveals the ‘bad universities’ and discourages them from ‘investing’ in these institutions and their courses. In these ways, the HE app store threatens to exert dangerously performative effects. By presenting university providers as a market, these apps will shape students’ choices away from certain institutions, or prompt institutions to drop courses that don’t promise a high percentage of positive graduate outcomes, while privileging elite institutions with stronger existing performance records. The app store will speed up the ‘market failure’ of those providers presented in the data as ‘bad universities’.


The mutating metric machinery of higher education

Ben Williamson

Higher education increasingly depends on complex data systems to process and visualize performance metrics. Image by Dennis van Zuijlekom

Contemporary culture is increasingly defined by metrics. Measures of quantitative assessment, evaluation, performance, and comparison infuse public services, commercial companies, social media, sport, entertainment, and even human bodies as people increasingly quantify themselves with wearable biometrics. Higher education is no stranger to metrics, though they are set to intensify further in coming years under the Higher Education and Research Act. The measurement, comparison, and evaluation of the performance of institutions, staff, students, and the sector as a whole is expanding rapidly with the emergence and evolution of ‘the data-intensive university’.

This post continues a series on the expanding data infrastructure of HE in the UK, part of ongoing research charting the actors, policies, technologies, funding arrangements, discourses, and metrological practices involved in contemporary HE reforms. Current reforms of HE are the result of ‘fast policy’ processes involving ‘sprawling networks of human and nonhuman actors’, and more specifically involving human data analytics experts and complex nonhuman systems of measurement. Only by identifying and understanding the mobile policy networks and the ‘metrological machinery’ of their HE data projects is it possible to adequately apprehend how macro-forces of governmental reform are being operationalized, enacted, and accomplished in practice.

Metrological machinery
The collection and use of UK university performance data has expanded and mutated dramatically in scope over the last two decades. The metrification of HE through the ‘evaluation machinery’ of research assessment exercises, teaching evaluation frameworks, impact measurements, student satisfaction ratings, and so on, is frequently viewed as part of an ongoing process of neoliberalization and marketization of the sector. One particularly polemical critique describes a ‘pathological organizational dysfunction’ whereby neoliberal priorities and corporate models of marketization, competition, audit culture, and metrification have combined to produce ‘the toxic university’.

The narrative is that HE has been made to resemble a market in which institutions, staff and students are all positioned competitively, with measurement techniques required to assess, compare and rank their various performances. It is a compelling if unsettling narrative. But if we really want to understand the metrification, marketization, and neoliberalization of HE, then we need to train the analytical gaze more closely on the specific and ever-mutating metrological mechanisms by which these changes are being made to happen.

In previous posts I examined the market-making role of the edu-business Pearson, and the ways the Office for Students (OfS), the HE market regulator, and the Higher Education Statistics Agency (HESA), its designated data body, intend to use student data to compare sector performance. Together, these organizations and their networks are building a complex and evolving data infrastructure that will cement metrics ever more firmly into HE, while opening up the sector to a new marketplace of technical providers of data analysis, performance measurement, comparison and evaluation.

Political demands to make HE more data-driven have opened up a marketplace for providers of digital technologies. Image by Eduventures

In this update I continue unpacking this data infrastructure by focusing on the Quality Assurance Agency for Higher Education (QAA) and the Joint Information Systems Committee (Jisc). Both of them, along with HESA, are engaging in significant metrological work in HE. In fact, HESA, QAA and Jisc together constitute the M5 Group of agencies—‘UK higher education’s data, quality and digital experts’—formed in 2016 and named after their collective proximity to the M5 motorway in southwest England. Together, the QAA, HESA and Jisc also co-organize and run the annual Data Matters conference for HE data practitioners, quality professionals and digital service specialists.

To approach these organizations, the concept of ‘metric power’ from David Beer provides a useful framing. Drawing on key theorists of statistics (Desrosières, Espeland, Foucault, Hacking, Porter, Rose etc), metric power accounts for the long-growing intensification of measurement over the last two centuries to the current mobilization of digital or ‘big’ data across diverse domains of societies. Central to metric power is the close alignment of metrics to neoliberal governance. Following the lead of Foucault and others to define neoliberalism as the ‘generalization of competition’ and the extension of the ‘model of the market’ to diverse social domains, Beer argues that ‘put simply, competition and markets require metrics’ because ‘measurement is needed for the differentiations required by competition’.

The concept of metric power, then, is potentially a useful way to approach the metrification of higher education and to explore how far this represents processes of neoliberalization and marketization. By examining the recent projects and future aspirations of agencies such as Jisc and QAA we can develop a better understanding of how a form of metric power is transforming the sector. To be clear at this point, there is nothing to suggest that either the QAA or Jisc are run by neoliberal ideologues–something more subtle is happening. The point is that both organizations, along with HESA and the OfS, are pursuing projects which potentially reinforce neoliberalizing processes by expanding the data infrastructures of HE measurement. They are ‘fast policy’ nodes in the mobile policy networks enacting the metrological machinery of HE reform.

QAA—sentimental evidence
The QAA is the sector agency ‘entrusted with monitoring and advising on standards and quality in UK higher education’. It maintains the UK Quality Code for Higher Education used for quality assessment reviews, as well as the Subject Benchmark Statements describing the academic standards expected of graduates in specific subject areas. QAA also undertakes in-house research and produces policy briefings.

One of its major strands of activity, via the QAA Scotland office, is an ‘Evidence Enhancement Theme’ focusing on ‘the information (or evidence) used to identify, prioritise, evaluate and report’ on student satisfaction. Its priorities are:

  • Optimising the use of existing evidence: supporting staff and students to use and interpret data and identifying data that will help the sector to understand its strengths and challenges better
  • Student engagement: understanding and using the student voice, and considering concepts where there is no readily available data, such as student community, identity and belonging
  • Student demographics, retention and attainment: using learning analytics to support student success, and supporting institutions to understand the links between changing demographics, retention, progression and attainment including the ways these are reported

The Evidence Enhancement program is unfolding collaboratively across all Scottish HE providers and is intended to result in sector-wide improvements in data use related to student satisfaction.

More experimentally, the QAA released a 2018 study into student satisfaction using data scraped from social media. The student sentiment scraping study, entitled The Wisdom of Students: Monitoring quality through student reviews, was based on a large sample of over 200,000 student reviews of higher education provision to produce a ‘collective-judgment’ score for each provider. These data were then compared with other sources such as TEF and NSS, and found to have a strong positive association. Crowdsourced big data from the web, it suggested, were as reliable as large-scale student surveys and bureaucratic quality assessment exercises as student experience metrics.

The QAA project is a clear example of how big data methodologies of sentiment analysis and discovery using machine learning and web-scraping are being explored for HE insights. For the QAA, taking such an approach is necessary because, as the sector has become more marketized and focused on the experience of the student in a ‘consumer-led system’ regulated by the ‘data-driven’ Office for Students, there has been ‘a gradual reduction in the remit of QAA in assessing and assuring teaching and learning quality in providers, and the rise in the perception of student experience and employment outcomes’ data as more accurately indicating excellence in higher education provision’. As such, measuring student experience in a timely, low-burden and cost-effective fashion has become a new policy priority, while existing instruments such as the TEF and NSS remain annual, retrospective, and potentially open to ‘gaming’ by institutions.

In contrast, collecting ‘unsolicited student feedback’ from reviews on social media platforms is seen by the QAA as a way of ‘securing timely, robust, low-burden and insightful data’ about student experience. In particular, the study involved collecting student reviews from Facebook, Whatuni.com and Studentcrowd.com, with Twitter data to be included in future research. The study authors found that 365 HE providers have Facebook pages with the ‘reviews’ function available, as well as many pages relating to departments, schools, institutes, faculties, students’ unions, and career services.

Perhaps most significantly, given the constraints of TEF and NSS, the scraping methodology allowed the QAA to come up with collective judgment scores for each provider on any given day. In other words, it allowed for the student experience to be calculated as time-series data, and opened up the possibility of ‘near real-time’ monitoring of provider performance in terms of delivering a positive student experience, which could then be used by providers to specify need for action. The advantages of the approach, according to the QAA, are that it makes year-round real-time feedback feasible ‘based on what students deem to be important to them, rather than on what the creator of surveys or evaluation forms would like to know about’; reduces the data-collection burden; minimizes providers’ ‘opportunities to influence or sanitise the feedback’; and opens up ‘the ability to explore sector-wide issues, such as feedback relating to free speech, contact hours, or vice-chancellor pay’.
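
As a rough illustration of that time-series logic, the sketch below aggregates timestamped reviews into a daily collective-judgment score. The QAA study used machine-learning sentiment analysis; the crude word-list scorer here is only a stand-in, and the reviews are invented.

```python
import pandas as pd

POSITIVE = {"great", "helpful", "excellent", "supportive"}
NEGATIVE = {"poor", "disorganised", "unhelpful", "crowded"}

def sentiment(text):
    """Crude lexicon-based polarity score in [-1, 1]."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

reviews = pd.DataFrame({
    "date": pd.to_datetime(["2018-09-01", "2018-09-01", "2018-09-02"]),
    "text": ["great supportive lecturers", "poor crowded labs", "excellent library"],
})
reviews["sentiment"] = reviews["text"].map(sentiment)

# The provider's daily 'collective judgment' score.
daily = reviews.set_index("date")["sentiment"].resample("D").mean()
print(daily)
```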

In sum, the report concludes, ‘the timely and reliable extraction of the student collective-judgement is an important method to facilitate quality improvement in higher education’. The QAA intends to pilot the methodology with ten HE providers late in 2018.

The QAA concern with sentiment analysis of student experience needs to be understood not just as an artefact of HE marketization and consumerization, but as part of a wider turn to ‘feelings’, ‘sensation’ and ‘emotion’ in contemporary metric cultures. As William Davies notes, ‘Emotions can now be captured and algorithmically analysed (“sentiment analysis”) thanks to the behavioural data that digital technologies collect’, and these data are increasingly of interest as sources of intelligence to be harnessed for political purposes by authorities such as government departments or agencies. Scraping student sentiment from social media replicates the logic of psychological and behavioural forms of governance within HE, and has the potential to make the sector ever-more responsive to real-time traces of the student body’s emotional pulse.

The QAA-led Provider Healthcheck Dashboard allows institutions to monitor and compare their performance through data visualizations. Image from HESA

The medicalized metaphor of tracing pulses can be carried further in relation to another QAA project. In collaboration with its M5 Group partners Jisc and HESA, QAA led the production of a data visualization package called the ‘Provider Healthcheck Dashboard’. The purpose of the tool is to allow providers to perform ‘in-house healthchecks’ by comparing their institutional performances, on many metrics, against competitors. The metrics used in the Healthcheck dashboard include TEF ratings, QAA quality measurements, NSS scores, Guardian league table positions, percentages of 1st or 2:1 degrees awarded, and graduate employment performance over five years.

These metrics are presented on the dashboard as if they constitute the ‘vital signs’ of a provider’s medical health and their comparison with norms of performance, as depicted visually as percentage differences from benchmarks. The provider healthcheck acts as a kind of medical read-out of the competitive health of an institution, demonstrating in visual, easy-to-read format how an individual provider is situated in the wider market, and catalyzing relevant treatments to strengthen its performance.
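
The arithmetic underneath such a display is simple to sketch: each metric is expressed as a signed percentage difference from a sector benchmark, and those differences become the ‘vital signs’. The metrics and figures below are invented for illustration.

```python
import pandas as pd

data = pd.DataFrame({
    "provider":  [83.0, 71.0, 58.0],   # one institution's scores
    "benchmark": [80.0, 75.0, 55.0],   # sector norms
}, index=["NSS satisfaction (%)", "firsts & 2:1s (%)", "graduate employment (%)"])

# Signed percentage difference from the benchmark for each metric.
data["diff from benchmark (%)"] = (
    100 * (data["provider"] - data["benchmark"]) / data["benchmark"]
).round(1)

print(data)  # positive rows read as 'healthy'; negative rows invite 'treatment'
```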

Jisc—predicting performance
Jisc is a membership organization providing ‘digital solutions for UK education and research’. Its strategic ‘vision is for the UK to be the most digitally-advanced higher education, further education and research nation in the world’. Beyond this vision, Jisc is the sector’s key driver of learning analytics—the measurement of student learning activities—which it is circulating via its formal associations with the other M5 Group members HESA and QAA.

As a key part of its vision Jisc has conducted significant work outlining a future data-intensive HE and how to accomplish it over the coming decade. It envisages a HE landscape dominated by learning analytics and even artificial intelligence, in which students will increasingly experience a personalized education based on their unique data profiles. Jisc’s chief executive has described ‘the potential of Education 4.0‘ as a response to the ‘fourth industrial revolution’ of AI, big data, and the internet of things. Education 4.0 would involve lecturers being displaced by technologies that ‘can teach the knowledge better’, are ‘immersive’ and ‘adaptive’ to learners’ needs, and that include ‘virtual assistants’ to ‘support students to navigate this world of choice and work with them to make decisions that will lead to future success’.

Towards this vision of an ‘AI-led’ future of HE, Jisc collaborated with Universities UK on the 2016 report Analytics in Higher Education. A key observation of the report is that existing datasets such as TEF provide very limited information for universities, policymakers or regulators to act on:

External performance assessments, such as the TEF, don’t in themselves support institutions understanding and using their data. Advanced learning analytics can allow institutions to move beyond the instrumental requirements of these assessments to a more holistic data analytic profile. Predictive learning analytics are also increasingly being used to inform impact evaluations, via outcomes data as performance metrics. Ultimately, this allows institutions to assess the return on investment in interventions.

As this excerpt indicates, Jisc has key interests in learning analytics, predictive analytics, outcomes data, performance metrics, and measuring return on investment.

It is now seeking to realize these ambitions through its launch in September 2018 of a national learning analytics service for further and higher education. According to the press release, the learning analytics service ‘uses real time and existing data to track student performance and activities’:

From libraries to laboratories, learning analytics can monitor where, when and how students learn. This means that both students and their university or college can ensure they are making the most of their learning experience. … This AI approach brings existing data together in one place to support academic staff in their efforts to enhance student success, wellbeing and retention.

The service itself consists of a number of interrelated parts. It includes cloud-based storage through Amazon Web Services so individual providers do not need to invest in commercial or in-house solutions, and ‘data explorer’ functionality ‘that brings together the data from your various sources and provides quick, flexible visualisations of VLE usage, attendance and assessment – for cohorts and individual students. … The information will help you to plan effective personal interventions with students and to identify under-performing areas of the curriculum’. A third aspect of the service is the ‘learning analytics predictor’ that helps teaching and support staff to use ‘predictive data modelling to identify students who might have problems’ and ‘to plan interventions that support students’.
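
Jisc has not published the predictor’s internals, but the generic technique it describes (a classifier trained on engagement data to flag students ‘who might have problems’) can be sketched as follows, on entirely synthetic data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Features per past student: weekly VLE logins, attendance rate, mean mark.
X = np.column_stack([
    rng.poisson(10, n),          # VLE logins
    rng.uniform(0.3, 1.0, n),    # attendance rate
    rng.normal(60, 12, n),       # assessment mark
])
# Synthetic 'withdrew' label, loosely tied to low engagement.
y = ((X[:, 1] < 0.5) & (X[:, 0] < 8)).astype(int)

model = LogisticRegression().fit(X, y)

# Score current students; those above a threshold would be flagged
# for a 'personal intervention'.
current = np.array([[3, 0.4, 55.0], [14, 0.9, 68.0]])
risk = model.predict_proba(current)[:, 1]
print([f"{r:.2f}" for r in risk])  # the first student is the likely flag
```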

The final part of the service is a student app called Study Goal, which is available for student download from major app stores. As it is described on the Google Play app store, ‘Study Goal borrows ideas from fitness apps, allowing students to see their learning activity, set targets, record their own activity amongst other things’. In addition, it encourages students to benchmark themselves against peers, and can be used to monitor attendance at lectures.
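
A minimal sketch of that fitness-style benchmarking, with the computation and all figures assumed for illustration: the app would place one student’s logged activity as a percentile of their peers.

```python
import numpy as np

# Hours of logged 'learning activity' this week across a peer cohort.
cohort_hours = np.array([2.5, 4.0, 6.5, 3.0, 8.0, 5.5, 7.0, 4.5, 9.0, 3.5])
my_hours = 6.0

# Percentage of peers this student out-studied -- the benchmark figure
# a Study Goal-style app would display.
percentile = 100 * np.mean(cohort_hours < my_hours)
print(f"You logged more activity than {percentile:.0f}% of your peers this week.")
```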

The Jisc Study Goal app is modelled on fitness apps, enabling students to monitor their performance and benchmark themselves against peers. Image from Google Play

Study Goal is an especially interesting part of the Jisc learning analytics architecture because, like the provider healthcheck dashboard, it appeals to images of fitness and healthiness through self-quantification, personal analytics and self-monitoring. The logic of activity tracking and self-quantification has been translated from the biometrics of the body back to a kind of health metrics of the institution. University leaders and students alike are being responsibilized for their academic health, while their data are also made available to other parties for inspection, evaluation, prediction and potential intervention. Beyond the learning analytics service and Study Goal app, Jisc has also supported the Learnometer environmental sensing device, which ‘automatically samples your classroom environment, and makes suggestions through a unique algorithm as to what might be changed to allow students to learn and perform at their best’. Not only is student academic and emotional health understood to underpin their performance, but the environment needs to be healthy and amenable to performance-maximization too.

All of these developments indicate a significant reimagining of universities and all who inhabit them as being amenable to ever-more pervasive forms of performance sensing and medicalized inspection. Higher education is becoming a kind of experimental laboratory environment where subjects are exposed through metrics to a data-centric clinical gaze, and where everything from students’ feelings and teaching quality to classroom environment and graduate employment outcomes is a source of risk requiring quantitative anticipation, modelling, and real-time management. Positioned in this way, the political priority to make HE function as a ‘healthy market’ of self-improving competitors thoroughly depends on the metric machinery of agencies such as the QAA and Jisc, and on the expanding data infrastructure in which they are key nodes of experimentation, policy influence, and technical authority.

Metric authority
Jisc and the QAA are bringing new metric techniques into HE, such as sentiment analysis, predictive modelling, comparative data visualization and student benchmarking apps, in ways that do appear to reinforce the ongoing marketization of the sector. They are key nodal actors in the policy networks building a data infrastructure for student measurement–an infrastructure that remains unfinished and continues to evolve, mutate and expand in scope as new actors contribute to it, new analyses are made possible, and new demands are made for data, comparison and evaluation.

It is necessary to restate at this point that the QAA and Jisc are not necessarily uncritically pursuing a market-focused neoliberalizing agenda. The QAA’s sentiment analysis report appears somewhat critical of the market reform of HE under the Office for Students. The point is that these sector agencies are all now part of an expanding data infrastructure that appears almost to have its own volition and authority, and that is inseparable from political logics of competition, measurement, performance comparison, and evaluation that characterize the neoliberal style of governance. It is a data infrastructure of metric power in higher education.

David Beer rounds off his book with several key themes which he proposes as a framework for understanding metric power. These can be applied to the examples of the metrological machinery of HE being developed by the QAA and Jisc.

Limitation. According to Beer, metric power operates through setting limits on ‘what can be known’ and ‘what can be knowable’ by setting the ‘score’ and ‘the rules and norms of the game’. The QAA and Jisc have become key actors of limitation by constraining the assessment and evaluation of HE to what is measurable and calculable. Through learning analytics, Jisc is pushing a particular set of calculative techniques that make students knowable in quantitative terms, as sets of ‘scores’ which may then be compared with norms established from vast quantities of data in order to attach evaluations to their profiles. The QAA-led dashboard similarly sets constraints on how provider performance is defined, and cements the idea that performance comparison is the only game to be played.

Visibility. Metric power is based on what can be rendered visible or left invisible—a ‘politics of visibility’ that also translates into how certain practices, objects, behaviours and so on gain ‘value’, while others are not measured or valued. Through their data visualization projects, Jisc and the QAA are involved in rendering HE visible as graphical comparisons which can be used for making value-judgments—in terms of what is deemed to be valuable market information. But such data visualization projects also render invisible anything that remains un-countable or incalculable, and inevitably make quantitative data that can be translated into graphics appear more valuable than other qualitative evaluations and professional assessments. The Study Goal app reinforces to students that certain forms of quantifiable engagement are valued and prized more highly than other qualitative modes.

Classification. Metric power works by sorting, ordering, classifying and categorizing, with ‘the capacity to order and divide, to group or to individualise, to make-us-up and to sort-us-out’. Through learning analytics pushed by Jisc, students are sorted into clusters and groupings as calculated from their individual data profiles, which might then lead, in Jisc’s ideal, to personalized intervention. Likewise, the sorting of universities by comparative healthcheck dashboards and their ordering into hierarchical league tables serves to classify some as winners and others as fallers and failures in a competitive contest for performance ranking and advantage.

Prefiguration. Metric power ‘works by prefiguring judgements and setting desired aims and outcomes’ as ‘metrics can be used in setting out horizons … and imagined futures and then using them in current decision-making processes’—and this is especially the case with the imagining and pursuit of markets and the measurement of their performance. Here Beer appears to be pointing to the performativity or productivity of data to anticipate future possibilities in ways that catalyse pre-emptive action. Clearly, with its real-time sentiment analysis, the QAA’s student-scraping study is seeking to mobilize data for purposes of prompting action and pre-emption by promoting the use of time-series data that indicate trends towards future outcomes in terms of student ratings. Institutions that can read student satisfaction in near real-time from social media sentiment might act to pre-empt their TEF and NSS ratings. Likewise, the Healthcheck Dashboard allows institutions to anticipate future challenges, while Jisc has specifically sought to embed predictive analytics in institutional decision-making.

Intensification. Metric power perpetuates the models of the world with which it sets out, with metrics satisfying the ‘desire for competition’, intensifying processes of neoliberalization, and expanding its models of the market into new areas. We can see with the QAA and Jisc how the market model of competitive evaluation and ranking has extended from research and teaching assessment to rating of institutions via social media scoring and user-reviews. Jisc’s Study Goal app also puts the market model under the very eyes and fingertips of students as it invites them to compare and benchmark themselves against their peers, thereby intensifying metric power through competitive peer relations and positioning students as responsible for their own market performance and prospects.

Authorization. Metric power works by ‘authenticating, verifying, legitimating, authorizing, and endorsing certain outcomes, people, actions, systems, and practices,’ with market-based models and metrics taken and trusted as sources of ‘truth production’. The dashboards and analytics advanced by QAA and Jisc are being propelled into the sector with promises of objectivity, impartiality and neutrality, free of human bias and subjective judgment. As such, these data and their visualization constitute a seemingly authoritative set of truths, yet are ultimately an artificial reality of higher education formed only from those aspects of the sector that are countable and measurable.

Automation. Metric power shapes human agency, decision-making, judgement and discretion as systems of computation and the ‘decisive outcomes of metrics’ are taken as objective, legitimate, fair, neutral and impartial, especially as ‘automated metric-based systems’ potentially take ‘decisions out of the hands of the human actors’ and ‘algorithms are making the decisions’ instead. Although QAA and Jisc are clearly not removing human judgment from the loop in HE decision-making, they are introducing limited forms of automation into the sector through algorithmic sentiment analysis, machine learning and data visualization dashboards that generate ‘decisive outcomes’ and thereby shape institutional or personal decisions.

Affective. Finally, metric power and systems of measurement induce affective responses and feelings—metrics have ‘affective capacities’ such as inducing anxiety or competitive motivation, and thereby ‘promote or produce actions, behaviours, and pre-emptive responses’, largely by prompting people to ‘perform’ in ways that can be valued, compared and judged in measurable terms. Jisc’s Study Goal is exemplary in this respect, as it is intended to incite students to benchmark themselves in order to prompt competitive action. The healthcheck dashboards, likewise, are designed to induce performance anxiety in university leaders and prompt them to take strategic action to ensure advantageous positioning in the variety of metrics by which the institution is assessed. In both examples, HE is framed in terms of ‘risk’, a highly affective state of uncertainty, as a way of catalyzing self-improvement.

As these points illustrate, through organizations such as the QAA and Jisc, HE is encompassed in the sprawling networks of actors and technologies of metric power. The data infrastructure of higher education is an accomplishment of a mobile policy network of sector agencies along with a whole host of other organizations and experts from the governmental, commercial and nonprofit sectors. A form of mobile, networked fast policy is propelling metrics across the sector, and increasingly prompting changes in organizational and individual behaviours that will transform the higher education sector to see and act upon itself as a market.


The tech elite is making a power-grab for public education

Ben Williamson

Silicon Valley entrepreneurs are linking public education into their growing networks of activity and influence. Image by Steve Jurvetson

 

In the same week that Amazon founder Jeff Bezos announced a major move into education provision, the FBI issued a stark warning about the risks posed to children by education technologies. These two events illustrate clearly how ed-tech has become a significant site of controversy, a power struggle between hugely wealthy tech entrepreneurs and those concerned by their attempts to colonize the education sector with their imaginaries and technologies. Jeff Bezos, Mark Zuckerberg, Elon Musk, Peter Thiel, and other super-wealthy Silicon Valley actors, are forming alternative visions and approaches to education from pre-school through primary and high schooling to university. They’re the new power-elite of education and their influence is spreading.

I’ve previously written about the Silicon Valley entrepreneurs and venture capitalists making a power-grab for the education sector. Benjamin Doxtdator has also written brilliantly about their rewriting of the history of public education as a social problem requiring urgent correction for the future. Here I just want to compile some recent developments of Silicon Valley intervention at each stage of education, to illustrate the growing scale of their influence as they continue linking public education into their networks of technical development.

The Amazon pre-school network
Amazon’s Jeff Bezos announced via a letter on Twitter his plans to invest $2 billion in support for homeless families and a ‘network of new, non-profit, tier-one preschools’. The ‘Academies Fund’ will create ‘Montessori-inspired’ preschools through a new organization to ‘learn, invent and improve’ based on ‘the same set of principles that have driven Amazon’. Most notably, Bezos added, ‘the child will be the customer’ in these schools, with a ‘genuine, intense customer obsession’.

While many will admire the philanthropic efforts of the world’s richest man to support early years education, the idea of Amazon-style pre-schools that see children as customers problematically positions education as a commercialized service in ‘personalized learning’. Bezos is not the first tech sector entrepreneur to announce or invest in pre-schooling, and as Audrey Watters commented,

The assurance that ‘the child will be the customer’ underscores the belief – shared by many in and out of education reform and education technology – that education is simply a transaction: an individual’s decision-making in a ‘marketplace of ideas’. … This idea that ‘the child will be the customer’ is, of course, also a nod to ‘personalized learning’…. As the customer, the child will be tracked and analyzed, her preferences noted so as to make better recommendations to up-sell her on the most suitable products.

The image of data-intensive startup pre-schools with young children receiving ‘recommended for you’ content as infant customers of ed-tech products is troubling. It suggests that from their earliest years children will become targets of intensive datafication and consumer-style profiling. As Michelle Willson argues in her article on algorithmic profiling and prediction of children, they ‘portend a future for these children as citizens and consumers already captured, modelled, managed by and normalised to embrace algorithmic manipulation’.

Primary Spaces
Primary schooling has been a strong focus for Silicon Valley for several years. Notable examples include Mark Zuckerberg’s The Primary School and Max Ventilla’s AltSchool, two of the most high-profile startup schools to embed personalized learning technologies and approaches within the whole pedagogic apparatus. Less is known about Ad Astra, the hyper-exclusive private school project set up by Tesla boss Elon Musk within his SpaceX headquarters, although it too emphasizes students pursuing personal projects, problem-solving, and STEM subjects.

Elon Musk’s Ad Astra school is located in the HQ of SpaceX. Image by Steve Jurvetson

However, the globally-popular ed-tech company ClassDojo recently announced a partnership with Ad Astra to create new content for primary school age children. Building on the success of its previous content partnerships on ‘growth mindset’ and ‘empathy’, ClassDojo has worked with Ad Astra to create a set of resources focused on ‘conundrums’ that involve ‘open-ended critical thinking and ethics challenges’. The resources are not intended to be used at Ad Astra itself, but will be released to teachers and schools later in 2018.

The ClassDojo partnership means that Ad Astra’s focus on problem-solving and ethical challenges will be mobilized into classrooms at potentially huge scale. ClassDojo already claims millions of users, and is fast expanding as a major social media platform and content platform for primary schools in many countries. The conundrums ClassDojo and Ad Astra have created pose problems that are considered foundational to ‘building liberal society’. This suggests that the kind of ‘liberal society’ assumed by entrepreneurs such as Elon Musk is a vision to be pursued through the mass inculcation of children’s critical thinking and problem-solving.

Given that Musk, like Amazon’s Bezos, is also investing in space exploration, their efforts in young children’s education raise significant questions about what kind of future world and liberal society they are imagining and seeking to build. What kind of child are they trying to construct to take part in a future society that, for Bezos and Musk, may well be distributed into space?

Super High Schools
High schools are the focus for Laurene Powell Jobs’ XQ Super School project, which is a ‘community of people mobilizing America to reimagine public high school’. The project previously awarded philanthropic funding through a competition to 18 US high schools, including Summit School, one of a chain sponsored by Facebook’s Mark Zuckerberg.

XQ Super School is not just a competition though—it is seeking to produce a glossy blueprint for the future of public high school itself in its new guise as a ‘community’ or ‘network’ of reform. Its updated website features a variety of resources, videos, guidance, partnership opportunities and other materials to stimulate imaginative thought across the education sector. It also now features highly developed learner goals for schools to aspire to, including problem-solving, collaboration, invention, and the cultivation of ‘growth mindset’–mindset being the preferred success-psychology of Silicon Valley right now, developed and propagated from Stanford University, the original academic home of many of the valley’s most successful entrepreneurs.

XQ Super School marketing. Image from XQ Super School

It is easy to view XQ Super School as a commercial takeover of public education. Perhaps more subtly, though, what XQ and others are accomplishing is a reimagining of high school through the cultural lens of Silicon Valley. These entrepreneurs are pursuing a future vision based on their own politics, their own psychological theories, and their own discourse—of community, of problem-solving, of invention, of growth mindset—and propelling it into the remaking of public education at large.

Intelligent Universities
The contemporary university is also being reimagined by the tech power-elite. Peter Thiel—the co-founder of PayPal alongside Elon Musk—for example, established the Thiel Fellowship as an alternative to higher education for ambitious young technology entrepreneurs. Higher education itself has become the target for a massive growth in the educational technology market, part of what David Berry terms the new ‘data-intensive university’.

The social media platform LinkedIn has become one of the most significant players in the data-intensive HE market. Since its acquisition by Microsoft for more than $26bn in 2016, Janja Komljenovic argues, LinkedIn has increasingly targeted the HE sector with features generated explicitly for students, graduates and universities. These include student profiles, university branded pages, and the capacity for students to search universities based on graduate career outcomes.

Komljenovic argues that ‘LinkedIn moves beyond the passivity of advertising to its users towards actively structuring digital labour markets, in which it strategically includes universities and its constituents’, and that it is using its ‘qualification altmetrics’ to build ‘a global marketplace for skills to run in parallel to, or instead of university degrees’.

In this sense, LinkedIn is fundamentally transforming and challenging HE by making students and universities into ‘prosumers’ in ‘data markets’, where ‘the data they produce is monetised and repackaged to become governing devices for their own sector’, and by reframing ‘meanings in the HE sector about quality of universities and degrees; graduates and their diplomas; and skills in relation to employability’. LinkedIn’s algorithms increasingly hold the potential to match individuals, skills and jobs as gaps are revealed in labour markets, and to press higher education to become more outcomes- and skills-focused as a result.

The 2018 higher education technology landscape. Infographic by Eduventures

Amazon, too, is seeking a position in higher education. It recently announced that it was installing Amazon Echo Dot devices in all student dormitories at St Louis University as part of its Alexa for Business offering. The move, it was reported, is ‘among the largest smart speaker deployments at a university and could help Amazon to establish smart speakers and the voice interface as typical among younger users’.

Beyond its clear business goals, the partnership marks Amazon’s entrance of AI into HE, with Alexa becoming an automated student experience assistant. It is hard to imagine that Alexa won’t have a place in Jeff Bezos’s preschool network too, not least because voice assistants may make a better interface than screens for children who have yet to learn to read or write. Amazon is entering public education at both the preschool and postsecondary phases, with massive implications for institutions, staff and students of all ages.

The FBI and the ‘ed-techlash’
The tech elite now making a power-grab for public education probably has little to fear from FBI warnings about education technology. The FBI is primarily concerned with potentially malicious uses of sensitive student information by cybercriminals. There is nothing criminal about creating Montessori-inspired preschool networks, using ClassDojo as a vehicle to build a liberal society, reimagining high school as personalized learning, or reshaping universities as AI-enhanced factories for producing labour market outcomes, unless, that is, you consider all of this a kind of theft of public education for private commercial advantage and influence.

The FBI intervention does, however, at least generate greater visibility for concerns about student data use. The tech power-elite of Zuckerberg, Musk, Thiel, Bezos, Powell Jobs and the rest is trying to reframe public education as part of the tech sector, and to subject it to ever-greater precision in measurement, prediction and intervention. These entrepreneurs are already experiencing a ‘techlash’ as people realize how much they have affected politics, culture and social life. Maybe the FBI warning is the first indication of a growing ‘ed-techlash’, as the public becomes increasingly aware of how the tech power-elite is seeking to remake public education to serve its own private interests.
