Learning from surveillance capitalism

Ben Williamson

Surveillance capitalism combines data analytics, business strategy, and human behavioural experimentation. Image: “Fraction collector” by proteinbiochemist

‘Surveillance capitalism’ has become a defining concept for the current era of smart machines and Silicon Valley expansionism. With educational institutions and practices increasingly focused on data collection and outsourcing to technology providers, key points from Shoshana Zuboff’s The Age of Surveillance Capitalism can help explore the consequences for the field of education. Mindful of the need for much more careful studies of the intersections of education with commercially-driven data-analytic strategies of ‘rendition’ and ‘behavioural modification’, here I simply outline a few implications of surveillance capitalism for how we think about education policy and about learning.

Data, science and surveillance
Zuboff’s core argument is that tech businesses such as Google, Microsoft, Facebook and so on have attained unprecedented power to monitor, predict, and control human behaviour through the mass-scale extraction and use of personal data. These aren’t especially novel insights—Evgeny Morozov has written a 16,000-word essay on the book’s analytical and stylistic shortcomings—but Zuboff’s strengths are in the careful conceptualization and documentation of some of the key dynamics that have made surveillance capitalism possible and practical. As James Bridle argued in his review of the book, ‘Zuboff has written what may prove to be the first definitive account of the economic – and thus social and political – condition of our age’.

Terms such as ‘behavioural surplus’, ‘prediction products’, ‘behavioural futures markets’, and ‘instrumentarian power’ provide a useful critical language for decoding what surveillance capitalism is, what it does, and at what cost. Some of the most interesting documentary material Zuboff presents includes precedents such as the radical behaviourism of B.F. Skinner and the ‘social physics’ of MIT Media Lab pioneer Sandy Pentland. For Pentland, quoted by Zuboff, ‘a mathematical, predictive science of society … has the potential to dramatically change the way government officials, industry managers, and citizens think and act’ (Zuboff, 2019, 433) through ‘tuning the network’ (435). Surveillance capitalism is not and never was simply a commercial and technical project; it is deeply rooted in human psychological research and in social experimentation and engineering. This combination of tech, science and business has enabled digital companies to create ‘new machine processes for the rendition of all aspects of human experience into behavioural data … and guarantee behavioural outcomes’ (339).

Zuboff has nothing to say about education specifically, but it’s tempting straight away to see a whole range of educational platforms and apps as condensed forms of surveillance capitalism (though we might just as easily invoke ‘platform capitalism’). The classroom behaviour monitoring app ClassDojo, for example, is a paradigmatic example of a successful Silicon Valley edtech business, with vast collections of student behavioural data that it is monetizing by selling premium features for use at home and offering behaviour reports to subscribing parents. With its emphasis on positive behavioural reinforcement through reward points, it represents a marriage of Silicon Valley design with Skinner’s aspiration to create ‘technologies of behaviour’. ClassDojo amply illustrates the combination of behavioural data extraction, behaviourist psychology and monetization strategies that underpin surveillance capitalism as Zuboff presents it.

Perhaps more pressingly from the perspective of education, however, Zuboff makes a number of interesting observations about ‘learning’ that are worth unpacking and exploring.

Learning divided
The first point is about the ‘division of learning in society’ (the subject of chapter 6, and drawing on her earlier work on the digital transformation of work practices). By this term Zuboff means to demarcate a shift in the ‘ordering principles’ of the workplace from the ‘division of labour’ to a ‘division of learning’ as workers are forced to adapt to an ‘information-rich environment’. Only those workers able to develop their intellectual skills are able to thrive in the new digitally-mediated workplace. Some workers are enabled (and are able) to learn to adapt to changing roles, tasks and responsibilities, while others are not. The division of learning, Zuboff argues, raises questions about (1) the distribution of knowledge and whether one is included in or excluded from the opportunity to learn; (2) which people, institutions or processes have the authority to determine who is included in learning, what they are able to learn, and how they are able to act on their knowledge; and (3) what is the source of power that undergirds the authority to share or withhold knowledge (181).

But this division of learning, according to Zuboff, has now spilled out of the workplace to society at large. The elite experts of surveillance capitalism have given themselves authority to know and learn about society through data. Because surveillance capitalism has access to both the ‘material infrastructure and expert brainpower’ (187) to transform human experience into data and wealth, it has created huge asymmetries in knowledge, learning and power. A narrow band of ‘privately employed computational specialists, their privately owned machines, and the economic interests for whose sake they learn’ (190) has ultimately been authorized as the key source of knowledge over human affairs, and empowered to learn from the data in order to intervene in society in new ways.

Sociology of education researchers have, of course, asked these kinds of questions for decades. They are ultimately questions about the reproduction of knowledge and power. But in the context of surveillance capitalism such questions may need readdressing, as authority over what constitutes valuable and worthwhile knowledge for learning passes to elite computational specialists, the commercial companies they work for, and even to smart machines. As data-driven knowledge about individuals grows in predictive power, decisions about what kinds of knowledge an individual learner should receive may even be largely decided by ‘personalized learning platforms’–as current developments in learning analytics and adaptive learning already illustrate. The prospect of smart machines as educational engines of social reproduction should be the subject of serious future interrogation.

Learning collectives
The second key point is about the ‘policies’ of smart machines as a model for human learning (detailed in chapter 14). Here Zuboff draws on a speech by a senior Microsoft executive talking about the power of combined cloud and Internet of Things technologies for advanced manufacturing and construction. In this context, Zuboff explains, ‘human and machine behaviours are tuned to pre-established parameters determined by superiors and referred to as “policies”’ (409). These ‘policies’ are algorithmic rules that

substitute for social functions such as supervision, negotiation, communication and problem solving. Each person and piece of equipment takes a place among an equivalence of objects, each one “recognizable” to the “system” through the AI devices distributed across the site. (409)

In this example, the ‘policy’ is both a set of algorithmic rules and a template for collective action, through which people and machines operate in unison to achieve maximum efficiency and optimal outcomes. Those ‘superiors’ with the authority to determine the policies, of course, are those same computational experts and machines that have benefitted from the division of learning. This gives them unprecedented powers to ‘apply policies’ to people, objects, processes and activities alike, resulting in a ‘grand confluence in which machines and humans are united as objects in the cloud, all instrumented and orchestrated in accordance with the “policies” … that appear on the scene as guaranteed outcomes to be automatically imposed, monitored and maintained by the “system”’ (410). These new human-machine learning collectives represent the future for many forms of work and labour under surveillance capitalism, according to Zuboff.
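To make this concrete, here is a deliberately minimal sketch of what a machine-readable ‘policy’ of this kind might look like. It is an illustration only: the names, parameters and logic below are hypothetical, not drawn from any system Zuboff describes.

```python
# Hypothetical sketch of a "policy" as algorithmic rules that tune
# behaviour to pre-established parameters set by 'superiors'.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    metric: str              # behaviour to monitor, e.g. a completion rate
    target: float            # the pre-established parameter
    corrective_action: str   # automatic intervention when off-target

def apply_policy(policy: Policy, observed: dict) -> Optional[str]:
    """Return a corrective action if observed behaviour drifts off-policy."""
    value = observed.get(policy.metric, 0.0)
    if value < policy.target:
        return policy.corrective_action  # imposed and monitored by the 'system'
    return None                          # behaviour is 'confluent' with policy

# A worker and a machine alike are 'recognizable objects' to the system:
policy = Policy("task_completion_rate", 0.95, "send_reminder_and_log")
print(apply_policy(policy, {"task_completion_rate": 0.82}))
```

The point of the sketch is that supervision, negotiation and problem solving are replaced by a comparison against a pre-set parameter and an automatically triggered correction.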

Zuboff then goes beyond human-machine confluences in the workplace to consider the instrumentation and orchestration of other types of human behaviour. Drawing parallels with the behaviourism of Skinner, she argues that digitally-enforced forms of ‘behavioral modification’ can operate ‘just beyond the threshold of human awareness to induce, reward, goad, punish, and reinforce behaviour consistent with “correct policies”’, where ‘corporate objectives define the “policies” toward which confluent behaviour harmoniously streams’ (413). Under conditions of surveillance capitalism, Skinner’s behaviourism and Pentland’s social physics spill out of the lab into homes, workplaces, and all the public and private spaces of everyday life–ultimately turning the world into a gigantic data science lab for social and behavioural experimentation, tuning and engineering.

And the final point she makes here is that humans need to become more machine-like to maximize such confluences. This is because machines connected to the IoT and the cloud work through collective action, each learning what they all learn, sharing the same understanding and ‘operating in unison with maximum efficiency to achieve the same outcomes’ (413). This model of collective learning, according to surveillance capitalists, outpaces human learning and can ‘empower us to better learn from the experiences of others’:

The machine world and the social world operate in harmony within and across ‘species’ as humans emulate the superior learning processes of the smart machines. … [H]uman interaction mirrors the relations of the smart machines as individuals learn to think and act by emulating one another…. In this way, the machine hive becomes the role model for a new human hive in which we march in peaceful unison toward the same direction based on the same ‘correct’ understanding in order to construct a world free of mistakes, accidents, and random messes. (414)

For surveillance capitalists human learning is inferior to machine learning, and urgently needs to be improved by gathering together humans and machines into symbiotic systems of behavioural control and management.

Learning in, from, or for surveillance capitalism?
These key points from The Age of Surveillance Capitalism offer some provocative starting places for further investigations into the future shape of education and learning amid the smart machines and their smart computational operatives. Three key points stand out.

1) Cultures of computational learning. One line of inquiry might be into the cultures of learning of those computational experts who have gained from the division of learning. And I mean this in two ways. The first concerns how these experts are educated, qualified and socialized to do data analytics and behaviour modification (if that is indeed what they do): how they are selected into the right programs, and what kinds of ongoing training confer the privilege of learning about society through mass-scale behavioural data. These are questions about new and elite forms of workforce preparation and professional education. In short, how is one educated to become a surveillance capitalist?

The other way of approaching this concerns what is actually involved in ‘learning’ about society through its data. This is both a pedagogic and a curricular question. Pedagogically, education research would benefit from a much better understanding of the kinds of workplace education programmes underway inside the institutions of surveillance capitalism. From a curricular perspective, this would also require an engagement with the kinds of knowledge assumptions and practices that flow through such spaces. As mentioned earlier, sociology of education has long been concerned with how aspects of culture are ‘selected’ for reproduction by transmission through education. As tech companies and related academic labs become increasingly influential, they are producing new ‘social facts’ that might affect how people both within and outside those organizations come to understand the world. They are building new knowledge based on a computational, mathematical, and predictive style of thinking. What, then, are the dynamics of knowledge production that generate these new facts, and how do they circulate to affect what is taught and learnt within these organizations? As Zuboff notes, pioneers such as Sandy Pentland have built successful academic teaching programs at institutes like MIT Media Lab to reproduce knowledge practices such as ‘social physics’.

2) Human-machine learning confluences. The second key issue is what it means to be a learner working in unison with the Internet of Things. Which individuals are included in the kind of learning that is involved in becoming part of this ‘collective intelligence’? When smart machines and human workers are orchestrated together into ‘confluence’, and human learning is supposed to emulate machine learning, how do our existing theories and models of human learning hold up? Machine learning and human learning are not obviously comparable, and the tech firms surveyed by Zuboff appear to hold quite robotic notions of what constitutes learning. Yet if the logic of extreme instrumentation of working environments develops as Zuboff anticipates, this still raises significant questions about how one learns to adapt to work in unison with the smart machines, who gets included in this learning, who gets excluded, how those choices and decisions are made, and what kinds of knowledge and skills are gained from inclusion. Automation is likely to lead to both further divisions in learning and more collective learning at the same time–with some individuals able to exercise considerable autonomy over the networks they’re part of, and others performing the tasks that cannot yet be automated.

In the context of concerns about the role of education in relation to automation, intergovernmental organizations such as the OECD and World Economic Forum have begun encouraging governments to focus on ‘noncognitive skills’ and ‘social-emotional learning’ in order to pair human emotional intelligence with the artificial cognitive intelligence of smart machines. Those unique human qualities, so the argument goes, cannot be automated, whereas routine cognitive tasks can. Classroom behaviour monitoring platforms such as Classcraft have emerged to measure those noncognitive skills and offer ‘gamified’ positive reinforcement for the kind of ‘prosocial behaviours’ that may enable students to thrive in a future of increased automation. Being emotionally intelligent, by these accounts, would seem to allow students to enter into ‘confluent’ relations with smart machines. Rather than competing with automation, they would complement it as collective intelligence. ‘Human capital’ is no longer a sufficient economic goal for education to pursue—education needs to produce ‘human-computer capital’ too.

3) Programmable policies. A third line of inquiry would be into the idea of ‘policies’. Education policy studies have long engaged critically with the ways government policies circumscribe ‘correct’ forms of educational activity, progress, and behaviour. With the advance of AI-based technologies into schools and universities, policy researchers may need to start interrogating the policies encoded in software as well as the policies inscribed in government texts. These new programmable policies potentially have a much more direct influence on ‘correct’ behaviours and maximum outcomes, since they instrument and orchestrate activities, tasks and behaviours in educational institutions.

Moreover, researchers might shift their attention to the kind of programmable policies that are enacted in the instrumented workplaces where, increasingly, much learning happens. Tech companies have long bemoaned the inadequacy of school curricula and university degrees in delivering the labour market skills they require. With the so-called ‘unbundling’ of the university in particular, higher education may be moving further towards ‘demand driven’ forms of professional learning and on-the-job industry training provided by private companies. When education moves into the smart workplace, learning becomes part of the confluence of humans and machines, where all are equally orchestrated by the policies encoded in the relevant systems. Platforms and apps using predictive analytics and talent matching algorithms are already emerging to link graduates to employers and job descriptions. The next step, if we accept the likely direction of travel of surveillance capitalism, might be to match students directly to smart machines on-demand as part of the collective human-machine intelligence required to achieve maximum efficiency and optimized outcomes for capital accumulation. In this scenario, the computer program would be the dominant policy framework for graduate employability, actively intervening in professional learning by sorting individuals into appropriate networks of collective learning and then tuning those networks to achieve best effects.

All of this raises one final question, and a caveat. First the caveat. It’s not clear that ‘surveillance capitalism’ will endure as an adequate explanation for the current trajectories of high-tech societies. Zuboff’s account is not uncontested, and it’s in danger of becoming an explanatory shortcut for deployment anywhere that data analytics and business interests intersect (much as ‘neoliberalism’ is sometimes invoked as a shortcut for privatization and deregulation). The current direction of travel and future potential described by Zuboff are certainly not desirable, and should not be accepted as inevitable. If we do accept Zuboff’s account of surveillance capitalism, though, the remaining question is whether we should be addressing the challenges of learning in surveillance capitalism, or the potential for whole education systems to learn from surveillance capitalism and adapt to fit its template. Learning in surveillance capitalism at least assumes a formal separation of education from these technological, political and economic conditions. Learning from it, however, suggests a future where education has been reformatted to fit the model of surveillance capitalism–indeed, where a key purpose of education is for surveillance capitalism.

Zuboff, S. 2019. The Age of Surveillance Capitalism: The fight for a human future at the new frontier of power. London: Profile.

Education for the robot economy

Ben Williamson

Robotization is driving coding and emotional skills development in education. Image by Saundra Castaneda

Automation, coding, data and emotions are the new keywords of contemporary education in an emerging ‘robot economy’. Critical research on education technology and education policy over the last two decades has unpacked the connections of education to the so-called ‘knowledge economy’, particularly as it was projected globally into education policy agendas by international organizations including the OECD, World Economic Forum and World Bank. These organizations, and others, are now shifting the focus to artificial intelligence and the challenges of automation, and pushing for changes in education systems to maximize the new economic opportunities of robotization.

Humans & robots as sources of capital
In the language of the knowledge economy, the keywords were globalization, innovation, networks, creativity, flexibility, multitasking and multiskilling—what social theorists variously called ‘NewLiberalSpeak’ and the ‘new spirit of capitalism’. With knowledge a new source of capital, education in the knowledge economy was therefore oriented towards the socialization of students into the practices of ICT, communication, and teamwork that were seen as the necessary requirements of the new ‘knowledge worker’.

In the knowledge economy, learners were encouraged to see themselves as lifelong learners, constantly upskilling and upgrading themselves, and developing metacognitive capacities and the ability to learn how to learn in order to adapt to changing economic circumstances and occupations. Education policy became increasingly concerned with cultivating the human resources or ‘human capital’ necessary for national competitive advantage in the globalizing economy. Organizations such as the OECD provided the international large-scale assessment PISA to enable national systems to measure and compare their progress in the global knowledge economy, treating young people’s test scores as indicators of human capital development.

The steady shift of the knowledge economy into a robot economy, characterized by machine learning, artificial intelligence, automation and data analytics, is now bringing about changes in the ways that many influential organizations conceptualize education as it moves towards the 2020s. Although this is not an epochal or decisive shift in economic conditions, but rather a slow metamorphosis involving machine intelligence in the production of capital, it is bringing about fresh concerns with rethinking the purposes and aims of education as global competition is increasingly linked to robot capital rather than human capital alone.

Automation
According to many influential organizations, it is now inevitable that automated technologies, artificial intelligence, robotization and so on will pose a major threat to many occupations in coming years. Although the evidence of automation causing widespread technological unemployment is contested, many readings of this evidence adopt a particularly determinist perspective. The robots are coming, the threat of technology is real and unstoppable, and young people are going to be hit hardest because education is largely still socializing them for occupations that the robots will replace.

The OECD has produced findings reporting on the skill areas that automation could replace. A PricewaterhouseCoopers report concluded that ‘less well educated workers could be particularly exposed to automation, emphasising the importance of increased investment in lifelong learning and retraining’. Pearson and Nesta, too, collaborated on a project to map the ‘future skills’ that education needs to promote to prepare nations for further automation, globalization, population ageing and increased urbanization over the next 10 years. The think tank Brookings has explicitly stated, ‘To develop a workforce prepared for the changes that are coming, educational institutions must de-emphasize rote skills and stress education that helps humans to work better with machines—and do what machines can’t’.

For most of these organizations, the solution is not to challenge the encroachment of automation on jobs, livelihoods and professional communities. Instead, the robot economy can be even further optimized by enhancing human capabilities through reformed institutions and practices of education. As such, education is now being positioned to maximize the massive economic opportunities of robotization.

Two main conclusions flow from the assumption that young people’s future jobs and labour market prospects are under threat, and that the future prospects of the economy are therefore uncertain, unless education adapts to the new reality of automation. The first is that education needs to de-emphasize rote skills of the kind that are easy for computers to replace and stress instead more digital upskilling, coding and computer science. The second is that humans must be educated to do things that computerization cannot replace, particularly by upgrading their ‘social-emotional skills’.

Coding
Learning to code, programming and computer science have become the key focus for education policy and curriculum reform around the world. Major computing corporations such as Google and Oracle have invested in coding programs alongside venture capitalists and technology philanthropists, while governments have increasingly emphasized new computing curricula and encouraged the involvement of both ed-tech coding products and not-for-profit coding organizations in schools.

The logic of encouraging coding and computer science education in the robot economy is to maximize the productivity potential of the shift to automation and artificial intelligence. In the UK, for example, artificial intelligence development is at the centre of the government’s industrial strategy, which made computer programming in schools an area for major investment. Doing computer science in schools, it is argued, equips young people not just with technical coding skills, but also new forms of computational thinking and problem-solving that will allow them to program and instruct the machines to work on their behalf.

This emphasis on coding is also linked to wider ideas about digital citizenship and entrepreneurship, with the focus on preparing children to cope with uncertainty in an AI age. A recent OECD podcast on AI and education, for example, put coding, entrepreneurship and digital literacy together with concerns over well-being and ‘learning to learn’. Coding our way out of technological unemployment, by upskilling young people to program, work with, and problem-solve with machines, then, is only one of the proposed solutions for education in the robot economy.

Emotions
The other solution is ‘social-emotional skills’. Social-emotional learning and skills development is a fast-growing policy agenda with significant buy-in by international organizations. The World Economic Forum has projected a future vision for education that includes the development and assessment of social-emotional learning through advanced technologies. Similarly, the World Bank has launched a program of international teacher assessment that measures the quality of instruction in socioemotional skills.

The OECD has perhaps invested the most in social-emotional learning and skills, as part of its long-term ‘Skills for Social Progress’ project and its Education 2030 framework. The OECD’s Andreas Schleicher is especially explicit about the perceived strategic importance of cultivating social-emotional skills to work with artificial intelligence, writing that ‘the kinds of things that are easy to teach have become easy to digitise and automate. The future is about pairing the artificial intelligence of computers with the cognitive, social and emotional skills, and values of human beings’.

Moreover, he casts this in clearly economic terms, noting that ‘humans are in danger of losing their economic value, as biological and computer engineering make many forms of human activity redundant and decouple intelligence from consciousness’. As such, human emotional intelligence is seen as complementary to computerized artificial intelligence, with each possessing its own economic value. Indeed, pairing human and machine intelligence would maximize economic potential.

Intuitively, it makes sense for schools to focus on the social and emotional aspects of education, rather than wholly on academic performance. Yet this seemingly humanistic emphasis needs to be understood as part of the globalizing move by the OECD and others to yet again reshape the educational agenda to support economic goals.

Data
The fourth keyword is data, and it refers primarily to how education must be ever more comprehensively measured to assess progress in relation to the economy. Just as the OECD’s PISA has become central to measuring progress in the knowledge economy, the OECD’s latest international survey, the Study of Social and Emotional Skills—a computer-based test for 10- and 15-year-olds that will report its first findings in 2020—will allow nations and cities to assess how well their ‘human capital’ is equipped to complement the ‘robot capital’ of automated intelligent machines.

If the knowledge economy demanded schools help produce measurable quantities of human capital, in the robot economy schools are made responsible for helping the production of ‘human-computer capital’–the value to be derived from hybridizing human emotional life with AI. The OECD has prepared the test to measure and compare data on how well countries and cities are progressing towards this goal.

While, then, automation does not immediately pose a threat to teachers–unless we see AI-based personalized learning software as a source of technological unemployment in the education sector–it is likely to affect the shape and direction of education systems in more subtle ways in years to come. The keywords of the knowledge economy have been replaced by the keywords of the robot economy. Even if robotization does not pose an immediate threat to the future jobs and labour market prospects of students today, education systems are being pressured to change in anticipation of this economic transformation.

The knowledge economy presented urgent challenges for research; its metamorphosis into an emergent robot economy, driving policy demands for upskilling students with coding skills and upgraded emotional competencies, demands much further research attention too.


Learning lessons from data controversies

Ben Williamson

This is a talk delivered at OEB2018 in Berlin on 7 December 2018, with links to key sources. A video recording is also available (from about the 51-minute mark).

Ten years ago ‘big data’ was going to change everything and solve every problem—in health, business, politics, and of course education. But, a decade later, we’re now learning some hard lessons from the rapid expansion of data analytics, algorithms, and AI across society.

Data controversies became the subject of international government attention in 2018

Data doesn’t seem quite so ‘cool’ now that it’s at the centre of some of society’s most controversial events. By ‘controversy’ here I mean those moments when science and technical innovation come into conflict with public or political concerns.

Internationally, politicians have already begun to ask hard questions, and are looking for answers to recent data controversies. The current level of concern about companies like Facebook, Google, Uber, Huawei, Amazon and so on is now so acute that some commentators say we’re witnessing a ‘tech-lash’—a backlash of public opinion and political sentiment to the technology sector.

The tech sector is taking this on board: the Center for Humane Technology, for example, is seeking to stop tech from ‘hijacking our minds and society’. Universities that nurture the main tech talent, such as MIT, have begun to recognize their wider social responsibility and are teaching their students about the power of future technologies, and their potentially controversial effects. The AI Now research institute has just launched a new report on the risks of algorithms, AI and analytics, calling for tougher regulation.

Print article on AI and robotization in teaching, from the Times Educational Supplement, 26 May 2017

We’re already seeing indications in the education media of a growing concern that AI and algorithms are ‘gonna get you’—as the teachers’ magazine the Times Educational Supplement put it last year.

In the US, the FBI even issued a public service announcement warning that the collection of sensitive data by ‘edtech’ could result in ‘social engineering, bullying, tracking, identity theft, or other means for targeting children’. An ‘edtech-lash’ has begun.

The UK Children’s Commissioner has also warned of the risks of ‘datafying children’ both at home and at school. ‘We simply do not know what the consequences of all this information about our children will be,’ she argued, ‘so let’s take action now to understand and control who knows what about our children’.

And books like Weapons of Math Destruction and The Tyranny of Metrics have become surprise non-fiction successes, both drawing attention to the damaging effects of data use in schools and universities.

So, I want to share some lessons from data controversies in education in the last couple of years—things we can learn from to avoid damaging effects in the future.

Software can’t ‘solve’ educational ‘problems’ 
One recent moment of data controversy was the protest by US students against the Mark Zuckerberg-supported Summit Public Schools model of ‘personalized learning’. Summit originated as a charter school chain; its adaptive learning platform—partly built by Facebook engineers—has since scaled up across many high school sites in the US.

But in November, students staged walkouts in protest at the educational limitations and data privacy implications of the personalized learning platform. Student protestors even wrote a letter to Mark Zuckerberg in The Washington Post, claiming assignments on the Summit Learning Platform required hours alone at a computer and didn’t prepare them for exams.

They also raised flags about the huge range of personal information the Summit program collected without their knowledge or consent.

‘Why weren’t we asked about this before you and Summit invaded our privacy in this way?’ they asked Zuckerberg. ‘Most importantly’, they wrote, ‘the entire program eliminates much of the human interaction, teacher support, and discussion and debate with our peers that we need in order to improve our critical thinking…. It’s severely damaged our education.’

So our first lesson is that education is not entirely reducible to a ‘math problem’, nor can it be ‘solved’ with software—it exceeds whatever data can be extracted from teaching and learning processes. For many educators and students alike, education is more than the numbers in an adaptive, personalized learning platform, and includes non-quantifiable relationships, interactions, discussion, and thinking.

Global edtech influence raises public concern
Google, too, has become a controversial data company in education. Earlier this year it launched its Be Internet Awesome resources for digital citizenship and online safety. But the New York Times questioned whether the public should accept Google as a ‘role model’ for digital citizenship and good online conduct when it is seriously embattled by major data controversies.

The New York Times questioned Google positioning itself as a trusted authority in schools

Through its education services, it’s also a major tracker of student data and is shaping its users as lifelong Google customers, said the Times. Being ‘Internet Awesome’ is also about buying into Google as a user and consumer.

In fact, Google was a key target of a whole series of Times articles last year revealing Silicon Valley influence in public education. Silicon Valley firms, it appears, have become new kinds of ‘global education ministries’—providing hardware and software infrastructure, online resources and apps, curricular materials and data analytics services to make public education more digital and data-driven.

This is what we might call ‘global policymaking by digital proxy’, as the tech sector influences public education at speeds and international scale that conventional policy approaches cannot achieve.

The lesson here is that students, the media and public may have ideas, perceptions and feelings about technology, and the companies behind it, that are different to companies’ aspirations—claims of social responsibility compete with feelings of ‘creepiness’ about commercial tracking and concern about private sector influence in public education.

Data leaks break public trust
Data security and privacy is perhaps the most obvious topic for a data controversy lesson—but it remains an urgent one as educational institutions and companies are increasingly threatened by cybersecurity attacks, hacks, and data breaches.

The K-12 Cyber Incident Map has catalogued hundreds of school data security incidents

The K-12 Cyber Incident Map is doing great work in the US to catalogue school hacks and attacks, importantly raising awareness in order to prompt better protection. And then there’s the alarming news of really huge data leaks from the likes of Edmodo and Schoolzilla—raising fears that this will only get worse as more data is collected and shared about students.

The key lesson here is that data breaches and student privacy leaks also break students’, parents’, and the public’s trust in education companies. This huge increase in data security threats risks exposing the ed-tech industry to media and government attack. We’re supposed to protect children, they might say, but we’re exposing their information to the dark web instead!

Algorithmic mistakes & encoded politics cause social consequences 
Then there’s the problem of educational algorithms being wrong. Earlier this year, the Educational Testing Service (ETS) revealed results from a check of whether international students had cheated on an English language proficiency test. To discover how many students had cheated, ETS used voice biometrics to analyze tens of thousands of recorded oral tests, looking for repeated voices.

What did it find? According to reports, the algorithm was getting the voice matching wrong 20% of the time. That’s a huge error rate, with massive consequences.

Around 5000 international students in the UK wrongly had their visas revoked and were threatened with deportation, all related to the UK’s ‘hostile environment’ immigration policy. Many have subsequently launched legal challenges, and many have won.
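The scale of the harm follows from simple arithmetic. A rough illustration (the total number of recordings below is an assumption, since reports said only that tens of thousands were analyzed):

```python
# Illustrative arithmetic only: the number of recordings is assumed.
recordings_analyzed = 50_000   # hypothetical total ("tens of thousands")
error_rate = 0.20              # reported voice-matching error rate

wrong_matches = int(recordings_analyzed * error_rate)
print(f"Potentially wrong matches: {wrong_matches:,}")  # 10,000 at this scale
# Even if only some errors led to enforcement action, numbers of this order
# are consistent with the roughly 5,000 wrongful visa revocations reported.
```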

Data lesson 4, then, is that poor quality algorithms and data can lead to life-changing outcomes and consequences for students—even raising the possibility of legal challenges to algorithmic decision-making. This example also shows the problem with ascribing too much objectivity and accuracy to data and algorithms—in reality, they’re the products of ‘humans in the room’, whose assumptions, potential biases and mistakes can be coded into the software that’s used to make life-changing decisions.

Let’s not forget, either, that the test wouldn’t even have existed had the UK government not been seeking to root out and deport unwanted immigrants—the algorithm was programmed with some nasty politics.

Transparency, not algorithmic opacity, is key to building trust with users
The next lesson is about secrecy and transparency. The UK government’s Nudge Unit, for example, revealed this time last year that it had piloted a school-evaluating algorithm for inspection, one that could identify from a school’s existing data where it might be failing.

Many headteachers and staff are already fearful of the human school inspector. The automated school-inspecting algorithm secretly crawling around in their servers and spreadsheets, if not their corridors, offices and classrooms, hasn’t made them any less concerned. Especially as it can only rate their performance from the numbers, rather than qualitatively assessing the impact of local context on how they perform.

A spokesperson for the National Association of Headteachers said to BBC News, ‘We need to move away from a data-led approach to school inspection. It is important that the whole process is transparent and that schools can understand and learn from any assessment. Leaders and teachers need absolute confidence that the inspection system will treat teachers and leaders fairly’.

The lesson to take from the Nudge Unit experiment is that secrecy and lack of transparency in the use of data analytics and algorithms do not win trust in the education sector—teacher unions and the education press are likely to reject AI and algorithmic assistance if it is not believed to be transparent, fair, or context-sensitive.

Psychological surveillance raises fears of emotional manipulation
My last three lessons focus on educational data controversies that are still emerging. These relate to the idea that the ‘Internet of Bodies’ has arrived, in the shape of devices for tracking the ‘intimate data’ of your body, emotions and brain.

For example, ‘emotion AI’ is emerging as a potential focus of educational innovation—such as biometric engagement sensors, emotion learning analytics, and facial vision algorithms that can determine students’ emotional response to teaching styles, materials, subjects, and different teachers.

Emotion AI is being developed for use in education, according to EdSurge

Among others, EdSurge and the World Economic Forum have endorsed systems to run facial analytics and wearable biometrics of students’ emotional engagement, legitimizing the idea that invisible signals of learning can be detected through skin.

Emotion AI is likely to be controversial because it prioritizes the idea of constant psychological surveillance—the monitoring of intimate feelings and perhaps intervening to modify those emotions. Remember when Facebook got in trouble for its ‘emotional contagion’ study? Fears of emotional manipulation inevitably follow from emotion AI–and the latest AI Now report highlighted this as a key area of concern.

Facial coding and engagement biometrics with emotion AI could even be seen to treat teaching and learning as ‘infotainment’—pressuring teachers to ‘entertain’ and students to appear ‘engaged’ when the camera is recording or the biometric patch is attached.

‘Reading the brain’ poses risks to human rights 
The penultimate lesson is about brain-scanning with neurotechnology. Educational neurotechnologies are already beginning to appear—for example, the BrainCo Focus One brainwave-sensing neuroheadset and application spun out of Harvard University.

Such educational neurotechnologies are based on the idea that the brain has become ‘readable’ through wearable headsets that can detect neural signals of brain activity, then convert those signals into digital data for storage, comparison, analysis and visualization via the teacher’s brain-data dashboard. It’s a way of seeing through the thick protective barrier of the skull to the most intimate interior of the individual.

The BrainCo Focus One neuroheadset reads EEG signals of learning and presents them on a dashboard
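BrainCo’s actual signal processing is proprietary, so any reconstruction is speculative. As a rough illustration of how raw EEG might become dashboard-ready numbers, the sketch below computes one crude, commonly cited engagement proxy, the ratio of beta-band to theta-band power, from simulated data:

```python
# Speculative sketch: a crude 'attention' proxy from EEG band powers.
# BrainCo's real pipeline is proprietary; the data here is simulated noise.
import numpy as np

fs = 256                                  # assumed sampling rate in Hz
rng = np.random.default_rng(0)
eeg = rng.normal(size=10 * fs)            # pretend 10 seconds of raw EEG

freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2     # power spectrum of the signal

def band_power(lo: float, hi: float) -> float:
    return float(power[(freqs >= lo) & (freqs < hi)].sum())

theta = band_power(4, 8)                  # theta band, 4-8 Hz
beta = band_power(13, 30)                 # beta band, 13-30 Hz
attention_index = beta / theta            # higher = 'more engaged', supposedly
print(f"Attention index: {attention_index:.2f}")  # feeds the dashboard
```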

But ‘brain surveillance’ is just the first step as ambitions advance to not only read from the brain but to ‘write back’ into it or ‘stimulate’ its ‘plastic’ neural pathways for more optimal learning capacity.

Neurotechnology is going to be extraordinarily controversial, especially as it is applied to scanning and sculpting the plastic learning brain. ‘Reading’ the brain for signals, or seeking to ‘write back’ into the plastic learning brain, raises huge ethical and human rights challenges—‘brain leaks’, neural security, cognitive freedom, neural modification—with prominent neuroscientists, neurotechnologists and neuroethics councils already calling for new frameworks to protect the readable and writable brain.

Genetic datafication could lead to dangerous ‘Eugenics 2.0’
I’ve saved the biggest controversy for last: genetics, and the possibility of predicting a child’s educational achievement, attainment, cognitive ability, and even intelligence from DNA. Researchers of human genomics now have access to massive DNA datasets in the shape of ‘biobanks’ of genetic material and information collected from hundreds of thousands of individuals.

The clearest sign of the growing power of genetics in education was the recent publication of a huge, million-sample study of educational attainment, which concluded that the number of years you spend in education can be partly predicted genetically.
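The mechanism behind such predictions is the polygenic score: a weighted sum over an individual’s genetic variants, each contributing a tiny effect estimated from these large samples. A minimal sketch, with invented variants and effect sizes:

```python
# Minimal sketch of a polygenic score: a weighted sum of allele counts.
# Variants and effect sizes are invented; real scores combine up to
# millions of variants, each with a tiny estimated effect.
variant_effects = {"rs0001": 0.020, "rs0002": -0.010, "rs0003": 0.015}
allele_counts = {"rs0001": 2, "rs0002": 1, "rs0003": 0}  # 0, 1 or 2 copies

polygenic_score = sum(
    effect * allele_counts.get(variant, 0)
    for variant, effect in variant_effects.items()
)
print(f"Polygenic score: {polygenic_score:+.3f}")
# As the study itself makes clear, such scores only partly predict
# years of education; they are probabilistic, not deterministic.
```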

The study of the ‘new genetics of intelligence’, based on very large sample studies and incredibly advanced biotechnologies, is also already leading to ever-stronger claims of the associations between genes, achievement and intelligence. And these associations are already raising the possibility of new kinds of markets of genetic IQ testing of children’s mental abilities.

Many of you will also have heard the news last week that a scientist claimed to have created the first ever genetically edited babies, raising a massive debate about re-programming human life itself.

Basically, it is becoming more and more possible to study digital biodata related to education, to develop genetic tests to measure students’ ‘mental rating’, and perhaps even to recode, edit or rewrite the instructions for human learning.

It doesn’t get more controversial than genetics in education. So what data lesson can we learn? Genetic biodata risks reproducing dangerous ideas about the biologically determined basis of achievement, while genetic ‘intelligence’ tests are a step towards genetic selection, brain-rating, and gene-editing for ‘smarter kids’—raising risks of genetic discrimination, or ‘Eugenics 2.0’.

Preventing data controversies 
So why are these data lessons important? They’re important because governments are increasingly anxious to sort out the messes that overenthusiastic data use and misuse have got societies into.

In the UK we have a new government centre for data ethics, and a current inquiry and call for evidence on data ethics in education. Politicians are now asking hard questions about algorithmic bias in edtech, accuracy of data models, risk of data breaches in analytics systems, and the ethics of surveillance of students.

Data and its controversies are under the microscope in 2018 for reasons that were unimaginable during the big data hype of 2008. Data in education is already proving controversial too.

In Edinburgh, we are trying to figure out how to build productive collaborations between social science researchers of data, learning scientists, education technology developers, and policymakers—in order to pre-empt the kind of controversies that are now prompting politicians to begin asking those hard questions.

By learning lessons from past controversies with data in education, and anticipating the controversies to come, we can ensure we have good answers to these hard questions. We can also ensure that good, ethical data practices are built in to educational technologies, hopefully preventing problems before they become full-blown public data controversies.


The app store for higher education

Ben Williamson

A government competition aims to make choosing a degree as easy as swiping a smartphone. Image by Garry Knight

App stores are among the most significant sites of contemporary digital culture. Commercial environments where consumers choose digital products, they are also important spaces where app producers and platform businesses first come into contact with users. As the shopping centres of platform capitalism, app stores enable users to become sources of data collection and value extraction.

Apps for higher education have become a key focus of government investment, and have the potential to become significant intermediaries bringing students, applicants and other publics into contact with HE data. This post continues ongoing research documenting the expanding data infrastructure of HE in the UK, which has already explored the policy context, data-led regulatory approach, data-centred sector agencies, and involvement of data-driven edu-businesses. New apps for shaping student choice bring small businesses, edtech startups, and the not-for-profit sector into the expanding infrastructure, and are introducing the idea that student choice can be shaped (or ‘nudged’) through the interactive presentation of data on apps, price-comparison websites, and social media-style services that indicate the quality of a provider’s performance.

An ‘information revolution’ in student choice
Universities Minister Sam Gyimah announced a competition in summer 2018 for small businesses to create new apps or online services to assist young people in making choices about going to university. Controversially to many in the sector, he claimed the competition would allow tech companies to use graduate earnings data—taken from the Longitudinal Educational Outcomes (LEO) dataset—to ‘create a MoneySuperMarket for students, giving them real power to make the right choice’.

A budget of £125,000 was allocated to support the winning entrants, which were expected to produce working prototypes during September and October. A few months later he announced five shortlisted companies, an additional £300,000 investment for two of the products, and the release of ‘half a million cells of data showing graduate outcomes for every university–more than has ever been published before’.

‘This is the start of an information transformation for students, which will revolutionise how students choose the right university for them’, said Gyimah. ‘I want this to pave the way for a greater use of technology in higher education, with more tools being made available to boost students’ choices and prospects’.

In other words, the competition is just a prototype of what is still to come–a government-backed marketplace of apps, platforms and other products and services to enable applicants, students and graduates to produce, interact with, and use HE data. Elsewhere, Gyimah was reported saying there is ‘clearly a market opportunity’ for services like this, even for those not awarded part of the £300,000 funding from the Department for Education.

Although the competition at this stage has only generated prototypes–only two of which will be more fully developed–all of the companies have already developed a web presence for their apps and products. A Department for Education video tweeted from the official finalists’ event also offers some glimpses of these prototype products. This allows us to see how an expanding ‘app store’ for student choice might extend the data infrastructure in new ways.

MyEd UniPlaces app
MyEd is an existing provider of services designed to enhance choices in education institutions.

MyEd provides educational choice-enhancement services. Image from https://myed.com/

MyEd already runs services supporting parent choice in nurseries, schools, colleges and universities, in particular by aggregating key data and previous reviews to enable easy user comparison and shortlisting of providers. According to its website:

Our unique reviews process is an intelligence data analysis system that has been designed to provide our users with the most relevant and digestible information to help them make the best decisions on their investment in education.

For the competition, MyEd proposed a UniPlaces app, which it pitched as a ‘web-based compatibility checker’ to assist applicants in making HE choices. Driven by a questionnaire capturing students’ achievements and preferences, the app then seeks to match them to HE options that are linked to certain job prospects.

As an established company, MyEd already compiles information from a range of sources, including institutions, government departments, published performance tables, and agencies such as HESA and the QAA. In these ways, it is emblematic of the shift toward marketized education and choice across all sectors–from early years to HE–in recent education policy.

Uni4U
The unique aspect of the Uni4U proposal is that it was designed by students, though the organization was founded by an entrepreneur with support from the NatWest Business Accelerator.

Uni4U is gathering additional data by surveying students and school children online. Image from http://uni4u.co.uk/

Like the other apps, Uni4U supports HE choice through the graphical presentation of data about universities, including their location, campus facilities, and graduate earnings.

While in prototype phase, Uni4U produced a website featuring two online surveys to gather further data from future students and current students. It invites future students to identify what would most help them make university choices, and current students to rate the quality of their existing provider and the support they gained in making their initial choice.

Coursematch
Coursematch presents itself on its website as a fully functioning app available via the Apple App Store and Google Play, with a claimed 25,000 users. It was upgraded to its current form in May 2018 and has been marketing itself on social media as ‘The #1 social network to help find your perfect university course and meet future friends!’

Coursematch is a social network for university choice, already available on app stores. Image from https://coursematch.io/

Perhaps the most notable aspect of Coursematch is its claim to use machine learning to make the most effective matches between students and courses, twinned with a ‘swipeable’ interface design adopted from dating apps.

‘Our new look app is going to make it easier than ever to browse University courses, and find your perfect course!’ read a recent promotional Coursematch tweet. ‘We are bringing in AI techniques to recommend a selection of courses right for you, to browse through with just a simple swipe’.

Potential students are provided with projected possible earnings based on the average lower quartile, median and upper quartile for particular courses, and can also interact through the app with existing students on those courses. Coursematch is already supported by Jisc, the HE digital learning agency, and Santander Universities.
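For illustration, quartile figures of this kind are simple to derive from a salary dataset. A minimal sketch with invented salaries for a single course, not Coursematch’s actual data or method:

```python
# Illustrative sketch: the quartile earnings figures a course page might
# display, computed from invented graduate salaries for one course.
import statistics

graduate_salaries = [18_500, 21_000, 23_500, 25_000, 27_500, 30_000, 36_000]

# statistics.quantiles with n=4 returns the three cut points that divide
# the data into quarters: lower quartile, median, upper quartile.
q1, median, q3 = statistics.quantiles(graduate_salaries, n=4)
print(f"Lower quartile: £{q1:,.0f}")
print(f"Median:         £{median:,.0f}")
print(f"Upper quartile: £{q3:,.0f}")
```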

AccessEd–ThinkUni app
The ThinkUni app comes from the not-for-profit sector, with AccessEd aiming to ‘increase access to university for young people from under-served backgrounds globally. We are creating a global network of partner organisations committed to this mission, sharing with them our expertise, resources and support’.

AccessEd supports access to university for young people from under-served backgrounds. Image from https://access-ed.ngo/

Pitched as a ‘personalized careers assistance’ service that is easy for students to use on their smartphones, ThinkUni builds on AccessEd’s previous university access work–including its ‘Brilliant Club’, the UK’s largest university access programme for 11-18 year olds.

According to the co-founder and executive chair of AccessEd, existing sources such as UCAS are huge databases and glorified spreadsheets that make decision-making difficult. With ThinkUni, students can instead access details such as which universities they could choose based on their school exam grades, and how long it would take to pay back their student loan based on a projected graduate salary.

The Profs—That’s Life
That’s Life is the most distinctive of the competition finalists–it’s an education and careers simulator produced by The Profs, a successful private HE tutoring company.

The Profs is a successful HE private tutoring company. Image from https://www.theprofs.co.uk/

The idea for the service is that it provides a ‘gamified’ simulation of the outcomes of making certain kinds of decisions, presenting projected data such as students’ future levels of happiness, work-life balance and income, and showing them the impact of their life and course choices, including not going to university at all.

The gamification and simulation aspects of That’s Life demonstrate how the logics of video games could be employed to enhance student choice, notably by offering students opportunities to experiment with different pathways and problem-solving strategies. But the app’s origins in the private HE tutoring sector are also indicative of how private sector and alternative providers are being actively welcomed into public university service provision.

Scaling up the prototype
Whether apps such as those supported by government–or the earnings potential they present–actually influence student choice remains for now an empirical question. Another question is whether initial government investment will enable these app producers to scale their products. In a way, Sam Gyimah is acting like a Silicon Valley venture capitalist, seed-funding early-stage prototypes that bear a high risk of failure.

However, one existing example of a HE-facing app suggests that appetite for real venture capital investment in such products may be growing. Debut is a smartphone app for talent-matching graduates to corporate employers and labour markets. Graduate users create a profile—as with other social media platforms—and complete a psychometric personality test which can then be used for automated push notifications of appropriate jobs. Partnering corporate employers can even ‘talent spot’ and target individual users directly without requiring an application form or CV.

Debut is a machine learning-based talent-matching app. Image from http://debut.careers/

But Debut is also a direct challenge to universities and the status of the academic degree. ‘We want to unbundle that and turn our user base into a behaviour- and competency-based user base,’ its founder says. ‘The strength would be the person’s competency as opposed to academic success’. Rather than degrees, the app emphasizes graduates’ ‘cognitive psychometric intelligence’, behavioural traits and competencies. ‘We have everything on students, from their cognitive background, social background, to how well they perform in a selection process’—data it is using to train machine learning algorithms ‘to make personalized recommendations and predictions’.

Debut therefore instantiates the entry of automated predictive talent analytics into UK HE, inciting students to cultivate their marketable personality and behavioural skills above their academic credentials. Users of the platform generate training data for its machine learning algorithm to tune and refine its subsequent job-matches and recommendations. In summer 2018 Debut also received £5 million venture capital investment led by James Caan, the entrepreneur from the TV show Dragons’ Den, and already has 60 corporate clients, including Google, Apple and Barclays, that pay it an annual subscription to sort and organize the graduate data.
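
One generic way to implement competency-based matching of this kind is to score the similarity between a graduate’s competency vector and an employer’s ideal profile. The sketch below uses cosine similarity with invented competency labels and scores; Debut’s actual model is not public, so this is purely illustrative:

```python
import numpy as np

# Invented competency dimensions: numeracy, verbal, teamwork, resilience, creativity
graduates = {
    "grad_a": np.array([0.9, 0.4, 0.7, 0.6, 0.3]),
    "grad_b": np.array([0.3, 0.8, 0.6, 0.5, 0.9]),
}
job_profile = np.array([0.8, 0.3, 0.6, 0.7, 0.2])  # employer's ideal profile

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank graduates by similarity to the job profile; the top match might be 'pushed' to the employer
ranked = sorted(graduates, key=lambda g: cosine(graduates[g], job_profile), reverse=True)
print(ranked)  # ['grad_a', 'grad_b']
```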

Student-powered & metrics-powered HE
As an established product, Debut is well positioned in the emerging app store of services and products to help shape students’ choices. As the DfE competition demonstrates, apps are emerging to match prospective applicants to courses based on graduate earnings data from LEO, while Debut can then link them to employers based on a training set of graduate competency profiles and successful labour market matches.

The finalists of the DfE competition represent the governmental recognition of the potential of data presented on apps to shape choices and decisions. The prototypical app store for HE choice is, therefore, a significant extension of ongoing upgrades to the data infrastructure of HE. It raises some key issues:

  • It exemplifies government ambitions to ‘unbundle’ and open up HE to new market entrants–technology providers, entrepreneurs, the private sector, and other business interests–with government itself acting as a market catalyst and seed-fund investor
  • It brings the logic of ‘swipeable’ apps and social media platforms into HE, importing the business model of platform capitalism and the extraction of value from student data into higher education
  • It utilizes persuasive design and behavioural science insights to design interfaces and visualizations that might ‘hook’ attention, ‘trigger’ behaviours, and ‘nudge’ decisions according to the ‘choice architecture’ provided
  • It continues to treat students as calculative consumers, investing in HE with the expectation of ROI in the shape of graduate outcomes and earnings, and puts pressure on institutions to focus on labour market outcomes as the main purpose of HE
  • It incites prospective and current students to see and think about HE in primarily quantitative and evaluative terms, as represented in metrics and market-like performance rankings and ratings
  • It anticipates potential long-term and real-time data monitoring of students in HE institutions, through a digital surveillance assemblage of apps, platforms and infrastructural connections, thereby making students into data transmitters of institutional qualities as well as consumers of institutional data
  • It instantiates the increasing role of algorithms, machine learning and automation in applicants’, students’ and graduates’ decision-making, with Debut even seeking to short-circuit the job application process and automatically talent-match graduate competency profiles to corporate job descriptions
  • It raises questions about the uses of student data to reinforce pre-existing governmental ideology, with the DfE recently reprimanded by statistical authorities for prioritising political messaging ahead of its statistical evidence–could student apps be designed otherwise, rather than to conform to market models of cost-benefit calculation?

By releasing a huge trove of LEO data, the government also demonstrates how HE is being made increasingly measurable, computable, and comparable as a competitive, market-driven sector, with Gyimah noting that ‘these new digital tools will highlight which universities and courses will help people to reach the top of their field, and shine a light on ones lagging behind’.

The governmental focus on calculating which universities are ‘lagging’ or even ‘failing’ from their data is itself a huge sector concern, with Michael Barber, chair of the Office for Students, writing in The Telegraph that ‘While student choice should drive innovation, diversity and improvement, we recognise this won’t always be enough. So where market mechanisms are not sufficient, we will regulate’. The piece, entitled ‘We should allow bad universities to fail, as long as we protect their students’, followed another Telegraph article titled ‘If the higher education market is to succeed, bad universities must be allowed to go bust’.

In this highly conservative political and media context, further amplified by think tanks such as Reform, HE is being driven both by the supposed ‘empowerment’ of students and by metrics of market performance. The first perspective sees data as central to a ‘student-powered’ sector characterized by choice, value for money, and market competitiveness. The other takes a ‘metrics-powered’ perspective on universities as comparable market actors with winners and failures, as calculated by the choices of applicants to attend, indicator data on provider performance, and LEO or other data on graduate outcomes and earnings.

These two perspectives are, however, binocular rather than oppositional. Barber’s emphasis on ‘bad universities’ and Gyimah’s enthusiasm for student-facing apps are part of the same project, with data from and about students treated as key performance indicators for both policy officials and university applicants to assess. As Barber noted, ‘With more information at their disposal on the quality of courses and associated salary outcomes, [students] will rightly be thinking carefully about such choices. That places an onus on universities to plan realistically and respond quickly where demand is higher–or lower–than expected’.

The emerging, prototypical HE app store instantiates these demands in software. It reveals to students the best-performing universities in terms of degree awards and graduate earnings, but also exposes the ‘bad universities’ and discourages students from ‘investing’ in these institutions and their courses. In these ways, the HE app store threatens to exert dangerously performative effects. By presenting university providers as a market, these apps will shape students’ choices away from certain institutions, or prompt institutions to drop courses that don’t promise a high percentage of positive graduate outcomes, while privileging elite institutions with stronger existing performance records. The app store will speed up the ‘market failure’ of those providers presented in the data as ‘bad universities’.


The mutating metric machinery of higher education

Ben Williamson

Higher education increasingly depends on complex data systems to process and visualize performance metrics. Image by Dennis van Zuijlekom

Contemporary culture is increasingly defined by metrics. Measures of quantitative assessment, evaluation, performance, and comparison infuse public services, commercial companies, social media, sport, entertainment, and even human bodies as people increasingly quantify themselves with wearable biometrics. Higher education is no stranger to metrics, though they are set to intensify further in coming years under the Higher Education and Research Act. The measurement, comparison, and evaluation of the performance of institutions, staff, students, and the sector as a whole is expanding rapidly with the emergence and evolution of ‘the data-intensive university’.

This post continues a series on the expanding data infrastructure of HE in the UK, part of ongoing research charting the actors, policies, technologies, funding arrangements, discourses, and metrological practices involved in contemporary HE reforms. Current reforms of HE are the result of ‘fast policy’ processes involving ‘sprawling networks of human and nonhuman actors’–more specifically, networks of human data analytics experts and complex nonhuman systems of measurement. Only by identifying and understanding these mobile policy networks and the ‘metrological machinery’ of their HE data projects is it possible to adequately apprehend how macro-forces of governmental reform are being operationalized, enacted, and accomplished in practice.

Metrological machinery
The collection and use of UK university performance data has expanded and mutated dramatically in scope over the last two decades. The metrification of HE through the ‘evaluation machinery’ of research assessment exercises, teaching evaluation frameworks, impact measurements, student satisfaction ratings, and so on, is frequently viewed as part of an ongoing process of neoliberalization and marketization of the sector. One particularly polemical critique describes a ‘pathological organizational dysfunction’ whereby neoliberal priorities and corporate models of marketization, competition, audit culture, and metrification have combined to produce ‘the toxic university’.

The narrative is that HE has been made to resemble a market in which institutions, staff and students are all positioned competitively, with measurement techniques required to assess, compare and rank their various performances. It is a compelling if unsettling narrative. But if we really want to understand the metrification, marketization, and neoliberalization of HE, then we need to train the analytical gaze more closely on the specific and ever-mutating metrological mechanisms by which these changes are being made to happen.

In previous posts I examined the market-making role of the edu-business Pearson, and the ways the Office for Students (OfS), the HE market regulator, and the Higher Education Statistics Agency (HESA), its designated data body, intend to use student data to compare sector performance. Together, these organizations and their networks are building a complex and evolving data infrastructure that will cement metrics ever more firmly into HE, while opening up the sector to a new marketplace of technical providers of data analysis, performance measurement, comparison and evaluation.

Political demands to make HE more data-driven have opened up a marketplace for providers of digital technologies. Image by Eduventures

In this update I continue unpacking this data infrastructure by focusing on the Quality Assurance Agency for Higher Education (QAA) and the Joint Information Systems Committee (Jisc). Both of them, along with HESA, are engaging in significant metrological work in HE. In fact, HESA, QAA and Jisc together constitute the M5 Group of agencies—‘UK higher education’s data, quality and digital experts’—formed in 2016 and named after their collective proximity to the M5 motorway in southwest England. Together, the QAA, HESA and Jisc also co-organize and run the annual Data Matters conference for HE data practitioners, quality professionals and digital service specialists.

To approach these organizations, the concept of ‘metric power’ from David Beer provides a useful framing. Drawing on key theorists of statistics (Desrosières, Espeland, Foucault, Hacking, Porter, Rose and others), metric power traces the intensification of measurement over the last two centuries through to the current mobilization of digital or ‘big’ data across diverse domains of society. Central to metric power is the close alignment of metrics with neoliberal governance. Following the lead of Foucault and others in defining neoliberalism as the ‘generalization of competition’ and the extension of the ‘model of the market’ to diverse social domains, Beer argues that ‘put simply, competition and markets require metrics’ because ‘measurement is needed for the differentiations required by competition’.

The concept of metric power, then, is potentially a useful way to approach the metrification of higher education and to explore how far this represents processes of neoliberalization and marketization. By examining the recent projects and future aspirations of agencies such as Jisc and QAA we can develop a better understanding of how a form of metric power is transforming the sector. To be clear at this point, there is nothing to suggest that either the QAA or Jisc are run by neoliberal ideologues–something more subtle is happening. The point is that both organizations, along with HESA and the OfS, are pursuing projects which potentially reinforce neoliberalizing processes by expanding the data infrastructures of HE measurement. They are ‘fast policy’ nodes in the mobile policy networks enacting the metrological machinery of HE reform.

QAA—sentimental evidence
The QAA is the sector agency ‘entrusted with monitoring and advising on standards and quality in UK higher education’. It maintains the UK Quality Code for Higher Education used for quality assessment reviews, as well as the Subject Benchmark Statements describing the academic standards expected of graduates in specific subject areas. QAA also undertakes in-house research and produces policy briefings.

One of its major strands of activity, via the QAA Scotland office, is an ‘Evidence Enhancement Theme’ focusing on ‘the information (or evidence) used to identify, prioritise, evaluate and report’ on student satisfaction. Its priorities are:

  • Optimising the use of existing evidence: supporting staff and students to use and interpret data and identifying data that will help the sector to understand its strengths and challenges better
  • Student engagement: understanding and using the student voice, and considering concepts where there is no readily available data, such as student community, identity and belonging
  • Student demographics, retention and attainment: using learning analytics to support student success, and supporting institutions to understand the links between changing demographics, retention, progression and attainment including the ways these are reported

The Evidence Enhancement program is unfolding collaboratively across all Scottish HE providers and is intended to result in sector-wide improvements in data use related to student satisfaction.

More experimentally, the QAA released a 2018 study into student satisfaction using data scraped from social media. The student sentiment scraping study, entitled The Wisdom of Students: Monitoring quality through student reviews, was based on a large sample of over 200,000 student reviews of higher education provision to produce a ‘collective-judgment’ score for each provider. These data were then compared with other sources such as TEF and NSS, and found to have a strong positive association. Crowdsourced big data from the web, it suggested, were as reliable as large-scale student surveys and bureaucratic quality assessment exercises as student experience metrics.

The QAA project is a clear example of how big data methodologies of sentiment analysis and discovery using machine learning and web-scraping are being explored for HE insights. For the QAA, taking such an approach is necessary because, as the sector has become more marketized and focused on the experience of the student in a ‘consumer-led system’ regulated by the ‘data-driven’ Office for Students, there has been ‘a gradual reduction in the remit of QAA in assessing and assuring teaching and learning quality in providers, and the rise in the perception of student experience and employment outcomes’ data as more accurately indicating excellence in higher education provision’. As such, measuring student experience in a timely, low-burden and cost-effective fashion has become a new policy priority, while existing instruments such as the TEF and NSS remain annual, retrospective, and potentially open to ‘gaming’ by institutions.

In contrast, collecting ‘unsolicited student feedback’ from reviews on social media platforms is seen by the QAA as a way of ‘securing timely, robust, low-burden and insightful data’ about student experience. In particular, the study involved collecting student reviews from Facebook, Whatuni.com and Studentcrowd.com, with Twitter data to be included in future research. The study authors found that 365 HE providers have Facebook pages with the ‘reviews’ function available, as well as many pages relating to departments, schools, institutes, faculties, students’ unions, and career services.

Perhaps most significantly, given the constraints of TEF and NSS, the scraping methodology allowed the QAA to come up with collective judgment scores for each provider on any given day. In other words, it allowed for the student experience to be calculated as time-series data, and opened up the possibility of ‘near real-time’ monitoring of provider performance in terms of delivering a positive student experience, which could then be used by providers to identify where action is needed. The advantages of the approach, according to the QAA, are that it makes year-round real-time feedback feasible ‘based on what students deem to be important to them, rather than on what the creator of surveys or evaluation forms would like to know about’; reduces the data-collection burden; minimizes providers’ ‘opportunities to influence or sanitise the feedback’; and opens up ‘the ability to explore sector-wide issues, such as feedback relating to free speech, contact hours, or vice-chancellor pay’.
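
Mechanically, a daily ‘collective judgment’ score of this kind can be produced by scoring each scraped review for sentiment and averaging per provider per day. The sketch below uses a toy word-list scorer and invented reviews; the QAA study’s actual sentiment model is considerably more sophisticated:

```python
from collections import defaultdict
from statistics import mean

POSITIVE = {"great", "helpful", "excellent", "supportive"}
NEGATIVE = {"poor", "disappointing", "crowded", "unresponsive"}

reviews = [  # (provider, date, text): hypothetical scraped records
    ("uni_x", "2018-05-01", "great lecturers and supportive staff"),
    ("uni_x", "2018-05-01", "library is crowded and wifi is poor"),
    ("uni_y", "2018-05-01", "excellent course but unresponsive admin"),
]

def sentiment(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Aggregate review-level sentiment into a per-provider, per-day time series
daily = defaultdict(list)
for provider, date, text in reviews:
    daily[(provider, date)].append(sentiment(text))

for key, scores in daily.items():
    print(key, mean(scores))
```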

In sum, the report concludes, ‘the timely and reliable extraction of the student collective-judgement is an important method to facilitate quality improvement in higher education’. The QAA intends to pilot the methodology with ten HE providers late in 2018.

The QAA concern with sentiment analysis of student experience needs to be understood not just as an artefact of HE marketization and consumerization, but as part of a wider turn to ‘feelings’, ‘sensation’ and ‘emotion’ in contemporary metric cultures. As William Davies notes, ‘Emotions can now be captured and algorithmically analysed (“sentiment analysis”) thanks to the behavioural data that digital technologies collect’, and these data are increasingly of interest as sources of intelligence to be harnessed for political purposes by authorities such as government departments or agencies. Scraping student sentiment from social media replicates the logic of psychological and behavioural forms of governance within HE, and has the potential to make the sector ever-more responsive to real-time traces of the student body’s emotional pulse.

The QAA-led Provider Healthcheck Dashboard allows institutions to monitor and compare their performance through data visualizations. Image from HESA

The medicalized metaphor of tracing pulses can be carried further in relation to another QAA project. In collaboration with its M5 Group partners Jisc and HESA, QAA led the production of a data visualization package called the ‘Provider Healthcheck Dashboard’. The purpose of the tool is to allow providers to perform ‘in-house healthchecks’ by comparing their institutional performance, on many metrics, against competitors. The metrics used in the Healthcheck dashboard include TEF ratings, QAA quality measurements, NSS scores, Guardian league tables, percentages of 1st or 2:1 degrees awarded, and graduate employment performance over five years.

These metrics are presented on the dashboard as if they constitute the ‘vital signs’ of a provider’s medical health, compared against norms of performance and depicted visually as percentage differences from benchmarks. The provider healthcheck acts as a kind of medical read-out of the competitive health of an institution, demonstrating in a visual, easy-to-read format how an individual provider is situated in the wider market, and catalyzing relevant treatments to strengthen its performance.
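
The ‘vital signs’ presentation amounts to expressing each metric as a percentage difference from a sector benchmark. A minimal sketch, with metric names and figures invented for illustration:

```python
# Hypothetical institutional metrics and sector benchmarks
metrics = {"NSS satisfaction": 82.0, "Good degrees (1st/2:1)": 71.0, "Graduate employment": 93.0}
benchmarks = {"NSS satisfaction": 84.0, "Good degrees (1st/2:1)": 75.0, "Graduate employment": 90.0}

for name, value in metrics.items():
    diff = 100 * (value - benchmarks[name]) / benchmarks[name]
    print(f"{name}: {diff:+.1f}% vs benchmark")   # e.g. NSS satisfaction: -2.4% vs benchmark
```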

Jisc—predicting performance
Jisc is a membership organization providing ‘digital solutions for UK education and research’. Its strategic ‘vision is for the UK to be the most digitally-advanced higher education, further education and research nation in the world’. In pursuit of this vision, Jisc is the sector’s key driver of learning analytics—the measurement of student learning activities—which it is circulating via its formal associations with the other M5 Group members HESA and QAA.

As a key part of its vision Jisc has conducted significant work outlining a future data-intensive HE and how to accomplish it over the coming decade. It envisages a HE landscape dominated by learning analytics and even artificial intelligence, in which students will increasingly experience a personalized education based on their unique data profiles. Jisc’s chief executive has described ‘the potential of Education 4.0‘ as a response to the ‘fourth industrial revolution’ of AI, big data, and the internet of things. Education 4.0 would involve lecturers being displaced by technologies that ‘can teach the knowledge better’, are ‘immersive’ and ‘adaptive’ to learners’ needs, and that include ‘virtual assistants’ to ‘support students to navigate this world of choice and work with them to make decisions that will lead to future success’.

Towards this vision of an ‘AI-led’ future of HE, Jisc collaborated with Universities UK on the 2016 report Analytics in Higher Education. A key observation of the report is that existing datasets such as TEF provide very limited information for universities, policymakers or regulators to act on:

External performance assessments, such as the TEF, don’t in themselves support institutions understanding and using their data. Advanced learning analytics can allow institutions to move beyond the instrumental requirements of these assessments to a more holistic data analytic profile. Predictive learning analytics are also increasingly being used to inform impact evaluations, via outcomes data as performance metrics. Ultimately, this allows institutions to assess the return on investment in interventions.

As this excerpt indicates, Jisc has key interests in learning analytics, predictive analytics, outcomes data, performance metrics, and measuring return on investment.

It is now seeking to realize these ambitions through its launch in September 2018 of a national learning analytics service for further and higher education. According to the press release, the learning analytics service ‘uses real time and existing data to track student performance and activities’:

From libraries to laboratories, learning analytics can monitor where, when and how students learn. This means that both students and their university or college can ensure they are making the most of their learning experience. … This AI approach brings existing data together in one place to support academic staff in their efforts to enhance student success, wellbeing and retention.

The service itself consists of a number of interrelated parts. It includes cloud-based storage through Amazon Web Services so individual providers do not need to invest in commercial or in-house solutions, and ‘data explorer’ functionality ‘that brings together the data from your various sources and provides quick, flexible visualisations of VLE usage, attendance and assessment – for cohorts and individual students. … The information will help you to plan effective personal interventions with students and to identify under-performing areas of the curriculum’. A third aspect of the service is the ‘learning analytics predictor’ that helps teaching and support staff to use ‘predictive data modelling to identify students who might have problems’ and ‘to plan interventions that support students’.
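
A ‘learning analytics predictor’ of this general kind is typically a classifier trained on engagement data from past cohorts. The sketch below fits a logistic regression to invented VLE, attendance and assessment figures; it illustrates the generic technique, not Jisc’s actual system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: weekly VLE logins, attendance rate, mean assessment score (all invented)
X_train = np.array([[12, 0.9, 68], [2, 0.4, 41], [8, 0.8, 55], [1, 0.3, 35], [10, 0.85, 62]])
y_train = np.array([0, 1, 0, 1, 0])   # 1 = withdrew or failed in past cohorts

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score current students; staff might plan interventions above some risk threshold
new_students = np.array([[3, 0.5, 44], [11, 0.9, 70]])
for prob in model.predict_proba(new_students)[:, 1]:
    print(f"risk score: {prob:.2f}")
```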

The final part of the service is a student app called Study Goal, which is available for student download from major app stores. As it is described on the Google Play app store, ‘Study Goal borrows ideas from fitness apps, allowing students to see their learning activity, set targets, record their own activity amongst other things’. In addition, it encourages students to benchmark themselves against peers, and can be used to monitor attendance at lectures.
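
The fitness-app logic of peer benchmarking is straightforward: a student’s activity is expressed as a percentile of their cohort. A minimal sketch with invented figures:

```python
# Weekly study-activity hours for a cohort of peers (invented)
cohort_hours = [4, 6, 7, 9, 10, 12, 15]
my_hours = 9

percentile = 100 * sum(h <= my_hours for h in cohort_hours) / len(cohort_hours)
print(f"You are at the {percentile:.0f}th percentile of your cohort")  # 57th
```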

The Jisc Study Goal app is modelled on fitness apps, enabling students to monitor their performance and benchmark themselves against peers. Image from Google Play

Study Goal is an especially interesting part of the Jisc learning analytics architecture because, like the provider healthcheck dashboard, it appeals to images of fitness and healthiness through self-quantification, personal analytics and self-monitoring. The logic of activity tracking and self-quantification has been translated from the biometrics of the body back to a kind of health metrics of the institution. University leaders and students alike are being responsibilized for their academic health, while their data are also made available to other parties for inspection, evaluation, prediction and potential intervention. Beyond the learning analytics service and Study Goal app, Jisc has also supported the Learnometer environmental sensing device, which ‘automatically samples your classroom environment, and makes suggestions through a unique algorithm as to what might be changed to allow students to learn and perform at their best’. Not only is student academic and emotional health understood to underpin their performance, but the environment needs to be healthy and amenable to performance-maximization too.

All of these developments indicate a significant reimagining of universities and all who inhabit them as being amenable to ever-more pervasive forms of performance sensing and medicalized inspection. Higher education is becoming a kind of experimental laboratory environment where subjects are exposed through metrics to a data-centric clinical gaze, and where everything from students’ feelings and teaching quality to classroom environment and graduate employment outcomes is a source of risk requiring quantitative anticipation, modelling, and real-time management. Positioned in this way, the political priority to make HE function as a ‘healthy market’ of self-improving competitors thoroughly depends on the metric machinery of agencies such as the QAA and Jisc, and on the expanding data infrastructure in which they are key nodes of experimentation, policy influence, and technical authority.

Metric authority
Jisc and the QAA are bringing new metric techniques into HE, such as sentiment analysis, predictive modelling, comparative data visualization and student benchmarking apps, in ways that do appear to reinforce the ongoing marketization of the sector. They are key nodal actors in the policy networks building a data infrastructure for student measurement–an infrastructure that remains unfinished and continues to evolve, mutate and expand in scope as new actors contribute to it, new analyses are made possible, and new demands are made for data, comparison and evaluation.

It is necessary to restate at this point that the QAA and Jisc are not necessarily uncritically pursuing a market-focused neoliberalizing agenda. The QAA’s sentiment analysis report appears somewhat critical of the market reform of HE under the Office for Students. The point is that these sector agencies are all now part of an expanding data infrastructure that appears almost to have its own volition and authority, and that is inseparable from political logics of competition, measurement, performance comparison, and evaluation that characterize the neoliberal style of governance. It is a data infrastructure of metric power in higher education.

David Beer rounds off his book with several key themes which he proposes as a framework for understanding metric power. These can be applied to the examples of the metrological machinery of HE being developed by the QAA and Jisc.

Limitation. According to Beer, metric power operates through setting limits on ‘what can be known’ and ‘what can be knowable’ by setting the ‘score’ and ‘the rules and norms of the game’. The QAA and Jisc have become key actors of limitation by constraining the assessment and evaluation of HE to what is measurable and calculable. Through learning analytics, Jisc is pushing a particular set of calculative techniques that make students knowable in quantitative terms, as sets of ‘scores’ which may then be compared with norms established from vast quantities of data in order to attach evaluations to their profiles. The QAA-led dashboard similarly sets constraints on how provider performance is defined, and cements the idea that performance comparison is the only game to be played.

Visibility. Metric power is based on what can be rendered visible or left invisible—a ‘politics of visibility’ that also translates into how certain practices, objects, behaviours and so on gain ‘value’, while others are not measured or valued. Through their data visualization projects, Jisc and the QAA are involved in rendering HE visible as graphical comparisons which can be used for making value-judgments—in terms of what is deemed to be valuable market information. But such data visualization projects also render invisible anything that remains un-countable or incalculable, and inevitably make quantitative data that can be translated into graphics appear more valuable than other qualitative evaluations and professional assessments. The Study Goal app reinforces to students that certain forms of quantifiable engagement are valued and prized more highly than other qualitative modes.

Classification. Metric power works by sorting, ordering, classifying and categorizing, with ‘the capacity to order and divide, to group or to individualise, to make-us-up and to sort-us-out’. Through learning analytics pushed by Jisc, students are sorted into clusters and groupings as calculated from their individual data profiles, which might then lead, in Jisc’s ideal, to personalized intervention. Likewise, the sorting of universities by comparative healthcheck dashboards and their ordering into hierarchical league tables serves to classify some as winners and others as fallers and failures in a competitive contest for performance ranking and advantage.

Prefiguration. Metric power ‘works by prefiguring judgements and setting desired aims and outcomes’ as ‘metrics can be used in setting out horizons … and imagined futures and then using them in current decision-making processes’—and this is especially the case with the imagining and pursuit of markets and the measurement of their performance. Here Beer appears to be pointing to the performativity or productivity of data to anticipate future possibilities in ways that catalyse pre-emptive action. Clearly, with its real-time sentiment analysis, the QAA’s student-scraping study is seeking to mobilize data for purposes of prompting action and pre-emption by promoting the use of time-series data that indicate trends towards future outcomes in terms of student ratings. Institutions that can read student satisfaction in near real-time from social media sentiment might act to pre-empt their TEF and NSS ratings. Likewise, the Healthcheck Dashboard allows institutions to anticipate future challenges, while Jisc has specifically sought to embed predictive analytics in institutional decision-making.

Intensification. Metric power perpetuates the models of the world with which it sets out, with metrics satisfying the ‘desire for competition’, intensifying processes of neoliberalization, and expanding its models of the market into new areas. We can see with the QAA and Jisc how the market model of competitive evaluation and ranking has extended from research and teaching assessment to rating of institutions via social media scoring and user-reviews. Jisc’s Study Goal app also puts the market model under the very eyes and fingertips of students as it invites them to compare and benchmark themselves against their peers, thereby intensifying metric power through competitive peer relations and positioning students as responsible for their own market performance and prospects.

Authorization. Metric power works by ‘authenticating, verifying, legitimating, authorizing, and endorsing certain outcomes, people, actions, systems, and practices,’ with market-based models and metrics taken and trusted as sources of ‘truth production’. The dashboards and analytics advanced by QAA and Jisc are being propelled into the sector with promises of objectivity, impartiality and neutrality, free of human bias and subjective judgment. As such, these data and their visualization constitute a seemingly authoritative set of truths, yet are ultimately an artificial reality of higher education formed only from those aspects of the sector that are countable and measurable.

Automation. Metric power shapes human agency, decision-making, judgement and discretion as systems of computation and the ‘decisive outcomes of metrics’ are taken as objective, legitimate, fair, neutral and impartial, especially as ‘automated metric-based systems’ potentially take ‘decisions out of the hands of the human actors’ and ‘algorithms are making the decisions’ instead. Although QAA and Jisc are clearly not removing human judgment from the loop in HE decision-making, they are introducing limited forms of automation into the sector through algorithmic sentiment analysis, machine learning and data visualization dashboards that generate ‘decisive outcomes’ and thereby shape institutional or personal decisions.

Affective. Finally, metric power and systems of measurement induce affective responses and feelings—metrics have ‘affective capacities’ such as inducing anxiety or competitive motivation, and thereby ‘promote or produce actions, behaviours, and pre-emptive responses’, largely by prompting people to ‘perform’ in ways that can be valued, compared and judged in measurable terms. Jisc’s Study Goal is exemplary in this respect, as it is intended to incite students to benchmark themselves in order to prompt competitive action. The healthcheck dashboards, likewise, are designed to induce performance anxiety in university leaders and prompt them to take strategic action to ensure advantageous positioning in the variety of metrics by which the institution is assessed. In both examples, HE is framed in terms of ‘risk’, a highly affective state of uncertainty, as a way of catalyzing self-improvement.

As these points illustrate, through organizations such as the QAA and Jisc, HE is encompassed in the sprawling networks of actors and technologies of metric power. The data infrastructure of higher education is an accomplishment of a mobile policy network of sector agencies along with a whole host of other organizations and experts from the governmental, commercial and nonprofit sectors. A form of mobile, networked fast policy is propelling metrics across the sector, and increasingly prompting changes in organizational and individual behaviours that will transform the higher education sector to see and act upon itself as a market.


The tech elite is making a power-grab for public education

Ben Williamson

Silicon Valley entrepreneurs are linking public education into their growing networks of activity and influence. Image by Steve Jurvetson

 

In the same week that Amazon founder Jeff Bezos announced a major move into education provision, the FBI issued a stark warning about the risks posed to children by education technologies. These two events illustrate clearly how ed-tech has become a significant site of controversy, a power struggle between hugely wealthy tech entrepreneurs and those concerned by their attempts to colonize the education sector with their imaginaries and technologies. Jeff Bezos, Mark Zuckerberg, Elon Musk, Peter Thiel, and other super-wealthy Silicon Valley actors are forming alternative visions and approaches to education from pre-school through primary and high schooling to university. They’re the new power-elite of education and their influence is spreading.

I’ve previously written about the Silicon Valley entrepreneurs and venture capitalists making a power-grab for the education sector. Benjamin Doxtdator has also written brilliantly about their rewriting of the history of public education as a social problem requiring urgent correction for the future. Here I just want to compile some recent developments of Silicon Valley intervention at each stage of education, to illustrate the growing scale of their influence as they continue linking public education into their networks of technical development.

The Amazon pre-school network
Amazon’s Jeff Bezos announced via a letter on Twitter his plans to invest $2 billion in support for homeless families and a ‘network of new, non-profit, tier-one preschools’. The ‘Academies Fund’ will create ‘Montessori-inspired’ preschools through a new organization to ‘learn, invent and improve’ based on ‘the same set of principles that have driven Amazon’. Most notably, Bezos added, ‘the child will be the customer’ in these schools, with a ‘genuine, intense customer obsession’.

While many will admire the philanthropic efforts of the world’s richest man to support early years education, the idea of Amazon-style pre-schools that see children as customers problematically positions education as a commercialized service in ‘personalized learning’. Bezos is not the first tech sector entrepreneur to announce or invest in pre-schooling, and as Audrey Watters commented,

The assurance that ‘the child will be the customer’ underscores the belief – shared by many in and out of education reform and education technology – that education is simply a transaction: an individual’s decision-making in a ‘marketplace of ideas’. … This idea that ‘the child will be the customer’ is, of course, also a nod to ‘personalized learning’…. As the customer, the child will be tracked and analyzed, her preferences noted so as to make better recommendations to up-sell her on the most suitable products.

The image of data-intensive startup pre-schools with young children receiving ‘recommended for you’ content as infant customers of ed-tech products is troubling. It suggests that from their earliest years children will become targets of intensive datafication and consumer-style profiling. As Michelle Willson argues in her article on algorithmic profiling and prediction of children, they ‘portend a future for these children as citizens and consumers already captured, modelled, managed by and normalised to embrace algorithmic manipulation’.

Primary Spaces
Primary schooling has been a strong focus for Silicon Valley for several years. Notable examples include Mark Zuckerberg’s The Primary School and Max Ventilla’s AltSchool, two of the most high-profile startup schools to embed personalized learning technologies and approaches within the whole pedagogic apparatus. Less is known about Ad Astra, the hyper-exclusive private school project set up by Tesla boss Elon Musk within his SpaceX headquarters, although it too emphasizes students pursuing personal projects, problem-solving, and STEM subjects.

Elon Musk’s Ad Astra school is located in the HQ of SpaceX. Image by Steve Jurvetson

However, the globally-popular ed-tech company ClassDojo recently announced a partnership with Ad Astra to create new content for primary school age children. Building on the success of its previous content partnerships on ‘growth mindset’ and ‘empathy’, ClassDojo has worked with Ad Astra to create a set of resources focused on ‘conundrums’ that involve ‘open-ended critical thinking and ethics challenges’. The resources are not intended to be used at Ad Astra itself, but will be released to teachers and schools later in 2018.

The ClassDojo partnership means that Ad Astra’s focus on problem-solving and ethical challenges will be mobilized into classrooms at potentially huge scale. ClassDojo already claims millions of users, and is fast expanding as a major social media platform and content platform for primary schools in many countries. The conundrums ClassDojo and Ad Astra have created pose problems that are considered foundational to ‘building liberal society’. This suggests that the kind of ‘liberal society’ assumed by entrepreneurs such as Elon Musk is a vision to be pursued through the mass inculcation of children’s critical thinking and problem-solving.

Given that Musk, like Amazon’s Bezos, is also investing in space exploration, their efforts in young children’s education raise significant questions about what kind of future world and liberal society they are imagining and seeking to build. What kind of child are they trying to construct to take part in a future society that, for Bezos and Musk, may well be distributed into space?

Super High Schools
High schools are the focus for Laurene Powell Jobs’ XQ Super School project, which is a ‘community of people mobilizing America to reimagine public high school’. The project previously awarded philanthropic funding through a competition to 18 US high schools, including Summit School, one of a chain sponsored by Facebook’s Mark Zuckerberg.

XQ Super School is not just a competition though—it is seeking to produce a glossy blueprint for the future of public high school itself in its new guise as a ‘community’ or ‘network’ of reform. Its updated website features a variety of resources, videos, guidance, partnership opportunities and other materials to stimulate imaginative thought across the education sector. It also now features highly developed learner goals for schools to aspire to, including problem-solving, collaboration, invention, and the cultivation of ‘growth mindset’–mindset being the preferred success-psychology of Silicon Valley right now, developed and propagated from Stanford University, the original academic home of many of the valley’s most successful entrepreneurs.

XQ Super School marketing. Image from XQ Super School

It is easy to view XQ Super School as a commercial takeover of public education. Perhaps more subtly, though, what XQ and others are accomplishing is a reimagining of high school through the cultural lens of Silicon Valley. These entrepreneurs are pursuing a future vision based on their own politics, their own psychological theories, and their own discourse—of community, of problem-solving, of invention, of growth mindset—and propelling it into the remaking of public education at large.

Intelligent Universities
The contemporary university is also being reimagined by the tech power-elite. Peter Thiel—the co-founder of PayPal alongside Elon Musk—for example, established the Thiel Fellowship as an alternative to higher education for ambitious young technology entrepreneurs. Higher education itself has become the target for a massive growth in the educational technology market, part of what David Berry terms the new ‘data-intensive university’.

The social media platform LinkedIn has become one of the most significant players in the data-intensive HE market. Janja Komljenovic argues that since its acquisition by Microsoft for more than $26bn in 2016, LinkedIn has been increasingly targeting the HE sector with particular features that are generated explicitly for students, graduates and universities. These features include student profiles, university branded pages, and the capacity for students to search universities based on graduate career outcomes.

According to Komljenovic, ‘LinkedIn moves beyond the passivity of advertising to its users towards actively structuring digital labour markets, in which it strategically includes universities and its constituents’; she further argues that it is using its ‘qualification altmetrics’ to build ‘a global marketplace for skills to run in parallel to, or instead of university degrees’.

In this sense, LinkedIn is fundamentally transforming and challenging HE by making students and universities into ‘prosumers’ in ‘data markets’, where ‘the data they produce is monetised and repackaged to become governing devices for their own sector’, and is reframing ‘meanings in the HE sector about quality of universities and degrees; graduates and their diplomas; and skills in relation to employability’. As such, LinkedIn’s algorithms increasingly hold the potential to match individuals, skills and jobs as gaps are revealed in labour markets, and to press higher education to become more outcomes- and skills-focused as a result.

The 2018 higher education technology landscape. Infographic by Eduventures

Amazon, too, is seeking position in higher education. It recently announced that it was installing Amazon Echo Dot devices in all student dormitories at St Louis University as part of its Alexa for Business offering. The move, it was reported, is ‘among the largest smart speaker deployments at a university and could help Amazon to establish smart speakers and the voice interface as typical among younger users’.

Beyond its clear business goals, with the partnership Amazon is marking the entrance of AI into HE, with Alexa becoming an automated student experience assistant. It is hard to imagine that Alexa won’t have a place in Jeff Bezos’s preschool network too, not least as voice assistants may make a better interface than screens with children who have yet to learn to read or write. Amazon is entering public education at both preschool and postsecondary phases, with massive implications for institutions, staff and students of all ages.

The FBI and the ‘ed-techlash’
The tech elite now making a power-grab for public education probably has little to fear from FBI warnings about education technology. The FBI is primarily concerned with potentially malicious uses of sensitive student information by cybercriminals. There’s nothing criminal about creating Montessori-inspired preschool networks, using ClassDojo as a vehicle to build a liberal society, reimagining high school as personalized learning, or reshaping universities as AI-enhanced factories for producing labour market outcomes–unless you consider all of this a kind of theft of public education for private commercial advantage and influence.

The FBI intervention does, however, at least generate greater visibility for concerns about student data use. The tech power-elite of Zuckerberg, Musk, Thiel, Bezos, Powell Jobs, and the rest, is trying to reframe public education as part of the tech sector, and subject it to ever-greater precision in measurement, prediction and intervention. These entrepreneurs are already experiencing a ‘techlash‘ as people realize how much they have affected politics, culture and social life. Maybe the FBI warning is the first indication of a growing ‘ed-techlash’, as the public becomes increasingly aware of how the tech power-elite is seeking to remake public education to serve its own private interests.


Genetics, big data science, and postgenomic education research

Ben Williamson

A diagram visualizing the genetic variants associated with educational attainment. Image by Emily Willoughby.

An international consortium of genetics researchers has established a link between genes and educational attainment from a study of over a million people. One of the largest genetics studies ever published in a science journal, it represents a significant step forward for the emerging field of educational genetics. The growth of genetics expertise in education also, however, raises substantial concerns about biological determinism and new forms of eugenics, and reanimates long-standing debates about the genetic inheritance of intelligence and cognitive ability.

In this post I outline some key findings of the study, but primarily focus on the significant implications and issues it raises for education research more widely. The implications of the study are that it: (1) establishes genetics as a powerful new front in educational knowledge production; (2) positions big data science as a methodological apparatus for future educational studies; (3) surfaces extreme political polarization regarding genetic factors in education that will be difficult to reconcile as genetics enters education policy debates; (4) potentially opens up a new market for commercial educational genetics products; and (5) reveals the need for new social scientific forms of engagement with, and critique of, genetics research and postgenomic science in the education field.

Gene discovery
Published in Nature Genetics at the end of July 2018 by the international Social Science Genetic Association Consortium (SSGAC) in collaboration with the consumer genetics company 23andMe, the paper ‘Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals’ reports findings showing that genetic patterns across a large population are associated with years spent in school. According to its 80 authors, ‘educational attainment is moderately heritable and an important correlate of many social, economic and health outcomes,’ and is therefore an important focus in a number of educational genetics studies.

Specifically, the scientists identified over a thousand genetic variants linked with educational attainment, particularly those involved in brain-development processes and the formation of neuronal connections in foetuses and newborns. These biological factors, the scientists claim, influence psychological development, which in turn affects how far and for how long people continue at school.

The SSGAC has been careful in reporting the results. They do not claim to have identified any single genes for education, and the data don’t predict educational attainment for individuals. The research also found that genetic variants have a far weaker effect than environmental influences on educational attainment, and was restricted to analysis of a homogeneous sample of people aged in their 40s and 50s of white European descent (the score’s predictive power proved far weaker in a sample of African-Americans). The authors produced a massive Q&A document—longer than the paper itself—to help explain and clarify the results, methods and conclusions, while downplaying the policy and practical implications of its findings. As such, the paper has been carefully published in acknowledgement of the potential controversy it could cause, and to anticipate misinterpretation and misreporting of its findings.

Nonetheless, the paper has catalysed significant media interest and social media commentary. Three days after publication, the paper had been Tweeted 1000 times, blogged multiple times, and reported in news media around the world—picking up an enormous Altmetric score in the process. There is useful coverage in the New York Times, Atlantic and MIT Technology Review reporting the key findings.

Clearly the paper is a massive advance for genetics science, in education and beyond. For those education researchers and social scientists outside of the genetics field, however, it has major implications in terms of knowledge production, methods, policy influence, and the commercialization of educational genetics.

Powerful genetic knowledge
Along with other recent advances in genetics in education, the SSGAC study instantiates the emergence of a powerful new field of knowledge production. Such research is only possible now owing to the complete sequencing of the human genome–the entire genetic structure of human DNA–over a decade ago, and since then studies in human genomics have expanded rapidly. As a result, science studies researchers claim we are now in a postgenomic age.

As a research field, educational genomics seeks to unpack the genetic factors involved in individual differences in learning ability, behavior, motivation, and achievement. Importantly, researchers of educational genomics do not assume either that there is any single genetic factor that determines learning ability, cognition or intelligence, or that genetic factors entirely explain the complexity of learning. Identifying an individual’s genotype—the full heritable genetic identity of a person—and its relationship to learning, intelligence or educational outcomes remains complex. Practitioners of educational genomics and behavioural genetics look for patterns in huge numbers of genetic factors that might explain behaviours and achievements in individuals, by studying the interaction of genotypes and environmental influences on phenotypical behaviours and traits (such as intelligence).

The SSGAC has positioned itself as a leading consortium for such postgenomic education science with the publication of their paper, but another key figure bringing genomics research into education is the behavioural geneticist Robert Plomin, co-author of the controversial G is for Genes: The Impact of Genetics on Education and Achievement. Plomin has extensively studied the links between genes and attainment using ‘genome-wide polygenic scoring’ (GPS), a method also employed in the SSGAC study. A polygenic score is produced by analysing a huge number of genetic markers, and their interactions with environmental factors, in order to predict a particular behavioural or psychological trait. As computer processing power, data storage capacity, and data analytics technologies have advanced in recent years, it has become possible to correlate huge quantities of genotypical data with a host of phenotypical traits.
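
Schematically, a polygenic score is a weighted sum across measured genetic variants. In the standard (here simplified) formulation,

$$\mathrm{PGS}_i = \sum_{j=1}^{M} \hat{\beta}_j \, x_{ij}$$

where $x_{ij} \in \{0, 1, 2\}$ counts the copies of the effect allele at variant $j$ carried by individual $i$, and $\hat{\beta}_j$ is the effect size for that variant estimated in a genome-wide association study.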

Under the banner of a ‘new genetics of intelligence’, Plomin and colleagues have used polygenic scores to predict academic achievement in schools. The substantial increase in heritability they found ‘represents a turning point in the social and behavioural sciences because it makes it possible to predict educational achievement for individuals directly from their DNA,’ thereby ‘moving us closer to the possibility of early intervention and personalized learning.’

While the SSGAC avoids calling for interventions based on its data, the results open up possibilities for further studies and analyses. These include: studies that control for genetic influences in order to generate credible estimates of how changes in school policy influence health outcomes; studies of why specific genetic variants predict educational attainment; and studies of how the effects of genes on education differ across environmental contexts. As such, the research itself is a catalyst for further educational genomics studies.

Although educational genomics remains in its infancy, it seems likely to advance considerably in coming years, linking genotypes to phenotypical traits, behaviours and other outcomes. It will link more closely with psychology and neuroscience as associations are further established between genes and neurons, personality traits and so on. As more findings emerge, further support will grow for evidence-based scientific perspectives on learning. New forms of genetic and genomic expertise in educational matters are already emerging, unsettling existing forms of social scientific and philosophical educational research, which have contested the biological determinism of genetics for decades.

Big data science
The methodological apparatus of the SSGAC study, and other research in educational genomics and behavioural genetics, is huge—it dwarfs the technical, methodological, financial and expert resources of other forms of educational research. The SSGAC study itself is the accomplishment of a well-funded international team of 80 scientists working in departments of psychology, sociology, behavioural genetics, behavioural science, neurogenomics, economics, biosciences, health sciences, and many others. A core part of the team included more than 20 scientists from the commercial organization 23andMe, the Silicon Valley company backed by Google. The research, then, was distributed across public universities and commercial labs at huge scale and significant cost.

Beyond the big size of the team and its funding, the study is also typical of the big data methods of genetic science. The data on its sample of over a million people came from two sources. One was the UK Biobank, a huge open access health resource based on a living population of over 500,000 volunteer participants, which was established by the Medical Research Council and the Wellcome Trust and opened up to scientists in 2012. One of many biobanking projects worldwide, it opens up unprecedented access to large samples of genetic data for analysis. The other was sourced from 23andMe itself, the consumer genetics company offering health and ancestry services on a profit-making basis.

The methods described in the appendix to the SSGAC study demonstrate the quantitative and computational complexity of such large-scale genetics research. The study depends on a range of statistical methods, tests, mathematical formulae, algorithms, data visualizations, software platforms with names such as METAL and PLINK, and bioinformatics platforms called DEPICT, MTAG, PANTHER and MAGMA.
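To give a flavour of what one of these tools does: METAL, for instance, performs meta-analysis of genetic association results pooled across cohorts, and a standard approach is fixed-effects inverse-variance weighting. The sketch below shows that calculation for a single variant; the cohort names and numbers are invented, and it illustrates the general technique rather than the SSGAC’s actual pipeline.

```python
import math

# Fixed-effects, inverse-variance-weighted meta-analysis for a single
# genetic variant: the kind of step a tool such as METAL repeats across
# millions of variants. Cohort names and numbers are invented.

# (cohort, effect estimate beta, standard error) for the same SNP
cohort_results = [
    ("cohort_A", 0.018, 0.006),
    ("cohort_B", 0.011, 0.004),
    ("cohort_C", 0.025, 0.009),
]

# Weight each cohort by the inverse of its sampling variance (1 / se^2),
# so larger, more precise studies dominate the pooled estimate.
weights = [1.0 / se**2 for _, _, se in cohort_results]
pooled_beta = (sum(w * beta for w, (_, beta, _) in zip(weights, cohort_results))
               / sum(weights))
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled beta = {pooled_beta:.4f}, se = {pooled_se:.4f}")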

As such, the paper published in Nature Genetics is the end result of the activities of a huge interdisciplinary science team, generous funding, enormous databanks from both the not-for-profit and private sectors, and highly sophisticated big data analytics methods, all powered by a vast infrastructure of bioinformatics technologies, statistical analysis software packages, data analytics and visualization. The scale of this infrastructure of knowledge production is miles away from the norms of educational research.

Yet we may expect further education research to locate itself within such infrastructures of professional expertise, labs, databanks, analytics methods and software. Already, scientists are beginning to propose new multidisciplinary experimentation and intervention under the heading of ‘precision education’. Genetics and neuroscience are spectacular new fronts of big data-driven scientific research, and related subfields of educational genomics and educational neuroscience are growing fast, with the support of wealthy foundations and commercial partners. As a result, studies such as that by the SSGAC and other educational genetics teams position big data science as a new frontier of innovative and interdisciplinary education research.

Policy sciences
Researchers in the field of Science and Technology Studies (STS) have long maintained that science and politics are inseparable, and often focus their attention on scientific controversies. This is particularly the case when science enters into official policy, and is translated and manipulated to fit political agendas and policymakers’ requirements. The new genetics of education is an ideal illustration of an emerging scientific controversy in education.

The SSGAC research represents the potential for a significant shift in emphasis in education policy to embrace genetics expertise. Though the SSGAC reports no direct policy implications from its study, it is clear that policymakers seeking explanations for educational attainment would be interested in the results. As Kalervo Gulson and P. Taylor Webb have argued, new kinds of ‘bio-edu-policy-science actors’ may be emerging as authorities in educational policy, ‘not only experts on intervening on social bodies such as a school, but also in intervening in human bodies’. And science writer Antonio Regalado pointed out that one of the SSGAC authors had previously stated that once polygenic scores could be used to predict IQ, it would trigger a ‘serious policy debate’ about ‘personal eugenics’.

Commenting on the SSGAC study, John Warner cautions about how conservative economists might seek to translate the results into policy proposals. ‘How long before schools subject to performance funding as determined by graduation metrics begin to discriminate against students with low polygenic educational attainment scores?’ he asks. ‘When will automated human resources algorithms start weighing polygenic educational attainment scores when sorting through job applicants?’ These questions point to the possibility of students being grouped and clustered together by their polygenic scores, and the potential for enforcing new kinds of ‘biosocial collectivity’ within schools.

A significant problem with the potential translation of educational genomics into education policy is that genetics in education is extremely controversial and politicized. The publication in the mid-90s of The Bell Curve rekindled old debates about genetic determinism, eugenics and racialized discrimination in relation to IQ testing and the political uses of intelligence data. Concerns persist about this ‘new geneism’, and help account for the very careful, actively depoliticised packaging of the SSGAC study. A recent article in The New Statesman on the genetics of education identified deep polarization between right-wing advocates of genetics and left-wing critics, with the former preferring explanations based in biology and the latter seeking environmental explanations. A column reporting on the SSGAC study in the New York Times argued ‘progressives should embrace the genetics of education’, suggesting that ‘the power of the genomic revolution [can] be harnessed to create a more equal society’ while berating the ‘long tradition of left-wing thinkers who considered biological research inimical to the goal of social equality’.

Matters aren’t helped by the fact that some of the most outspoken advocates of genetic explanations for attainment, achievement and intelligence are divisive public figures such as Toby Young and Charles Murray (co-author of The Bell Curve). In a recent Spectator article titled ‘The left is heading for a reckoning with the new genetics’, Young attacked what he saw as liberal progressives’ ‘environmental determinism’ as ‘scientifically indefensible’. ‘Like Marx,’ he argued, ‘post-modernists believe that man’s true nature is reducible to the totality of social relations, that individuals are nothing more than the embodiments of particular class-relations and class-interests, and that everything comes down to the struggle for power. I wouldn’t expect an uncritical acceptance of the new genetics from that quarter’.

Drawing on an interview with Charles Murray, Young also speculated that left-wing sociologists in particular would likely become irrelevant unless they embraced the new genetics by the mid-2020s. For Murray, this prospect was itself a source of deep concern: he thought that ‘once left-wing intellectuals finally let go of environmental determinism they may veer too far in the opposite direction and embrace gene editing technologies like CRISPR-Cas9 to try to create the perfect socialist citizen’.

Given Young’s proximity to education policymakers and politicians under the current UK Conservative government, his comments on genetics have caused widespread alarm among academics and educators. Generating policy proposals based on educational genomics in this tense environment, then, is likely to be a continuing source of deep controversy and irreconcilable political suspicion. It appears that education policy in coming years will have to engage in significant debate about genetics and even personal eugenics, requiring informed participation by social scientists whose views on the matter are currently subject to attack and ridicule by conservative commentators. Education policy studies of this scientific and political controversy will be essential.

Genetic exploitation
With growing awareness of the increasing power of genetic science in education, it is highly likely that commercial organizations will seek to exploit the opportunity to build an educational genetics market of services and products.

Consumer companies such as Google-backed 23andMe have already exploited the opportunities made available by the sequencing of the human genome to launch genetic testing services as commercial products. With 23andMe scientists making up part of the team behind the SSGAC study, the company has not only positioned itself as part of the apparatus of education research, but could also stand to gain from extending into the provision of further educational genetics products. In the same week the SSGAC study was released, 23andMe also released details of a deal with the big pharmaceutical company GlaxoSmithKline to use data from the 5 million customers of its home genetics testing kits to design new drugs. The $300 million deal will see GSK and 23andMe applying artificial intelligence and machine learning to the medical discovery process, analysing genetic data from 23andMe and other sources such as UK Biobank. As a private company with vast genetic databanks, 23andMe is clearly positioning itself as a key part of the infrastructure of genetic science in both pharmaceuticals and education.

Other companies are likely to see market potential in educational genetic testing products too. Already, concerns are emerging about startup companies seeking to exploit advances in human genomics research to produce genetic IQ tests. Cheap DNA kits for IQ testing in schools, in the shape of ‘intelligence apps’ or other genetic ed-tech products, may be feasible in the not-too-distant future, though considerable and understandable concern exists about their usefulness and ethics. Robert Plomin has proposed that DNA analysis devices such as ‘learning chips’ could make reliable genetic predictions of heritable differences in academic achievement, and it is easy to speculate how consumer-DNA companies could extend in this direction.

Major risks would emerge from the expansion of an educational genetics market. One is that, as genetic predictions become accepted as forecasts of a child’s future ability, new approaches may emerge to ‘artificially select future generations’: a ‘eugenics 2.0’ for selecting ‘smarter kids’. While embryo screening programs probably remain unlikely in the West, large-scale efforts are already underway elsewhere to find the genetic code for high IQ. This raises the possibility that selection for intelligence could become attractive to wealthy parents seeking genetic advantage for their children.

The merging of genetic science, big data and commercial speculation in education could lead to a new form of ‘platform scientism’, where the logics of capital accumulation and data analytics combine to push genetic testing and other profiling services in schools. The danger of such a scenario, as detailed in The Atlantic, is that obsession with these ‘slippery genetic predictions could turn people’s attention away from other things that influence how children do in school and beyond — things like their family’s wealth, the stress in their neighborhoods, the quality of the schools themselves’.

Critical postgenomic education research
The acceleration and expansion of educational genetics research as a big data science of attainment, achievement and even intelligence raises distinctive challenges for social scientific education research. Straightforward critique and rejection of genetics represents one possible form of resistance. However, within the wider field of sociology and STS research on postgenomics, researchers have begun to propose different forms of analysis and critique, and some educational researchers are also working, in productive new ways, to get beyond simplistic critical reactions to new biological thinking.

Contemporary postgenomic science, with its emphasis on gene-environment interaction, offers an invitation for social scientists to explore how the biological and the social constitute each other. Biosocial studies, for example, acknowledge that the body, biology and brain are shaped by their social circumstances and environmental contexts. Commenting on contemporary postgenomic science, biosocial researchers argue that the social world gets ‘under the skin’ to impress upon the biological. They insist that bodies are influenced by power structures in society, becoming tangled with social, political and cultural structures and environments.

Biosocial work in education is just beginning to emerge. Developing a ‘biosocial education’ agenda, Deborah Youdell argues that learning may be best understood as the result of ‘social and biological entanglements.’ Biosocial education research therefore takes biology seriously, but also digs critically into the ways scientists have conceptualized the body and thereby made it amenable to experimentation and intervention.

A biosocial approach would seek to understand educational genetics in both biological and social scientific terms by appreciating that the social environments in which learning takes place do in fact inscribe themselves on bodies and brains. The genetic and neural data of contemporary postgenomics would have to be understood from a biosocial view as data about social processes, not only biological processes.

Since genetics is a highly data-intensive and software-saturated field of experimentation and knowledge production, a biosocial perspective would also address the implications of data processing of students’ genetic and neural details. Taking further cues from STS, it would acknowledge that data are always a partial selection, that their analysis through vast data infrastructures of methods and software packages matters a great deal to the results produced, and that the results can influence what happens in educational settings. Is the ‘quantified human’ held in a database and represented by a polygenic score really detailed enough to yield insights to intervene upon students? Additionally, biosocial research would be alive to the possible consequences of for-profit commercial companies building software platforms for collecting and analysing students’ genetic and neural information.

The million-sample SSGAC study is clearly a landmark in postgenomic education science. It is a field of experimentation and knowledge production requiring novel forms of social scientific and philosophical analysis. A biosocial approach may be one way forward, but it is clear that educationalists need to develop a range of concepts and methods in order to perform critical postgenomic education research as the genetic science of education expands and accelerates.
