New biological realities in the genetics of education

New research with a sample of more than 3 million individuals claims to have identified thousands of genetic differences linked with education. Photo by Rob Curran on Unsplash

The recent publication of a study of the genetics of educational attainment is once again raising questions and controversies about the potential use of biological information to inform education policy and practice. In the BioEduDataSci project, funded by the Leverhulme Trust, we have been collecting and examining a very large sample of texts and commentary to understand the development of the new genetics of education and its potential consequences. Such studies are characterized by being highly data-scientific, focused on identifying minute molecular differences, and highly controversial both ethically and scientifically. This research has a high profile in the media, and draws criticism from a variety of scientific and political perspectives.

The new study, conducted and published by the Social Science Genetic Association Consortium (SSGAC), is no different. It is the fourth in a series of studies linking DNA differences to educational outcomes. The first educational attainment study (EA1) examined a sample of around 100,000 people, the second (EA2) a sample of 300,000, and the third (EA3) a sample in excess of 1 million. EA4, however, features a sample of more than 3 million genotyped individuals, and has identified more than 3,000 tiny genetic variants—known as single nucleotide polymorphisms (SNPs)—that are said to be associated with years spent in education. The paper, published in Nature Genetics, is extremely technical, but its key findings have been usefully summarized by one of its main investigators, and a follow-up commentary paper by one of the authors states simply that ‘genes matter when it comes to educational performance and social outcomes’.

Like previous projects and publications linking DNA to educational outcomes, the EA4 study certainly raises serious ethical challenges. These include the lack of representation in the data analysed, the potential to reinforce existing racial categories and discriminatory outcomes or to produce negative self-fulfilling prophecies, and the possible appropriation of the findings by right-wing conservatives and ideologically-motivated scientists to make racialized, hard-hereditarian arguments about IQ and social stratification. The horrific history of eugenics in education, particularly through the intelligence testing movement, has left a dark legacy that means current studies of the genetics of education come under particularly intense scrutiny, not least because of the persistence of discriminatory practices and policies grounded in genetic theories of difference.

Rather than rehearse those issues here, however, I want to focus on one specific scientific dispute in relation to genetic educational attainment studies. The post draws from our wider research detailing the organizational, conceptual, methodological and technical systems and structures underpinning the new genetics of education, and exploring the knowledge claims and proposals for intervention emerging from such studies. Our interest in this post is not so much in developing an external critique of educational genetics as in tracking some of its internal conflicts and their implications.

New genetics of education

Over the last 15 years, new studies of the genetics of education have appeared from two overlapping fields of scientific inquiry. Behaviour genetics is a branch of psychology examining the genetic bases of human behaviours and traits, or how genotypes influence phenotypes (like cognitive ability). It has recently adopted high-tech and data-driven genomics methods to study the ‘molecular genetic architecture’ of ‘educationally-relevant traits’ and phenotypes, including cognitive ability and intelligence.

Meanwhile, social science genomics, or sociogenomics, represents the interdisciplinary combination of genomics with certain branches of sociology, economics, and political science. It is interested in the biological structures and mechanisms that interact with environmental factors to produce socioeconomic outcomes. Educational attainment is taken by sociogenomics as a key socioeconomic outcome that is related to other outcomes such as occupation, social status, wealth, and health.

Where behaviour genetics and sociogenomics overlap is in their methodologies and techniques of analysis. They both use highly data-intensive research infrastructure, such as biodata repositories and data mining software, and methods that can identify vastly complex associations between thousands of minute SNP variations and their correlation with educational outcomes or relevant traits.

They use methods such as genome-wide association studies (GWAS), which process enormous quantities of SNP data, and calculate ‘polygenic scores’—a quantitative sum of all SNP variant data—and apply them to education. Thus, educational attainment studies such as those undertaken by the SSGAC use GWAS methods and SNP analysis to produce polygenic scores for education, which can then be used to predict the attainment of independent samples. Similar approaches have been taken to a range of education-relevant phenomena, such as intelligence and learning.
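At its core, a polygenic score is a weighted sum across an individual’s measured SNPs, with the weights taken from GWAS effect-size estimates. The following Python snippet is a minimal illustrative sketch of that arithmetic, not the SSGAC’s actual pipeline (which involves extensive quality control, imputation and more sophisticated weighting); all numbers are hypothetical.

```python
import numpy as np

# Hypothetical GWAS 'summary statistics': an estimated effect size (beta)
# for each SNP, taken from a discovery sample. Made-up values.
gwas_betas = np.array([0.02, -0.01, 0.015, 0.005, -0.008])

# One individual's genotypes at the same SNPs, coded as the number of
# effect alleles carried at each locus (0, 1 or 2).
genotypes = np.array([2, 0, 1, 1, 2])

# The polygenic score is simply the weighted sum: PGS = sum_i beta_i * g_i.
# Real educational attainment scores sum over around a million SNPs, not five.
polygenic_score = float(np.dot(gwas_betas, genotypes))

print(f"Illustrative polygenic score: {polygenic_score:.3f}")
```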

The SSGAC’s huge-sample educational attainment studies are taken as gold standard models for investigating the genetics of education by both behaviour genetics and sociogenomics researchers. Their findings underpin the arguments in Robert Plomin’s controversial book Blueprint, where he proposes that DNA data could be used as the basis for personalized ‘precision education’, are explored in detail by Kathryn Paige Harden in The Genetic Lottery to underpin her argument for educational reforms, and animate a wide range of other sociogenomics studies (although the SSGAC reports no practice or policy implications from the EA4 study, as its results are only ‘weakly predictive’).

Not all scientists agree, however, that multimillion-sample genomics studies offer any useful insights into the genetics underpinning education, even less that they might inform how education itself is organized.  

Heritability problems

Perhaps surprisingly, one of the most consistent and vocal critics of data-driven genomics studies of education is a leading behaviour geneticist, Eric Turkheimer. Turkheimer certainly believes in the heritability of human behaviours—that is, that a certain portion of behaviour is influenced by genetics rather than being entirely environmentally shaped. He wrote the ‘three laws’ of behaviour genetics affirming as much. A fourth law was added in 2015 (by authors from the SSGAC) stating that ‘a typical human behavioural trait is associated with very many genetic variants, each of which accounts for a very small percentage of the behavioural variability’.

Polygenicity among hundreds of thousands or even millions of SNPs has become a defining law of social and behavioural genomics studies, including those focused on the genetic heritability of educational behaviours and outcomes. Leading scientists claim these methods promise to ‘open the black box of heritability’ and finally reveal the pathways from DNA to social outcomes such as educational achievement.

Turkheimer, however, has become highly critical of the methodological turn to genomics when studying complex behaviours and outcomes, including in the SSGAC EA studies, as his useful recent tweet-response to EA4 indicates. The basis of the critique is the so-called ‘missing heritability problem’. Turkheimer argues in a new paper with Lucas Matthews that the concept of ‘heritability’—an ‘estimate of the proportion of phenotypic [behavioural] variance that is statistically associated with genetic differences’—has changed with shifting methodologies for its measurement.

Basically, earlier forms of behaviour genetics utilizing quantitative genetics methods, such as twin studies, found DNA to exert a large influence on any behavioural trait, often in the region of 50-70%. For example, in relation to education, quantitative behaviour genetics found around 50% of variation in intelligence, as measured by IQ tests, to be heritable.

In contrast, Matthews and Turkheimer argue that recent high-tech, data-intensive genomics methods, such as genome-wide association studies (GWAS), have hugely increased computational power but reduced explanatory power. For example, ‘cutting-edge GWAS have recently estimated that only 10% of variance in IQ is statistically associated with differences in DNA’. They point out that studies of educational attainment from the SSGAC are therefore scientifically underwhelming, as they account for somewhere in the region of 12-15% of variance, despite the huge samples and computational power put to the analysis. This gap is what’s known as the missing heritability problem.
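To make the numerical gap concrete, the two kinds of estimate can be set side by side. The Python sketch below uses Falconer’s classic twin-study formula alongside a GWAS-style variance-explained figure; the values are purely illustrative and are not drawn from the studies discussed.

```python
# Illustrative numbers only; not taken from the EA studies or from
# Matthews and Turkheimer's paper.

# Twin-study (quantitative genetics) estimate via Falconer's formula:
# h^2 ~= 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the phenotypic
# correlations for identical and fraternal twin pairs respectively.
r_mz, r_dz = 0.75, 0.50
h2_twin = 2 * (r_mz - r_dz)   # 0.50, i.e. ~50% 'heritable'

# Molecular (GWAS / polygenic score) estimate: the share of variance in
# the outcome statistically associated with measured SNPs, e.g. the R^2
# of a polygenic score in an independent sample.
h2_snp = 0.13                 # of the order of 12-15% in recent EA studies

numerical_gap = h2_twin - h2_snp
print(f"Twin estimate: {h2_twin:.0%}; SNP estimate: {h2_snp:.0%}; "
      f"'missing' heritability: {numerical_gap:.0%}")
```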

The issue for Matthews and Turkheimer is that most solutions to missing heritability appear to be focused on throwing even bigger samples and more processing power at the problem. They describe this as ‘dissolving the numerical gap’ between traditional quantitative and molecular computational kinds of heritability estimates, but argue ‘resolving the numerical discrepancies between alternative kinds of heritability will do little to advance scientific explanation and understanding of behavior genetics’. They note that ‘most writing on the topic expresses optimism that this day will soon come as researchers collect larger datasets and develop more sophisticated statistical genetic models of heritability’.

By contrast, Matthews and Turkheimer ‘argue that framing the missing heritability problem in this way—as a relatively straightforward quantitative challenge of reconciling conflicting kinds of heritability—underappreciates the severe explanatory and methodological problems impeding scientific examination and understanding of heritability’.

More urgent than closing the numerical gap, for them, is the persistent ‘prediction gap’, or the challenge of making accurate and reliable prediction from DNA to behaviour, and, even more so, the ‘mechanism gap’, which refers to the gap in explaining the specific mechanisms linking molecular genotypes to behavioral phenotypes. There remains, they suggest, a ‘black box’ of hidden mechanisms that simply solving the numerical gap will never discover.

Graphic detailing the numerical, prediction, and mechanism gaps in behaviour genetics, from Matthews and Turkheimer (2022)

These gaps in prediction and explanation of mechanisms are especially acute for studies of the genetics of education. They pose a challenge to claims that genetic data could be used—in the not so distant future—to open the ‘black box of heritability’, or even inform educational policy or practice in schools:

the putatively causal relationship between … SNPs and differences in educational outcomes is entirely opaque, other than the very general assertion that many of the SNPs are close to genes that are expressed in neural tissue. Until scientists have identified, described, and substantiated causal-mechanical etiologies that would explain why countless SNPs are correlated with behavioral outcomes like IQ and educational attainment, then what we call the mechanism gap of the missing heritability problem remains a daunting and persistent scientific challenge. … Highly complex human behavioral traits and outcomes such as intelligence and educational attainment are farthest from dissolution: the numerical gaps, predictions gaps, and mechanism gaps for these cases may never be resolved.

The paper highlights two important issues confronting the new genetics of education. First, despite significant investment in scientific infrastructure, data analytics technologies, and high-profile publications, educational genetics studies remain a long way from opening the ‘black box’ of the specific genetic mechanisms that underpin educational outcomes. And second, it highlights how educational genetics studies are not just ethically controversial and subject to external critique, but scientifically controversial and internally contested too.

Making biodatafied realities

Another key part of Matthews and Turkheimer’s argument is that the heritability estimates produced by quantitative genetics are of a very different kind than the heritability models produced by molecular genomics methods such as GWAS and SNP analysis. This is not just a matter of quantitative innovation, then, but a qualitatively different mode of investigation which produces a very different kind of knowledge.

In this sense, Matthews and Turkheimer seem close to suggesting that the computational infrastructure of genomics makes a significant difference to the knowledge that is produced. This is an argument familiar in critical data and infrastructure studies, where it is assumed data infrastructures are far from merely neutral interfaces to access factual reality, but instead represent ‘expressions of knowledge/power, shaping what questions can be asked, how they are asked, how they are answered, how the answers are deployed, and who can ask them’.

Likewise, the epidemiologist Cecile Janssens has recently argued that polygenic scores, in particular, emerged as a ‘pragmatic solution’ to the statistical problem of calculating very large SNP associations in genomics. Technically, a polygenic score is calculated using particular data-mining software applications, computing formats, algorithms and statistical standards created by specialists in high-tech genomics research laboratories. As Janssens suggests, this level of technical mediation in the construction of polygenic scores matters.

Polygenic scores, Janssens argues, ‘do not “exist” in the same way’ as other measurable biological processes such as blood pressure, but only as ‘algorithms’, ‘models’, or ‘simplifications of reality’. Her concern is that polygenic scores, as pragmatic solutions to a statistical problem, might create a new ‘biological reality’ and be used as the basis for certain forms of intervention, despite being only simplified models.

Janssens’ concern, like that of Matthews and Turkheimer, reflects important critiques of genomics in science and technology studies. The historian of bioinformatics Hallam Stevens, for example, argues that ‘these algorithms and data structures do not merely store, manage, and circulate biological data; rather, they play an active role in shaping it into knowledge’.

Polygenic scores and heritability estimates produced through informatic forms of biology, then, may generate very different conceptions of what constitutes ‘biological reality’. As Joan Fujimura and Ramya Rajagopalan have argued elsewhere, data-intensive genomics ‘is ultimately a statistical exercise that depends on the analytic software itself and the information that goes into the statistical software’. As such, they emphasize, when a GWAS or SNP analysis reveals complex associations, ‘these underlying patterns are known only through the data and data producing technologies of the geneticists’.

If Matthews and Turkheimer are correct, then it seems the heritability of educational attainment uncovered by recent GWAS, SNP and polygenic score studies is a different kind of heritability, one that can be known only through the data and technologies of educational genetics research.

As a pragmatic solution to a computing problem, polygenic scores are changing the ways that DNA is understood to affect social and behavioural outcomes. They are based on a kind of heritability that exists primarily as a computational artefact–one which, despite its weak predictivity and absence of explanatory power, is nonetheless attracting significant media and public attention as a way to understand the molecular bases of educational outcomes.

Even while overall effect sizes of such studies may remain modest, they may already be shifting public and professional discourse towards a more biological perspective, infusing educational debates with the vocabulary of genotypes, heritability, phenotypes, and genetic architectures, despite the persistent explanatory gap in the underlying science.

The social life of bio-edu-data science

What we can suggest here, then, is that new biological knowledge and even biological realities are being created through the use of data-intensive genomics technologies and methods in educational genetics research. As Matthews and Turkheimer argue, a different kind of heritability emerges from the complex scientific and data infrastructures of GWAS and polygenic scoring. For Janssens, polygenic scores only exist as algorithms, not as embodied substance. This challenges assertions that once the polygenic ‘genetic architecture’ underpinning educational outcomes is objectively known through big biodata analysis, it may be possible to design educational interventions on the basis of such knowledge.

Rather, new knowledge about the genetics of education is generated through distinctive computational systems, software, and methods that all leave their mark on understandings of the genetic substrates of educational outcomes. What emerges from such studies, as Matthews, Turkheimer and Janssens suggest, are new bio-edu realities that have been produced through computers, data processing, and particular statistical applications, rather than simply unmediated insights into the objective molecular substrates that underpin students’ educational achievements.

But these new realities can be consequential despite being scientifically contested or only weakly explanatory. They may animate enthusiasm for ideas about genetically-informed schooling, and could potentially lead to a hardening of biological explanations, sometimes dangerously motivated by racism, for the complex social, political, cultural and economic factors that shape students’ achievement in schools.

For these reasons, in the BioEduDataSci project, we’re tracking the ‘social life’ of the new genetics of education. This means trying to understand the social contexts and conditions of new knowledge production, the reception of such new knowledge, and its social, political and ethical implications as such knowledge circulates in the media and in public, often being translated and interpreted along the way, and sometimes turned to harmful ends.

The new genetics of education is not yet a settled science, and if critics such as Eric Turkheimer are correct, it may never be. Nonetheless, it remains a fast-moving science in the making, surfacing complex issues and problems that need addressing and debating among many stakeholders across the biological and social sciences, and in policy and practitioner sites, before any emerging findings are considered as insights for implementation.

UPDATE: A few days after posting this, the EA3 educational attainment study was cited to justify an act of horrific racist violence resulting in many deaths in Buffalo, USA. This has animated urgent calls in the genomics community for the scientists conducting such research to take far more responsibility for their findings – or perhaps not to publish them at all given the dangerous consequences – generating counterarguments about scientific freedom.


Google magic

Google has announced a new ‘magical’ upgrade to its Classroom platform. Photo by Ixxlhey 🇲🇻 on Unsplash

Google has announced the latest in a sequence of upgrades to its popular online learning platform Google Classroom, and it’s all about injecting more artificial intelligence into schools. Classroom was popular with teachers all around the world before Covid-19, but has experienced enormous growth since 2020. Google’s announcement of new AI functionalities reveals its continued plans for expansion across the education sector, and beyond, and its technical and economic power to reshape learning through automation.

Autopedagogy

The company publicized its new AI feature for Classroom, called Practice Sets, in a marketing blog with an explanatory video on the Google for Education site. The basic functionality of Practice Sets is powered by ‘adaptive learning technology’, which it said ‘will give teachers the time and tools to better support their students — from more interactive lessons to faster and more personal feedback.’

The marketing video is instructive of the particular imaginary of schooling that Google is seeking to operationalize with Practice Sets. The adaptive learning technology, it claims, will provide ‘one-to-one feedback,’ providing a ‘round the clock tutor for each student.’ It ‘identifies relevant learning skills’ while students are using the feature within Classroom assignments, then ‘finds relevant hints and resources’ which appear as ‘recommended video and content cards for each learning skill.’ Practice Sets also features ‘autograding’ functionality, giving teachers ‘visibility into their [students’] thinking,’ and ‘automated insights’ into ‘class-wide trends’ to provide a ‘quick view of class performance,’ as well as ‘actionable insights’ they can use to improve their teaching.     

In an accompanying blogpost outlining its approach to adaptive learning technology, the head of Google for Education said ‘applying recent AI advances’ to adaptive learning ‘opens up a whole new set of possibilities to transform the future of school into a personal learning experience.’ He added, ‘Adaptive learning technology saves teachers time and provides data to help them understand students’ learning processes and patterns.’

These marketing materials are presented in a highly persuasive way. They tap into contemporary problems of schooling such as providing adequate support and feedback to students. They also promote the idea that education is about ‘learning skills,’ resonant with contemporary education policy preoccupations with skills and their value.

But the Google marketing is also highly technosolutionist. It proposes that datafied forms of surveillance and automation are ideally suited to solving the problems of schooling. Google positions Practice Sets as a kind of always-on, on-demand classroom assistant, able to execute auto-pedagogical interventions depending on a constant trace of a student’s data signals. It’s likely just one step in Google’s roll-out of AI in its education platforms. As one favourable industry review suggested, ‘Now that Google is adding AI capabilities to Google Classroom, expect the search giant to add even more automation to its online learning platform going forward.’

The appearance of Practice Sets exemplifies the ways automation is becoming an increasing presence in schools. Neil Selwyn, Thomas Hillman, Annika Bergviken Rensfeldt and Carlo Perrotta argue that automation is not materializing in the spectacular guise of robot educators, but as much more mundane services and feature upgrades.

Education — as with most areas of contemporary life — is becoming steadily infused with small acts of technology-based automation. These automations are intrinsic to the software, apps, systems, platforms and digital devices that pervade contemporary education. … [W]e are now teaching, studying and working in highly-automated and digitally directed educational environments.

While Practice Sets certainly fits the mould of a mundane instance of micro-automation, its significance is its potential scale. Indeed, Google claimed it could achieve the promise of adaptive learning ‘at unprecedented scale.’ This is because Google Classroom has penetrated into hundreds of thousands of classrooms worldwide, reaching more than 150 million students.

Practice Sets is currently available only in beta, but will in months to come be rolled out as a feature upgrade to premium Workspace customers. A year ago, Google said it was reframing Classroom as a learning management system, not just a platform for learning from home during Covid disruptions, and set out a ‘roadmap’ for its future development. Practice Sets represents the latest step on that roadmap, one that involves becoming the global learning management infrastructure for schools and increasing the presence of autopedagogical assistants in the school classroom, with potential for intervention with millions of students.

Google’s ambitions for scalability exceed Classroom alone, however. It is also positioning itself as a ‘learning company,’ dedicated to ‘Helping everyone in the world learn anything in the world’, whether at school, at work, or in everyday life itself. It’s not a huge jump to assume that Google’s expansive ambitions as a learning company will gradually see it extend automation well beyond the school.

Technomagic

What is striking about the Practice Sets announcement is the way it presents adaptive learning technology. For teachers, it will ‘supercharge teaching content,’ and ‘when students receive individualized, in-the-moment support, the results can be magical.’ Early users of the beta service are reported to have described Practice Sets as ‘Google magic.’ This emphasis on magical results and Google magic is typical of technology marketing discourse. (As an aside, Google Magic is also the name of Google’s Gmail spam filter.)

As MC Elish and danah boyd have previously argued, ‘magic’ is frequently invoked in tech marketing materials, especially in relation to AI, while minimizing attention to the methods, human labour and resources required to produce a particular effect:

When AI proponents and businesses produce artefacts and performances to trigger cultural imaginaries in their effort to generate a market and research framework for the advancement of AI, they are justifying and enabling a cultural logic that prioritizes what is technically feasible over what is socially desirable. … [W]hat’s typically at stake in the hype surrounding Big Data and AI is not the pursuit of knowledge, but the potential to achieve a performative celebration of snake oil.

AI is of course not magic. The discourse and imaginary of magical AI obscures the complex social, economic, technical and political efforts involved in its development, hides its internal mechanisms, and disguises the wider effects of such systems.

Invoking ‘Google magic’ therefore erases the underlying business interests of Google in education and the internal dynamics of its data-driven AI developments. Google has a distinctive business model, which is imprinted on everything it does, including educational platforms like Classroom. It’s a business plan that is laser-focused on generating value from data and applying automation and AI across all its activities.

And it’s a controversial business model to apply to education. Dan Krutka, Ryan Smits and Troy Willhelm argue that ‘Google extracts personal data from students, skirts laws intended to protect them, targets them for profits, obfuscates the company’s intent in their Terms of Service, recommends harmful information, and distorts students’ knowledge.’

Google magic also obscures the underlying technical functionality of Practice Sets. One clue appears in the marketing materials: ‘With recent AI advances in language models and video understanding, we can now apply adaptive learning technology to almost any type of class assignment or lesson.’ At last year’s Google developer conference, CEO Sundar Pichai trailed the company’s plans to employ language models in education. These ambitions would appear to extend far beyond the functionality of Practice Sets, to include artificial conversational agents capable of conducting real-time dialogue with students.

The presentation was a glossy marketing event, but behind the scenes huge Google teams are working on language models with proposed applications in sectors like education. So-called Google magic is actually the complex social, technical and economic accomplishment of lavishly-funded R&D teams who are exploring how to do things such as ‘quantify factuality’ in order to automatically recommend students information from trusted sources for their studies.

The idea of Google magic also hides the internal political conflicts that have characterized Google’s recent efforts to build large language models. Language models are some of the most contentious, ethically problematic AI projects on which Google has pinned its future business prospects. They are trained on huge datasets compiled from the web, and can therefore contain social biases and amplify discrimination; they are also extremely energy-intensive and potentially environmentally damaging. Google, however, has been mired in controversy since firing two of its prominent AI ethics researchers for highlighting these problems.

Google magic can obscure other things too. It evades discussions about whether global technology companies should possess power and authority to reshape the work of teachers and the experiences of students, at unprecedented scale. It turns political problems about schooling into seemingly simple technological solutions that tech companies are best placed to administer.

Invoking Google magic places out of sight any debate about the social or collective purposes of education, assuming that individualized tuition and efficient knowledge acquisition supported by automation is the ideal pedagogic mode. It also forecloses the possibility of other forms of information retrieval and discovery, and asserts Google as an authoritative cultural gatekeeper of knowledge.

Google magic distracts attention from the complex privacy policies and user agreements that determine how Google can extract and use the vast troves of user data generated from every click on the interface. It disguises the complex thickets of algorithms that make decisions about individual students, characterizing them simply as innocent and objective ways of delivering ‘factuality’ to students.

The idea of AI technology as magic also hides the fact that automation constitutes a profound experiment in how schools operate, with potentially serious effects on teacher labour and student learning and development that remain unknown. It may even draw a curtain across persistent questions about the legality of its commercial data mining operations in education.

Misdirection

Google has of course produced a lot of useful technical services, many of which may be welcome in schools. Whether AI adaptive learning will actually work in schools, or whether teachers will want to use it, remains to be seen. A sceptical perspective is best taken towards technomagical marketing claims about as-yet unproven technology interventions.

But a critical perspective is necessary too, one that draws attention to the political role of Google as a significant and growing source of influence over schooling. It has extraordinary power to affect what happens in classrooms, to monitor children through the interface, to intervene in the labour of teachers, to generate class performance data, to select content and knowledge, and to introduce AI and automation into schools through mundane feature upgrades. It markets its services as ‘magical’ but they are really technical, social, economic, and political interventions into education systems at enormous scale around the globe.

As my colleague John Potter pointed out in response to the Google marketing, magic often refers to the ‘skill of misdirection,’ a certain sleight of hand that indicates to the audience to ‘Look at this magical stuff over here… (but don’t look at what’s happening over there).’ But ‘over there’ is precisely where educational attention needs to be directed, at the technical things, even the boring things like privacy policies and user agreements, that are reshaping teaching and learning in schools. Google may be opening up exciting new directions for schooling, but it may also be misdirecting education towards a future of ever-increasing automation and corporate control of the classroom.


PISA for machine learners

A new report from the OECD explores how human skills can complement artificial intelligence. Source: OECD

The common narrative of the future of education is that artificial intelligence and robotization will transform how people work, with changing labour markets requiring schools to focus on developing new skills. This version of the future is reflected in influential ideas about the ‘Fourth Industrial Revolution’, where novel forms of ‘Education 4.0’ will produce the necessary skilled human labour to meet the needs of ‘Industry 4.0.’ Statistical calculations and predictions of multitrillion dollar ‘skills gaps’ in the new AI-driven economy have helped fortify such visions of the future, appealing to government and business interests in GDP and productivity returns.  

The Organisation for Economic Co-operation and Development (OECD) has played a considerable role in advancing ideas about education in the Fourth Industrial Revolution, particularly through its long-term Future of Education and Skills 2030 program launched in 2016. A background report on the 2030 project showed how education systems were not responding to the ‘digital revolution’ and new Industry 4.0 demands, and presented the OECD’s case for the development of new skills, competencies and knowledge through ‘transformative change’ in education.

The task of defining the future skills required by the digital revolution is now being undertaken by the OECD’s Artificial Intelligence and the Future of Skills work program, a six-year project commenced in 2019 by its Centre for Educational Research and Innovation (OECD-CERI). As described when the project was approved:

The motivation for the Future of Skills project comes from a conviction that policymakers need to understand what AI and robotics can do—with respect to the skills people use at work and develop during education—as one key part of understanding how they are likely to affect work and how education should change in anticipation.

Its ‘goal is to provide a way of comparing AI and robotics capabilities to human capabilities,’ and therefore to provide an evidence base for defining—and assessing—the human skills that should be taught in future education systems. In this sense, the project has potential to play a significant role in establishing the role of AI in relation to education, not least by encouraging policymakers to pursue educational reforms in anticipation of technological developments. This post offers an initial summary of the project and some implications.

‘PISA for AI’

The first AI and the Future of Skills report was published in November 2021. Over more than 300 pages, it outlines the methodological challenges of assessing AI and robotics capabilities. The point of the report is to specify what AI can and cannot do, and therefore to more precisely identify its impact on work, as a way of then defining the kinds of human skills that would be required for future social and economic progress.

OECD graphic detailing technological and educational change. Source: OECD

In the Foreword, the OECD Director of Education, Andreas Schleicher, noted that ‘In a world in which the kinds of things that are easy to teach and test have also become easy to digitise and automate, we need to think harder how education and training can complement, rather than substitute, the artificial intelligence (AI) we have created in our computers.’

The project, Schleicher added, ‘is taking the first steps towards building a “PISA for AI” that will help policy makers understand how AI connects to work and education.’

The idea of a ‘PISA for AI’ is an intriguing one. The implication here is that the OECD might not only test human learners’ cognitive skills and capabilities, as its existing PISA assessments do, or their skills for work, as PIAAC tests do. It could also test the skills and capabilities of machine learners in order to then redefine the kinds of human skills that need to be taught, all with the aim of creating ‘complementary’ skills combinations. Ongoing assessments might then be administered to ensure human-machine skills complementarities for long-term economic and social benefit.

Computing Cognition

So how does the OECD plan to develop such assessments? One part of the report, authored by academic psychologists, details the ways cognitive psychology and industrial-organisational psychology have underpinned the development of taxonomies and assessments of human skills, including cognitive abilities, social-emotional skills, collective intelligence, and skills for industry. The various chapters consider the feasibility of extending such taxonomies and tests to machine intelligence. Another section of the report then looks at the ways the capabilities of AI can be evaluated from the perspective of academic computer science.

Given the long historical interconnections of cognitive science and AI—which go all the way back to cybernetics—these chapters represent compelling evidence of how the OECD’s central priorities in education have developed through the combination of psychological and computer sciences as well as economic and government rationales. In recent years it has shifted its attention to insights from the learning sciences resulting from advances in big data analytics and AI. Similar combinations of psychological, economic, computational and government expertise were involved in the  formation of the OECD’s assessment of social and emotional skills.

In the final summarizing chapter of the report, for example, the author noted that ‘the computer science community acknowledges the intellectual foundation and extensive materials provided by psychology,’ although, because ‘the cognitive capacities of humans and AI are different,’ further work would require ‘bringing together different types of approaches to provide a more complete assessment of AI.’

The next stage of the AIFS project will involve piloting the types of assessments described in this volume to identify how well they provide a basis for understanding current AI capabilities. This work will begin with intense feedback from small groups of computer and cognitive scientists who attempt to describe current AI capabilities with respect to the different types of assessment tasks.

The project is ambitiously bringing together expertise in theories, models, taxonomies and methodologies from the computer and psychological sciences, in order ‘to understand how humans will begin to work with AI systems that have new capabilities and how human occupations will evolve, along with the educational preparation they require.’

Additionally, the project will result in some familiar OECD instruments: international comparative assessments and indicators. It will involve the ‘creation of a set of indicators across different capabilities and different work activities to communicate the substantive implications of AI capabilities,’ and ‘add a crucial component to the OECD’s set of international comparative measures that help policy makers understand human skills.’ In many respects, the OECD appears to be pursuing the development of a novel model of human-nonhuman skills development, and building the measurement infrastructure to ensure education systems are adequately aligning both the human and machine components of the model.

The idea of a ‘PISA for AI’ is clearly a hugely demanding challenge—one the OECD doesn’t foresee delivering until 2024. Despite being some years from enactment, however, PISA for AI already raises some key implications for the future of education systems and education policy.

Human-Computer Interaction

The OECD-CERI AI and the Future of Skills project is establishing artificial intelligence as a core priority for education policymakers. Although AI is already by now part of education policy discourse, the OECD is seeking to make it central to policy calculations about the kinds of workforce skills that education systems should focus on. The project may also help strengthen the OECD’s authority in education at a time of rapid digitalization, reflecting the historical ways it has sought to adapt and maintain its position as a ‘global governing complex.’

The first implication of the project, then, is its emphasis on workplace-relevant ‘skills’ as a core concern in education systems. The OECD has played a longstanding role in the translation of education into measurable skills that can be captured and quantified through testing instruments, as a means to perform comparative assessments of education systems and policy effectiveness. The project is establishing OECD’s authoritative position to define the relevant skills that future education systems will need to inculcate in young people. It is drawing on cognitive psychology and computer science, as well as analysis of changing labour markets, to define these skills, and potentially displacing other accounts of the purposes and priorities of education as a social institution.

A second implication stems from its assumption that the future of work will be transformed by AI in the context of a Fourth Industrial Revolution. The project seems to uncritically accept a techno-optimistic imaginary of AI as an enabler of capitalist progress, despite the documented risks and dangers of algorithmic work management, automated labour, and discriminatory outcomes of AI in workplaces, and a raft of regulatory proposals related to AI. Cognitive and computer science expertise are clearly important sources for developing assessment methodologies. The risk however is the production of a PISA for AI that doesn’t ask AI to account for its decisions when they potentially lead to deleterious outcomes. Moreover, matching human skills to AI capabilities as a fresh source of productivity is unlikely to address persistent power asymmetries in workplaces–especially prevalent in the tech industry itself–or counter the use of automation as a route to efficiency savings.

Third, the project appears to assume a future in which skilled human labour and AI perform together in productive syntheses of human and machine intelligence. While the role of AI and robotics as augmentations to professional roles may have merits, it is certainly not unproblematic. Social research, philosophy and theory—as well as science fiction—have grappled with the implications of human-machine hybridity for decades, through concepts such as the ‘cyborg,’ ‘cognitive assemblages,’ ‘posthumanism,’ ‘biodigital’ hybrids, ‘thinking infrastructures,’ and ‘distributed’ or ‘extended cognition.’ The notion that skilled human labour and AI might complement each other, as long as they’re appropriately assessed and attuned to one another’s capabilities, may be appealing, but it is probably not as straightforward as the OECD makes out. Absent, too, are considerations of the power relations between AI producers–such as the global tech firms that produce many AI-enabled applications–and the individual workers expected to complement them.

The fourth implication is that upskilling students for a future of working with AI is likely to require extensive studying alongside AI in schools, colleges and universities too. Earlier in 2021, the OECD published a huge report promoting the transformative benefits of AI and robotics in education. While AI in education itself may hold benefits, the idea of implanting AI in classrooms, curricula, and courses is already deeply contentious. It is part of long-running trends towards increased automation, datafication, platformization, and the embedding of educational institutions and systems in vast digital data infrastructures, often involving commercial businesses from edtech startups to global cloud operators. As such, an emphasis on future skills to work with AI is likely to result in highly contested technological transformations to sites and practices of education.

Finally, there is a key implication in terms of how the project positions students as the beneficiaries of future skills. As an organization dedicated to economic development, the OECD has long focused on education as an enabler of ‘human capital.’ It has even framed so-called ‘pandemic learning loss’ in terms of measurable human capital deficits as defined by economists. In this framing, educated or skilled learners represent future value to the economies where they will work; they are assets that governments invest in through education systems, and the OECD measures the effectiveness of those investments through its large-scale assessments.

The AI and future skills program doesn’t just focus on ‘human capital,’ however. It focuses on human-computer interaction as the basis for economic and social development. By seeking to complement human and AI capabilities, the OECD is establishing a new kind of ‘human-computer interaction capital’ as the aim of education systems. Its plan to inform policymakers about how to optimize education systems to produce skilled workers to complement AI capabilities appears to make the pursuit of HCI capital a central priority for government policy, and it potentially stands to make HCI capital into a core purpose of education. Students may be positioned as human components in these new HCI capital calculations, with their value worked out in terms of their measurable complementarity with machine learners.


Counting learning losses

‘Learning loss’ is an urgent political concern based on complex measurement systems. Photo by Nguyen Dang Hoang Nhu on Unsplash

The idea that young people have ‘lost learning’ as a result of disruptions to their education during the Covid-19 pandemic has become accepted as common knowledge. ‘Learning loss’ is the subject of numerous large-scale studies, features prominently in the media, and is driving school ‘catch-up’ policies and funding schemes in many countries. Yet for all its traction, there seems less attention to the specific but varied ways that learning loss is calculated. Learning loss matters because it has been conceptualized and counted in particular ways as an urgent educational concern, and is animating public anxiety, commercial marketing, and political action.

Clearly educational disruptions will have affected young people in complex and highly differentiated ways. My interest here is not in negating the effects of being out of school or critiquing various recovery efforts. It’s to take a first run at examining learning loss as a concept, based on particular forms of measurement and quantification, that is now driving education policy strategies and school interventions in countries around the world. Three different ways of calculating learning loss stand out. First is the longer psychometric history of statistical learning loss research, second its commercialization by the testing industry, and third the reframing of learning loss through econometric forms of analysis by economists.

The measurement systems that enumerate learning loss are, in several cases, contradictory, contested, and incompatible with one another. ‘Learning loss’ may therefore be an incoherent concept, better understood as multiple ‘learning losses’ based on their own measuring systems.   

Psychometric set-ups

Learning loss research is usually traced back more than 40 years to the influential publication of Summer Learning and the Effects of Schooling by Barbara Heyns in 1978. The book reported on a major statistical study of the cognitive development of 3000 children while not in school over the summer, using the summer holiday as a ‘natural experimental situation’ for psychometric analysis. It found that children from lower socioeconomic groups tend to learn less during the summer, or even experience a measurable loss in achievement.

These initial findings have seemingly been confirmed by subsequent studies, which have generally supported two major conclusions: (1) the achievement gap by family SES traces substantially to unequal learning opportunities in children’s home and community environments; and (2) the experience of schooling tends to offset the unequalizing press of children’s out-of-school learning environments. Since the very beginning of learning loss studies, then, the emphasis has been on the deployment of psychometric tests of the cognitive development of children not in school, the lower achievement of low-SES students in particular, and the compensatory role that schools play in mitigating the unequalizing effects of low-SES family, home and community settings.

However, even researchers formerly sympathetic to the concept of learning loss have begun challenging some of these findings and their underlying methodologies. In 2019, the learning loss researcher Paul T. von Hippel expressed serious doubt about the reliability and replicability of such studies. He identified serious flaws in learning loss tests, lack of replicability of classic findings, and considerable contradiction with other well-founded research on educational inequalities.

Perhaps most urgently, he noted that a significant change in psychometric test scoring methods—from paper and pen surveys to ‘a more computationally intensive method known as item response theory’ (IRT) in the mid-1980s—completely reversed the original findings of the early 1980s. With IRT, learning loss seemed to fade away. The original psychometric method ‘shaped classic findings on summer learning loss’, but the newer item-response theory method produced a very different ‘mental image of summer learning’.
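For readers unfamiliar with what the shift to item response theory involves, the toy one-parameter (Rasch) model below gives a flavour of IRT scoring; it is purely illustrative and is not the scoring model used in the surveys von Hippel analyses.

```python
import numpy as np

# Toy item response theory (Rasch) model: the probability that a child
# answers an item correctly depends on the gap between their ability
# (theta) and the item's difficulty (b):
#   P(correct) = 1 / (1 + exp(-(theta - b)))
def p_correct(theta, difficulty):
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

item_difficulties = np.array([-1.0, 0.0, 1.0, 2.0])  # easy -> hard items

# Classical scoring simply counts correct answers; IRT instead places
# children on a latent ability scale. Because the mapping between raw
# scores and theta is nonlinear, a summer's change measured on the theta
# scale can look quite different from the same change measured in raw
# points, which is one reason the change of method matters.
for theta in (-0.5, 0.5):
    expected_raw = p_correct(theta, item_difficulties).sum()
    print(f"ability theta={theta:+.1f} -> expected raw score "
          f"{expected_raw:.2f} out of {len(item_difficulties)}")
```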

Moreover, noted von Hippel, even modern tests using the same IRT method produced contradictory results. He reported on a comparison of the Measures of Academic Progress (MAP) test developed by the testing organization Northwest Evaluation Association (NWEA), and a test developed for the Early Childhood Longitudinal Study. The latter found that ‘summer learning loss is trivial’, but the NWEA MAP test reported that ‘summer learning loss is much more serious’. So learning loss, then, appears at least in part to be an artefact of the particular psychometric set-up constructed to measure it, with results that appear contradictory. This is not just a historical problem with underdeveloped psychometric instruments, but persists in the computerized IRT systems that were deployed to measure learning loss as the Covid-19 pandemic set in during 2020.

Commercializing learning loss

Here it is important to note that NWEA is among the most visible of testing organizations producing data about learning loss during the pandemic. Even before the onset of Covid-19 disruptions, NWEA was using data on millions of US students who had taken a MAP Growth assessment to measure summer learning loss. Subsequently, the NWEA MAP Growth test has been a major source of data about learning loss in the US, alongside various assessments and meta-analyses from commercial testing companies such as Illuminate, Curriculum Associates, and Renaissance, and from the consultancy McKinsey and Company.

Peter Greene has called these tests ‘fake science’, arguing that ‘virtually all the numbers being used to “compute” learning loss are made up’. In part that is because the tests only measure reading and numeracy, so don’t account for anything else we might think of as ‘learning’, and in part because the early-wave results were primarily projections based on recalculating past data from completely different pre-pandemic contexts. Despite their limitations as systems for measuring learning, the cumulative results of learning loss tests have led to widespread media coverage, parental alarm, and well-funded policy interventions. In the US, for example, states are spending approximately $6.5 billion addressing learning loss.

Learning loss results based on Renaissance Star reading and numeracy assessments for the Department for Education

In England, meanwhile, the Department for Education commissioned the commercial assessment company Renaissance Learning and the Education Policy Institute to produce a national study of learning loss. Utilizing data from reading and mathematics assessments of over a million pupils who took a Renaissance Star test in autumn 2020, the findings were then published by the Department for Education as an official government document. An update report in 2021, published on the same government webpage, linked the Renaissance Star results to the National Pupil Database. This arrangement exemplifies both the ways commercial testing companies have generated business from measuring learning loss, and their capacity to shape and inform government knowledge of the problem–as well as the persistent use of reading and numeracy results as proximal evidence of deficiencies in learning.

Moreover, learning loss has become a commercial opportunity not just for testing companies delivering the tests, but for the wider edtech and educational resources industry seeking to market learning ‘catch-up’ solutions to schools and families. ‘The marketing of learning loss’, Akil Bello has argued, ‘has been fairly effective in getting money allocated that will almost certainly end up benefiting the industry that coined the phrase. Ostensibly, learning loss is a term that sprung from educational research that identified and quantified an effect of pandemic-related disruptions on schools and learning. In actuality, it’s the result of campaigns by test publishers and Wall Street consultants’.

While not entirely true—learning loss has a longer academic history as we’ve seen—it seems accurate to say the concept has been actively reframed from its initial usage in the identification of summer loss. Rather than relying on psychometric instruments to assess cognitive development, it has now been narrowed to reading and numeracy assessments. What was once a paper and pen psychometric survey in the 1980s has now become a commercial industry in computerized testing and the production of policy-influencing data. But this is not the only reframing that learning loss has experienced, as the measurements produced by the assessment industry have been paralleled by the development of alternative measurements by economists working for large international organizations.

Economic hysteresis

While early learning loss studies were based in psychometric research in localized school district settings, and the assessment industry has focused on national-level results in reading and numeracy, other recent large-scale studies of learning loss have begun taking a more econometric approach, at national and even global scales, derived from the disciplinary apparatus of economics and labour market analysis.

Influential international organizations such as the OECD and World Bank, for example, have promoted and published econometric research calculating and simulating the economic impacts of learning loss. They framed learning loss as predicted skills deficits caused by reduced time in school, which would result in weaker workforce capacity, reduced income for individuals, overall ‘human capital’ deficiencies for nations, and thereby reduced gross domestic product. The World Bank team calculated this would cost the global economy $11 trillion, while the economists writing for the OECD predicted ‘the impact could optimistically be 1.5% lower GDP throughout the remainder of the century and proportionately even lower if education systems are slow to return to prior levels of performance. These losses will be permanent unless the schools return to better performance levels than those in 2019’.

These gloomy econometric calculations are based on particular economic concepts and practices. As another OECD publication framed it, learning loss represents a kind of ‘hysteresis effect’, usually studied by labour economists as a measure of the long-term, persistent economic impacts of unemployment or other events in the economy. Framing education in terms of economic hysteresis assumes that learning loss is a causal determinant of long-term economic loss, and that mitigating this problem should be a major policy preoccupation for governments seeking to upskill human capital for long-term GDP growth. Christian Ydesen has recently noted that the OECD calculations about human capital deficits caused by learning loss are already directly influencing national policymakers and shaping education policies.

It’s obvious enough why the huge multitrillion dollar deficit projections of the World Bank and OECD would alarm governments and galvanize remedial policy interventions in education. But the question remains how these massive numbers were produced. My following notes on this are motivated by talks at the excellent recent conference Quantifying the World, especially a keynote presentation by the economic historian Mary Morgan. Morgan examined ‘umbrella concepts’ used by economists, such as ‘poverty’, ‘development’ and ‘national income’, and the ways each incorporates a set of disparate elements, data sets, and measurement systems.

The production of numerical measurements, Morgan argued, is what gives these umbrella concepts their power, particularly to be used for political action. Poverty, for example, has to be assembled from a wide range of measurements into a ‘group data set’. Or, as Morgan has written elsewhere, ‘the data on population growth of a society consist of individuals, who can be counted in a simple aggregate whole’, but for economists ‘will more likely be found in data series divided by occupational classes, or age cohorts, or regional spaces’. Her interest is in ‘the kinds of measuring systems involved in the construction of the group data set’.

Figures published by the OECD on the economic impacts of learning loss on G20 countries

Learning loss, perhaps, can be considered an umbrella concept that depends on the construction of a group data set, while that group data set too relies on a particular measuring system that aligns disparate data into the ‘whole’. For example, then, if we look at the OECD report ‘The Economic Impacts of Learning Loss’, it is based on a wide range of elements, data sets and measuring systems. Its authors are Eric Hanushek and Ludger Woessmann, both economists and fellows of the conservative, free market public policy think tank the Hoover Institution based at Stanford University. The projections in the report of 1.5-3% lower GDP for the rest of the century represent the ‘group data set’ in their analysis. But this consists of disparate data sets, which include: estimates of hours per day spent learning; full days of learning lost by country; assessments of the association between skills learned and occupational income; correlational analyses of educational attainment and income; effects of lost time in school on development of cognitive skills; potential deficits in development of socio-emotional skills; and how all these are reflected in standardized test scores.

It’s instructive looking at some excerpts from the report:

Consistent with the attention on learning loss, the analysis here focuses on the impact of greater cognitive skills as measured by standard tests on a student’s future labour-market opportunities. …  A rough rule of thumb, found from comparisons of learning on tests designed to track performance over time, is that students on average learn about one third of a standard deviation per school year. Accordingly, for example, the loss of one third of a school year of learning would correspond to about 11% of a standard deviation of lost test results (i.e., 1/3 x 1/3). … In order to understand the economic losses from school closures, this analysis uses the estimated relationship between standard deviations in test scores and individual incomes … based on data from OECD’s Survey of Adult Skills (PIAAC), the so-called “Adult PISA” conducted by the OECD between 2011 and 2015, which surveyed the literacy and numeracy skills of a representative sample of the population aged 16 to 65. It then relates labour-market incomes to test scores (and other factors) across the 32 mostly high-income countries that participated in the PIAAC survey.

As this excerpt shows, the way learning loss is constructed as an umbrella concept and a whole data set by the economists working for the OECD involves the aggregation of many disparate factors, measures and econometric practices. These include past OECD data, basic assumptions that learning is synonymous with ‘cognitive skills’ and objectively measurable through standardized tests, and a host of specific measuring systems. Data projections are constructed from all these elements to project the economic costs of learning loss for individual G20 countries, which are then calculated together as ‘aggregate losses in GDP across G20 nations’ using the World Development Indicators database from the World Bank as the base source for the report’s high-level predictions.
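
To make that measurement chain more concrete, here is a minimal illustrative sketch in Python of the kind of arithmetic the report describes: lost schooling time is converted into standard deviations of test scores, then into reduced earnings, then into a permanent (‘hysteresis’) reduction in long-run GDP. The function names and all parameter values (such as a 10% earnings return per standard deviation of skills) are assumptions made purely for illustration, not a reconstruction of Hanushek and Woessmann’s actual country-level model.

# Illustrative sketch only: a toy version of the measurement chain described in
# the OECD report (lost schooling -> lost test-score SDs -> lower earnings ->
# lower long-run GDP). All figures are placeholder assumptions, not the
# report's actual data or model.

def lost_test_score_sd(share_of_school_year_lost, learning_per_year_sd=1/3):
    # Learning lost, in standard deviations of test scores. Losing a third of a
    # school year gives 1/3 x 1/3, roughly 0.11 SD (the 'rule of thumb' quoted above).
    return share_of_school_year_lost * learning_per_year_sd

def lifetime_earnings_loss(sd_lost, return_per_sd=0.10):
    # Assumed proportional reduction in lifetime earnings per SD of skills; the
    # report estimates this kind of relationship from PIAAC data, 10% here is invented.
    return sd_lost * return_per_sd

def long_run_gdp_loss(earnings_loss_per_worker, share_of_workforce_affected=1.0):
    # Toy 'hysteresis' step: treat the cohort's earnings deficit as a permanent
    # reduction in the level of future GDP.
    return earnings_loss_per_worker * share_of_workforce_affected

sd = lost_test_score_sd(1/3)
earnings = lifetime_earnings_loss(sd)
gdp = long_run_gdp_loss(earnings)
print(f"{sd:.2f} SD lost -> {earnings:.1%} lower earnings -> {gdp:.1%} lower long-run GDP")

Even in this toy form, the chain makes visible how much of the final number depends on the assumed conversion rates at each step.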

It is on the basis of this ‘whole’ calculation of learning loss—framed in terms of economic hysteresis as a long-term threat to GDP—that policymakers and politicians have begun to take action. ‘How we slice up the economic world, count and refuse to count, or aggregate, are contingent and evolving historical conventions’, argues Marion Fourcade. ‘Change the convention … and the picture of economic reality changes, too—sometimes dramatically’. While there may well be other ways of assessing and categorizing learning loss, it is the specific econometric assembly of statistical practices, conventions, assumptions, and big numbers that has made learning loss into part of ‘economic reality’ and into a powerful catalyst of political intervention.

Counting the costs of learning loss calculations

As a final reflection, I want to think along with Mary Morgan’s presentation on umbrella concepts for a moment longer. As the three examples I’ve sketchily outlined here indicate, learning loss can’t be understood as a ‘whole’ without disaggregating it into its disparate elements and the various measurement practices and conventions they rely on. I’ve counted only three ways of measuring learning loss here—the original psychometric studies; testing companies’ assessments of reading and numeracy; and econometric calculations of ‘hysteresis effects’ in the economy—but even these are made of multiple parts, and are based on longer histories of measurement that are contested, incompatible with one another, sometimes contradictory, and incoherent when bundled together.

As Morgan said at the Quantifying the World conference, ‘the difficulties—in choosing what elements exactly to measure, in valuing those elements, and in combining numbers for those many elements crowded under these umbrella terms—raise questions about the representing power of the numbers, and so their integrity as good measurements’.

Similar difficulties in combining the numbers that constitute learning loss might also raise questions about their power to represent the complex effects of Covid disruptions on students, and their integrity to produce meaningful knowledge for government. As my very preliminary notes above suggest, there is no single thing called learning loss, but multiple conceptual ‘learning losses’, each based on its own measurement system. There are social lives behind the methods of learning loss.

Regardless of the incoherence of the concept, learning loss will continue to exert effects on educational policies, school practices and students. It will buoy up industries, and continue to be the subject of research programs in diverse disciplines and across different sites of knowledge production, from universities to think tanks, consultancies, and international testing organizations. Learning loss may come at considerable cost to education, despite its contradictions and incoherence, by diverting pedagogic attention to ‘catch-up’ programs, allocating funds to external agencies, and attracting political bodies to focus on mitigation measures above other educational priorities.


Nudging assets

The acquisition of learning management system Blackboard has opened up opportunities for the new company to generate value from integrating data and nudging student behaviour. Photo by Annie Spratt on Unsplash

The acquisition of the global education platform Blackboard by Anthology has brought the mundane Learning Management System back to attention. While full details of the deal remain to be seen, and it won’t be closed until the end of the year, it surfaces two important and interlocking issues. One is the increasing centrality of huge data integrations to the plans of education technology vendors, and the second is the seeming attractiveness of the data-driven approach to edtech financiers.

Primarily, the acquisition of Blackboard by Anthology centres on business interests, according to edtech consultant Phil Hill, who notes that Blackboard’s owners have been seeking to sell the company for three years. The purpose of the deal, Hill argues, ‘is a revenue growth opportunity driven by cross-selling, international growth, and the opportunities to combine products and create new value, particularly at the data level.’ This approach, Hill further suggests, makes sense on the ‘supply side’ for vendors and investors who see value in combining data and integrating systems, if less so on the ‘demand side’ of universities and schools, whose primary concerns are with usability.

There are two things going on here worth questioning a little further. First, what exactly are Blackboard/Anthology hoping to achieve by combining data, and second, why is this attractive to investors? Based on some recent company blog posts from Blackboard, the answer to the first question appears to be about the capacity for ‘nudging’ students towards better outcomes through ‘personalized experiences’ based on data analytics, and the second question might be addressed by understanding those data as ‘assets’ with expected future earnings power for their owners. This post is an initial attempt to explore those issues and their interrelationship.

Nudging

One of the key features of the Blackboard/Anthology announcement was that it would enable much greater integration of the existing software systems of the two companies, including the Learning Management System, community engagement, student success, student information system and enterprise resource planning products. ‘Combining the two companies will create the most comprehensive and modern EdTech ecosystem at a global scale for education’, wrote Bill Ballhaus, CEO, president and chair of Blackboard, in a company blog post. ‘It will enable us to break down data silos, and surface deeper insights about the learner so we can deliver unmatched personalized experiences across the full learner lifecycle and drive better outcomes’.

The idea of breaking down ‘data silos’ and integrating data systems is part of a longer Blackboard strategy of making the most of cloud computing for cross-platform interoperability. Blackboard migrated most of its services to Amazon Web Services starting in 2015, with reportedly significant effects on how it could make use of the data collected by its LMS. ‘Our new analytics offering, Blackboard Data, is a good example where we are leveraging AWS technologies to build a platform that provides data-driven insight across all our solutions’, Blackboard reported in 2017. These insights will now be generated across the entire Blackboard/Anthology portfolio, raising data privacy and protection concerns that Blackboard was quick to address just a day after the announced acquisition.

Beyond data privacy issues, though, the stated purpose of integrating data is to enact ‘Blackboard’s vision of personalizing experiences’. Writing earlier in the summer, Blackboard’s CEO Bill Ballhaus set out the company’s longer-term vision for personalizing learning experiences. Drawing on examples of online shopping, healthcare and entertainment, Ballhaus argued that a ‘critical mass of data powers proactive nudges’ based on highly granular personal data profiles. Education, however, had not yet ‘kept pace with the shift to customized experiences that other industries achieved’. This, he said, had now changed with the disruptions of the previous year.

‘The massive shift to online learning driven by the COVID-19 global pandemic enabled continuity of education in the near term, while opening the door for education to move forward on a journey toward more personalized experiences’, Ballhaus argued. ‘We’ve had our sights set on the future for the past few years and have the ability to securely harness data, with robust privacy protections, from across our ecosystem of EdTech solutions with the specific intent of enabling personalized experiences to drive improved outcomes’.

The discourse of ‘nudges’ as the central technique of personalized learning runs throughout this vision. ‘Students need nudges’ to reach better outcomes, Ballhaus continued, with ‘the 25 billion weekly interactions in our learning management and virtual classroom systems’ enabling Blackboard to operationalize such a nudge-based approach to personalized learning.
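
To give a concrete sense of what a data-fuelled nudge might look like in practice, here is a purely hypothetical Python sketch of a simple engagement-triggered prompt of the kind an LMS analytics layer could run. The data fields, weights, threshold and message are all invented for illustration; nothing here describes Blackboard’s actual analytics.

# Hypothetical sketch of a simple data-driven 'nudge' rule. The threshold, data
# fields, weights and message are invented; they are not Blackboard's actual logic.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WeeklyActivity:
    # Hypothetical interaction counts for one student in one week.
    student_id: str
    logins: int
    pages_viewed: int
    assignments_submitted: int

def engagement_score(activity: WeeklyActivity) -> float:
    # Arbitrary weighted sum of interaction counts standing in for a
    # 'personalized' engagement metric.
    return (0.5 * activity.logins
            + 0.1 * activity.pages_viewed
            + 2.0 * activity.assignments_submitted)

def nudge(activity: WeeklyActivity, threshold: float = 5.0) -> Optional[str]:
    # Below-threshold engagement triggers an automated behavioural prompt.
    if engagement_score(activity) < threshold:
        return f"Reminder for {activity.student_id}: you have coursework due this week."
    return None

print(nudge(WeeklyActivity("s-001", logins=1, pages_viewed=4, assignments_submitted=0)))

Scaled across billions of weekly interactions, it is rules and models of this general kind, rather than this toy one, that would constitute the nudging infrastructures discussed below.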

By emphasizing student nudges fuelled by masses of data as the basis of personalized learning, Blackboard has tapped into the logics of the psychological field of behavioural economics and its political uptake in the form of behavioural governance. Mark Whitehead and coauthors describe how behavioural governance has proliferated across public policy in many countries in recent years—especially the UK and US—through the application of nudge strategies. This has been amplified by digital ‘hypernudge’ techniques based on personal data profiles, which, as Karen Yeung argues, ‘are extremely powerful and potent due to their networked, continuously updated, dynamic and pervasive nature’.

So, the business plan behind the Blackboard/Anthology merger appears to be to enact a form of behavioural governance in digital education, operationalizing personalized hypernudges within the architectures of vast edtech ecosystems. While such a form of ‘machine behaviourism’ has existed in imaginary form for some years, it may now materialize in the seemingly mundane machinery of the learning management systems used by institutions across the globe. And that potential capacity for nudging also appears to be the source of expected future value for financial backers.

Assets

While the Blackboard/Anthology deal has been presented by the two companies as a merger, and interpreted by most as an acquisition of the former by the latter, in reality this is a deal between their financial backers and owners. Anthology is majority owned by Veritas Capital (a private equity firm investing in products and services to government and commercial customers), with Leeds Equity Partners (a private equity firm focused on investments in the Knowledge Industries) as a minority owner, while Blackboard is owned by Providence Equity Partners (a global private equity investment firm focused on media, communications, education, software and services investments). Veritas is providing new funding and retaining majority shareholder status, with both Leeds and Providence as minority shareholders following the acquisition.

The exact value of the deal remains unknown—Phil Hill has suggested it may be in the region of $3bn—but clearly these three private equity firms see prospects for value creation in the future. To interpret this, we need to understand some of the logic of investment. Recent economic sociology work can help here, particularly the concepts of capitalization and assetization.

As Fabian Muniesa and colleagues phrase it, capitalization refers to the processes and practices involved in ‘valuing something’ in terms of ‘the expected future monetary return from investing in it’. Capitalization, they continue, ‘characterizes the reasoning of the banker, the financier and the entrepreneur’, and calculating future expected returns is central to any form of investment. Capitalization then also depends on seeing something as an asset with future value, or making it into one. Kean Birch and Muniesa define an asset as any resource controlled by its owner as a source of expected future benefits, and ‘assetization’ thus as the processes involved in making that resource into a future revenue stream. Transforming something into an asset is therefore central to capitalization. 
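
As a toy illustration of the capitalization logic, the sketch below prices an asset as the discounted sum of the revenues its owner expects it to generate, which is the standard present-value reasoning of investors. The cash flows and discount rate are invented numbers with no connection to the actual Blackboard/Anthology figures.

# Toy illustration of capitalization: valuing an asset by the expected future
# monetary returns from holding it. All figures are invented for illustration.

def present_value(expected_annual_revenues, discount_rate=0.10):
    # Discount each year's expected revenue back to today and sum the results.
    return sum(
        revenue / (1 + discount_rate) ** year
        for year, revenue in enumerate(expected_annual_revenues, start=1)
    )

# An imagined data-analytics product expected to earn growing revenues over five years.
expected = [10, 12, 15, 18, 22]  # arbitrary currency units
print(f"Capitalized value today: {present_value(expected):.1f}")

On this logic, anything that raises the expected future revenues of a resource, such as data that can power new analytics products, raises its value as an asset today.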

Capitalization and assetization may be useful concepts for exploring the Blackboard/Anthology deal. Clearly, Veritas, Leeds and Providence as owners and shareholders are seeking future value from their assets: their entire business consists of capitalizing the assets they hold, in expectation of a return on investment. In part, the platforms that Blackboard and Anthology will combine are the assets. It is expected that more customers will purchase from them through cross-selling compatible products (e.g. by integrating the Blackboard LMS with Anthology student information systems and making them interoperable for ease of use).

But given the prominence in the deal announcement and other posts of ‘breaking down data silos’ and ‘the possibilities of delivering personalized experiences fueled by data through our combination’, it seems likely that there is a process of assetizing the data themselves going on here. If the platforms and services themselves have future value, that is dependent upon the 25 billion weekly interactions of users as a new source of value creation. How are data made valuable?

In a recent study, Birch and coauthors highlight how ‘Big Tech’ companies transform personal digital data into assets with future earnings power both for the companies and their investors. They argue that this assetization of user data occurs through the ‘transformation of personal data into user metrics that are measurable and legible to Big Tech and other political-economic actors (e.g., investors)’. In similar ways, then, the new Big EdTech company emerging from the combination of Blackboard and Anthology aims to transform student data into measurable and legible forms for its investors. The 25 billion weekly interactions leave traces which can be made valuable.

As Janja Komljenovic has recently argued, ‘the digital traces that students and staff leave behind when interacting with digital platforms’ can be ‘made valuable by processing data into intelligence for either improving an existing product or service, or creating a new one, selling data-based products (such as learning analytics or other data intelligence on students), various automated matching services, automated tailored advertising, exposure to the audience, and so on’. The value comes not from the data themselves, but ‘from their predictive power and inducing behaviour in others’. In other words, as Komljenovic elaborates, ‘what becomes valuable in digital education is power over the direction of student and staff teaching, learning and work patterns. It is first about the power over calculating predictions and thus performing future, and second, about tailoring experience and nudging behaviour’.

In this particular sense, then, we can see how the objective of ‘nudging’ students through data-fuelled personalized experience may be a core part of the assetization process involved in the merger of Blackboard and Anthology. The platforms and services themselves as marketable products for institutions to pay for, or the weekly 25 billion data points of interaction with them, are not the only sources of expected value. Instead, the predictive capacity to shape education by personalizing experience and nudging student behaviours appears to be the key to unlocking future revenue streams.

Assetizing the nudge and nudging the asset in Big EdTech

The Blackboard/Anthology deal seems to foreground two complementary trends in the edtech sector. The first is that the ‘nudge’ has become a source of expected future value to asset owners. Personalized learning via digital nudges is clearly a core part of the expected value that Blackboard will return to its new private equity owners and shareholders. This is assetizing the nudge.

The second is that student data have become the focus of the nudge, with digital nudges expected to improve student outcomes. In this sense, the masses of student data held by Blackboard/Anthology are being transformed into assets too. And if we understand those data to produce ‘data subjects’ or informational identities of a student, then we might conceivably think of students themselves as assets whose value can be increased through predictive nudging. This is nudging the asset, although it’s too early to see quite how this will work out in practice at the new company or in the institutions that use its services.

Maybe later details on the deal will help us clarify the precise ways assetization and nudging complement one another in an emerging environment of Big EdTech deals and integrations. It is important for critical edtech research to get up-close to these developments at the intersections of nudging and assetization, as practical techniques of behavioural governance and capitalization, even in the most mundane places like the LMS.


New biological data and knowledge in education

Research centres and laboratories have begun conducting studies to record and respond to the biological aspects of learning. Photo by Petri Heiskanen on Unsplash

Novel sources of data about the biological processes involved in learning are beginning to surface in research on education. As the sciences of the human body have been transformed by advances in computing power and data analysis, researchers have begun explaining learning and outcomes such as school attainment and achievement in terms of their embodied underpinnings. These new approaches, however, are generating controversy, and demand up-close social science analysis to understand what processes of knowledge production are involved, as well as how they are being received in public, academic and political debates.

Late last year, the Leverhulme Trust awarded us a research project grant to study the rise of data-intensive biology in education. As we now kick off the project, I’m really pleased to be working with a great interdisciplinary team that includes Jessica Pykett, a social and political geographer at Birmingham University; Martyn Pickersgill, a sociologist of science and medicine at Edinburgh; and Dimitra Kotouza, a political sociologist joining us at Edinburgh straight from an excellent previous project on the policy networks, data practices and market-making involved in addressing the ‘mental health crisis’ in UK higher education.

The project focuses on three domains of data-intensive biology in education:

  • the emergence of ‘big data’ genetics in the shape of ‘genome-wide association studies’ utilizing molecular techniques and bioinformatics technologies including biobanks, microarray chips, and laboratory robot scanners to identify complex ‘polygenic patterns’ associated with educational outcomes (see the schematic polygenic scoring sketch just after this list)
  • neurotechnology development in the brain sciences, such as wearable electroencephalography (EEG) headsets, neuro-imaging, and brain-computer interfaces with neurofeedback capacities, and their application in school-based experiments
  • rapid advances in the development and utilization of ‘affect-aware’ artificial intelligence technologies, such as voice interfaces and facial emotion detection for interactive, personalized learning, that are informed by knowledge and practice in the psychological and cognitive sciences
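
For readers unfamiliar with the terminology in the first bullet, here is a schematic Python sketch of how a polygenic score is computed: a weighted sum of the ‘effect’ alleles an individual carries at each measured genetic variant, with weights taken from GWAS effect estimates. The SNP identifiers, weights and genotypes below are invented for illustration; real educational-attainment scores aggregate thousands of SNPs estimated from samples of millions of people.

# Schematic illustration of a polygenic score: sum of (GWAS effect size x allele
# count) across measured SNPs. All identifiers and numbers are invented.

gwas_effect_sizes = {      # hypothetical per-SNP effect estimates from a GWAS
    "rs0000001": 0.021,
    "rs0000002": -0.013,
    "rs0000003": 0.008,
}

individual_genotype = {    # hypothetical counts (0, 1 or 2) of the effect allele
    "rs0000001": 2,
    "rs0000002": 1,
    "rs0000003": 0,
}

def polygenic_score(effects, genotype):
    # Weighted sum over all SNPs measured for this individual.
    return sum(beta * genotype.get(snp, 0) for snp, beta in effects.items())

print(f"Toy polygenic score: {polygenic_score(gwas_effect_sizes, individual_genotype):.3f}")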

We are planning to track these developments and their connections with cognate advances in the learning sciences, AI in education, and recent proposals around ‘learning engineering’ and ‘precision education’. Across this range of activities, we see a concerted effort to employ data-scientific technologies, methodologies and practices to record biological data related to learning and education, and in some cases to develop responses or interventions based on it. We’re only just starting the project with the full team in place, but a couple of very recent developments help exemplify why we consider the project important and timely.

Controversy over the genetics of education

On the very same day our Leverhulme Trust grant arrived, 6 September, The New Yorker published a 10,000 word article entitled ‘Can Progressives Be Convinced That Genetics Matters?’ Primarily a long-form profile of the psychology professor Paige Harden, the article describes the long and controversial history of behaviour genetics, a field in which Harden has become a leading voice—as signified by the forthcoming publication of her book The Genetic Lottery: Why DNA Matters for Social Equality.

The main thrust of the article is about Harden’s attempts to develop a ‘middle ground’ between right wing genetic determinists and left wing progressives. She is described in the piece as a ‘left hereditarian’ who acknowledges the role played by biology in social outcomes such as educational attainment, but also the inseparability of such outcomes from social and environmental factors (‘gene x environment bidirectionality’). The article is primarily focused on the politics of behaviour genetics, which has long been a major field of controversy even within the scientific disciplines of genetics due to its ‘ugly history’ in eugenics and scientific racism.

Judging from reactions on Twitter among genetics researchers and educators, these are problems—both disciplinary and political—which are more complex and intractable than either the article or the science lets on. Concerns remain, despite optimistic hopes of a ‘middle ground’, that new molecular behaviour genetics insights will be mobilized and reframed by ideologically-motivated groups to reinforce dangerous genetically-reductionist notions of race, gender and class.

The New Yorker profile also notes that recent developments in genome-wide association studies (GWAS) have begun producing significant findings about the connections between genes and educational outcomes. These are ‘big data’ endeavours using samples of over a million subjects and complex bioinformatics infrastructures of data analysis, and are part of a burgeoning field known as ‘sociogenomics’. Again, many of these sociogenomics studies appear informed by the left hereditarian perspective—seeing complex, biological polygenic patterns related to educational outcomes as operating bidirectionally with environmental factors, and arguing that genetically-informed knowledge can lead to better, social justice-oriented outcomes.

But educational GWAS research and polygenic scoring informed by a sociogenomics paradigm is not itself a settled science. As I began illustrating in some recent preparatory research for this project, the scientific apparatus of a data-intensive, bioinformatics-driven approach to education remains in development, is producing very different forms of interpretation, and is leading to disagreement over its pedagogic and policy implications. Even from within the field, a behaviour genetics approach to education based on big data analysis remains a fraught enterprise. Outside the field, it is prone to being appropriated to support right-wing ideological positions and used as fuel to attack so-called ‘progressives’ and their ‘environmental determinism’.

The controversy over behaviour genetics and education is not new, as Aaron Panofsky has shown. As part of a long-running series of critical studies and publications on behaviour genetics, he analyzed its involvement in promoting ideas about genetically-informed education reform. Focusing in particular on the work of behaviour geneticist Robert Plomin, Panofsky notes that Plomin’s vision of genetically-informed education utilizing high-tech molecular genomics technologies represents a form of ‘precision education’ modelled on ‘precision medicine’ in the biomedical field. In precision medicine, doctors ‘could use genetic and biomarker information to divide individuals into distinct diagnosis and treatment categories’. A precision education approach would ostensibly use similar information to support ‘personalization’ according to students’ ‘different genetic learning predispositions’.

According to Panofsky, however, precision medicine ‘represents an approach to health and healing very much in line with our neoliberal political times’. It focuses, he argues, ‘toward “me medicine” that seeks to improve health through high-tech, expensive, privatized, individualized, and decontextualized intervention and away from “we medicine” that aims to improve health and illness in the broad public through focusing on widely available interventions and targeting health’s social determinants’.

Thus, for Panofsky, Plomin’s vision of precision education raises the risk that ‘while genetically personalized education is represented as a tool to help educate everyone, it represents more of a “me” approach than a “we” approach’. He argues it risks deflecting attention away from other educational problems and their social determinants–such as school funding, policy instability, workforce quality and labour relations, and especially underlying inequalities and poverty–by focusing instead on the identification of individuals’ biological traits and the cultivation of ‘each individual’s genetic potential’.

Overall, The New Yorker article helps illustrate the controversies that genetics research in education may continue to generate in coming years. It also shows how advances in data-intensive bioinformatics technologies and sociogenomics theorizing are already beginning to play a role in knowledge production on educational outcomes. As the high-profile publication of Harden’s The Genetic Lottery indicates, these advances and arguments are likely to continue, albeit perhaps in different forms and with different motivations. Robert Plomin’s team, for example, argues that ‘molecular genetic research, particularly recent cutting-edge advances in DNA-based methods, has furthered our knowledge and understanding of cognitive ability, academic performance and their association’, and will ‘help the field of education to move towards a more comprehensive, biologically oriented model of individual differences in cognitive ability and learning’.

A key part of our project will involve tracking these unfolding developments in biologically oriented education, their historical threads, technical and methodological practices, and their ethics and controversies.

Engineering student-AI empathy

The second development is related to ‘affect-aware’ technologies to gauge and respond to student emotional states. Recently, the National Science Foundation awarded almost US$20m to a new research institute called the National AI Institute for Student-AI Teaming (iSAT), as part of its huge National AI Research Institutes program.  One of three AI Institutes dedicated to education, iSAT is focused on ‘turning AI from intelligent tools to social, collaborative partners in the classroom’. According to its entry on the NSF grants database, it spans the ‘computing, learning, cognitive and affective sciences’ and ‘advances multimodal processing, natural language understanding, affective computing, and knowledge representation’ for ‘AI-enabled pedagogies’.

The iSAT vision of ‘student-AI teaming’—a form of human-machine collaborative learning—is based on ‘train[ing] our AI on diverse speech patterns, facial expressions, eye movements and gestures from real-world classrooms’. To this end it has recruited two school districts, totalling around 5000 students, to train its AI on their speech, gestures, facial and eye movements. The existing publications of iSAT are instructive of its planned outcomes. They include ‘interactive robot tutors’, ‘embodied multimodal agents’, and an ‘emotionally responsive avatar with dynamic facial expressions’.

The last of those iSAT examples, the ‘emotionally responsive avatar’, is based on the application of ‘emotion AI’ technology from Affectiva, a commercial spin-out of MIT’s Affective Computing lab. The lead investigator of iSAT was formerly based at the lab, and has an extensive publication record focused on such technologies as ‘affect-aware autotutors’ and ‘emotion learning analytics’. In this sense, iSAT represents the advance of a particular branch of learning analytics and AI in education, supported by federal science funding and the approval of the leading US science agency.

Emotion AI-based approaches in education, like molecular behaviour genetics, are deeply controversial. Andrew McStay describes emotion AI as ‘automated industrial psychology’ and a form of ‘empathic media’ that takes ‘autonomic biological signals’ captured through biosensors as proxies for a variety of human affective processes and behaviours. Empathic media, he argues, aims to make ‘emotional life machine-readable, and to control, engineer, reshape and modulate human behaviour’. This biologization and industrialization of the emotions for data capture by computers therefore raises major issues of privacy and human rights. Luke Stark and Jesse Hoey have argued that ‘The ethics of affect/emotion recognition, and more broadly of so-called “digital phenotyping” ought to play a larger role in current debates around the political, ethical and social dimensions of artificial intelligence systems’. Google, IBM and Microsoft have recently begun rolling back plans for emotion sensing technologies following internal reviews by their AI ethics boards.

Over the last few years, several examples have emerged of education technology applications utilizing emotion AI-based approaches. They tend to provoke considerable concern and even condemnation, as part of broader public, media, industry and political debates about the role of AI in societies. Given that such technologies are already the subject of considerable public and political contestation, it is notable that similar biosensor technologies are being generously supported as cutting-edge AI developments with direct application in educational settings. While iSAT certainly has detailed ethical safeguards in place, some broader sociological issues remain outstanding.

The first is about the apparatus of data production involved in such efforts. iSAT employs Affectiva facial vision technology, which is itself based on the taxonomy of ‘basic emotions’ and the ‘facial action coding system’ developed in the 1970s by the psychologist Paul Ekman and colleagues. As researchers including McStay, Stark and Hoey have well documented, basic emotions and facial coding are highly contested as seemingly ‘universalist’ and mechanistic measures of the diversity of human emotional life. So iSAT is bringing highly controversial psychological techniques to bear on the analysis of student affect, in the shape of biosensor-enabled automated AI teaching partners. There remains an important social science story to tell here about the long historical development of this apparatus of affect measurement, its enrolment into educational knowledge production, and its eventual receipt of multimillion dollar federal funding.  
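
To make the mechanistic character of this apparatus concrete, the sketch below shows the kind of lookup logic that FACS-style emotion coding implies: detected facial ‘action units’ are matched against fixed combinations associated with Ekman’s ‘basic emotions’. The mapping is a simplified illustration of that logic (the action-unit combinations are condensed from standard Ekman-style descriptions), not Affectiva’s or iSAT’s actual system.

# Toy illustration of the lookup logic critics identify in FACS-style emotion
# coding: facial 'action units' (AUs) matched against fixed 'basic emotion'
# prototypes. Simplified for illustration; not any vendor's actual system.

BASIC_EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def label_emotion(detected_action_units):
    # Return the first 'basic emotion' whose prototype AUs are all present.
    for emotion, prototype in BASIC_EMOTION_PROTOTYPES.items():
        if prototype <= set(detected_action_units):
            return emotion
    return "unclassified"

print(label_emotion({6, 12, 25}))   # labelled 'happiness', however the smile was produced

The point of such a sketch is precisely its reductiveness: whatever the biosensors detect is funnelled into a small, fixed taxonomy of emotional states.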

The second concerns the implications of engineering ‘empathic’ partnerships between students and AI through so-called ‘student-AI teaming’. This requires the student to be made machine-readable as a biological transmitter of signals, and a subject of empathic attention from automated interactive robot tutors. Significant issues remain to explore here too about human-machine emotional relations and the consequences for young people of their emotions being read as training data to create empathic educational media.

In the research we are planning, we aim to trace the development of such apparatuses and practices of emotion detection in education, and their consequences in terms of how students are perceived, measured, understood, and then treated as objects of concern or intervention by empathic automatons.

Bio-edu-data science

Overall, what these examples indicate is how advances in AI, data, sensor technologies, and education have merged with scientific research in learning, cognitive, and biological sciences to fixate on students’ bodies as signal-transmitters of learning and its embodied substrates. While the apparatus of affective computing at iSAT tracks external biological signals from faces, eyes, speech and gestures as traces of affect, learning and cognition, the apparatus of bioinformatics is intended to record observations at the molecular level.

The bioinformatics apparatus of genetics, and the biosensor apparatus of emotion learning analytics are beginning to play significant parts in how processes of learning, cognition and affect, as well as outcomes such as attainment and achievement, are known and understood. New biologized knowledge, produced through complex technical apparatuses by new experts of both the data and life sciences, is being treated as increasingly authoritative, despite varied controversies over its validity and its political and ethical consequences. This new biologically-informed science finds traces of learning and its outcomes in polygenic patterns and facial expressions, as well as in traces of other embodied processes.

In our ongoing research, then, we are trying to document some of the key discourses, lab practices, apparatuses, and ethical and political implications and controversies of an emerging bio-edu-data science. Bio-edu-data science casts its gaze on to students’ bodies, and even through the skin to molecular dynamics and traces of autonomic biological processes. We’ll be reporting back on this work as we go.


Breaking open the black box of edtech brokers

Mathias Decuypere and Ben Williamson

Education technology brokers build new connections between the private edtech industry and state schools. Photo by Charles Deluvio on Unsplash

A new kind of organization has appeared on the education technology landscape. Education technology ‘brokers’ are organizations that operate between the commercial edtech industry and state schools, providing guidance and evidence on edtech procurement and implementation. Staffed by new experts of evaluation and decision-making, they act as connective agencies to influence schools’ edtech purchasing and use, as well as to shape the market prospects of the commercial edtech companies they represent or host. As a new type of ‘evidence intermediary’, these brokerage organizations and experts possess the professional knowledge and skills to mobilize data, platform technologies, and evidence-making methods to provide proof of ‘what works’, demonstrate edtech ‘impact’, and provide practical guidance to school decision-makers about edtech procurement.

Although brokers represent a novel point of connection between the edtech market and state systems of schooling, little is known about the aims or practical techniques of these organizations, or their concrete effects on schools. Edtech brokers are ‘black boxes’ that need opening up for greater attention by researchers and educators.

We are delighted to have received an award from a global research partnership between KU Leuven and the University of Edinburgh, which is funding a 4-year full-time PhD studentship to research the rise of edtech brokers with Mathias Decuypere (KU Leuven) and Ben Williamson (Edinburgh). The project will examine and conceptualize edtech brokerage as part of a transnational policy agenda to embed edtech in education, the operations of brokers in specific national contexts, and their practical influences on schools. The research will build on and advance our shared interests in digital platforms and edtech markets in education, as well as our broader concerns with data-intensive governance and digitalized knowledge production.

We have already identified a wide range of brokers to examine. One illustrative case is the Edtech Genome Project, ‘a sector-wide effort to discover what works where, and why’, developed by the Edtech Evidence Exchange in the US with partnership support from the Chan Zuckerberg Initiative, Carnegie Corporation, and Strada Education Network. It is building a digital ‘Exchange’ platform to enable ‘decision-makers to access data and analysis about edtech implementations’, with the aim both to ‘increase’ student learning and to save schools billions of dollars on ‘poor’ edtech spending.

To a significant extent, we anticipate edtech brokers such as the Exchange becoming highly influential platform and market actors in education, across a range of contexts, in coming years.   

Post-Covid catalytic change agencies

In the context of the Covid-19 educational emergency, the role, significance and position of educational brokers have already grown: they are able to marshal their knowledge and expertise to advise schools on the most impactful edtech to address issues such as so-called ‘learning loss’ or ‘catch-up’ requirements.

These developments are not radically new. They reflect the increasing participation of private technology companies as sources of policy influence (supported by external consultancies, think tanks and international organizations) in education systems worldwide; and the rise of new types of ‘evidence’ production, including ‘what works’ centers and ‘impact’ programs, and the related emergence of new kinds of professional roles for evaluation and evidence experts in education.

However, especially during the Covid-19 emergency, brokers have begun asserting their expertise and professionality to support schools’ post-pandemic recovery, and creating practical programs and platforms to achieve that aim. Not only are edtech brokers positioning themselves as experts in evaluation and evidence, or as connective nodes between private companies and public education; they act as catalytic change agencies advising schools on the appropriate institutional pathways and product purchases to make for digital transformation.

Ambassadors and engines

We have initially identified two types of brokers:

  1. Ambassador brokers represent either a single technology provider or a selected sample of industry actors. They provide sales, support and training for specific vendor products, including global technology suppliers such as Google and Microsoft, acting as supporting intermediaries for the expansion of their platforms and services into schools.
  2. Search engine brokers function as public portals presenting selected evidence of edtech quality and impact to shape edtech procurement decisions in schools. They operate as searchable databases of ‘social proof’ of ‘what works’ in the ‘edtech impact marketplace’, enabling school staff to access product comparisons and evaluative review materials.

Edtech brokers represent significant changes in the ways state education is organized. Both types of brokers operate as or through platforms that offer (part of) their services through digital means, exemplifying as well as catalyzing fast-paced digital transformations in education systems.

Edtech brokering is furthermore a global phenomenon, with initiatives variously funded by international organizations, philanthropies, national government agencies, and associations of private companies, representing concerted transnational and multisector reform ambitions to embed edtech in schools.

Edtech brokers all draw on ‘evidence’ and ‘scientific evaluations’, making them accessible and attractive for decision-makers in schools. They are thereby shifting the sources of professional knowledge that inform schools’ decisions towards particular evaluative criteria of quality, impact, or ‘what works’. More particularly, edtech brokers are emblematic of the rise to power of new types of professionals and new forms of expertise in education.

Overall, we approach brokers as new intermediary actors in state education that are shifting the cognitive frames by which educators and school leaders think and act in relation to edtech. Brokers not only guide users’ decision-making processes and cognition; they also contribute to structuring particular forms of education and to making specific forms of education visible, knowable, thinkable, and, ultimately, actionable.

The social lives of brokers

This project examines the transnational expansion of edtech brokering as a new organizational type and a new form of professionality in education, and provides an up-close empirical examination of its practical work and concrete effects, by opening the ‘black box of edtech brokering’ in different national contexts. We will utilize a social topology framework to study the policy ecosystem, platform interfaces, and data practices of edtech brokers, as well as their effects on school users of these services.

Exploring the fast-developing intermediary role of edtech brokers is crucial both for academic purposes and for the educational field itself: brokers are assembling the knowledge, expertise and platforms through which post-pandemic education will be defined. Because of their very position as connective intermediaries between specific schools and the edtech corporate world, brokers translate the objectives of both edtech companies and educational institutions into shared and context-specific aims. In doing so, they reformat, redo, restructure, and reconceive what education is or could be about.

Moving from the transnational level of edtech brokering as an emerging phenomenon to the ‘social lives’ of edtech brokers in action, the project will drill down to their influence on decision-making in schools in comparative national contexts. In countries such as Belgium and the UK, we have already observed how both ambassador and search engine brokers are actively seeking to influence the uptake and use of edtech in schools. The project will commence autumn 2021, with fieldwork to be carried out in Belgium and the UK.



Valuing futures

Ben Williamson

Education technology investors are imagining new visions of the future of education while calculating the market valuation of their investment portfolios. Photo by Lukas Blazek on Unsplash

The future of education in universities is currently being reimagined by a range of organizations including businesses, technology startups, sector agencies, and financial firms. In particular, new ways of imagining the future of education are now tangled up with financial investments in education technology markets. Speculative visions and valuations of a particular ‘desirable’ form of education in the future are being pursued and coordinated across both policy and finance.

Visions and valuations

Edtech investing has grown enormously over the last year or so of the pandemic. This funding, as Janja Komljenovic argues, is based on hopes of prospective returns from the asset value of edtech, and also determines what kinds of educational programs and approaches are made possible. It funds unique digital forms of education, investing speculatively in new models of teaching and learning to enable them to become durable and, ideally, profitable for both the investor and investee.

We’ve recently seen, for instance, the online learning platform Coursera go public and reach a multibillion dollar valuation based on its reach to tens of millions of students online. New kinds of investment funds have emerged to accelerate edtech market growth, such as special purpose acquisition companies (SPACs) that exist to raise funds to purchase edtech companies, scale them up quickly and return value to both the SPAC and its investors, as well as new kinds of education-focused equity funds and portfolio-based edtech index investing that select a ‘basket’ of high-value edtech companies for investors to buy into.

The result of all this investment activity has been the production of some spectacular valuation claims about the returns available from edtech. The global edtech market intelligence agency HolonIQ calculated venture capital investment in edtech at $16bn last year alone, predicting a total edtech market worth $400bn by 2025.

But, HolonIQ said, this isn’t just funding seeking a financial return—it’s ‘funding backing a vision to transform how the world learns’. These edtech investments tend to centre on a particular shared vision of how the future of education could or should be, and on particular products and companies that promise to be able to materialize that future while generating shareholder value. To this end, HolonIQ has just announced three ‘prototype scenarios’ for the future of higher education, ‘differentiated by market structure’, as a way of developing consensus about desirable imaginaries and market opportunities for investment. The scenarios are imaginary constructs backed by quantitative market intelligence that HolonIQ has calculated with its in-house valuation platform. These are, to draw on the economic sociologist Jens Beckert, instruments of ‘fictional expectations’ that investment organizations craft to showcase their convictions and hopes, supported by specific devices of financial speculation that provide a more ‘calculative preview of the future’.

The aim of such instruments of expectation here is to stimulate speculative investments in new forms of education, and stabilize them as durable models for prospective future returns. The vision and the valuation of educational futures are intricately connected, and as Keri Facer recently noted, speculative investment of this kind is about making ‘bets’ on certain ‘valued’ educational futures while ‘shorting’ or foreclosing other possible futures for education.

What bets are being made? One example is the vision contained in the 2021 Global Learning Landscape report and infographic from HolonIQ. The landscape is a taxonomy of 1,250 edtech companies that HolonIQ has assessed in terms of their market penetration, product innovation, and financial prospects. As a fictional expectation inscribed in material form, the purpose of the infographic is both to attract investors—for whom HolonIQ provides bespoke venture capital services—and to attract educational customers to ‘invest’ in institutional digital innovation through procuring from these selected services.

A persuasive vision or fictional expectation of the future of education is contained and transmitted in this infographic. As an instrument of expectation it emphasizes companies and products promising data-driven teaching and learning and analytics; online platforms such as MOOCs, online program management and other forms of public-private platform partnerships; AI in education, smart learning environments and personalized learning; workforce development and career matching apps, and other forms of student skills measurement and employability profiling. The infographic distills both an imaginative educational vision and a speculative investment valuation of the digital future of teaching and learning.

Education reimagined

The vision and valuation of educational futures are currently being joined together powerfully in the UK by an ongoing partnership between Jisc—the HE sector non-profit digital agency—and Emerge Education, a London-based edtech investment company. Jisc and Emerge have recently produced a series of visionary reports and strategy documents dedicated to Reimagining Learning and Teaching towards a vision of higher education in 2030. Together, the reports function as instruments of expectation with the intention of producing conviction in others that the imaginaries they project are desirable and attainable.

All the reports, written by Emerge with Jisc input, focus on the central fictional expectation of ‘digital transformation’ or ‘rebooting’ HE through partnerships with edtech startups, for example, in teaching, assessment, well-being, revenue diversification, and employability. They have produced an ‘edtech hotlist’ of companies to deliver those transformations, and created a ‘Step Up’ programme of partnerships between startups and universities to actively materialize the imaginary they’re pursuing.

The Jisc-Emerge partnership highlights how investment and policy are being coordinated towards a shared aim with expected value for HE institutions and for edtech companies and their investors at the same time. Exemplifying how investors’ fictional expectations catalyse real-world actions, this valued vision of HE in 2030 appears across the partnership’s reports, and especially in the main report also supported by UUK and Advance HE.

The report offers a vision of revolutionary digital acceleration, university adaptation and reimagining as digital organizations, characterized by personalized learning experience driven by artificial intelligence and adaptive learning systems that are modified automatically and dynamically. Universities are told to invest in their digital estates, learning infrastructure, personalized and adaptive learning, and AI. The sector is urged to adopt new data standards for the exchange of learner data, new micro-credentials, forms of assessment and well-being analytics.

The vision of learning and teaching ‘reimagined’ here, with the approval of Jisc, UUK and Advance HE, is highly congruent with the investment strategy of Emerge itself, with its emphasis on investing in a portfolio of ‘companies building the future of learning and work’. The fictional expectations and investment imaginary of Emerge have therefore been inscribed both into policy-facing documents and into its own strategic portfolio of investments.

Portfolio futures

What this indicates is that edtech investment has become highly significant to how the future of teaching and learning is being imagined and materialized. Education futures are being imagined in parallel with market calculations and speculative investments, inscribed in graphical scenarios and calculative previews as instruments of expectation. Investment portfolios are being fused to policy imaginaries of education by way of shared fictional expectations that coordinate both policy and investment towards the same aims. Certain possible futures are being funded into existence or to scale.

Investment organizations are not just funding fortunate companies, but actively shaping how the future of education is imagined, narrated, invested in, and made into seemingly actionable strategies for institutions. By coordinating both policy and investment portfolios towards shared objectives, they’re valuing and betting on visions of digital transformation that promise prospective investment returns while devaluing and shorting alternative imaginaries of possible HE futures. This raises the question of how other futures of education can be produced, negotiated dialogically by educators, and invested in as a collective portfolio of counter-imaginaries of teaching and learning.


Edtech sci-fi

Ben Williamson

Artistic sci-fi depiction of a futuristic classroom. Image by Josan Gonzales.

Before making a career out of studying education technology, I was a student of literature. As an undergraduate student of English Lit at Cardiff University, I was taught that it was possible to critique the canon, analyze cultural objects as mundane as cereal packets, and engage with ‘genre’ fiction such as crime, horror and sci-fi. Later, as a part-time PhD literature student working full-time for an edtech ‘futurelab’, I read Neal Stephenson’s 1995 sci-fi novel The Diamond Age; among many elements, it features an edtech device called the Primer. It was a strange moment as my PhD, partly about Stephenson’s novels, came into contact with my edtech day-job.

The idea of exploring edtech in sci-fi has remained in the background of my work ever since, but I’ve never properly figured out what to do about it or even if it was too niche an area of literary interest.

Doing something about edtech sci-fi came up again during a recent workshop to develop a new taught course. Might edtech sci-fi open up students to critical perspectives on current edtech issues such as datafication, inequalities, commercialization and so forth?

As a way of finding out, on Twitter, I asked “Anyone got good examples of education technologies in sci-fi, text or film? Got the Primer in the Diamond Age, roboteachers in Class of 1999, but what else? Possibly for a course #edtechscifi”. Below I’m listing all the responses I received, partly for my own benefit but hopefully in case others are interested too. But first a quick discussion of why studying edtech in sci-fi may be a useful way of approaching a range of critical current issues in research on education.

Science fiction has, for well over a century, provided authors with a way of speculating about the future from current trends, and of exploring the major concerns, tensions and anxieties characterizing their historical, social and political contexts. From fears of nuclear destruction (such as A Canticle for Leibowitz by Walter Miller in the 1950s) to anxieties over neural implants (in Gibson’s Neuromancer and Stephenson’s Snow Crash in the 80s/90s), the genre has registered the preoccupations of its moment; today, much contemporary sci-fi is grappling with the consequences of social media, data profiling, surveillance, automation and inequalities.

Recent favourites are Zed by Joanna Kavenna, the novella collection Radicalized by Cory Doctorow, and Burn-In: a novel of the real robotic revolution by PW Singer and August Cole. The latter is a heavily endnoted, research-based novel about the dangers of automation and right-wing extremism authored by two intelligence analysts. They’ve termed it “fiction intelligence” that blends narrative with nonfiction. I’ve also got Kim Stanley Robinson’s The Ministry for the Future on my shelf, a near-future fiction about environmental destruction. In the book How to Run a City like Amazon, and other fables, a group of academic social scientists and geographers even produced a collection of social science fiction stories and poetry about corporate digital urbanism.

Fiction may even animate social theory: as David Beer argues, “fiction has been used to encounter and interrogate far-reaching and vital questions about the social world, some of which are deeply political and global in their scope”.

So fiction in general and sci-fi specifically can speak to urgent contemporary social, technical, political and environmental concerns. As academic geographer (and fiction writer) Rob Kitchin points out,

science fiction employs the tactics of estrangement (pushing a reader outside of what they comfortably know) and defamiliarisation (making the familiar strange) as a way of creating a distancing mirror and prompting critical reflection on society, now and to come. Perhaps unsurprisingly, there is a long history of academics drawing on the imaginaries of science fiction in their analyses, and also science fiction writers using academic ideas in their stories.

I’d suggest this should prompt more engagement with edtech in sci-fi – not to treat sci-fi as a model for the future of education, but as a way of exploring the far-reaching personal, social, political and environmental impacts of edtech development from recent trends.

Artist’s vision of the future of education: the Edu Ocunet by Tim Beckhardt

One suggestion to my Twitter query from several people was the 2002 dystopian novel Feed by MT Anderson, a fabulous near-future novel featuring neural interfaces and the complete handover of state responsibility to corporations. We don’t have to think too hard to come up with examples of individual tech entrepreneurs and corporations already pursuing the development of brain-computer interfaces that could bring the dystopia of Feed to fruition.

Feed also features a very ominous depiction of education in the shape of “School™”, a completely corporatized education system that teaches students, through their direct-to-brain feed, to value rampant consumerism and environmental destruction over history, politics and civic participation. The novel explores the consequences of such a technology-centred, corporate education system for its teenaged protagonists and, moreover, for democracy itself.

David Golumbia and Frank Pasquale were kind enough to send me a copy of a recent chapter in which they analyze Feed as a way in to understanding a current “corporate-political world” characterized by the “primacy of the corporate form”. It’s a brilliant chapter, and offers a compelling justification for focusing analytical attention on fiction as a way of studying contemporary social, technical, economic and political problems.

Fiction, they argue,

frees authors to extrapolate from current trends to thick descriptions of the futures they portend. Corporations and governments often use scenario analysis to understand a range of possible futures to prepare for, but such analyses tend to eschew the visceral, subjective, and psychological insights that good fiction embodies. A novelist can imagine the ways in which the minds of individuals both reflect and reinforce their social environment. These considerations are just as worthy of policy-makers’ attention as the economic and political models that now dominate discussions of corporate rights.

Beyond the depiction of the interior lives of characters, novels engage with the complex social, political, economic and environmental crises of our time.

The depiction of education in Anderson’s novel, they go on, “forms the critical backdrop for the world depicted in Feed, since so much of the novel turns out to depend on the characters’ lack of critical thinking skills and ignorance of fundamental issues of history and politics.” This, for me, offers a rich opening for the further examination of edtech in sci-fi, or, indeed, “social science fiction” writing as critical academic practice in edtech research. I’m interested to explore further how to engage with edtech sci-fi in possible future research and teaching.

In the meantime, however, here’s the list of edtech sci-fi texts, TV and film that the lovely people on Twitter suggested. Three responses even pointed to existing compilations of edtech sci-fi: a 2015 piece by Audrey Watters on Education in Science Fiction, a collection by Stephen Heppell, and an entry on Education in SF at the Encyclopedia of Science Fiction. Check those out too. I’ve alphabetized the list but nothing more. Some people added short descriptions, which I’ve paraphrased, and others added links, which you’ll have to mine the replies to find, I’m afraid.

A Clockwork Orange, novel by Anthony Burgess, film by Stanley Kubrick – technologized socialization

A.I., film by Steven Spielberg – Dr Know, a holographic answer engine

Anathem by Neal Stephenson – anti-tech monasteries

And Madly Teach by Biggle

An Enterprising Man by Joe Frank

A.R.T.H.U.R., poem by Laurence Lerner – “metal people / And movers” who “make what they call mistakes”

Beyond Freedom and Dignity by BF Skinner – behaviourist utopia

Brave New World by Aldous Huxley – hypnopaedia and audio conditioning

Chronopolis by JG Ballard – education after civilization has tried to forget measuring time

Class of 1999 – robot teachers

Computer Friendly by Eileen Gunn

Copying Toast – memory-printed bread

Cypher – psychedelic brainwashing

Cyteen by CJ Cherryh – muscle memory and hypnopaedia through AV/nerve stimulation input

Deep Space 9 – future classroom and school

Die Fernschule (The Distance Learning School) by Kurd Lasswitz

Doomsday Book and others by Connie Willis – Oxford uni students educated for time travel

Doraemon – 18th generation robot academy

Electric Dreams – ‘Safe and Sound’ episode

Ender’s Game by Orson Scott Card – novel and film – 50% about edtech

Erewhon by Samuel Butler – intelligent machines and futuristic university

ET – Speak & Spell

Firefly / Serenity – futuristic classroom scenes

Futuretrack 5

Hitch Hiker’s Guide – Babel Fish

Hunger Games – training simulations

Idiocracy – testing

Jetsons – robot teacher

Knight Rider – KITT helps the Hoff with planning and problem solving

Limitless – NZT bio-stimulant

Never Let Me Go by Kazuo Ishiguro – boarding school for student clones raised and educated for body organ donation

Old Man’s War – Brainpal

Orbital Resonance by John Barnes

Otherland by Tad Williams

Pern and Pegasus series by Anne McCaffrey – AIVAS system and online learning

Profession by Isaac Asimov – students educated for specific professions by direct brain-computer interfaces (“Taping”)

Quantum Logic series by Greg Bear – plot about universities and privatized education

Rainbows End by Vernor Vinge – high school immersive environments

Raised by Wolves – the teacher is the tech

Ready Player One – Oasis, school in VR

Robot Revolt by Nicholas Fisk – robot tutor

Star Trek – the Holodeck, Kobayashi Maru simulation, Vulcan learning sphere

Star Wars – lightsabre training, robot lecturer, clone training centre

Starship Troopers – 3D bug training models

Stranger in a Strange Land by Robert Heinlein – teaching via Martian telepathy

2000AD – Tharg’s Future Shocks

TeleAbsence by Michael Burstein

The Child Garden by Geoff Ryman – learning about Derrida from viral injections

The Diamond Age by Neal Stephenson – personalized learning Primer

The Dispossessed by Ursula K. Le Guin – interstellar communication

The Fun They Had by Isaac Asimov

The Last Book in the Universe by Rodman Philbrick

The Machine Stops by EM Forster – anticipates online education by a hundred years

The Matrix – “I know Kung Fu”

The Prisoner – ‘The General’ episode – mind-altering edtech called Speed Learn

The Simpsons – ‘The Miseducation of Lisa Simpson’ episode

The Thing Under the Glacier by Brian Aldiss – student wearable brain-controlled ‘miniputer’

The Veldt by Ray Bradbury

Thirty Days Had September by Robert F Young – second-hand robot teacher

Time in Thy Flight by Ray Bradbury

To Live Again by Robert Silverberg

Ulysses 31 – the Cortex

Venture Brothers – learning beds

Walden Two by BF Skinner – intersection of sci-fi, imaginaries and edtech

WarGames – Joshua and machine learning

Years and Years – cyborg training technologies

If you come across any others, please do tag #edtechscifi and @BenPatrickWill on Twitter and I’ll keep adding.


Pandemic privatization and digitalization in higher education

Ben Williamson and Anna Hogan

The state of emergency in higher education systems around the world during the Covid-19 pandemic has opened up the sector to an expanding range of education technologies, commercial companies, and private sector ambitions. In our new report commissioned by Education International (the global federation of teacher unions), entitled ‘Pandemic Privatisation in Higher Education: Edtech and University Reform’, we examine various ways commercialization and privatization of higher education have been pursued and advanced through the promotion of edtech and ‘digital transformation’ agendas during campus closures and disruptions over the last year. Although we recognize that digital technologies and private or commercial organizations can bring many benefits to HE, they also raise significant challenges with long-term implications for HE staff, students and institutions. Many of these challenges are long-term political and economic matters as much as they are short-term practical matters of online teaching.

The report is already detailed and long, but even since we finished it in late 2020, the developments we identified have accelerated and expanded. These include investors seeking to capitalize on new visions of teaching and learning, and multisector coalitions coming together to reimagine the future of HE through digital infrastructure and platform-based transformations, ultimately ‘re-infrastructuring’ and ‘platformizing’ universities to operate according to design principles imported from the digital tech industry. These are profoundly political issues about control, power, influence and governance in HE, mirrored by similar shifts of control to technology in the health sector.

Maybe most of the proposed changes associated with so-called digital transformation won’t work out in practice. That may be for several reasons: large-scale transformative proposals are rarely realized in their ideal form, and technologies can always be resisted, subverted, ignored, or simply mobilized in much more mundane ways than their architects intended. But we hope the report at least raises awareness of the changes that many powerful organizations are imagining and seeking to materialize in the very near future. The form, role and functions of higher education may be profoundly reimagined and reconstructed during post-pandemic recovery, and all stakeholders in the sector need to be involved in debates over the sector’s future.

Here is the summary from our full report as a starter for such debates:

  • Pandemic privatisation through multi-sector policy. Emergencies produce catalytic opportunities for market-oriented privatisation policies and commercial reforms in education. The COVID-19 pandemic has been used as an exceptional opportunity for expanding privatisation and commercialisation in HE, particularly through the promotion of educational technologies (edtech) as short-term solutions to campus closures and the positioning of private sector actors as catalysts and engineers of post-pandemic HE reform and transformation. The pandemic privatisation and commercialisation of HE during the COVID-19 emergency is a multi-sector process involving diverse actors that criss-cross fields of government, business, consultancy, finance, and international governance, with transnational reach and various effects across geographical, social, political, and economic contexts. It exemplifies how ‘disaster techno-capitalism’ has sought to exploit the pandemic for private sector and commercial advantage.
  • Higher education reimagined as digital and data-intensive. Diverse organisations from multiple sectors translated the public health crisis into an opportunity to reimagine HE for the long term as a digitally innovative and data-intensive sector of post-pandemic societies and economies. While face to face teaching constituted an urgent global public health threat, it was also constructed by organisations including education technology businesses, consultancies, international bodies and investors as a longer-term problem and threat to student ‘upskilling’, ‘employability’, and global post-coronavirus economic recovery. Framed as a form of ‘emergency relief’ during campus closures, education technologies were also presented as an opportunity for investment and profit-making, with the growing market of edtech framed as a catalytic enabler of long-term HE reconstruction and reform.
  • Transformation through technology solutionism. Education technologies and companies became highly influential actors in HE during the pandemic. Private organisations and commercial technologies have begun to reform colleges and universities from the inside, working as a social and technical infrastructure that shapes institutional behaviours and, as programmed pedagogical environments, determines the possible organisation of teaching and learning. In the absence of the physical infrastructure of campuses and classrooms during the pandemic, institutions were required to develop digital infrastructure to host online teaching. This opened up new and lucrative market opportunities for vendors of online learning technologies, many of which have actively sought to establish positions as partners in long-term transformations to the daily operations of colleges and universities. New kinds of technical arrangements, introduced as temporary emergency solutions but positioned as persistent transformations, have affected how teaching is enacted, and established private and commercial providers as essential infrastructural intermediaries between educators and students. These technologies are enacting significant changes to the teaching and learning operations and practices of HE institutions, representing a form of solutionism that treats all problems as if they can be fixed with digital technologies.
  • New public-private partnerships and competition. New public-private partnerships developed during the pandemic blur the boundaries between academic and industry sectors. Partnerships between academic institutions and the education and technology industries have begun to proliferate with the development of business models for the provision of online teaching and learning platforms. Global technology companies including Amazon, Google, Alibaba and Microsoft have sought to extend their cloud and data infrastructure services to an increasing number of university partners. Colleges and universities are also facing increasing competition from private ‘challenger’ institutions, new industry-facing ‘digital credential’ initiatives, and employment-based ‘education as a benefit’ schemes offering students the convenience of flexible, affordable, online learning. These developments enhance the business logics of the private sector in HE, privileging education programs that are tightly coupled to workplace demands, and expand the role of for-profit organisations and technologies in the provision of education.
  • Increasing penetration of AI and surveillance. Edtech companies and their promoters have increased the deployment of data analytics, machine learning and artificial intelligence in HE, and emphasised the language and practices of ‘personalised learning’ and ‘data-driven decision-making’. Organisations from across the sectoral spectrum have highlighted the importance of ‘upskilling’ students for a post-pandemic economy allegedly dominated by AI and automation and demanding new technical competencies. Surveillance has also been enhanced through the deployment of large-scale data monitoring tools embedded in online learning management software, surveillance technologies such as distance examination proctoring systems, and campus safety systems such as student location and contact tracing apps. In imaginaries of the AI-enabled future of HE, next-generation learning experiences will be ‘hyperindividualised’ and scaled with algorithms, coupled with digital credentialing and data-driven alignment of education with work.
  • Challenges to academic labour, freedom and autonomy. The professional work of academic educators has been affected by the increasing penetration of the private sector and commercial technology into HE during the pandemic. Staff have had little choice over the technologies they are required to employ for their teaching, resulting in high-profile contests over the use, in particular, of intrusive surveillance products or concerns over the potential long-term storage and re-use of recorded course materials and lectures. Academic educators have been required to double up their preparation and delivery of classes for both in-person and online formats. Classes and events featuring ‘controversial’ speakers or critical perspectives have been cancelled due to the commercial terms of service of providers of online video streaming platforms. The expansion of data analytics, AI and predictive technologies also challenges the autonomy of staff to make professionally informed judgments about student engagement and performance, by delegating assessment and evaluation to proprietorial software that can then prescribe ‘personalised learning’ recommendations on their behalf. Finally, academic freedom is at risk when online teaching and learning conducted in an international context runs counter to the politics of certain state regimes, leading to concerns over censorship and the suppression of critical inquiry in remote education.
  • Alternative imaginaries of post-pandemic HE. Online teaching and learning is neither inevitably transformative nor necessarily deleterious to the purpose of universities, the working conditions of staff, or the experience of students. However, the current reimagining of HE by private organisations, and its instantiation in commercial technologies, should be countered with robust, critical and research-informed alternative imaginaries centred on recognising the purpose of higher education as a social and public good. The appearance of manifestos and networks dedicated to this task demonstrates a widespread sense of unease about the ways emergency measures are being translated into demands to establish a new ‘digital normalcy’ in HE. Educators, students, and the unions representing them should dedicate themselves to identifying effective practices and approaches, countering the imposition of commercial models that primarily focus on profit margins or pedagogically questionable practices, and developing alternative imaginaries that might be realised through collective deliberation and action. 

We hope educators, unions, leaders and others will engage with some of these issues in the months to come. The full report is available to view or download here, or you can access PDF versions of the summary in English, French and Spanish.
