PISA for machine learners

A new report from the OECD explores how human skills can complement artificial intelligence. Source: OECD

The common narrative of the future of education is that artificial intelligence and robotization will transform how people work, with changing labour markets requiring schools to focus on developing new skills. This version of the future is reflected in influential ideas about the ‘Fourth Industrial Revolution’, where novel forms of ‘Education 4.0’ will produce the necessary skilled human labour to meet the needs of ‘Industry 4.0.’ Statistical calculations and predictions of multitrillion-dollar ‘skills gaps’ in the new AI-driven economy have helped fortify such visions of the future, appealing to government and business interests in GDP and productivity returns.

The Organisation for Economic Co-operation and Development (OECD) has played a considerable role in advancing ideas about education in the Fourth Industrial Revolution, particularly through its long-term Future of Education and Skills 2030 program launched in 2016. A background report on the 2030 project argued that education systems were not responding to the ‘digital revolution’ and new Industry 4.0 demands, and presented the OECD’s case for the development of new skills, competencies and knowledge through ‘transformative change’ in education.

The task of defining the future skills required by the digital revolution is now being undertaken by the OECD’s Artificial Intelligence and the Future of Skills work program, a six-year project begun in 2019 by its Centre for Educational Research and Innovation (OECD-CERI). As described at its approval:

The motivation for the Future of Skills project comes from a conviction that policymakers need to understand what AI and robotics can do—with respect to the skills people use at work and develop during education—as one key part of understanding how they are likely to affect work and how education should change in anticipation.

Its ‘goal is to provide a way of comparing AI and robotics capabilities to human capabilities,’ and therefore to provide an evidence base for defining—and assessing—the human skills that should be taught in future education systems. In this sense, the project has the potential to play a significant role in establishing the role of AI in relation to education, not least by encouraging policymakers to pursue educational reforms in anticipation of technological developments. This post offers an initial summary of the project and some of its implications.

‘PISA for AI’

The first AI and the Future of Skills report was published in November 2021. Over more than 300 pages, it outlines the methodological challenges of assessing AI and robotics capabilities. The point of the report is to specify what AI can and cannot do, and therefore to more precisely identify its impact on work, as a way of then defining the kinds of human skills that would be required for future social and economic progress.

OECD graphic detailing technological and educational change. Source: OECD

In the Foreword, the OECD Director of Education, Andreas Schleicher, noted that ‘In a world in which the kinds of things that are easy to teach and test have also become easy to digitise and automate, we need to think harder how education and training can complement, rather than substitute, the artificial intelligence (AI) we have created in our computers.’

The project, Schleicher added, ‘is taking the first steps towards building a “PISA for AI” that will help policy makers understand how AI connects to work and education.’

The idea of a ‘PISA for AI’ is an intriguing one. The implication is that the OECD might not only test human learners’ cognitive skills and capabilities, as its existing PISA assessments do, and their skills for work, as its PIAAC tests do. It could also test the skills and capabilities of machine learners in order to redefine the kinds of human skills that need to be taught, all with the aim of creating ‘complementary’ skills combinations. Ongoing assessments might then be administered to ensure human-machine skills complementarities for long-term economic and social benefit.

Computing Cognition

So how does the OECD plan to develop such assessments? One part of the report, authored by academic psychologists, details the ways cognitive psychology and industrial-organisational psychology have underpinned the development of taxonomies and assessments of human skills, including cognitive abilities, social-emotional skills, collective intelligence, and skills for industry. The various chapters consider the feasibility of extending such taxonomies and tests to machine intelligence. Another section of the report then looks at the ways the capabilities of AI can be evaluated from the perspective of academic computer science.

Given the long historical interconnections of cognitive science and AI—which go all the way back to cybernetics—these chapters represent compelling evidence of how the OECD’s central priorities in education have developed through the combination of psychological and computer sciences as well as economic and government rationales. In recent years the organization has shifted its attention to insights from the learning sciences resulting from advances in big data analytics and AI. Similar combinations of psychological, economic, computational and government expertise were involved in the formation of the OECD’s assessment of social and emotional skills.

In the final summarizing chapter of the report, for example, the author noted that ‘the computer science community acknowledges the intellectual foundation and extensive materials provided by psychology,’ although, because ‘the cognitive capacities of humans and AI are different,’ further work would require ‘bringing together different types of approaches to provide a more complete assessment of AI.’

The next stage of the AIFS project will involve piloting the types of assessments described in this volume to identify how well they provide a basis for understanding current AI capabilities. This work will begin with intense feedback from small groups of computer and cognitive scientists who attempt to describe current AI capabilities with respect to the different types of assessment tasks.

The project is ambitiously bringing together expertise in theories, models, taxonomies and methodologies from the computer and psychological sciences, in order ‘to understand how humans will begin to work with AI systems that have new capabilities and how human occupations will evolve, along with the educational preparation they require.’

Additionally, the project will result in some familiar OECD instruments: international comparative assessments and indicators. It will involve the ‘creation of a set of indicators across different capabilities and different work activities to communicate the substantive implications of AI capabilities,’ and ‘add a crucial component to the OECD’s set of international comparative measures that help policy makers understand human skills.’ In many respects, the OECD appears to be pursuing the development of a novel model of human-nonhuman skills development, and building the measurement infrastructure to ensure education systems are adequately aligning both the human and machine components of the model.

The idea of a ‘PISA for AI’ is clearly a hugely demanding challenge—one the OECD doesn’t foresee delivering until 2024. Despite being some years from enactment, however, PISA for AI already raises some key implications for the future of education systems and education policy.

Human-Computer Interaction

The OECD-CERI AI and the Future of Skills project is establishing artificial intelligence as a core priority for education policymakers. Although AI is already part of education policy discourse, the OECD is seeking to make it central to policy calculations about the kinds of workforce skills that education systems should focus on. The project may also help strengthen the OECD’s authority in education at a time of rapid digitalization, reflecting the historical ways it has sought to adapt and maintain its position as a ‘global governing complex.’

The first implication of the project, then, is its emphasis on workplace-relevant ‘skills’ as a core concern in education systems. The OECD has played a longstanding role in the translation of education into measurable skills that can be captured and quantified through testing instruments, as a means to perform comparative assessments of education systems and policy effectiveness. The project is establishing the OECD’s authoritative position to define the relevant skills that future education systems will need to inculcate in young people. It is drawing on cognitive psychology and computer science, as well as analysis of changing labour markets, to define these skills, and potentially displacing other accounts of the purposes and priorities of education as a social institution.

A second implication stems from its assumption that the future of work will be transformed by AI in the context of a Fourth Industrial Revolution. The project seems to uncritically accept a techno-optimistic imaginary of AI as an enabler of capitalist progress, despite the documented risks and dangers of algorithmic work management, automated labour, and discriminatory outcomes of AI in workplaces, and a raft of regulatory proposals related to AI. Cognitive and computer science expertise are clearly important sources for developing assessment methodologies. The risk, however, is the production of a PISA for AI that doesn’t ask AI to account for its decisions when they potentially lead to deleterious outcomes. Moreover, matching human skills to AI capabilities as a fresh source of productivity is unlikely to address persistent power asymmetries in workplaces–especially prevalent in the tech industry itself–or counter the use of automation as a route to efficiency savings.

Third, the project appears to assume a future in which skilled human labour and AI perform together in productive syntheses of human and machine intelligence. While the role of AI and robotics as augmentations to professional roles may have merits, it is certainly not unproblematic. Social research, philosophy and theory—as well as science fiction—have grappled with the implications of human-machine hybridity for decades, through concepts such as the ‘cyborg,’ ‘cognitive assemblages,’ ‘posthumanism,’ ‘biodigital’ hybrids, ‘thinking infrastructures,’ and ‘distributed’ or ‘extended cognition.’ The notion that skilled human labour and AI might complement each other, as long as they’re appropriately assessed and attuned to one another’s capabilities, may be appealing but is probably not as straightforward as the OECD makes out. Absent, too, are considerations of the power relations between AI producers–such as the global tech firms that produce many AI-enabled applications–and the individual workers expected to complement them.

The fourth implication is that upskilling students for a future of working with AI is likely to require extensive studying alongside AI in schools, colleges and universities too. Earlier in 2021, the OECD published a huge report promoting the transformative benefits of AI and robotics in education. While AI in education itself may hold benefits, the idea of implanting AI in classrooms, curricula, and courses is already deeply contentious. It is part of long-running trends towards increased automation, datafication, platformization, and the embedding of educational institutions and systems in vast digital data infrastructures, often involving commercial businesses from edtech startups to global cloud operators. As such, an emphasis on future skills to work with AI is likely to result in highly contested technological transformations to sites and practices of education.

Finally, there is a key implication in terms of how the project positions students as the beneficiaries of future skills. As an organization dedicated to economic development, the OECD has long focused on education as an enabler of ‘human capital.’ It has even framed so-called ‘pandemic learning loss’ in terms of measurable human capital deficits as defined by economists. In this framing, educated or skilled learners represent future value to the economies where they will work; they are assets that governments invest in through education systems, and the OECD measures the effectiveness of those investments through its large-scale assessments.

The AI and future skills program doesn’t just focus on ‘human capital,’ however. It focuses on human-computer interaction as the basis for economic and social development. By seeking to complement human and AI capabilities, the OECD is establishing a new kind of ‘human-computer interaction capital’ as the aim of education systems. Its plan to inform policymakers about how to optimize education systems to produce skilled workers to complement AI capabilities appears to make the pursuit of HCI capital a central priority for government policy, and it potentially stands to make HCI capital into a core purpose of education. Students may be positioned as human components in these new HCI capital calculations, with their value worked out in terms of their measurable complementarity with machine learners.


Counting learning losses

‘Learning loss’ is an urgent political concern based on complex measurement systems. Photo by Nguyen Dang Hoang Nhu on Unsplash

The idea that young people have ‘lost learning’ as a result of disruptions to their education during the Covid-19 pandemic has become accepted as common knowledge. ‘Learning loss’ is the subject of numerous large-scale studies, features prominently in the media, and is driving school ‘catch-up’ policies and funding schemes in many countries. Yet for all its traction, there has been far less attention to the specific but varied ways that learning loss is calculated. Learning loss matters because it has been conceptualized and counted in particular ways as an urgent educational concern, and is animating public anxiety, commercial marketing, and political action.

Clearly educational disruptions will have affected young people in complex and highly differentiated ways. My interest here is not in negating the effects of being out of school or critiquing various recovery efforts. It’s to take a first run at examining learning loss as a concept, based on particular forms of measurement and quantification, that is now driving education policy strategies and school interventions in countries around the world. Three different ways of calculating learning loss stand out: first, the longer psychometric history of statistical learning loss research; second, its commercialization by the testing industry; and third, the reframing of learning loss through econometric forms of analysis by economists.

The measurement systems that enumerate learning loss are, in several cases, contradictory, contested, and incompatible with one another. ‘Learning loss’ may therefore be an incoherent concept, better understood as multiple ‘learning losses’ based on their own measuring systems.   

Psychometric set-ups

Learning loss research is usually traced back more than 40 years to the influential publication of Summer Learning and the Effects of Schooling by Barbara Heyns in 1978. The book reported on a major statistical study of the cognitive development of 3000 children while not in school over the summer, using the summer holiday as a ‘natural experimental situation’ for psychometric analysis. It found that children from lower socioeconomic groups tend to learn less during the summer, or even experience a measurable loss in achievement.

These initial findings have seemingly been confirmed by subsequent studies, which have generally supported two major conclusions: (1) the achievement gap by family SES traces substantially to unequal learning opportunities in children’s home and community environments; and (2) the experience of schooling tends to offset the unequalizing press of children’s out-of-school learning environments. Since the very beginning of learning loss studies, then, the emphasis has been on the deployment of psychometric tests of the cognitive development of children not in school, the lower achievement of low-SES students in particular, and the compensatory role that schools play in mitigating the unequalizing effects of low-SES family, home and community settings.

However, even researchers formerly sympathetic to the concept of learning loss have begun challenging some of these findings and their underlying methodologies. In 2019, the learning loss researcher Paul T. von Hippel expressed serious doubt about the reliability and replicability of such studies. He identified flaws in learning loss tests, a lack of replicability of classic findings, and considerable contradictions with other well-founded research on educational inequalities.

Perhaps most urgently, he noted that a significant change in psychometric test scoring methods—from paper and pen surveys to ‘a more computationally intensive method known as item response theory’ (IRT) in the mid-1980s—completely reversed the original findings of the early 1980s. With IRT, learning loss seemed to fade away. The original psychometric method ‘shaped classic findings on summer learning loss’, but the newer item-response theory method produced a very different ‘mental image of summer learning’.

Moreover, noted von Hippel, even modern tests using the same IRT method produced contradictory results. He reported on a comparison of the Measures of Academic Progress (MAP) test developed by the testing organization Northwest Evaluation Association (NWEA), and a test developed for the Early Childhood Longitudinal Study. The latter found that ‘summer learning loss is trivial’, but the NWEA MAP test reported that ‘summer learning loss is much more serious’. So learning loss, then, appears at least in part to be an artefact of the particular psychometric set-up constructed to measure it, with results that appear contradictory. This is not just a historical problem of underdeveloped psychometric instruments; it persists in the computerized IRT systems that were deployed to measure learning loss as the Covid-19 pandemic set in during 2020.
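To make von Hippel’s point concrete, here is a minimal sketch (my own illustration, not his analysis) of why the scoring model matters. Under a simple Rasch IRT model, ability and raw test scores are nonlinearly related, so the same underlying gain can register very differently depending on which metric is reported. All the numbers are invented.

```python
# A minimal sketch of why the scoring model matters for 'learning loss'.
# Under a Rasch model, the probability of answering an item correctly is
#   P(correct) = 1 / (1 + exp(-(theta - b)))
# where theta is ability (in logits) and b is item difficulty. Raw scores
# and theta are nonlinearly related, so identical ability gains can look
# different in raw-score points.
import math

def p_correct(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response to one item."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_raw_score(theta: float, difficulties: list[float]) -> float:
    """Expected number-correct score on a fixed test form."""
    return sum(p_correct(theta, b) for b in difficulties)

# A hypothetical 40-item test with difficulties spread from -2 to +2 logits.
items = [-2.0 + 4.0 * i / 39 for i in range(40)]

# Two students gain the same 0.5 logits of ability over the summer...
for label, theta_start in [("low achiever", -1.5), ("high achiever", 1.5)]:
    raw_gain = (expected_raw_score(theta_start + 0.5, items)
                - expected_raw_score(theta_start, items))
    print(f"{label}: IRT gain = 0.50 logits, raw-score gain = {raw_gain:.2f} points")

# ...but their raw-score gains differ, because the raw metric compresses
# change near the test's floor and ceiling. Estimates of summer 'loss'
# inherit whatever distortions the chosen scoring metric carries.
```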

Commercializing learning loss

Here it is important to note that NWEA is among the most visible of testing organizations producing data about learning loss during the pandemic. Even before the onset of Covid-19 disruptions, NWEA was using data on millions of US students who had taken a MAP Growth assessment to measure summer learning loss. Subsequently, the NWEA MAP Growth test has been a major source of data about learning loss in the US, alongside various assessments and meta-analyses from the likes of the commercial testing companies Illuminate, Curriculum Associates and Renaissance, and the consultancy McKinsey and Company.

Peter Greene has called these tests ‘fake science’, arguing that ‘virtually all the numbers being used to “compute” learning loss are made up’. In part that is because the tests only measure reading and numeracy, so don’t account for anything else we might think of as ‘learning’, and in part because the early-wave results were primarily projections based on recalculating past data from completely different pre-pandemic contexts. Despite their limitations as systems for measuring learning, the cumulative results of learning loss tests have led to widespread media coverage, parental alarm, and well-funded policy interventions. In the US, for example, states are spending approximately $6.5 billion addressing learning loss.
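A stylized illustration of that projection logic, using entirely made-up rates rather than any vendor’s actual parameters: pre-pandemic summer-slide estimates are extrapolated over closure months and compared against a counterfactual of normal in-school growth.

```python
# A stylized sketch of how early-wave 'learning loss' projections worked:
# pre-pandemic summer-slide rates were extrapolated over closure months.
# Both rates below are hypothetical placeholders.

SUMMER_SLIDE_SD_PER_MONTH = -0.03   # hypothetical out-of-school decline
TYPICAL_GROWTH_SD_PER_MONTH = 0.04  # hypothetical in-school growth

def projected_loss(closure_months: float) -> float:
    """Projected shortfall versus a normal school year, in SD of test scores."""
    counterfactual = TYPICAL_GROWTH_SD_PER_MONTH * closure_months
    projected = SUMMER_SLIDE_SD_PER_MONTH * closure_months
    return projected - counterfactual

print(f"Projected shortfall after 3 closure months: {projected_loss(3):.2f} SD")
# -> -0.21 SD: a 'loss' figure derived entirely from pre-pandemic data,
# before any pandemic-era student had actually been tested.
```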

Learning loss results based on Renaissance Star reading and numeracy assessments for the Department for Education

In England, meanwhile, the Department for Education commissioned the commercial assessment company Renaissance Learning and the Education Policy Institute to produce a national study of learning loss. The study utilized data from reading and mathematics assessments of over a million pupils who took a Renaissance Star test in autumn 2020, and its findings were published by the Department for Education as an official government document. An update report in 2021, published on the same government webpage, linked the Renaissance Star results to the National Pupil Database. This arrangement exemplifies both the ways commercial testing companies have generated business from measuring learning loss, and their capacity to shape and inform government knowledge of the problem–as well as the persistent use of reading and numeracy results as proxy evidence of deficiencies in learning.

Moreover, learning loss has become a commercial opportunity not just for testing companies delivering the tests, but for the wider edtech and educational resources industry seeking to market learning ‘catch-up’ solutions to schools and families. ‘The marketing of learning loss’, Akil Bello has argued, ‘has been fairly effective in getting money allocated that will almost certainly end up benefiting the industry that coined the phrase. Ostensibly, learning loss is a term that sprung from educational research that identified and quantified an effect of pandemic-related disruptions on schools and learning. In actuality, it’s the result of campaigns by test publishers and Wall Street consultants’.

While not entirely true—learning loss has a longer academic history as we’ve seen—it seems accurate to say the concept has been actively reframed from its initial usage in the identification of summer loss. Rather than relying on psychometric instruments to assess cognitive development, it has now been narrowed to reading and numeracy assessments. What was once a paper and pen psychometric survey in the 1980s has now become a commercial industry in computerized testing and the production of policy-influencing data. But this is not the only reframing that learning loss has experienced, as the measurements produced by the assessment industry have been paralleled by the development of alternative measurements by economists working for large international organizations.

Economic hysteresis

While early learning loss studies were based in psychometric research in localized school district settings, and the assessment industry has focused on national-level results in reading and numeracy, other recent large-scale studies of learning loss have begun taking a more econometric approach, at national and even global scales, derived from the disciplinary apparatus of economics and labour market analysis.

Influential international organizations such as the OECD and World Bank, for example, have promoted and published econometric research calculating and simulating the economic impacts of learning loss. They framed learning loss as predicted skills deficits caused by reduced time in school, which would result in weaker workforce capacity, reduced income for individuals, overall ‘human capital’ deficiencies for nations, and thereby reduced gross domestic product. The World Bank team calculated this would cost the global economy $11 trillion, while the economists writing for the OECD predicted ‘the impact could optimistically be 1.5% lower GDP throughout the remainder of the century and proportionately even lower if education systems are slow to return to prior levels of performance. These losses will be permanent unless the schools return to better performance levels than those in 2019’.

These gloomy econometric calculations are based on particular economic concepts and practices. As another OECD publication framed it, learning loss represents a kind of ‘hysteresis effect’ usually studied by labour economists as a measure of the long-term, persistent economic impacts of unemployment or other events in the economy. As such, framing education in terms of hysteresis in economics assumes learning loss to be a causal determinant of long-term economic loss, and that mitigating this problem should be a major policy preoccupation for governments seeking to upskill human capital for long-term GDP growth. Christian Ydesen has recently noted that the OECD calculations about human capital deficits caused by learning loss are already directly influencing national policymakers and shaping education policies.

It’s obvious enough why the huge multitrillion dollar deficit projections of the World Bank and OECD would alarm governments and galvanize remedial policy interventions in education. But the question remains how these massive numbers were produced. My following notes on this are motivated by talks at the excellent recent conference Quantifying the World, especially a keynote presentation by the economic historian Mary Morgan. Morgan examined ‘umbrella concepts’ used by economists, such as ‘poverty’, ‘development’ and ‘national income’, and the ways each incorporates a set of disparate elements, data sets, and measurement systems.

The production of numerical measurements, Morgan argued, is what gives these umbrella concepts their power, particularly to be used for political action. Poverty, for example, has to be assembled from a wide range of measurements into a ‘group data set’. Or, as Morgan has written elsewhere, ‘the data on population growth of a society consist of individuals, who can be counted in a simple aggregate whole’, but for economists ‘will more likely be found in data series divided by occupational classes, or age cohorts, or regional spaces’. Her interest is in ‘the kinds of measuring systems involved in the construction of the group data set’.

Figures published by the OECD on the economic impacts of learning loss on G20 countries

Learning loss, perhaps, can be considered an umbrella concept that depends on the construction of a group data set, while that group data set in turn relies on a particular measuring system that aligns disparate data into the ‘whole’. If we look, for example, at the OECD report ‘The Economic Impacts of Learning Loss’, it is based on a wide range of elements, data sets and measuring systems. Its authors are Eric Hanushek and Ludger Woessmann, both economists and fellows of the conservative, free market public policy think tank the Hoover Institution based at Stanford University. The projections in the report of 1.5-3% lower GDP for the rest of the century represent the ‘group data set’ in their analysis. But this consists of disparate data sets, which include: estimates of hours per day spent learning; full days of learning lost by country; assessments of the association between skills learned and occupational income; correlational analyses of educational attainment and income; effects of lost time in school on development of cognitive skills; potential deficits in development of socio-emotional skills; and how all these are reflected in standardized test scores.

It’s instructive looking at some excerpts from the report:

Consistent with the attention on learning loss, the analysis here focuses on the impact of greater cognitive skills as measured by standard tests on a student’s future labour-market opportunities. …  A rough rule of thumb, found from comparisons of learning on tests designed to track performance over time, is that students on average learn about one third of a standard deviation per school year. Accordingly, for example, the loss of one third of a school year of learning would correspond to about 11% of a standard deviation of lost test results (i.e., 1/3 x 1/3). … In order to understand the economic losses from school closures, this analysis uses the estimated relationship between standard deviations in test scores and individual incomes … based on data from OECD’s Survey of Adult Skills (PIAAC), the so-called “Adult PISA” conducted by the OECD between 2011 and 2015, which surveyed the literacy and numeracy skills of a representative sample of the population aged 16 to 65. It then relates labour-market incomes to test scores (and other factors) across the 32 mostly high-income countries that participated in the PIAAC survey.

So as we can see, the way learning loss is constructed as an umbrella concept and a whole data set by the economists working for the OECD involves the aggregation of many disparate factors, measures and econometric measurement practices. They include past OECD data, as well as basic assumptions about learning as being synonymous with ‘cognitive skills’ and objectively measurable through standardized tests, and a host of specific measuring systems. Data projections are constructed from all these elements to project the economic costs of learning loss for individual G20 countries, and then calculated together as ‘aggregate losses in GDP across G20 nations’ using the World Development Indicators database from the World Bank as the base source for the report’s high-level predictions.
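To see how such an umbrella number might be assembled, here is a deliberately simplified sketch of the chain of assumptions quoted above. Only the one-third-of-a-standard-deviation rule of thumb comes from the report itself; every other parameter is a placeholder of my own, not Hanushek and Woessmann’s estimate, which is why the output is smaller than the report’s 1.5-3% projections.

```python
# A stylized sketch of the assumption chain behind learning-loss GDP
# projections. Only the 'one third of a standard deviation per school year'
# rule of thumb is taken from the report; all other parameters are
# illustrative placeholders, not Hanushek and Woessmann's estimates.

SD_LEARNED_PER_SCHOOL_YEAR = 1 / 3   # rule of thumb quoted in the report
school_years_lost = 1 / 3            # assumed length of closures

skill_loss_sd = school_years_lost * SD_LEARNED_PER_SCHOOL_YEAR
print(f"Skill loss: {skill_loss_sd:.0%} of a standard deviation")  # ~11%

# Hypothetical labour-market return: X% higher income per 1 SD of test
# scores (the report derives its own figure from PIAAC data).
RETURN_PER_SD = 0.10
income_penalty = skill_loss_sd * RETURN_PER_SD
print(f"Lifetime income penalty per affected student: {income_penalty:.2%}")

# Aggregation: affected school cohorts gradually come to make up a share
# of the workforce, dragging down aggregate productivity and hence GDP.
AFFECTED_COHORTS, WORKING_COHORTS = 13, 45
workforce_share = AFFECTED_COHORTS / WORKING_COHORTS
gdp_shortfall = workforce_share * income_penalty
print(f"Peak annual GDP shortfall: {gdp_shortfall:.2%}")
```

Every line of this toy calculation bundles in a contestable measuring system, which is precisely what makes the resulting ‘whole’ number an umbrella concept in Morgan’s sense.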

It is on the basis of this ‘whole’ calculation of learning loss—framed in terms of economic hysteresis as a long-term threat to GDP—that policymakers and politicians have begun to take action. ‘How we slice up the economic world, count and refuse to count, or aggregate, are contingent and evolving historical conventions’, argues Marion Fourcade. ‘Change the convention … and the picture of economic reality changes, too—sometimes dramatically’. While there may well be other ways of assessing and categorizing learning loss, it is the specific econometric assembly of statistical practices, conventions, assumptions, and big numbers that has made learning loss into part of ‘economic reality’ and into a powerful catalyst of political intervention.

Counting the costs of learning loss calculations

As a final reflection, I want to think along with Mary Morgan’s presentation on umbrella concepts for a moment longer. As the three examples I’ve sketchily outlined here indicate, learning loss can’t be understood as a ‘whole’ without disaggregating it into its disparate elements and the various measurement practices and conventions they rely on. I’ve counted only three ways of measuring learning loss here—the original psychometric studies; testing companies’ assessments of reading and numeracy; and econometric calculations of ‘hysteresis effects’ in the economy—but even these are made of multiple parts, and are based on longer histories of measurement that are contested, incompatible with one another, sometimes contradictory, and incoherent when bundled together.

As Morgan said at the Quantifying the World conference, ‘the difficulties—in choosing what elements exactly to measure, in valuing those elements, and in combining numbers for those many elements crowded under these umbrella terms—raise questions about the representing power of the numbers, and so their integrity as good measurements’.

Similar difficulties in combining the numbers that constitute learning loss might also raise questions about their power to represent the complex effects of Covid disruptions on students, and their integrity to produce meaningful knowledge for government. As my very preliminary notes above suggest, there is no such thing as a single ‘learning loss’, but multiple conceptual ‘learning losses’, each based on its own measurement system. There are social lives behind the methods of learning loss.

Regardless of the incoherence of the concept, learning loss will continue to exert effects on educational policies, school practices and students. It will buoy up industries, and continue to be the subject of research programs in diverse disciplines and across different sites of knowledge production, from universities to think tanks, consultancies, and international testing organizations. Learning loss may come at considerable cost to education, despite its contradictions and incoherence, by diverting pedagogic attention to ‘catch-up’ programs, allocating funds to external agencies, and attracting political bodies to focus on mitigation measures above other educational priorities.


Nudging assets

The acquisition of learning management system Blackboard has opened up opportunities for the new company to generate value from integrating data and nudging student behaviour. Photo by Annie Spratt on Unsplash

The acquisition of the global education platform Blackboard by Anthology has brought the mundane Learning Management System back to attention. While full details of the deal remain to be seen, and it won’t close until the end of the year, it surfaces two important and interlocking issues. One is the increasing centrality of huge data integrations to the plans of education technology vendors; the second is the seeming attractiveness of this data-driven approach to edtech financiers.

Primarily, the acquisition of Blackboard by Anthology centres on business interests, according to edtech consultant Phil Hill, who notes that Blackboard’s owners have been seeking to sell the company for three years. The purpose of the deal, Hill argues, ‘is a revenue growth opportunity driven by cross-selling, international growth, and the opportunities to combine products and create new value, particularly at the data level.’ This approach, Hill further suggests, makes sense on the ‘supply side’ for vendors and investors who see value in combining data and integrating systems, if less so on the ‘demand side’ of universities and schools, whose primary concerns are with usability.

There are two things going on here worth questioning a little further. First, what exactly are Blackboard/Anthology hoping to achieve by combining data, and second, why is this attractive to investors? Based on some recent company blog posts from Blackboard, the answer to the first question appears to be about the capacity for ‘nudging’ students towards better outcomes through ‘personalized experiences’ based on data analytics, and the second question might be addressed by understanding those data as ‘assets’ with expected future earnings power for their owners. This post is an initial attempt to explore those issues and their interrelationship.

Nudging

One of the key features of the Blackboard/Anthology announcement was that it would enable much greater integration of the existing software systems of the two companies, including learning management, community engagement, student success, student information and enterprise resource planning systems. ‘Combining the two companies will create the most comprehensive and modern EdTech ecosystem at a global scale for education’, the CEO, president and chair of Blackboard, Bill Ballhaus, wrote in a company blog. ‘It will enable us to break down data silos, and surface deeper insights about the learner so we can deliver unmatched personalized experiences across the full learner lifecycle and drive better outcomes’.

The idea of breaking down ‘data silos’ and integrating data systems is part of a longer Blackboard strategy on making the most of cloud computing for cross-platform interoperability. Blackboard migrated most of its services to Amazon Web Services starting in 2015, with reportedly significant effects on how it could make use of the data collected by its LMS. ‘Our new analytics offering, Blackboard Data, is a good example where we are leveraging AWS technologies to build a platform that provides data-driven insight across all our solutions’, Blackboard reported in 2017. These insights will now be generated across the entire Blackboard/Anthology portfolio, raising data privacy and protection implications that Blackboard was quick to address just a day after the announced acquisition.

Beyond data privacy issues, though, the stated purpose of integrating data is to enact ‘Blackboard’s vision of personalizing experiences’. Writing earlier in the summer, Blackboard’s CEO Bill Ballhaus set out the company’s longer-term vision for personalizing learning experiences. Drawing on examples of online shopping, healthcare and entertainment, Ballhaus argued that a ‘critical mass of data powers proactive nudges’ based on highly granular personal data profiles. Education, however, had not yet ‘kept pace with the shift to customized experiences that other industries achieved’. This, he said, had now changed with the disruptions of the previous year.

‘The massive shift to online learning driven by the COVID-19 global pandemic enabled continuity of education in the near term, while opening the door for education to move forward on a journey toward more personalized experiences’, Ballhaus argued. ‘We’ve had our sights set on the future for the past few years and have the ability to securely harness data, with robust privacy protections, from across our ecosystem of EdTech solutions with the specific intent of enabling personalized experiences to drive improved outcomes’.

The discourse of ‘nudges’ as the central technique of personalized learning runs throughout this vision. ‘Students need nudges’ to reach better outcomes, Ballhaus continued, with ‘the 25 billion weekly interactions in our learning management and virtual classroom systems’ enabling Blackboard to operationalize such a nudge-based approach to personalized learning.
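What might such a nudge-based approach look like operationally? Here is a purely hypothetical sketch (nothing below is Blackboard’s actual system, data model or API) of the generic pattern: interaction traces are condensed into a predicted risk score, and crossing a threshold triggers a personalized prompt.

```python
# A hypothetical sketch of a data-driven 'nudge' pipeline: interaction
# traces feed a risk prediction, which triggers a personalized prompt.
# All names, weights and thresholds are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionProfile:
    logins_last_week: int
    avg_quiz_score: float       # 0..1
    forum_posts_last_week: int

def predicted_risk(p: InteractionProfile) -> float:
    """Toy risk score from hand-weighted engagement signals.
    A production system would use a model trained on past outcomes."""
    risk = 0.4 if p.logins_last_week < 2 else 0.0
    risk += 0.4 * (1.0 - p.avg_quiz_score)
    risk += 0.2 if p.forum_posts_last_week == 0 else 0.0
    return min(risk, 1.0)

def nudge(p: InteractionProfile) -> Optional[str]:
    """Return a personalized prompt when predicted risk crosses a threshold."""
    r = predicted_risk(p)
    if r > 0.5:
        return "You haven't logged in much this week - try the practice quiz?"
    if r > 0.3:
        return "Nice progress - a short forum post could consolidate it."
    return None  # no intervention

profile = InteractionProfile(logins_last_week=1, avg_quiz_score=0.55,
                             forum_posts_last_week=0)
print(nudge(profile))
```

The point of the sketch is how mundane the machinery can be: a handful of metrics, a threshold, and a library of prompts.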

By emphasizing student nudges fuelled by masses of data as the basis of personalized learning, Blackboard has tapped into the logics of the psychological field of behavioural economics and its political uptake in the form of behavioural governance. Mark Whitehead and coauthors describe how behavioural governance has proliferated across public policy in many countries in recent years—especially the UK and US—through the application of nudge strategies. This has been amplified by digital ‘hypernudge’ techniques based on personal data profiles, which, as Karen Yeung argues, ‘are extremely powerful and potent due to their networked, continuously updated, dynamic and pervasive nature’.

So, the business plan behind the Blackboard/Anthology merger appears to be to enact a form of behavioural governance in digital education, operationalizing personalized hypernudges within the architectures of vast edtech ecosystems. While such a form of ‘machine behaviourism’ has existed in imaginary form for some years, it may now materialize in the seemingly mundane machinery of the learning management systems used by institutions across the globe. And that potential capacity for nudging also appears to be the source of expected future value for financial backers.

Assets

While the Blackboard/Anthology deal has been presented by the two companies as a merger, and interpreted by most as an acquisition of the former by the latter, in reality this is a deal between their financial backers and owners. Anthology is majority owned by Veritas Capital (a private equity firm investing in products and services to government and commercial customers), with Leeds Equity Partners (a private equity firm focused on investments in the Knowledge Industries) as a minority owner, while Blackboard is owned by Providence Equity Partners (a global private equity investment firm focused on media, communications, education, software and services investments). Veritas is providing new funding and retaining majority shareholder status, with both Leeds and Providence as minority shareholders following the acquisition.

The exact value of the deal remains unknown—Phil Hill has suggested it may be in the region of $3bn—but clearly these three private equity firms see prospects for value creation in the future. To interpret this, we need to understand some of the logic of investment. Recent economic sociology work can help here, particularly the concepts of capitalization and assetization.

As Fabian Muniesa and colleagues phrase it, capitalization refers to the processes and practices involved in ‘valuing something’ in terms of ‘the expected future monetary return from investing in it’. Capitalization, they continue, ‘characterizes the reasoning of the banker, the financier and the entrepreneur’, and calculating future expected returns is central to any form of investment. Capitalization then also depends on seeing something as an asset with future value, or making it into one. Kean Birch and Muniesa define an asset as any resource controlled by its owner as a source of expected future benefits, and ‘assetization’ thus as the processes involved in making that resource into a future revenue stream. Transforming something into an asset is therefore central to capitalization. 
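The underlying arithmetic of capitalization is the familiar discounted-cash-flow calculation: an asset’s present value is the sum of the future revenues it is expected to yield, discounted back to today. A minimal sketch, with figures that are purely illustrative rather than anything disclosed about this deal:

```python
# The textbook arithmetic behind 'capitalization': value an asset as the
# discounted sum of its expected future revenue streams. All figures are
# illustrative, not terms of the Blackboard/Anthology deal.

def present_value(expected_cashflows: list[float], discount_rate: float) -> float:
    """Discounted value today of a stream of expected future cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(expected_cashflows, start=1))

# A resource becomes an 'asset' once it can be construed as a revenue
# stream: e.g. a data-driven service projected to grow 10% a year.
projected_revenue = [100.0 * 1.1 ** t for t in range(10)]  # $m, years 1-10
print(f"Capitalized value: ${present_value(projected_revenue, 0.08):,.0f}m")
```

On this logic, anything that raises expected future revenues (such as data that can power new ‘personalized’ services) raises the capitalized value of the asset today.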

Capitalization and assetization may be useful concepts for exploring the Blackboard/Anthology deal. Clearly, Veritas, Leeds and Providence as owners and shareholders are seeking future value from their assets. Their entire business is capitalizing on the assets they hold investments in, in expectation of return on investment. In part, the platforms that Blackboard and Anthology will combine are the assets. It is expected that more customers will purchase from them through cross-selling compatible products (e.g. by integrating Blackboard LMS with Anthology student information systems and making them interoperable for ease of use).

But given the prominence in the deal announcement and other posts of ‘breaking down data silos’ and ‘the possibilities of delivering personalized experiences fueled by data through our combination’, it seems likely that there is a process of assetizing the data themselves going on here. If the platforms and services themselves have future value, that is dependent upon the 25 billion weekly interactions of users as a new source of value creation. How are data made valuable?

In a recent study, Birch and coauthors highlight how ‘Big Tech’ companies transform personal digital data into assets with future earnings power, both for the companies and their investors. They argue that this assetization of user data occurs through the ‘transformation of personal data into user metrics that are measurable and legible to Big Tech and other political-economic actors (e.g., investors)’. In similar ways, then, the new Big EdTech company emerging from the combination of Blackboard and Anthology aims to transform student data into measurable and legible forms for its investors. 25 billion weekly interactions leave traces which can be made valuable.

As Janja Komljenovic has recently argued, ‘the digital traces that students and staff leave behind when interacting with digital platforms’ can be ‘made valuable by processing data into intelligence for either improving an existing product or service, or creating a new one, selling data-based products (such as learning analytics or other data intelligence on students), various automated matching services, automated tailored advertising, exposure to the audience, and so on’. The value comes not from the data themselves, but ‘from their predictive power and inducing behaviour in others’. In other words, as Komljenovic elaborates, ‘what becomes valuable in digital education is power over the direction of student and staff teaching, learning and work patterns. It is first about the power over calculating predictions and thus performing future, and second, about tailoring experience and nudging behaviour’.

In this particular sense, then, we can see how the objective of ‘nudging’ students through data-fuelled personalized experiences may be a core part of the assetization process involved in the merger of Blackboard and Anthology. The platforms and services themselves, as marketable products for institutions to pay for, and the 25 billion weekly data points of interaction with them, are not the only sources of expected value. Instead, the predictive capacity to shape education by personalizing experience and nudging student behaviours appears to be the key to unlocking future revenue streams.

Assetizing the nudge and nudging the asset in Big EdTech

The Blackboard/Anthology deal seems to foreground two complementary trends in the edtech sector. The first is that the ‘nudge’ has become the source of expected future value to asset owners. Personalized learning via digital nudges is clearly a core part of the expected value that Blackboard will return to its new private equity owners and shareholders. This is assetizing the nudge.

The second is that student data have become the focus of the nudge, with digital nudges expected to improve student outcomes. In this sense, the masses of student data held by Blackboard/Anthology are being transformed into assets too. And if we understand those data to produce ‘data subjects’ or informational identities of a student, then we might conceivably think of students themselves as assets with value that can be increased through predictive nudging. This is nudging the asset, although it’s too early to see quite how this will work out in practice at the new company or in the institutions that use its services.

Perhaps later details on the deal will help clarify the precise ways assetization and nudging complement one another in an emerging environment of Big EdTech deals and integrations. It is important for critical edtech research to get up close to these developments at the intersections of nudging and assetization, as practical techniques of behavioural governance and capitalization, even in the most mundane places like the LMS.


New biological data and knowledge in education

Research centres and laboratories have begun conducting studies to record and respond to the biological aspects of learning. Photo by Petri Heiskanen on Unsplash

Novel sources of data about the biological processes involved in learning are beginning to surface in research on education. As the sciences of the human body have been transformed by advances in computing power and data analysis, researchers have begun explaining learning and outcomes such as school attainment and achievement in terms of their embodied underpinnings. These new approaches, however, are generating controversy, and demand up-close social science analysis to understand what processes of knowledge production are involved, as well as how they are being received in public, academic and political debates.

Late last year, the Leverhulme Trust awarded us a research project grant to study the rise of data-intensive biology in education. As we now kick off the project, I’m really pleased to be working with a great interdisciplinary team that includes Jessica Pykett, a social and political geographer at Birmingham University; Martyn Pickersgill, a sociologist of science and medicine at Edinburgh; and Dimitra Kotouza, a political sociologist joining us at Edinburgh straight from an excellent previous project on the policy networks, data practices and market-making involved in addressing the ‘mental health crisis’ in UK higher education.

The project focuses on three domains of data-intensive biology in education:

  • the emergence of ‘big data’ genetics in the shape of ‘genome-wide association studies’ utilizing molecular techniques and bioinformatics technologies including biobanks, microarray chips, and laboratory robot scanners to identify complex ‘polygenic patterns’ associated with educational outcomes
  • neurotechnology development in the brain sciences, such as wearable electroencephalography (EEG) headsets, neuro-imaging, and brain-computer interfaces with neurofeedback capacities, and their application in school-based experiments
  • rapid advances in the development and utilization of ‘affect-aware’ artificial intelligence technologies, such as voice interfaces and facial emotion detection for interactive, personalized learning, that are informed by knowledge and practice in the psychological and cognitive sciences

We are planning to track these developments and their connections with cognate advances in the learning sciences, AI in education, and recent proposals around ‘learning engineering’ and ‘precision education’. Across this range of activities, we see a concerted effort to employ data-scientific technologies, methodologies and practices to record biological data related to learning and education, and in some cases to develop responses or interventions based on it. We’re only just starting the project with the full team in place, but a couple of very recent developments help exemplify why we consider the project important and timely.

Controversy over the genetics of education

On the very same day our Leverhulme Trust grant arrived, 6 September, The New Yorker published a 10,000-word article entitled ‘Can Progressives Be Convinced That Genetics Matters?’ Primarily a long-form profile of the psychology professor Paige Harden, the article describes the long and controversial history of behaviour genetics, a field in which Harden has become a leading voice—as signified by the forthcoming publication of her book The Genetic Lottery: Why DNA Matters for Social Equality.

The main thrust of the article is about Harden’s attempts to develop a ‘middle ground’ between right wing genetic determinists and left wing progressives. She is described in the piece as a ‘left hereditarian’ who acknowledges the role played by biology in social outcomes such as educational attainment, but also the inseparability of such outcomes from social and environmental factors (‘gene x environment bidirectionality’). The article is primarily focused on the politics of behaviour genetics, which has long been a major field of controversy even within the scientific disciplines of genetics due to its ‘ugly history’ in eugenics and scientific racism.

Judging from reactions on Twitter among genetics researchers and educators, these are problems—both disciplinary and political—which are more complex and intractable than either the article or the science lets on. Concerns remain, despite optimistic hopes of a ‘middle ground’, that new molecular behaviour genetics insights will be mobilized and reframed by ideologically-motivated groups to reinforce dangerous genetically-reductionist notions of race, gender and class.

The New Yorker profile also notes that recent developments in genome-wide association studies (GWAS) have begun producing significant findings about the connections between genes and educational outcomes. These are ‘big data’ endeavours using samples of over a million subjects and complex bioinformatics infrastructures of data analysis, and are part of a burgeoning field known as ‘sociogenomics’. Again, many of these sociogenomics studies appear informed by the left hereditarian perspective—seeing complex, biological polygenic patterns related to educational outcomes as operating bidirectionally with environmental factors, and arguing that genetically-informed knowledge can lead to better, social justice-oriented outcomes.
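The basic arithmetic of the polygenic scores produced by such studies is simple to sketch: a weighted sum of an individual’s allele counts, using effect sizes estimated in a GWAS. The sketch below is heavily simplified (real pipelines involve quality control, linkage-disequilibrium adjustment and ancestry correction), and the SNP identifiers and weights are invented.

```python
# A minimal sketch of polygenic score arithmetic: a weighted sum of a
# person's effect-allele counts, weighted by GWAS effect-size estimates.
# SNP ids and weights below are invented for illustration.

gwas_effect_sizes = {       # per-allele effect estimates from a GWAS
    "rs0000001": 0.021,
    "rs0000002": -0.013,
    "rs0000003": 0.008,
}

def polygenic_score(genotype: dict[str, int]) -> float:
    """genotype maps SNP id -> count of effect alleles (0, 1 or 2)."""
    return sum(beta * genotype.get(snp, 0)
               for snp, beta in gwas_effect_sizes.items())

person = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}
print(f"Polygenic score: {polygenic_score(person):+.3f}")

# Real educational GWAS sum over hundreds of thousands of SNPs, and the
# resulting scores still explain only a modest share of variance at the
# individual level - one reason their interpretation remains contested.
```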

But educational GWAS research and polygenic scoring informed by a sociogenomics paradigm are not settled science. As I began illustrating in some recent preparatory research for this project, the scientific apparatus of a data-intensive, bioinformatics-driven approach to education remains in development, is producing very different forms of interpretation, and is leading to disagreement over its pedagogic and policy implications. Even from within the field, a behaviour genetics approach to education based on big data analysis remains a fraught enterprise. Outside the field, it is prone to being appropriated to support ideological right-wing positions and as fuel to attack so-called ‘progressives’ and their ‘environmental determinism’.

The controversy over behaviour genetics and education is not new, as Aaron Panofsky has shown. As part of a long-running series of critical studies and publications on behaviour genetics, he analyzed its involvement in promoting ideas about genetically-informed education reform. Focusing in particular on the work of behaviour geneticist Robert Plomin, Panofsky notes that Plomin’s vision of genetically-informed education utilizing high-tech molecular genomics technologies represents a form of ‘precision education’ modelled on ‘precision medicine’ in the biomedical field. In precision medicine, doctors ‘could use genetic and biomarker information to divide individuals into distinct diagnosis and treatment categories’. A precision education approach would ostensibly use similar information to support ‘personalization’ according to students’ ‘different genetic learning predispositions’.

According to Panofsky, however, precision medicine ‘represents an approach to health and healing very much in line with our neoliberal political times’. It focuses, he argues, ‘toward “me medicine” that seeks to improve health through high-tech, expensive, privatized, individualized, and decontextualized intervention and away from “we medicine” that aims to improve health and illness in the broad public through focusing on widely available interventions and targeting health’s social determinants’.

Thus, for Panofsky, Plomin’s vision of precision education raises the risk that ‘while genetically personalized education is represented as a tool to help educate everyone, it represents more of a “me” approach than a “we” approach’. He argues it risks deflecting attention away from other educational problems and their social determinants–such as school funding, policy instability, workforce quality and labour relations, and especially underlying inequalities and poverty–by focusing instead on the identification of individuals’ biological traits and the cultivation of ‘each individual’s genetic potential’.

Overall, The New Yorker article helps illustrate the controversies that genetics research in education may continue to generate in coming years. It also shows how advances in data-intensive bioinformatics technologies and sociogenomics theorizing are already beginning to play a role in knowledge production on educational outcomes. As the high-profile publication of Harden’s The Genetic Lottery indicates, these advances and arguments are likely to continue, albeit perhaps in different forms and with different motivations. Robert Plomin’s team, for example, argues that ‘molecular genetic research, particularly recent cutting-edge advances in DNA-based methods, has furthered our knowledge and understanding of cognitive ability, academic performance and their association’, and will ‘help the field of education to move towards a more comprehensive, biologically oriented model of individual differences in cognitive ability and learning’.

A key part of our project will involve tracking these unfolding developments in biologically oriented education, their historical threads, technical and methodological practices, and their ethics and controversies.

Engineering student-AI empathy

The second development is related to ‘affect-aware’ technologies to gauge and respond to student emotional states. Recently, the National Science Foundation awarded almost US$20m to a new research institute called the National AI Institute for Student-AI Teaming (iSAT), as part of its huge National AI Research Institutes program.  One of three AI Institutes dedicated to education, iSAT is focused on ‘turning AI from intelligent tools to social, collaborative partners in the classroom’. According to its entry on the NSF grants database, it spans the ‘computing, learning, cognitive and affective sciences’ and ‘advances multimodal processing, natural language understanding, affective computing, and knowledge representation’ for ‘AI-enabled pedagogies’.

The iSAT vision of ‘student-AI teaming’—a form of human-machine collaborative learning—is based on ‘train[ing] our AI on diverse speech patterns, facial expressions, eye movements and gestures from real-world classrooms’. To this end it has recruited two school districts, totalling around 5000 students, to train its AI on their speech, gestures, facial and eye movements. iSAT’s existing publications are indicative of its planned outcomes. They include ‘interactive robot tutors’, ‘embodied multimodal agents’, and an ‘emotionally responsive avatar with dynamic facial expressions’.

The last of those iSAT examples, the ‘emotionally responsive avatar’, is based on the application of ‘emotion AI’ technology from Affectiva, a commercial spin-out of MIT’s Affective Computing lab. The lead investigator of iSAT was formerly based at the lab, and has an extensive publication record focused on such technologies as ‘affect-aware autotutors’ and ‘emotion learning analytics’. In this sense, iSAT represents the advance of a particular branch of learning analytics and AI in education, supported by federal science funding and the approval of the leading US science agency.

Emotion AI-based approaches in education, like molecular behaviour genetics, are deeply controversial. Andrew McStay describes emotion AI as ‘automated industrial psychology’ and a form of ‘empathic media’ that takes ‘autonomic biological signals’ captured through biosensors as proxies for a variety of human affective processes and behaviours. Empathic media, he argues, aims to make ‘emotional life machine-readable, and to control, engineer, reshape and modulate human behaviour’. This biologization and industrialization of the emotions for data capture by computers therefore raises major issues of privacy and human rights. Luke Stark and Jesse Hoey have argued that ‘The ethics of affect/emotion recognition, and more broadly of so-called “digital phenotyping” ought to play a larger role in current debates around the political, ethical and social dimensions of artificial intelligence systems’. Google, IBM and Microsoft have recently begun rolling back plans for emotion sensing technologies following internal reviews by their AI ethics boards.

Over the last few years, several examples have emerged of education technology applications utilizing emotion AI-based approaches. They tend to provoke considerable concern and even condemnation, as part of broader public, media, industry and political debates about the role of AI in societies. Given that such technologies are already the subject of considerable public and political contestation, it is notable, then, that similar biosensor technologies are being generously supported as cutting-edge AI developments with direct application in educational settings. While iSAT certainly has detailed ethical safeguards in place, some broader sociological issues remain outstanding.

The first is about the apparatus of data production involved in such efforts. iSAT employs Affectiva’s facial analysis technology, which is itself based on the taxonomy of ‘basic emotions’ and the ‘facial action coding system’ developed in the 1970s by the psychologist Paul Ekman and colleagues. As researchers including McStay, Stark and Hoey have well documented, basic emotions and facial coding are highly contested as seemingly ‘universalist’ and mechanistic measures of the diversity of human emotional life. So iSAT is bringing highly controversial psychological techniques to bear on the analysis of student affect, in the shape of biosensor-enabled automated AI teaching partners. There remains an important social science story to tell here about the long historical development of this apparatus of affect measurement, its enrolment into educational knowledge production, and its eventual receipt of multimillion dollar federal funding.
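
As an aside, it is worth seeing just how mechanistic the underlying taxonomy is. The toy lookup below illustrates, in drastically simplified form, the kind of mapping from facial ‘action units’ (AUs) to Ekman-style basic emotions that facial coding enables; real AU-emotion correspondences are precisely what the critics contest.

```python
# Illustrative only: a toy lookup from facial action units (AUs) to
# Ekman-style basic emotions, in the spirit of the facial action coding
# system described above. The drastic simplification is the point.
AU_TO_EMOTION = {
    frozenset({6, 12}): "happiness",       # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",      # brow raiser/lowerer + lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise",  # brow raisers + upper lid raiser + jaw drop
    frozenset({4, 5, 7, 23}): "anger",     # brow lowerer + lid/lip tighteners
}

def classify(active_aus):
    """Return the 'basic emotion' whose AU pattern is fully present."""
    best, overlap = "neutral", 0
    for pattern, emotion in AU_TO_EMOTION.items():
        if pattern <= active_aus and len(pattern) > overlap:
            best, overlap = emotion, len(pattern)
    return best

print(classify({6, 12}))        # -> happiness
print(classify({1, 2, 5, 26}))  # -> surprise
```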

The second concerns the implications of engineering ‘empathic’ partnerships between students and AI through so-called ‘student-AI teaming’. This requires the student to be made machine-readable as a biological transmitter of signals, and to become a subject of empathic attention from automated interactive robot tutors. Significant issues remain to be explored here too about human-machine emotional relations and the consequences for young people of their emotions being read as training data to create empathic educational media.

In the research we are planning, we aim to trace the development of such apparatuses and practices of emotion detection in education, and their consequences in terms of how students are perceived, measured, understood, and then treated as objects of concern or intervention by empathic automatons.

Bio-edu-data science

Overall, what these examples indicate is how advances in AI, data and sensor technologies have merged with scientific research in the learning, cognitive and biological sciences to fixate on students’ bodies as signal-transmitters of learning and its embodied substrates. While the apparatus of affective computing at iSAT tracks external biological signals from faces, eyes, speech and gestures as traces of affect, learning and cognition, the apparatus of bioinformatics is intended to record observations at the molecular level.

The bioinformatics apparatus of genetics and the biosensor apparatus of emotion learning analytics are beginning to play significant parts in how processes of learning, cognition and affect, as well as outcomes such as attainment and achievement, are known and understood. New biologized knowledge, produced through complex technical apparatuses by new experts of both the data and life sciences, is being treated as increasingly authoritative, despite varied controversies over its validity and its political and ethical consequences. This new biologically-informed science finds traces of learning and its outcomes in polygenic patterns and facial expressions, as well as in traces of other embodied processes.

In our ongoing research, then, we are trying to document some of the key discourses, lab practices, apparatuses, and ethical and political implications and controversies of an emerging bio-edu-data science. Bio-edu-data science casts its gaze on to students’ bodies, and even through the skin to molecular dynamics and traces of autonomic biological processes. We’ll be reporting back on this work as we go.


Breaking open the black box of edtech brokers

Mathias Decuypere and Ben Williamson

Education technology brokers build new connections between the private edtech industry and state schools. Photo by Charles Deluvio on Unsplash

A new kind of organization has appeared on the education technology landscape. Education technology ‘brokers’ are organizations that operate between the commercial edtech industry and state schools, providing guidance and evidence on edtech procurement and implementation. Staffed by new experts in evaluation and decision-making, they act as connective agencies to influence schools’ edtech purchasing and use, as well as to shape the market prospects of the commercial edtech companies they represent or host. As a new type of ‘evidence intermediary’, these brokerage organizations and experts possess the professional knowledge and skills to mobilize data, platform technologies, and evidence-making methods to provide proof of ‘what works’, demonstrate edtech ‘impact’, and provide practical guidance to school decision-makers about edtech procurement.

Although brokers represent a novel point of connection between the edtech market and state systems of schooling, little is known about the aims or practical techniques of these organizations, or their concrete effects on schools. Edtech brokers are ‘black boxes’ that need opening up for greater attention by researchers and educators.

We are delighted to have received an award from a global research partnership between KU Leuven and the University of Edinburgh, which is funding a 4-year full-time PhD studentship to research the rise of edtech brokers with Mathias Decuypere (KU Leuven) and Ben Williamson (Edinburgh). The project will examine and conceptualize edtech brokerage as part of a transnational policy agenda to embed edtech in education, the operations of brokers in specific national contexts, and their practical influences on schools. The research will build on and advance our shared interests in digital platforms and edtech markets in education, as well as our broader concerns with data-intensive governance and digitalized knowledge production.

We have already identified a wide range of brokers to examine. One illustrative case is the Edtech Genome Project, ‘a sector-wide effort to discover what works where, and why’, developed by the Edtech Evidence Exchange in the US with partnership support from the Chan Zuckerberg Initiative, Carnegie Corporation, and Strada Education Network. It is building a digital ‘Exchange’ platform to enable ‘decision-makers to access data and analysis about edtech implementations’ with a view to both ‘increase’ student learning and save schools billions of dollars on ‘poor’ edtech spending. 

To a significant extent, we anticipate edtech brokers such as the Exchange becoming highly influential platform and market actors in education, across a range of contexts, in coming years.   

Post-Covid catalytic change agencies

In the context of the Covid-19 educational emergency, the role, significance and position of educational brokers have already grown: they are able to marshal their knowledge and expertise to advise schools on the most impactful edtech to address issues such as so-called ‘learning loss’ or ‘catch-up’ requirements.

These developments are not radically new. They reflect the increasing participation of private technology companies as sources of policy influence (supported by external consultancies, think tanks and international organizations) in education systems worldwide, and the rise of new types of ‘evidence’ production, including ‘what works’ centers and ‘impact’ programs, and the related emergence of new kinds of professional roles for evaluation and evidence experts in education.

However, especially during the Covid-19 emergency, brokers have begun asserting their expertise and professionality to support schools’ post-pandemic recovery, and creating practical programs and platforms to achieve that aim. Not only are edtech brokers positioning themselves as experts in evaluation and evidence, or as connective nodes between private companies and public education; they act as catalytic change agencies advising schools on the appropriate institutional pathways and product purchases to make for digital transformation.

Ambassadors and engines

We have initially identified two types of brokers:

  1. Ambassador brokers represent either a single technology provider or a selected sample of industry actors. They provide sales, support and training for specific vendor products, including global technology suppliers such as Google and Microsoft, acting as supporting intermediaries for the expansion of their platforms and services into schools.
  2. Search engine brokers function as public portals presenting selected evidence of edtech quality and impact to shape edtech procurement decisions in schools. They function as searchable databases of ‘social proof’ of ‘what works’ in the ‘edtech impact marketplace’, enabling school staff to access product comparisons and evaluative review materials.

Edtech brokers represent significant changes in the ways state education is organized. Both types of brokers operate as or through platforms that offer (part of) their services through digital means, exemplifying as well as catalyzing fast-paced digital transformations in education systems.

Edtech brokering is, furthermore, a global phenomenon, with initiatives variously funded by international organizations, philanthropies, national government agencies, and associations of private companies, representing concerted transnational and multisector reform ambitions to embed edtech in schools.

Edtech brokers all draw on ‘evidence’ and ‘scientific evaluations’, making such evidence accessible and attractive to decision-makers in schools. They are thereby shifting the sources of professional knowledge that inform schools’ decisions towards particular evaluative criteria of quality, impact, or ‘what works’. More particularly, edtech brokers are emblematic of the rise to power of new types of professionals and new forms of expertise in education.

Overall, we approach brokers as new intermediary actors in state education that are shifting the cognitive frames by which educators and school leaders think and act in relation to edtech. Brokers not only guide users’ decision-making processes and cognition; they equally contribute to structuring particular forms of education and to making specific forms of education visible, knowable, thinkable, and, ultimately, actionable.

The social lives of brokers

This project examines the transnational expansion of edtech brokering as a new organizational type and a new form of professionality in education, and provides an up-close empirical examination of its practical work and concrete effects, by opening the ‘black box of edtech brokering’ in different national contexts. We will utilize a social topology framework to study the policy ecosystem, platform interfaces, and data practices of edtech brokers, as well as their effects on school users of these services.

Exploring the fast-developing intermediary role of edtech brokers is crucial both for academic purposes and for the educational field itself: brokers are assembling the knowledge, expertise and platforms through which post-pandemic education will be defined. Because of their very position as connective intermediaries between specific schools and the edtech corporate world, brokers translate the objectives of both edtech companies and educational institutions into shared and context-specific aims. In doing so, they reformat, redo, restructure, and reconceive what education is or could be about.

Moving from the transnational level of edtech brokering as an emerging phenomenon to the ‘social lives’ of edtech brokers in action, the project will drill down to their influence on decision-making in schools in comparative national contexts. In countries such as Belgium and the UK, we have already observed how both ambassador and search engine brokers are actively seeking to influence the uptake and use of edtech in schools. The project will commence autumn 2021, with fieldwork to be carried out in Belgium and the UK.



Valuing futures

Ben Williamson

Education technology investors are imagining new visions of the future of education while calculating the market valuation of their investment portfolios. Photo by Lukas Blazek on Unsplash

The future of education in universities is currently being reimagined by a range of organizations including businesses, technology startups, sector agencies, and financial firms. In particular, new ways of imagining the future of education are now tangled up with financial investments in education technology markets. Speculative visions and valuations of a particular ‘desirable’ form of education in the future are being pursued and coordinated across both policy and finance.

Visions and valuations

Edtech investing has grown enormously over the last year or so of the pandemic. This funding, as Janja Komljenovic argues, is based on hopes of prospective returns from the asset value of edtech, and also determines what kinds of educational programs and approaches are made possible. It funds unique digital forms of education, investing speculatively in new models of teaching and learning to enable them to become durable and, ideally, profitable for both the investor and investee.

We’ve recently seen, for instance, the online learning platform Coursera go public and reach a multibillion dollar valuation based on its reach to tens of millions of students online. New kinds of investment funds have also emerged to accelerate edtech market growth: special purpose acquisition companies (SPACs), which raise funds to purchase edtech companies, scale them up quickly and return value to the SPAC and its investors, as well as new kinds of education-focused equity funds and portfolio-based edtech index investing that select a ‘basket’ of high-value edtech companies for investors to invest in.

The result of all this investment activity has been the production of some spectacular valuation claims about the returns available from edtech. The global edtech market intelligence agency HolonIQ calculated venture capital investment in edtech at $16bn last year alone, predicting a total edtech market worth $400bn by 2025.

But, HolonIQ said, this isn’t just funding seeking a financial return—it’s ‘funding backing a vision to transform how the world learns’. These edtech investments tend to centre on a particular shared vision of how the future of education could or should be, and on particular products and companies that promise to be able to materialize that future while generating shareholder value. To this end, it has just announced three ‘prototype scenarios’ for the future of higher education, ‘differentiated by market structure’, as a way of developing consensus about desirable imaginaries and market opportunities for investment. The scenarios are imaginary constructs backed by quantitative market intelligence that HolonIQ has calculated with its in-house valuation platform. These are, to draw on the economic sociologist Jens Beckert, instruments of ‘fictional expectations’ that investment organizations craft to showcase their convictions and hopes, supported by specific devices of financial speculation that provide a more ‘calculative preview of the future’.

The aim of such instruments of expectation here is to stimulate speculative investments in new forms of education, and to stabilize them as durable models for prospective future returns. The vision and the valuation of educational futures are intricately connected, and as Keri Facer recently noted, speculative investment of this kind is about making ‘bets’ on certain ‘valued’ educational futures while ‘shorting’ or foreclosing other possible futures for education.

What bets are being made? One example is the vision contained in the 2021 Global Learning Landscape report and infographic from HolonIQ. The landscape is a taxonomy of 1,250 edtech companies that HolonIQ has assessed in terms of their market penetration, product innovation, and financial prospects. As a fictional expectation inscribed in material form, the infographic serves both to attract investors—for whom HolonIQ provides bespoke venture capital services—and to attract educational customers to ‘invest’ in institutional digital innovation through procuring from these selected services.

A persuasive vision or fictional expectation of the future of education is contained and transmitted in this infographic. As an instrument of expectation it emphasizes companies and products promising data-driven teaching and learning and analytics; online platforms such as MOOCs, online program management and other forms of public-private platform partnerships; AI in education, smart learning environments and personalized learning; workforce development and career matching apps, and other forms of student skills measurement and employability profiling. The infographic distills both an imaginative educational vision and a speculative investment valuation of the digital future of teaching and learning.

Education reimagined

The vision and valuation of educational futures are currently being joined together powerfully in the UK by an ongoing partnership between Jisc—the HE sector non-profit digital agency—and Emerge Education, a London-based edtech investment company. Jisc and Emerge have recently produced a series of visionary reports and strategy documents dedicated to Reimagining Learning and Teaching towards a vision of higher education in 2030. Together, the reports function as instruments of expectation with the intention of producing conviction in others that the imaginaries they project are desirable and attainable.

All the reports, written by Emerge with Jisc input, focus on the central fictional expectation of ‘digital transformation’ or ‘rebooting’ HE through partnerships with edtech startups, for example, in teaching, assessment, well-being, revenue diversification, and employability. They have produced an ‘edtech hotlist’ of companies to deliver those transformations, and created a ‘Step Up’ programme of partnerships between startups and universities to actively materialize the imaginary they’re pursuing.

The Jisc-Emerge partnership highlights how investment and policy are being coordinated towards a shared aim with expected value for HE institutions and for edtech companies and their investors at the same time. Exemplifying how investors’ fictional expectations catalyse real-world actions, this valued vision of HE in 2030 appears across the partnership’s reports, and especially in the main report also supported by UUK and Advance HE.

The report offers a vision of revolutionary digital acceleration, in which universities adapt and reimagine themselves as digital organizations, characterized by personalized learning experiences driven by artificial intelligence and adaptive learning systems that modify themselves automatically and dynamically. Universities are told to invest in their digital estates, learning infrastructure, personalized and adaptive learning, and AI. The sector is urged to adopt new data standards for the exchange of learner data, new micro-credentials, new forms of assessment, and well-being analytics.

The vision of learning and teaching ‘reimagined’ here, with the approval of Jisc, UUK and Advance HE, is highly congruent with the investment strategy of Emerge itself, with its emphasis on investing in a portfolio of ‘companies building the future of learning and work’. The fictional expectations and investment imaginary of Emerge have therefore been inscribed both into policy-facing documents and into its own strategic portfolio of investments.

Portfolio futures

What this indicates is that edtech investment has become highly significant in how the future of teaching and learning is imagined and materialized. Education futures are being imagined in parallel with market calculations and speculative investments, inscribed in graphical scenarios and calculative previews as instruments of expectation. Investment portfolios are being fused to policy imaginaries of education by way of shared fictional expectations that coordinate both policy and investment towards the same aims. Certain possible futures are being funded into existence or to scale.

Investment organizations are not just funding fortunate companies, but actively shaping how the future of education is imagined, narrated, invested in, and made into seemingly actionable strategies for institutions. By coordinating both policy and investment portfolios towards shared objectives, they’re valuing and betting on visions of digital transformation that promise prospective investment returns while devaluing and shorting alternative imaginaries of possible HE futures. This raises the question of how other futures of education can be produced, negotiated dialogically by educators, and invested in as a collective portfolio of counter-imaginaries of teaching and learning.


Edtech sci-fi

Ben Williamson

Artistic sci-fi depiction of a futuristic classroom. Image by Josan Gonzalez.

Before making a career out of studying education technology, I was a student of literature. As an undergraduate student of English Lit at Cardiff University, I was taught it was possible to critique the canon, analyze cultural objects as mundane as cereal packets, and engage with ‘genre’ fiction such as crime, horror and sci-fi. Later, as a part-time PhD literature student working full-time for an edtech ‘futurelab’, I read Neal Stephenson’s 1995 sci-fi novel The Diamond Age; among many elements, it features an edtech device called the Primer. It was a strange moment as my PhD, partly about Stephenson’s novels, came into contact with my edtech day-job.

The idea of exploring edtech in sci-fi has remained in the background of my work ever since, but I’ve never properly figured out what to do with it, or whether it was too niche an area of literary interest.

Doing something about edtech sci-fi came up again during a recent workshop to develop a new taught course. Might edtech sci-fi open up students to critical perspectives on current edtech issues such as datafication, inequalities, commercialization and so forth?

As a way of finding out, on Twitter, I asked “Anyone got good examples of education technologies in sci-fi, text or film? Got the Primer in the Diamond Age, roboteachers in Class of 1999, but what else? Possibly for a course #edtechscifi”. Below I’m listing all the responses I received, partly for my own benefit but hopefully in case others are interested too. But first a quick discussion of why studying edtech in sci-fi may be a useful way of approaching a range of critical current issues in research on education.

Science fiction has, for well over a century, provided authors with a way of speculating about the future from current trends, and, by doing so, with a way of exploring the major concerns, tensions and anxieties characterizing its historical, social and political context. These have ranged from fears of nuclear destruction (A Canticle for Leibowitz by Walter M. Miller in the 1950s) to anxieties over neural implants (Gibson’s Neuromancer and Stephenson’s Snow Crash in the 80s and 90s). Today, much contemporary sci-fi is grappling with the consequences of social media, data profiling, surveillance, automation and inequalities.

Recent favourites are Zed by Joanna Kavenna, the novella collection Radicalized by Cory Doctorow, and Burn-In: a novel of the real robotic revolution by PW Singer and August Cole. The latter is a heavily endnoted, research-based novel about the dangers of automation and right-wing extremism authored by two intelligence analysts. They’ve termed it “fiction intelligence” that blends narrative with nonfiction. I’ve also got Kim Stanley Robinson’s The Ministry for the Future on my shelf, a near-future fiction about environmental destruction. In the book How to Run a City like Amazon, and other fables, a group of academic social scientists and geographers even produced a collection of social science fiction stories and poetry about corporate digital urbanism.

Fiction may even animate social theory: as David Beer argues, “fiction has been used to encounter and interrogate far-reaching and vital questions about the social world, some of which are deeply political and global in their scope”.

So fiction in general and sci-fi specifically can speak to urgent contemporary social, technical, political and environmental concerns. As academic geographer (and fiction writer) Rob Kitchin points out,

science fiction employs the tactics of estrangement (pushing a reader outside of what they comfortably know) and defamiliarisation (making the familiar strange) as a way of creating a distancing mirror and prompting critical reflection on society, now and to come. Perhaps unsurprisingly, there is a long history of academics drawing on the imaginaries of science fiction in their analyses, and also science fiction writers using academic ideas in their stories.

I’d suggest this should prompt more engagement with edtech in sci-fi – not to treat sci-fi as a model for the future of education, but as a way of exploring the far-reaching personal, social, political and environmental impacts of edtech development from recent trends.

Artist vision of the future of education: the Edu Ocunet by Tim Beckhardt

One suggestion to my Twitter query from several people was the 2002 dystopian novel Feed by MT Anderson, a fabulous near-future novel featuring neural interfaces and the complete handover of state responsibility to corporations. We don’t have to think too hard to come up with examples of individual tech entrepreneurs and corporations already pursuing the development of brain-computer interfaces that could bring the dystopia of Feed to fruition.

Feed also features a very ominous depiction of education in the shape of ‘School™’, a completely corporatized education system that teaches students, through their direct-to-brain feed, to value rampant consumerism and environmental destruction over history, politics and civic participation. The novel explores the consequences of such a technology-centred, corporate education system for its teenaged protagonists and, moreover, for democracy itself.

David Golumbia and Frank Pasquale were kind enough to send me a copy of a recent chapter in which they analyze Feed as a way in to understanding a current “corporate-political world” characterized by the “primacy of the corporate form”. It’s a brilliant chapter, and offers a compelling justification for focusing analytical attention on fiction as a way of studying contemporary social, technical, economic and political problems.

Fiction, they argue,

frees authors to extrapolate from current trends to thick descriptions of the futures they portend. Corporations and governments often use scenario analysis to understand a range of possible futures to prepare for, but such analyses tend to eschew the visceral, subjective, and psychological insights that good fiction embodies. A novelist can imagine the ways in which the minds of individuals both reflect and reinforce their social environment. These considerations are just as worthy of policy-makers’ attention as the economic and political models that now dominate discussions of corporate rights.

Beyond the depiction of the interior lives of characters, novels engage with the complex social, political, economic and environmental crises of our time.

The depiction of education in Anderson’s novel, they go on, “forms the critical backdrop for the world depicted in Feed, since so much of the novel turns out to depend on the characters’ lack of critical thinking skills and ignorance of fundamental issues of history and politics.” This, for me, offers a rich opening for the further examination of edtech in sci-fi, or, indeed, “social science fiction” writing as critical academic practice in edtech research. I’m interested to explore further how to engage with edtech sci-fi in possible future research and teaching.

In the meantime, however, here’s the list of edtech sci-fi texts, TV and film that the lovely people on Twitter suggested. Three responses even pointed to existing compilations of edtech sci-fi: a 2015 piece by Audrey Watters on Education in Science Fiction, a collection by Stephen Heppell, and an entry on Education in SF at the Encyclopedia of Science Fiction. Check those out too. I’ve alphabetized the list but nothing more. Some people added short descriptions, which I’ve paraphrased, and others links, which you’ll have to mine the replies to find, I’m afraid.

A Clockwork Orange, novel by Anthony Burgess, film by Stanley Kubrick – technologized socialization

AI film by Steven Spielberg – Dr Know, a holographic answer engine

Anathem by Neal Stephenson – anti-tech monasteries

And Madly Teach by Lloyd Biggle Jr.

An Enterprising Man by Joe Frank

A.R.T.H.U.R. poem by Laurence Lerner – “metal people / And movers” who “make what they call mistakes”

Beyond Freedom and Dignity by BF Skinner – behaviourist utopia

Brave New World by Aldous Huxley – hypnopaedia and audio conditioning

Chronopolis by JG Ballard – education after civilization has tried to forget measuring time

Class of 1999 – robot teachers

Computer Friendly by Eileen Gunn

Copying Toast – memory-printed bread

Cypher – psychedelic brainwashing

Cyteen by CJ Cherryh – muscle memory and hypnopaedia through AV/nerve stimulation input

Deep Space 9 – future classroom and school

Die Fernschule (The Distance Learning School) by Kurd Lasswitz

Doomsday Book and others by Connie Willis – Oxford uni students educated for time travel

Doraemon – 18th generation robot academy

Electric Dreams – ‘Safe and Sound’ episode

Ender’s Game by Orson Scott Card – novel and film – 50% about edtech

Erewhon by Samuel Butler – intelligent machines and futuristic university

ET – Speak & Spell

Firefly/Serenity – futuristic classroom scenes

Futuretrack 5

Hitch Hiker’s Guide – Babel Fish

Hunger Games – training simulations

Idiocracy – testing

Jetsons – robot teacher

Knight Rider – KITT helps the Hoff with planning and problem solving

Limitless – NZT bio-stimulant

Never Let Me Go by Kazuo Ishiguro – boarding school for student clones raised and educated for body organ donation

Old Man’s War – BrainPal

Orbital Resonance by John Barnes

Otherland by Tad Williams

Pern and Pegasus series by Anne McCaffrey – AIVAS system and online learning

Profession by Isaac Asimov – students educated for specific professions by direct brain-computer interfaces (“Taping”)

Quantum Logic series by Greg Bear – plot about universities and privatized education

Rainbows End by Vernor Vinge – high school immersive environments

Raised by Wolves – the teacher is the tech

Ready Player One – Oasis, school in VR

Robot Revolt by Nicholas Fisk – robot tutor

Star Trek – the Holodeck, Kobayashi Maru simulation, Vulcan learning sphere

Star Wars – lightsabre training, robot lecturer, clone training centre

Starship Troopers – 3D bug training models

Stranger in a Strange Land by Robert Heinlein – teaching via Martian telepathy

2000AD – Tharg’s Future Shocks

TeleAbsence by Michael Burstein

The Child Garden by Geoff Ryman – learning about Derrida from viral injections

The Diamond Age by Neal Stephenson – personalized learning Primer

The Dispossessed by Ursula K. Le Guin – interstellar communication

The Fun They Had by Isaac Asimov

The Last Book in the Universe by Rodman Philbrick

The Machine Stops by EM Forster – anticipated online education by a century

The Matrix – “I know Kung Fu”

The Prisoner – ‘The General’ episode – mind-altering edtech called Speed Learn

The Simpsons – ‘The Miseducation of Lisa Simpson’ episode

The Thing Under the Glacier by Brian Aldiss – student wearable brain-controlled ‘miniputer’

The Veldt by Ray Bradbury

Thirty Days Had September by Robert F Young – second-hand robot teacher

Time in Thy Flight by Ray Bradbury

To Live Again by Robert Silverberg

Ulysses 31 – the Cortex

Venture Brothers – learning beds

Walden Two by BF Skinner – intersection of sci-fi, imaginaries and edtech

WarGames – Joshua and machine learning

Years and Years – cyborg training technologies

If you come across any others, please do tag #edtechscifi and @BenPatrickWill on Twitter and I’ll keep adding.


Pandemic privatization and digitalization in higher education

Ben Williamson and Anna Hogan

The state of emergency in higher education systems around the world during the Covid-19 pandemic has opened up the sector to an expanding range of education technologies, commercial companies, and private sector ambitions. In our new report commissioned by Education International (the global federation of teacher unions), entitled ‘Pandemic Privatisation in Higher Education: Edtech and University Reform’, we examine the various ways in which the commercialization and privatization of higher education have been pursued and advanced through the promotion of edtech and ‘digital transformation’ agendas during campus closures and disruptions over the last year. Although we recognize that digital technologies and private or commercial organizations can bring many benefits to HE, they also raise significant challenges with long-term implications for HE staff, students and institutions. Many of these challenges are long-term political and economic matters as much as they are short-term practical matters of online teaching.

The report is already long and detailed, but even since we finished it in late 2020, the developments we identified have accelerated and expanded. These include investors seeking to capitalize on new visions of teaching and learning, and multisector coalitions coming together to reimagine the future of HE through digital infrastructure and platform-based transformations — ultimately ‘re-infrastructuring’ and ‘platformizing’ universities to operate according to design principles imported from the digital tech industry. These are profoundly political issues about control, power, influence and governance in HE, mirrored by similar shifts of control to technology in the health sector.

Maybe most of the proposed changes associated with so-called digital transformation won’t work out in practice. That may be for several reasons: large-scale transformative proposals are rarely realized in their ideal form, and technologies can always be resisted, subverted, ignored, or simply mobilized in much more mundane ways than their architects intended. But we hope the report at least raises awareness of the changes that many powerful organizations are imagining and seeking to materialize in the very near future. The form, role and functions of higher education may be profoundly reimagined and reconstructed during post-pandemic recovery, and all stakeholders in the sector need to be involved in debates over the sector’s future.

Here is the summary from our full report as a starter for such debates:

  • Pandemic privatisation through multi-sector policy. Emergencies produce catalytic opportunities for market-oriented privatisation policies and commercial reforms in education. The COVID-19 pandemic has been used as an exceptional opportunity for expanding privatisation and commercialisation in HE, particularly through the promotion of educational technologies (edtech) as short-term solutions to campus closures and the positioning of private sector actors as catalysts and engineers of post-pandemic HE reform and transformation. The pandemic privatisation and commercialisation of HE during the COVID-19 emergency is a multi-sector process involving diverse actors that criss-cross fields of government, business, consultancy, finance, and international governance, with transnational reach and various effects across geographical, social, political, and economic contexts. It exemplifies how ‘disaster techno-capitalism’ has sought to exploit the pandemic for private sector and commercial advantage.
  • Higher education reimagined as digital and data-intensive. Diverse organisations from multiple sectors translated the public health crisis into an opportunity to reimagine HE for the long term as a digitally innovative and data-intensive sector of post-pandemic societies and economies. While face to face teaching constituted an urgent global public health threat, it was also constructed by organisations including education technology businesses, consultancies, international bodies and investors as a longer-term problem and threat to student ‘upskilling’, ‘employability’, and global post-coronavirus economic recovery. Framed as a form of ‘emergency relief’ during campus closures, education technologies were also presented as an opportunity for investment and profit-making, with the growing market of edtech framed as a catalytic enabler of long-term HE reconstruction and reform.
  • Transformation through technology solutionism. Education technologies and companies became highly influential actors in HE during the pandemic. Private organisations and commercial technologies have begun to reform colleges and universities from the inside, working as a social and technical infrastructure that shapes institutional behaviours and, as programmed pedagogical environments, determines the possible organisation of teaching and learning. In the absence of the physical infrastructure of campuses and classrooms during the pandemic, institutions were required to develop digital infrastructure to host online teaching. This opened up new and lucrative market opportunities for vendors of online learning technologies, many of which have actively sought to establish positions as partners in long-term transformations to the daily operations of colleges and universities. New kinds of technical arrangements, introduced as temporary emergency solutions but positioned as persistent transformations, have affected how teaching is enacted, and established private and commercial providers as essential infrastructural intermediaries between educators and students. These technologies are enacting significant changes to the teaching and learning operations and practices of HE institutions, representing a form of solutionism that treats all problems as if they can be fixed with digital technologies.
  • New public-private partnerships and competition. New public-private partnerships developed during the pandemic blur the boundaries between academic and industry sectors. Partnerships between academic institutions and the education and technology industries have begun to proliferate with the development of business models for the provision of online teaching and learning platforms. Global technology companies including Amazon, Google, Alibaba and Microsoft have sought to extend their cloud and data infrastructure services to an increasing number of university partners. Colleges and universities are also facing increasing competition from private ‘challenger’ institutions, new industry-facing ‘digital credential’ initiatives, and employment-based ‘education as a benefit’ schemes offering students the convenience of flexible, affordable, online learning. These developments enhance the business logics of the private sector in HE, privileging education programs that are tightly coupled to workplace demands, and expand the role of for-profit organisations and technologies in the provision of education.
  • Increasing penetration of AI and surveillance. Edtech companies and their promoters have increased the deployment of data analytics, machine learning and artificial intelligence in HE, and emphasised the language and practices of ‘personalised learning’ and ‘data-driven decision-making’. Organisations from across the sectoral spectrum have highlighted the importance of ‘upskilling’ students for a post-pandemic economy allegedly dominated by AI and automation and demanding new technical competencies. AI has also been enhanced through the deployment of large-scale data monitoring tools embedded in online learning management software, surveillance technologies such as distance examination proctoring systems, and campus safety systems such as student location and contact tracing apps. In imaginaries of the AI-enabled future of HE, next-generation learning experiences will be ‘hyperindividualised’ and scaled with algorithms, coupled with digital credentialing and data-driven alignment of education with work.
  • Challenges to academic labour, freedom and autonomy. The professional work of academic educators has been affected by the increasing penetration of the private sector and commercial technology into HE during the pandemic. Staff have had little choice over the technologies they are required to employ for their teaching, resulting in high-profile contests over the use, in particular, of intrusive surveillance products or concerns over the potential long-term storage and re-use of recorded course materials and lectures. Academic educators have been required to double up their preparation and delivery of classes for both in-person and online formats. Classes and events featuring ‘controversial’ speakers or critical perspectives have been cancelled due to the commercial terms of service of providers of online video streaming platforms. The expansion of data analytics, AI and predictive technologies also challenges the autonomy of staff to make professionally informed judgments about student engagement and performance, by delegating assessment and evaluation to proprietorial software that can then prescribe ‘personalised learning’ recommendations on their behalf. Finally, academic freedom is at risk when online teaching and learning conducted in an international context runs counter to the politics of certain state regimes, leading to concerns over censorship and the suppression of critical inquiry in remote education.
  • Alternative imaginaries of post-pandemic HE. Online teaching and learning is neither inevitably transformative nor necessarily deleterious to the purpose of universities, the working conditions of staff, or the experience of students. However, the current reimagining of HE by private organisations, and its instantiation in commercial technologies, should be countered with robust, critical and research-informed alternative imaginaries centred on recognising the purpose of higher education as a social and public good. The appearance of manifestos and networks dedicated to this task demonstrates a widespread sense of unease about the ways emergency measures are being translated into demands to establish a new ‘digital normalcy’ in HE. Educators, students, and the unions representing them should dedicate themselves to identifying effective practices and approaches, countering the imposition of commercial models that primarily focus on profit margins or pedagogically questionable practices, and developing alternative imaginaries that might be realised through collective deliberation and action. 

We hope educators, unions, leaders and others will engage with some of these issues in the months to come. The full report is available to view or download here, or you can access PDF versions of the summary in English, French and Spanish.


New financial actors and valuation platforms in education technology markets

Ben Williamson

New financial and investment organizations have become important influences in the education technology sector. Photo by Andreas Klassen on Unsplash

Prepared for the Education/Globalization/Marketization virtual workshop hosted by Malmo University, 9-10 December 2020

Investment in educational technologies has grown fast over the last ten years or so. Investors annually inject billions of dollars into edtech companies, helping fund the technologies that will shape the future practices of schools and universities. However, little is known about these financial and investment actors, the practicalities and materialities of their work, and their potential power to exert influence over education systems and practices.

This post describes some initial digital fieldwork on one new financial actor in the edtech sector. HolonIQ is an education market intelligence agency with a very considerable role in edtech investment, as well as a key source of edtech market information which is cited extensively in the media and published research. I’ve been collecting data from watching HolonIQ online webinars and YouTube presentations; gathering weekly newsletters, its reports and research notes; collecting website content, social media updates on Twitter, Facebook and Pinterest, and staff details from LinkedIn; mapping its organizational relations, and tracing external citations and social media @mentions. This all gives us some glimpses into the professional, technical and practical work of HolonIQ. There is also a lot of potential research data ‘hidden’ behind the annual subscription costs that HolonIQ charges for access to its proprietary platform and other client-only services.
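
As a methodological aside, much of this kind of digital fieldwork can be supported by simple routines for archiving public web materials. Here is a minimal sketch of such a collection script; the URLs are placeholders rather than the actual corpus.

```python
# A minimal sketch of one routine part of digital fieldwork: fetching
# and timestamp-archiving public web pages for later analysis. The
# URLs below are placeholders, not the actual research corpus.
import datetime
import pathlib
import requests

URLS = [
    "https://www.holoniq.com/",        # placeholder
    "https://www.holoniq.com/notes/",  # placeholder
]

archive = pathlib.Path("fieldwork_archive")
archive.mkdir(exist_ok=True)

for url in URLS:
    response = requests.get(url, timeout=30)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    name = url.replace("https://", "").replace("/", "_") + stamp + ".html"
    (archive / name).write_text(response.text, encoding="utf-8")
```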

What I want to suggest here is that HolonIQ acts as a ‘meta-edtech’ platform—an educational technology whose central concern is processing data about edtech, or edtech about edtech. As a meta-edtech platform and an emerging financial actor in education, HolonIQ not only catalogues edtech market movements but actively catalyzes future edtech market dynamics. It exemplifies the growing power of new kinds of market and finance actors to influence education, particularly as the edtech sector and its investors seek to capitalize on the ‘catalytic effects’ of the Covid-19 pandemic in 2020.

New financial actors

The role of financial actors has been the subject of previous studies of the ‘global education industry’, and emerging political economy studies have now mapped the sprawling networks of edtech investment. Beyond this wider political economy, however, very little research has examined the specific social, economic and technical practices of this new domain of financial work in education. As Janja Komljenovic notes in her research agenda on ‘assetization’ in digitalized education, ‘The opportunity recognised by investors and entrepreneurs lies in calculating the digital share in the global spending on education,’ and edtech’s ‘asset value is constructed in the light of expectations about future returns on investments’. As such, financial and investment actors are significant because they are seeking to shape the direction of edtech development and stimulate market growth, with a view to generating financial returns.

Making sense of edtech investment actors therefore requires some engagement with economic sociology, particularly its emphasis on the practices and devices of all economic activity. To simplify greatly, economic sociology insists, for example, that markets have to be actively made and maintained through specific micropractices and sociotechnical devices. Capitalization is produced through operations that include practices of assetization and valuation—how things get valued for investment based on calculations about their prospective future income. The recognition of market-making and valuation as operations and processes means any inquiry requires description of the actors, relations, settings, and actions of such operations, as well as the databases, technical literature, methodologies, disciplinary standards, and more, that are all involved in turning things into capitalizable assets with future value for investors.
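
To illustrate the basic valuation logic the economic sociologists describe, here is a generic discounted cash flow calculation, not HolonIQ's method and with all figures hypothetical, showing how expectations about prospective future income get converted into a present value for investors.

```python
# A generic illustration of valuation-from-expectations: an asset's
# value today is derived from its expected future income, discounted
# for time and risk. Higher perceived risk means a higher discount
# rate and a lower present value. All figures are hypothetical.
def present_value(cashflows, discount_rate):
    """Discount a list of expected annual cashflows to today's value."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cashflows))

expected = [2.0, 3.5, 5.0, 8.0, 12.0]  # hypothetical revenues ($m) over 5 years
print(round(present_value(expected, 0.20), 2))  # risky bet: lower valuation
print(round(present_value(expected, 0.08), 2))  # safer bet: higher valuation
```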

From this perspective, we can approach HolonIQ as an agent of edtech market-making and valuation. It actively prompts edtech markets, is deeply involved in making edtech into objects of investment, and produces valuations that attract investors to the future income available from their investment in edtech. But this approach requires getting up close to those operations—which this post can only begin to identify for future detailed examination.

Capitalization professionals

The first way into studying HolonIQ is to view it in terms of its organizational history, the professionals who work there, their wider relationships, and their specialized technoeconomic practices and methodologies. As Fabian Muniesa and colleagues argue, capitalization is a kind of job, performed by ‘capitalization professionals’ in particular kinds of organizations within a wider system of professions and geopolitical locations.

HolonIQ’s history is very short: the company was only founded in May 2018, and yet in those few years it has expanded from an office in Sydney to London, San Francisco, New York and Beijing, and to a larger network of international research partners. The website appears in two languages, English and standard Chinese, reflecting the geopolitical importance of edtech in China and HolonIQ’s attempt to embed itself in that context. Its founders are on LinkedIn, so we can begin getting some sense of their personal and professional backgrounds too. One co-founder, for example, has an MBA in corporate strategy, finance and management, an online degree in machine learning, and a background in maths, computer science, military strategy and leadership. Another has a background in enterprise education, as well as an MBA and a prestigious ‘global entrepreneur-in-residence’ position at a leading US university. Both worked at a global education services company before co-founding HolonIQ. HolonIQ also runs a virtual Global Innovation Internship Program to train new edtech market professionals.

But HolonIQ is not just a company of human capitalization professionals or embodied technoeconomic practice. A page on its website describes HolonIQ as a ‘trusted global source of market intelligence’. It connects ‘people, ideas and capital’ to support ‘the future of education’ through a ‘global market intelligence platform’ that ‘provides data and analysis’ of ‘global markets’. The platform, it claims, powers ‘governments, institutions, companies and investors by connecting billions of data points’ and by applying ‘machine learning to analyse, evaluate and identify patterns’ for ‘data-driven decisions’. The market professionalism of HolonIQ is partly constituted by its platform algorithms, and by the machine learning techniques it has enrolled to the task of edtech market analysis.

As a platform company, what HolonIQ primarily does is make predictions about the future of education and edtech’s role (and potential share) in it—as its detailed report Education in 2030: Global scenarios indicates. Published just a month after the company’s launch as a statement of its ambition and analytics capacity, the five scenarios were built using natural language processing algorithms and cluster analysis to identify patterns from a very large quantity of texts about the future of education, cross-checked against data and reports from the World Bank, OECD, and UNESCO. As such, the scenarios are the result of a complex methodology of algorithmic futurism.
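
HolonIQ's actual pipeline is proprietary, but a rough sketch of what such a text-clustering methodology might involve could look something like the following, with each cluster treated as a candidate 'scenario'. The documents here are toy stand-ins; the real analysis used a very large corpus and produced five scenarios.

```python
# A minimal sketch, under assumptions, of scenario-building by NLP:
# cluster a corpus of future-of-education texts and treat clusters as
# candidate 'scenarios'. Documents below are invented stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "AI tutors will personalise learning at global scale",
    "universities unbundle degrees into stackable micro-credentials",
    "regional public systems resist platform consolidation",
    "employers bypass universities with in-house skills academies",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # each document assigned to a candidate 'scenario' cluster
```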

This is what making objects suitable for investment requires—the identification of trends and circumstances in which an investment made today might multiply in years to come. For HolonIQ this is about forecasting future scenarios and predicting financial returns available from the actualization of those algorithmically-identified futures. So in a sense the HolonIQ 2030 global scenarios are an attempt to ‘de-risk’ investment, by claiming a limited selection of possible futures (in all of which edtech plays a major role) based partly on machine learning analysis. It then enables venture capital firms ‘to easily discover companies that are a strategic fit, ready for funding or primed for acquisition’ in order to ‘price your investments with confidence’.

As an organization constituted of both capitalization professionals and a machine learning platform, then, HolonIQ has positioned itself as a powerful new financial intermediary and source of technoeconomic expertise in education. It is seeking to catalogue the edtech market and its dynamics, but also to catalyze investment and procurement in ways that might realize certain future scenarios of education that promise high return on investment. It also organizes events including global innovation summits, ‘fast-paced and data-packed’ webinars and client-only executive roundtables around the world to create market encounters between edtech founders and investors, where such future prospects can be discussed and investment deals brokered.

Valuation claims and devices

One of HolonIQ’s most significant roles is the production of valuation claims that lubricate these relations. Through its platform, it performs technoeconomic work to calculate the value of edtech markets and their growth as a way of making edtech suitable for investment. It makes these valuation claims through a variety of narratives and representations, such as its 20 year graph of global education stocks. This valuation representation depicts near relentless growth, projecting prospective returns that ascend, literally, off the chart.  

Some of its valuation claims are packaged up in research note narratives featuring very large numbers and supported by eye-catching illustrative charts that ‘explain the Global Education Technology Market’. These include its year-end calculation of $16bn of venture capital investment in edtech in 2020 alone (described as ‘funding backing a vision to transform the way the world learns’) and its prediction that the global edtech market will reach $404bn by 2025, itself derived from its in-house economic model and ‘tens of thousands’ of ‘machine learning revenue estimates’. Overall, HolonIQ predicts, the entire ‘Global Education Market’ will be valued at $10 trillion by 2030, and it argues for a greater share of this spending to go to edtech. And these glossy, persuasive valuation claims travel through its weekly email newsletter to tens of thousands of inboxes each week.
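
It is worth doing the back-of-envelope arithmetic on claims like these. Assuming a hypothetical 2020 base figure, the implied compound annual growth rate of a $404bn market by 2025 can be computed as follows; the base value is my assumption, not HolonIQ's published input.

```python
# Back-of-envelope arithmetic on the scale of such claims: the implied
# compound annual growth rate if the edtech market reaches $404bn in
# 2025 from an assumed 2020 base. The base figure is hypothetical.
base_2020 = 227.0    # $bn, assumed base-year value for illustration
target_2025 = 404.0  # $bn, HolonIQ's published prediction
years = 5

cagr = (target_2025 / base_2020) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")  # ~12.2% with these inputs
```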

These newsletters and images—as material transmitters of valuation claims—can be considered important market and valuation devices that make the edtech market legible and describable in terms of past, present, and future prospective value. The infographics and interactives HolonIQ produces are especially powerful market valuation devices designed to incite investment interest.

For example, one of its major outputs is the Global Learning Landscape 2021, an infographic and associated report and website, described as ‘An open-source taxonomy for the future of education’. To produce the Global Learning Landscape, HolonIQ examined 60,000 edtech providers to come up with a global taxonomy of edtech by core functions and individual providers—1,250 are included on this one graphic. This taxonomy is a catalytic market device, directing both the investor’s gaze and the purchasing decisions of institutions to a selection of the market that HolonIQ has determined to be of most value. Users can easily copy its images to social media too: these are tweetable valuation claims designed to incite edtech market excitement and optimism.

The research methodology behind the device is based on data-driven machine learning and artificial intelligence, identifying ‘natural patterns’, clusters and segmentations in the data that are ‘not biased’ in the way other established taxonomies of education supposedly are. In other words, HolonIQ claims it has found the ‘natural’ shape of the edtech market based on ‘unbiased’, objective AI analysis. Its algorithms perform a significant role in organizing and ordering education into an intelligible shape to which investors as well as customers might then react.

Valuation platforms

So this is where HolonIQ’s proprietary platform comes in. It’s an advanced AI-based valuation platform made up of both human experts from the company’s Intelligence Unit and nonhuman expertise from its Intelligence Platform, which together function as a new form of technoeconomic expertise in the valuation of education. As its homepage indicates, the HolonIQ platform is made up of ‘human and machine learning smarts’ that combine to produce ‘predictive intelligence’. Kean Birch and Fabian Muniesa argue that ‘things become assets’ by being constructed through sociotechnical entanglements of human valuation practices and technoeconomic devices; the HolonIQ platform turns edtech into assets for investment by ascribing them prospective value through predictive technoeconomic machine learning analysis.

An important feature here is that HolonIQ is not only turning edtech into objects for investment with future value. For HolonIQ itself, as a for-profit company, the platform and the billions of data points it has indexed are also assets with high valuation potential. It invites clients to ‘rent’ access to the data ‘on demand’ and subscribe to the platform in order to make use of its Analytics Studio, Power Tools, and data visualization studio, at annual costs ranging from $10,000 for limited functionality to $120,000 for its full stack of services and support. Paying subscribers get access to interactive tools to model market segments by sector and region, perform competitor analysis and market mapping, generate market trends, predict VC deals, and more. In this way HolonIQ also invites its users to share its market intelligence gaze, and to see education in terms of segments, products and valuations that are themselves represented in HolonIQ’s database as millions of data points.

The value of HolonIQ’s valuation platform, then, derives from the prospective value it ascribes to other edtech companies based on its extensive datasets. It even maps the world of edtech. This year HolonIQ has produced top 50 or top 100 compilations of edtech in every global region. These graphics make education markets internationally intelligible, performing the key task of making edtech visible to investors either as single vendors or as market clusters with high projected worth. Many of the companies selected for inclusion on the regional maps take to social media to celebrate, or are invited to present their success stories at HolonIQ webinars—this is not just cataloguing or indexing the world of edtech, but catalysing investment in, and reconfiguring, that world.

The maps are produced through a particular apparatus of valuation that HolonIQ calls its Scoring Fingerprint—a methodology that weights the market ‘attractiveness’ of specific edtech segments, rates product quality and team expertise, assesses a company’s financial health and ‘ability to generate or secure funding’, and measures its momentum, size and market velocity over time—that is, its future prospects for investors. If an edtech company wants to feature on HolonIQ’s maps, it has to ensure its organizational and market fingerprint is strong enough to be measured, scored, and ranked. This is a valuation methodology for prospective market-making as much as for retrospective valuation.
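A weighted composite score is one plausible way to operationalize such a methodology. The sketch below is hypothetical: the criterion names echo the description above, but the weights, ratings and formula are invented for illustration and are not HolonIQ’s actual Scoring Fingerprint.

```python
# Hypothetical sketch of a weighted composite company score. The
# criteria echo the blog post; weights and ratings are invented.

FINGERPRINT_WEIGHTS = {
    "market_attractiveness": 0.25,  # 'attractiveness' of the segment
    "product_quality": 0.20,
    "team_expertise": 0.15,
    "financial_health": 0.20,       # incl. ability to secure funding
    "momentum": 0.20,               # size and market velocity over time
}

def fingerprint_score(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (0-100) into one weighted score."""
    return sum(FINGERPRINT_WEIGHTS[k] * ratings[k] for k in FINGERPRINT_WEIGHTS)

example_company = {
    "market_attractiveness": 80,
    "product_quality": 70,
    "team_expertise": 65,
    "financial_health": 75,
    "momentum": 90,
}
print(fingerprint_score(example_company))  # -> 76.75
```

Even in this toy form, the design choice is visible: whoever sets the weights decides what counts as a ‘strong’ fingerprint, and thereby which companies become visible and investable.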

Finally, HolonIQ has become a direct edtech investment partner too, mobilizing its valuation platform as an investment device in its own right. In September it announced a partnership with Rize, a London-based investment company, and the index investing firm Foxberry to launch an exchange-traded fund dedicated to edtech. Exchange-traded funds are like ‘baskets’ of shares that investors buy into as a whole, rather than investing in individual companies, with the fund administered by asset management firms. The Rize ETF holds about $5m of edtech assets across 34 companies. HolonIQ’s particular role in this partnership is to translate its global datasets and valuations into the portfolio of companies the fund invests in, seeking to capitalize on the rapid growth in the value of edtech companies during the Covid-19 pandemic.

Again, the fund is underpinned by complex technoeconomic valuation devices. HolonIQ’s Scoring Fingerprint methodology is used to value companies included in the basket, ‘determined using publicly available data provided by the company through its published financial statements, company presentations and/or official earnings conference call transcripts’. The company fingerprint determines whether a company can be included in the ‘Global Education Stock Universe’ first launched by HolonIQ in 2018 (by 2020 it included 250+ edtech companies), from which the companies to be included in the ETF basket are then selected. The result was the production of an Education Technology and Digital Learning Index. HolonIQ then used one of its ‘core computational engines’, named HUM, to calculate the performance and reach of the top-performing edtech organizations in the index. By mapping, indexing and valuing the (future) educational world in these ways, HolonIQ has defined both the ‘stock universe’ and the selected basket that promises the best return on investment. In other words, besides analyzing edtech markets, HolonIQ is itself actively intervening in the world of edtech investment as a new kind of asset-managing financial intermediary in private capital markets.
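To illustrate the general logic of selecting a basket from a scored ‘stock universe’, here is a deliberately simplified sketch. The company names, scores and basket size are invented; the real index methodology behind the ETF is proprietary.

```python
# Simplified sketch of basket selection from a scored stock universe.
# Companies and scores are invented for illustration only.

universe = [
    ("EdCo A", 82.5), ("EdCo B", 76.0), ("EdCo C", 91.2),
    ("EdCo D", 68.4), ("EdCo E", 88.1), ("EdCo F", 59.9),
]

BASKET_SIZE = 3  # the actual Rize fund held around 34 companies

# Rank the universe by composite score and take the top names
basket = sorted(universe, key=lambda c: c[1], reverse=True)[:BASKET_SIZE]
print([name for name, score in basket])  # ['EdCo C', 'EdCo E', 'EdCo A']
```

The sketch makes plain how the scoring stage does the real work: whatever enters the universe, and however it is scored, determines which companies the fund, and its investors’ money, will ultimately reach.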

Meta-edtech

Overall, as a new financial intermediary and source of technoeconomic expertise in education, HolonIQ itself plays a kind of governing role in the edtech investment ecosystem—shaping and channelling investment towards certain selected futures and associated companies. Its platform may even be considered one of the most powerful educational technologies in the edtech sector. HolonIQ is a meta-edtech platform—edtech about edtech—that taxonomizes the entire global edtech market by various segments, hierarchies and valuations through the deployment of machine learning, and directs funding towards a future vision of education.

Through complex technoeconomic practices and platform algorithms, it makes the edtech field intelligible and attractive to investors, whose investments then shape the fortunes of individual companies and products and, more subtly, practices in the schools where those products are used. As a meta-edtech platform with ‘predictive intelligence’, it steers the edtech market towards particular futures that HolonIQ’s experts and algorithms have determined to offer strong return-on-investment prospects. In this respect, the HolonIQ meta-edtech platform is itself a highly significant educational technology, one likely to shape educational realities, albeit at a distance from schools or classrooms, to fit prospective market trends predicted with algorithms and machine learning.

These fragments of digital fieldwork surface a number of issues for further study of financial and investment actors in edtech. New studies should examine the practical technoeconomic work of a range of edtech investors and financial intermediaries, from VC firms to portfolio fund asset managers, and the significant market analysis and valuation efforts involved in the edtech industry. Research should more carefully conceptualize how edtech is capitalized and assetized, and how future prospective value is calculated to attract investment by a range of financial professionals, market analysts, asset managers, investors and other intermediaries. Studies might also follow specific investment actions through to the edtech developments they fund, ultimately following the money through to the materialization of edtech products and out into concrete practices in schools and universities. Such studies would help reveal the significant role of financial and market-making organizations in the functioning and fortunes of the edtech sector, and their effects on educational settings and practices.


The rise of data-intensive biology in education – a new project!

The combination of biology and data science is leading to the production of new knowledge about learning and education. Photo by Louis Reed on Unsplash

Advanced technologies that can process complex biological data have transformed the human sciences, and are now being used to conduct studies and generate new knowledge in the field of education. The Leverhulme Trust has just awarded a research project grant to study the rise of data-intensive, computational biology in education to Ben Williamson (University of Edinburgh), Jessica Pykett (Birmingham) and Martyn Pickersgill (Edinburgh). We’re thrilled to be collaborating as a team on this project, which builds on previous work we have separately completed on the application of biology in education and policy, including epigenetics, brain-based teaching, neurotechnologies, and bioinformatics-based polygenic scoring. The project also represents an exciting opportunity to build interdisciplinary connections across our respective fields of education governance, social and political geography, and sociology of science and medicine.

As a way of initially characterizing what we will study, we see data-scientific biology in education emerging from three core developments. First, advanced computer technologies are transforming the biological sciences and leading to new ways of understanding and treating human bodies, such as in the biomedical field of ‘precision medicine’. Second, biological understandings of learning are returning to educational debates as new scientific knowledge about the biological underpinnings of learning and educational outcomes is produced by scientists working in the fields of neuroscience, psychology, and genomics. And third, learning sciences and analytics experts increasingly use advanced technologies such as biosensors and brain scanners, together with computational analysis, to assess the biological aspects of learning. As such, we will examine both data-intensive biology as a science-in-the-making and its positioning as a potentially policy-relevant science with significant practical and political implications in education.

The project is grounded in previous research studying such developments as data-centric biology, precision medicine, post-genomics, digital psychometrics, emotion analytics, neurotechnologies, and bioinformatics. Such work points to the considerable impact that data science and computation have exerted on biological discovery and knowledge production, and the scientific and ethical problems accompanying them. We will be asking questions about whether or how data-intensive biology in education constructs new knowledge about embodied learning processes, and whether novel biological conceptions of learners and learning produced through data science are being deployed as forms of policy or practice intervention.

The overarching objective of the study is to identify and interrogate the apparatuses, organizations, expertise, laboratory practices, and technological machinery that make data-scientific biology in education possible. This empirical objective will specifically enable us to understand the methodological and technical processes that underpin knowledge claims about learning and education emerging from data-scientific biology. The second key objective is to examine how new biological understandings and knowledge of learning might transform educational research, policy, practice, and public understandings of education, and to identify the practical, political and ethical consequences of these new ways of thinking about biology in education.

We’re delighted the Leverhulme Trust has awarded us a research project grant to start this program of work, including funding for a full-time postdoctoral research fellow for two years from September 2021. It will be a really exciting post for someone interested in the empirical social scientific study of data-intensive biology, and its implications for domains of public policy such as education.
