Critical edtech gets a conference

Photo by Arthur Lambillotte on Unsplash

Critical perspectives on educational technology (edtech) are more important than ever. Just in recent days, I’ve seen an AI-education entrepreneur claiming that Tesla’s Optimus humanoid robot demonstrates how AI could “save a troubled education system”. I’ve read a lengthy post about the “buttonification” of AI in education, as new interfaces make it possible for educators to design and create lessons “at the push of a button”. And I’ve heard of wealthy parents getting litigious with a school that gave their son a bad grade for using AI in his assignment, with the parents concerned it would prevent him getting into a prestigious university.

Despite what the edtech consultants and entrepreneurs insist in the press and on social media, AI is clearly not a straightforwardly transformative force for education, as technology never is. The Tesla Optimus robots are not even autonomous enough to pour a beer, let alone teach a class. Tesla Teacherbots should not be taken seriously. Buttonification is the most reductive, semi-automated, efficiency-driven approach to incorporating AI into education it’s possible to imagine. Push-button pedagogy isn’t even a new imaginary – Audrey Watters did the historical homework on this idea of robotized schooling nearly a decade ago. As for parents slapping the law down on schools – here AI is just another wildly proliferating problem with serious, unexpected, real-world consequences for educational institutions.

Critical edtech studies

While up-to-the-minute critical commentary on these developments is extremely welcome, what the last couple of years have really demonstrated is the need for detailed, sustained and critical research on the complex interactions between education, technology and society. Over the last decade there has been a rapid growth in critical edtech research. But what has been lacking are dedicated spaces for sharing knowledge and building the relationships necessary to form what we might think of as an emerging field of critical edtech studies.

Critical edtech research sits at the intersection of education studies, critical data studies, digital sociology, platform studies, history of technology, and other disciplinary and interdisciplinary approaches. It has powerful potential to generate insights and intervene in the ongoing digitalization of schools, universities, and nonformal sites and practices of learning. And now it has a forthcoming conference for knowledge sharing and building a community of scholarship.

European Conference on Critical Edtech Studies

I’m really pleased to have been asked to team up with Mathias Decuypere, Sigrid Hartong and Jeremy Knox to co-organize a European Conference on Critical Edtech Studies (ECCES) in Zurich, 18-20 June 2025. ECCES is intended to help build a field of critical scholarship on edtech by bringing together researchers and students from Europe and internationally. While it certainly won’t slow the rapid flow of hype and controversy around contemporary technologies in education, our hope is it will help support the development of a collective identity for critical edtech scholarship, catalyze new research, and lay the foundations to reshape how edtech is understood and treated in our education systems.

If we want to contend with edtech, AI, or whatever comes next in our education systems, we need thoughtful, creative, theory-informed and critical researchers to take up the ongoing challenge of conducting painstaking studies – and then to challenge persistent waves of technological hype and expectation with actual research-informed insights.

The conference is aimed at established, early career, and doctoral researchers alike, and we’ve sought funding to keep fees as low as possible, particularly for PhD students. Here is the call text.

ECCES call for abstracts

The rapid evolution of educational technologies (edtech) has transformed, and continues to transform, the landscape of education, particularly through the ongoing growth of digital networks, data-based and, more recently, AI-driven technologies. As these technologies become ubiquitous, a critical examination of their implications for teaching, learning, and society has become increasingly imperative. Responding to this need, over the last decades, a growing number of studies dedicated to the critical analysis, evaluation, and (re)design of educational technologies has emerged. More specifically, by examining the pedagogical, social, technical, political, economic and cultural dimensions of edtech, Critical EdTech Studies have sought to uncover the underlying power dynamics, biases, and unintended consequences that often accompany the introduction of technological innovations into educational policy and practice.

Despite their growth in number, however, Critical EdTech Studies have remained dispersed and lack a dedicated space for debate, networking, knowledge building, and agenda-setting – practices vital to the establishment, identity, and maturing of the field. To address this need, we invite junior and senior scholars, as well as educational practitioners and edtech developers, to participate in the inaugural European Conference on Critical Edtech Studies (ECCES). Open to contributors from anywhere in the world, the first edition of ECCES aims to establish a foundational understanding of Critical Edtech Studies, but also to provide a forum for intense discussions around potential futures for the field. The conference invites participants to share in this agenda, through engagement in an informal and supportive community that can stimulate debate and further research in Critical EdTech Studies.

The ECCES conference is particularly dedicated to critical scholarship around the following areas:

  • Technological Artifacts: Educational platforms, apps, AI, VR, data visualizations, and other digital tools.
  • Policy and Governance: The role of governments, institutions, actor networks, and particular discourses in shaping edtech development and adoption.
  • Political Economy: Business practices, capitalization, assets, value creation, corporations, EdTech industry, startups, edu-businesses.
  • Social Justice and Diversity: The impact of edtech on marginalized communities, the (re-) production of inequalities, and how edtech is (not) addressing heterogeneous or postcolonial audiences.
  • Learning, Pedagogy and Assessment: Types and visions of learning, teaching, pedagogy and assessment enhanced or inhibited by interfaces, data analytics, and algorithmic modelling.
  • Ethical Considerations: Privacy, surveillance, and the ethical implications of data-driven education.
  • Methodological Approaches: The various ways in which Critical Edtech Studies can investigate and contribute to (re-)shaping edtech, including evolutions towards more participatory and co-design approaches.
  • Sustainability and Planetary Futures: The environmental impact of edtech, how it matters, and how it can be mitigated.
  • Histories of EdTech: Patterns and repetitions, hype cycles, persistent discourses, antecedents and early traces, hidden histories.
  • Future Visions: Speculative futures, utopian and dystopian scenarios, alternative pathways for edtech development and education policy, literacy frameworks for professionalization.

We hope the event will help showcase and stimulate critical edtech research, and are especially keen to attract early career researchers and doctoral students to share their work and help build the field of critical edtech studies. Check out the call for full abstract submission details.


Automated austerity schooling

The UK Labour government has launched a project to transform education with AI. Photo by Feliphe Schiarolli on Unsplash

The Department for Education for England has announced a £4 million plan to create a “content store” for education companies to train generative AI models. The project is intended to help reduce teacher workload by enabling edtech companies to build applications for marking, planning, preparing materials, and routine admin. It is highly illustrative of how AI is now being channelled into schools through government-industry partnerships as a solution to education problems, and strongly indicates how AI will be promoted in English education under the new Labour government.

Most of the investment, allocated by the Department for Science, Innovation and Technology as part of its wider remit to deploy AI in the public sector, will be used to create the content store.

The £3m project will, according to the education department’s press release, “pool government documents including curriculum guidance, lesson plans and anonymised pupil assessments which will then be used by AI companies to train their tools so they generate accurate, high-quality content, like tailored, creative lesson plans and workbooks, that can be reliably used in schools.”
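
The DfE has published no technical schema for the store, but to make the proposal concrete: a single training record assembled from the pooled documents might look something like the sketch below, in which every field name and value is hypothetical.

```python
import json

# Hypothetical sketch of one training record drawn from the proposed
# "content store". The DfE has published no schema; all field names and
# values here are invented for illustration.
record = {
    "source": "curriculum_guidance",   # e.g. pooled government document type
    "key_stage": 2,
    "subject": "mathematics",
    "prompt": "Plan a 40-minute Year 4 lesson introducing equivalent fractions.",
    "completion": "Starter (5 mins): recap halves and quarters with fraction walls...",
}

# A company would assemble many such records into a training file, typically
# one JSON object per line (JSONL), before fine-tuning or adapting a model.
with open("content_store_sample.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```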

A further £1m “catalyst fund” will be awarded to education companies to use the store to build “an AI tool to help teachers specifically with feedback and marking.”

As Schools Week pointed out, none of the money is going to schools. Instead, the early education minister Stephen Morgan claims it will “allow us to safely harness the power of tech to make it work for our hard-working teachers, easing the pressures and workload burdens we know are facing the profession and freeing up time, allowing them to focus on face-to-face teaching.”  

Whether £4m is enough to achieve those aims is debatable, although it appears to signify that the government would rather spend a few million on tech development than address teacher workload in other ways.

Edtech solutionism

Putting aside for a moment the question of the reliability of language models for marking, feedback, planning and preparation — or the appropriateness of offloading core pedagogic tasks from professional judgment to language-processing technologies and edtech firms, or all the many other problems and hazards — the project exemplifies what the technology and politics critic Evgeny Morozov has termed “technological solutionism.”   

Technological solutionism is the idea that technology can solve society’s most complex problems with maximum efficiency. This idea, Morozov argues, privileges tech companies to turn public problems into private ones, to produce “micro-solutions to macro-problems,” from which they often stand to gain financially.

One consequence of this is that many decisionmakers in positions of political authority may reach for technological solutions out of expedience — and the desire to be seen to be doing something — rather than addressing the causes of the problem they are trying to solve.

The DfE/DSIT project can be seen as edtech solutionism in this sense. Rather than addressing the long-running political problem of teacher workload — and its many causes: sector underfunding, political undermining of the teaching profession… — the government is proposing teachers use AI to achieve maximum efficiency in the pedagogic tasks of planning and preparation, marking and feedback. A similar approach was previously prototyped when the DfE, under the prior government, funded Oak National Academy to produce an AI lesson planner.

The trend represented by these projects is towards automated austerity schooling.

Schools in the UK have experienced the effects of austerity politics and economics for almost 15 years. The consequences have been severe. According to UK Parliament records, the overall number of teachers in state schools has failed to keep pace with student numbers, resulting in an increase in the student to teacher ratio and some of the highest working hours for teachers in the world, compounded by continued failure to meet teacher recruitment targets.

The government is investing in a low-cost technological solution to that problem, but in a way that will also reproduce it. Automated grading will not solve school austerity; it will sustain it by obviating the need for political investment in the state schooling system.

Algorithmic Thatcherism

The UK’s finances are pretty parlous, and the government is warning the country of further economic pain to come, so it may seem naïve to imply we should simply allocate public funds to schools instead of investing them in AI. But failure to address the underlying problems of the state schooling sector is likely to lead to layering more technological solutions on to pedagogic and administrative processes and practices with few regulatory safeguards, and to continued automation of austerity schooling with unknown long-term effects.

The critic Dan McQuillan argued last year that efforts under the Conservative government to deploy AI across public services represented a form of “algorithmic Thatcherism.” “Real AI isn’t sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations,” McQuillan argued. “AI is Thatcherism in computational form.”

Algorithmic Thatcherism prioritizes technical fixes for social structures and institutions that have themselves been eroded by privatization and austerity, privileging the transfer of control over public services to private tech firms. “The goal isn’t to ‘support’ teachers and healthcare workers,” concluded McQuillan, “but to plug the gaps with AI instead of with the desperately needed staff and resources.”

Automated Blairism

Despite a change of government, automated austerity schooling looks like being supercharged. The new Labour government is strongly influenced by the Tony Blair Institute, the former Prime Minister’s organization that has been termed a “McKinsey’s for world leaders.”

The TBI has heavily promoted the idea of AI in UK government, including education. Blair’s New Labour administration in the late 1990s and early 2000s was characterized by investment in public services, often through public-private partnerships and increasing private sector influence; a push for data-driven accountability measures in education; increased choice and competition; and significant boosting for technology in schools.

The TBI recently announced a vision for a “reimagined state” that would use “the opportunities technology presents – and AI in particular – to transform society, giving government the tools it needs to do more with less, make better public services a reality and free up capital for other priorities. … More, better, cheaper, faster.”

This is a vision the Labour party is now acting on hastily. According to a report in Politico, the “plans have tech firms — some of whom have partnerships with Blair’s institute — swarming, lured by the tantalizing prospect of millions of pounds of public contracts.”

Doing “more, better, cheaper, faster” with AI in government services represents the triumph of automated Blairism. It combines technological solutionism in politics with privatized influence over the public sector, all under the guidance of policy and technology consultants bankrolled by tech billionaires.

The TBI’s latest manifesto for AI governance in the UK was co-produced with Faculty AI, the private tech firm that previously worked with the Conservative government on, among other things, plans for AI in education. Ideals of putting AI into our governance institutions and processes are not intrusions from the commercial realm; they are already embedded in contemporary forms of computational political thinking in the UK.   

Real-time environments

Under the Blairist imaginary of technological transformation of the UK state, the TBI’s visionary prospectus for “tech-enabled” UK education invokes the promise of “personalized learning” — formerly a favoured education slogan under Blair’s New Labour administration in the early 2000s — and AI “to revolutionise the experience of pupils and teachers.”

Among five key priorities for technology in education, the TBI set the stage for the current DfE project on AI and teaching with its claim that “Technology and AI can provide new ways of organising the classroom and working days, and supporting marking, lesson planning and coordination.”

But the TBI vision goes beyond the immediate aims and constraints of the £4m DfE fund. It imagines “expanding the use of edtech” and giving all school children a “digital learner ID.” The digital ID would contain all their educational records and data, enabling real-time automated “analysis of students’ strengths and weaknesses,” which it argues would “simplify coordination between teachers and allow them to use AI tools to plan lessons that are engaging and challenging for all pupils.”

“Most importantly,” the TBI insists, “it would allow teachers to shift repetitive marking tasks to adaptive-learning systems. A move to a real-time data environment would mean AI could mark a class’s work in less than a second and provide personalised feedback. This wouldn’t replace teachers but instead free them up to do what they do best: teach.”

In keeping with the Blairist approach of old, the TBI envisages a greater role for the private sector in building the digital ID system; sharing student data with the edtech industry for innovation and development; and a wholly data-driven approach to school accountability by using the digital ID data for school performance measurement and enhancing parent choice.

The current £4m DfE project to use AI to help teachers, then, looks like just the first step in a possible longer program of technosolutionist policy — focused on turning schools into real-time data analysis and adaptive environments — that will sustain and reinforce automated austerity schooling as the new normal.

Whether the other TBI proposals on AI and “real-time” data analysis and adaptive technologies in education come to fruition is a matter for speculation just now (they’re not new ideas, and haven’t come to much yet despite many industry and investor efforts). But with the international clout of Blair, and the influence of the TBI in the fresh Labour government, the vision will certainly have considerable visibility and circulation within the government departments responsible for public services.

The edtech industry will certainly be queuing up for potential future contracts to participate in the proposed transformation of English state schooling. [Update: in the 9th September 2024 call for the competition, it was stated that: “You must demonstrate a credible and practical route to market, so your application must include a plan to commercialise your results” and “Only the successful applicants from this competition will be invited to potential future funding rounds.”]

AI fails

Going all-in on AI in education is a risky move. The recent case of the Los Angeles school district is cautionary. After $6m of public investment in an edtech company to build a chatbot for the entire district’s school system, the contractor folded only months later, potentially imperilling masses of student data that it had failed to adequately protect.

Observers suggested the technological vision was far too ambitious to be achievable, and that the company’s senior management had little idea of the realities of the schooling system. The CEO is said to have claimed that they were solving “all the problems” facing large schooling systems just before the company collapsed.

Public opinion may not support AI in schools either. When the new education minister Stephen Morgan announced the DfE project, he did so from South Korea, at a major event on AI in education, favourably comparing the two countries’ technological prowess and approach to educational innovation. But only a few days before, the Financial Times reported a significant public backlash against the South Korean government’s own plans for AI in education. It was only a few years ago that school students in London chanted “fuck the algorithm” to protest their grades being adjusted by an automated system. Failure to consider public opinion can be disastrous for big tech projects in the public sector.

More broadly, investment organizations and businesses are beginning to sense that the “AI bubble” of the last two years may be about to burst, with companies reporting lack of productivity benefits and investors spooked by weak financial returns. The prospects for AI-based edtech are unclear, but could equally be affected by investor flight and customer disinterest. Building public services on AI infrastructure despite industry volatility and public concern may not be the wisest allocation of public funds.

It is understandable that teachers may be using AI in the preparation of materials, and to automate away administrative tasks under current conditions. But the risks of automated austerity schooling — eroding pedagogic autonomy, garbling information, privacy and data protection threats, enhancing classroom surveillance, and far more — remain significant and underaddressed. Letting AI in unobstructed now will likely lead to layering further automation on to pedagogic and administrative practices, and locking schools in to technological processes that will be hard and costly to undo.

Rather than seeing AI as a public problem that requires deliberation and democratic oversight, it is now being pushed as a magical public-private partnership solution, while both old problems with school structures and the many new problems AI raises in public service provision remain neglected. The DfE’s AI content store project is a first concrete sign of the solutionism that looks set to characterize automated austerity schooling in England under the new government.


Genetic IQ tests are bad science and big business

A consumer genetics company plans to launch a genetic IQ testing service, raising scientific and ethical concerns. Photo by National Cancer Institute on Unsplash

A personal genomics startup company has announced plans to launch a genetic intelligence testing service. Backed by technology investors, Nucleus Genomics released a disease screening product in March 2024, followed by a beta version of the “Nucleus IQ” test in late June – a product it eventually aims to roll out to all customers. News of the test is resurfacing controversies over the accuracy and ethics of using genetic data to identify and rate “innate” human abilities.

The company makes a big pitch about its “whole genome” sequencing and screening services. The Nucleus Genomics founder and CEO Kian Sadeghi announced on Twitter it was “launching a closed beta for Nucleus IQ — the first intelligence score based on your DNA”, with founding partner and chief operating officer Caio Hachem adding that its “analysis offers an unprecedented insight into the genetic factors that contribute to our cognitive abilities”.

The startup’s claims to innovation and novelty are backed by investment, and by scientific and industry partnerships. Nucleus has received almost $18 million in funding from tech investors including Peter Thiel’s Founders Fund and Reddit co-founder Alexis Ohanian’s venture capital firm 776. Ohanian urged his Twitter followers to join the waitlist for the Nucleus IQ test.

Scientifically and technically, the Nucleus Genomics service is built on the foundations of impute.me, a free open source website allowing users to upload their consumer genetics data for polygenic risk calculation, which Nucleus acquired in 2022. A partnership with Illumina, a global biotech firm, gives Nucleus access to advanced genomic technologies, while the analysis is undertaken by a genomics laboratory in Massachusetts and an informatics lab in North Carolina.

Like other investor-driven consumer genetics companies, such as 23andme, Nucleus is capitalizing on the promises of “precision medicine” and “personalized healthcare”, as well as the commercialization of previously not-for-profit scientific enterprises. In precision medicine approaches, the individual becomes treated as a “data transmitter” whose personal bioinformation is a valuable commodity. Nucleus offers customers the opportunity to share their data for future third party research and also advertises the benefits of “upgrading” from a basic to a premium plan for “more accurate assessment of your genetic risk”. Like 23andme, Nucleus has applied the platform business model of commercial datafication to personal health.

Though details about the genetic IQ test itself haven’t been made public (Hachem’s tweet suggested the “tech is still in its early stages” and they would be “rolling this out slowly”), available information about its other tests show that Nucleus Genomics produces polygenic scores for various traits and conditions. Describing polygenic scores as “common genetic scores”, it suggests that its “state-of-the-art algorithms unlock previously unavailable insights into your diseases and traits” to provide “personalized reports tailored to you”.

Polygenic scores underpin claims about a DNA revolution in intelligence research, and the prospects of genetic intelligence testing (including genetic IQ testing of children). Nucleus’s genetic IQ test therefore translates the promise of algorithmic precision into seemingly precise cognitive screening. The test will treat users as biodata transmitters for algorithmic analysis whose intelligence ratings are a source of value for the company.

Bad science?

The Nucleus IQ test is framed in the language and imagery of high-tech algorithmic accuracy and innovation, but it rests on a controversial history of intelligence testing – particularly the proliferation of the IQ test – that has its roots in twentieth-century eugenics. It’s for this reason other consumer genetics companies, like 23andme, have steered clear of producing intelligence ratings — although it has been possible for users to upload their data to other providers to do so instead.

In 2018 the behaviour geneticist Robert Plomin suggested that genetic IQ tests for children were likely to be developed in the future, with parents using direct-to-consumer tests to predict their children’s mental abilities and make educational choices. Plomin termed this “precision education”, but critics saw it as a sign of the arrival of “personal eugenics” and a forthcoming “genotocracy” where wealthy families could afford IQ-test tech services to maximize their children’s chances, while poorer families could not. More prosaically, there remain significant questions over the underpinning theories, measurement instruments and construct validity of IQ tests, and particularly claims that genetic data can be used to discover the “biological reality” of IQ.

Given existing controversies over genetic intelligence testing, the announcement of Nucleus IQ surfaces once more these longstanding concerns about both the scientific validity of such tests and the ethical implications of using genetic data to calculate complex human capacities.

On the scientific side, critics point out that polygenic scores for things like intelligence are highly confounded by social and environmental factors, making genetic prediction of IQ little better than “junk science” or “snake oil”. This is because polygenic scores can only account for around 5% of the variance in intelligence, often measured in proxies like educational attainment, which suggests that marketing and publicity claims of calculating “genetic IQ” are wildly overinflated.    
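
A back-of-envelope calculation (mine, not drawn from the critics cited above) shows how weak 5% of variance is at the individual level:

```latex
R^2 = 0.05 \;\Rightarrow\; r = \sqrt{0.05} \approx 0.22,
\qquad
\sigma_{\mathrm{residual}} = \sqrt{1 - R^2} = \sqrt{0.95} \approx 0.97\,\sigma
```

That is, the correlation between score and measured outcome is around 0.22, and the spread of actual outcomes around any individual’s predicted value is almost as wide as it would be with no genetic information at all.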

Arguments about the value of calculating genetic IQ are characterized by hard hereditarianism, where assertions are made about innate biological processes that shape qualities like intelligence. However, polygenic scores do not simply capture causal genetic effects; they also capture a wide range of complex environmental effects that may be mistakenly interpreted as biological in origin. This is because complex social traits like intelligence or educational attainment are not purely biological states, but, as Callie Burt argues, social constructions “based on social distinctions inevitably layered on top of other social forces that exist irreducibly in a social matrix”.

On the ethical side, too, genetic IQ tests – like other social and behaviour genetics findings – raise the real dangers of biological fatalism, stigmatization, discrimination, distraction from other ways of understanding or addressing a phenomenon, and the reification of race as a biological category. Findings from educational genomics studies have already been appropriated and misused to support racist arguments about the heritability of intelligence, and there are serious ethical debates about the possibility of using polygenic IQ tests for embryo screening. Even those scientists supportive of using polygenic scores for research into complex traits and outcomes regard the idea of “DNA tests for IQ” as overstated and misleading. The general consensus seems to be that genetic IQ tests are bad science.

But those scientific and ethical shortcomings are not stopping companies like Nucleus Genomics from claiming to provide a world-first commercial DNA test for intelligence – and they are politically bullish about doing so.

Democratizing genetics?

In response to criticisms of the Nucleus IQ test on Twitter, CEO Kian Sadeghi wrote a 350-word tweet defending it from accusations that it was a eugenic technology:

Yesterday, @nucleusgenomics announced a closed beta for the first genetic IQ score, Nucleus IQ. Lots of people were curious. Some people said genetic analyses for intelligence will devolve into new eugenics.

We disagree. Eugenics is antithetical to my vision for @nucleusgenomics

Instead of eugenics, he argued, Nucleus Genomics was “democratizing” access to genetic data.

To some extent, describing genetic IQ tests as eugenic may be over-dramatic, compared to the appalling historical record of eugenic extermination and reproductive control in the twentieth century. Consumer IQ tests are clearly not on the same terrain. Nonetheless, there is certainly a family resemblance to broadly eugenic forms of hereditarianism, genetic determinism and reductionism, evaluations and ratings of desirability, and actions intended to improve individual capacities.

And if genetic IQ testing for embryo screening or precision-targeted educational interventions followed from innovations like Nucleus IQ, then it would be even harder not to view such technologies as at least bordering on the territory of eugenics — a kind of “flexible eugenics” that mobilizes genetic technologies for individualized interventions and improvements. Nucleus clearly sees big business opportunities in the biotechnological improvement of human health and cognition.

But Sadeghi’s response to criticism also indicated the company taking a particular political position in relation to ongoing ethical concerns about the mis-use of genetic data. Rather than restricting genetic science on the grounds of ethical concern, Sadeghi argued that:

This is about information access and liberty. … Suppressing controversial genetic insights that are prone to abuse and misinterpretation doesn’t prevent that information from being abused and misinterpreted. … We believe history and ideology should not outweigh your right to benefit from technological progress.

In an earlier blog post, he also suggested that “ideological battles have led the public health and medical elite to restrict access to genomic insights and their utility”.

Genetic IQ testing, then, has become linked by Nucleus Genomics to current contests over scientific freedom, in contrast to supposed elite ideological control, which have become heated in some areas of social and behavioural genetics. Here the argument is that science is being censored by scientific elites due to an overemphasis on ethical practice and control over “forbidden topics” and “stigmatizing research”, with scientists having their access to genetic data restricted at the expense of innovation and knowledge.

Nucleus Genomics has therefore positioned itself as a defender of scientific freedom, and a source of democratized genetic knowledge, as a way of deflecting from existing and well-founded concerns over the dangers of hereditarian genetic IQ testing. This political defensiveness around scientific freedom to conduct controversial research is mobilized to make genetic IQ testing technologies seem desirable, acceptable, and non-ideological. Additionally, big tech investors see potential value in them, and Nucleus clearly anticipates a market opportunity for consumer genetic IQ testing. Left unsaid is the actual value of genetic IQ tests for users and customers, or the potential longer-term implications of such (contested) technologies being introduced into other sectors and industries.

This political positioning, backed by investor dollars, raises the danger that ethically risky genetic technologies may become normalized and used to quantify and evaluate human capabilities, despite their documented shortcomings. The example of Nucleus Genomics may also anticipate the expanding use of genetic technologies in sectors like education, as using biological signals to predict outcomes is argued to be scientifically viable, accurate, and objective. Some researchers have already argued that data from direct-to-consumer genetics companies could be used in the future to construct polygenic scores and inform educational policy and teaching.

All of this indicates how the highly contested science of genetic IQ testing is now being brought into the mainstream thanks to tech startups, biotech firms and investors seeking valuable market opportunities, twinned with researchers engaging in ethically-risky experiments under the banner of democratizing access to genetics, in a context where frameworks of scientific and regulatory control are increasingly viewed as ideological impositions on scientific freedom.


Polygenic scores as political technologies in educational genomics

Genomic technologies are being used to study the genetic basis of educational outcomes, and generate proposals for genetically-informed education policy. Photo by National Cancer Institute on Unsplash

Polygenic scores are summary statistics invented in biomedical genetics research to estimate a person’s risk of developing a disease or medical condition, and are often envisaged as the basis for “personalized” or “stratified medicine”. In recent years, social and behavioural genetics researchers have begun suggesting polygenic scores could be used in education too, raising significant concerns along scientific, ethical and political lines.

The publication in June 2024 of a research article titled “Exploring the genetic prediction of academic underachievement and overachievement” shows that polygenic scoring remains a popular methodology in studies of genetics and education. Its authors argue that school achievement can be “genomically predicted” using “genome-wide polygenic scores”. The paper is part of a long-running series of studies by a team mostly associated with the Twins Early Development Study (TEDS, established in 1994 as a longitudinal study of around 15,000 pairs of twins in the UK). Over the past decade, the team has increasingly used polygenic scores for what an earlier paper called “Predicting educational achievement from DNA”.

In this post I approach polygenic scores for predicting educational achievement as technologies with political properties. Part of our ongoing BioEduDataSci research project funded by the Leverhulme Trust, it follows on from a previous post outlining how “educational genomics” research may be paving the way for the use of genetic samples in educational policy and practice, and another highlighting the severe ethical problems and scientific controversies associated with educational genomics.[1] Here I use the new predictive genetic achievement paper to foreground some of the political implications of educational genomics.

Biomarker methodologies

Understanding the political aspects of polygenic scores[2] requires some engagement with their construction as methodological technologies. Polygenic scores are artefacts of a complex technoscientific infrastructure of statistical genetics, molecular genomic databases, bioinformatics instruments, analytics algorithms, and the institutions that orchestrate them, which together function to make valued social outcomes—such as educational outcomes—appear legible at a molecular level of analysis.

To construct polygenic scores, researchers require genotyped DNA data, which they then analyze through genome-wide association study methods and technologies. These identify minute genetic differences—genetic biomarkers known as single nucleotide polymorphisms, or SNPs—that are associated with a phenotype (an observable behaviour, trait, or social outcome). One aim of such studies is to identify the “genetic architecture” of a trait or outcome—such as the genetic architecture of educational attainment.

The SNPs associated with the phenotype can then also be added up into a genetic composite known as a polygenic score. In education, the most common polygenic scores are for educational attainment (years of schooling), said to predict around 11% of the variance in attainment. Individuals can ultimately be ranked on a scale of the genetic probability of success at school.
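
Mechanically, the score itself is a simple weighted sum of effect-allele counts, even if the surrounding infrastructure is not. Here is a minimal sketch in Python with invented numbers; nothing in it is drawn from the TEDS pipeline, and real analyses add quality control, linkage-disequilibrium adjustment and ancestry corrections.

```python
import numpy as np

# GWAS summary statistics: one estimated effect size (beta) per SNP.
effect_sizes = np.array([0.02, -0.01, 0.03, 0.005])

# Genotypes for three individuals: the count of effect alleles (0, 1 or 2)
# carried at each of the four SNPs.
genotypes = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 2, 0, 2],
])

# The raw polygenic score is the dot product of allele counts and effect sizes.
raw_scores = genotypes @ effect_sizes

# Scores are usually standardized within the sample, which is what allows
# individuals to be ranked against one another on a single scale.
pgs = (raw_scores - raw_scores.mean()) / raw_scores.std()
print(pgs)
```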

The use of polygenic scores and associated methods and measures represents the data-centric “biomarkerization” of education, where biological signals are taken as objective evidence of the embodied substrates of academic attainment and achievement. This has only become possible with the development of an infrastructure of biobanks of genetic information and bioinformatics technologies, which can be used to generate and analyze genetic data for markers associated with educational outcomes.

In the latest genetic prediction of academic achievement paper, for example, the authors claim a “DNA revolution has made it possible to predict individual differences in educational achievement from DNA rather than from measures of ability or previous achievement”.[3] Their basic claim is that technologies to calculate polygenic scores can operate as “early warning systems” to predict school achievement from infancy. The latest study design used TEDS data collected from children at age 7 to construct polygenic scores, based on a previous study of the educational attainment of a sample of 3 million people (which I’ve discussed before).

The paper introduces “the concept of genomically predicted achievement delta (GPAΔ), which reflects the difference between children’s observed academic achievement and their expected achievement”, where the former are standardized test achievements and the latter are polygenic predictions. So, the methodological invention of the paper—the measure of genomically predicted achievement—is ultimately a way of comparing a child’s observed academic achievement, as assessed by school test results, with a polygenic score predicting “genomically expected achievement” from DNA samples collected in childhood.
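
In symbols (my notation, not the paper’s), the measure amounts to a difference between two standardized quantities:

```latex
\mathrm{GPA}\Delta \;=\; z_{\mathrm{observed}} \;-\; z_{\mathrm{expected}}
```

where the first term is a child’s standardized test achievement and the second is the achievement level predicted from their polygenic score. On this definition, negative values mark out “underachievers” and positive values “overachievers”.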

Biosocial sorting

These conceptual inventions and large-scale statistics certainly lend the study the quality of digital objectivity. But the critical point here is that the polygenic scores used in the study, and the genomically predicted achievement measures, are the results of social, technical and scientific practices, each of which can affect the results. As Callie Burt has noted in a detailed critical examination of how polygenic scores are made (and their limitations), there are multiple ways to create polygenic scores, each involving different assumptions and goals, measurement instruments, technical adjustments, calculation methods, and analysis specifications, which can introduce further technical biases.

A detailed analysis of the shortcomings of the methodology and findings of the genomic achievement study was posted on Twitter by the statistical geneticist Sasha Gusev, questioning its causal claims and predictive accuracy. He also showed how methodological choices and limitations in the research (particularly insufficient acknowledgement of social factors) meant that the “underachievers” it identified were actually individuals with high socioeconomic status and high early years achievement, who subsequently underperform at school.[4]

The study risks labelling and lowering expectations of “underachievers” as having lower education-related “genetic propensity” (as the TEDS team terms it) for achievement, while also privileging well-off kids by directing additional resources their way. And as Gusev points out, any allocation of resources from the study findings would therefore be targeted at “students from high-SES/high-edu backgrounds, while telling ‘overachievers’ (poor kids with good grades) that they’re swimming upstream”, seemingly against the genetic currents determining their achievement prospects.

The implication, then, is that polygenic methods could be used to classify children into groups defined and labelled in terms of genomically predicted levels of achievement. This would amount to a strategy of biosocial classification of children. By biosocial classification is meant the categorization of social groupings as defined by biological measures. In this case, it means sorting children into polygenic biosocial categories through the analysis of SNP biomarkers corresponding with school achievement, in ways that appear to reproduce and reinforce socioeconomic categories and biases.

What this indicates, then, is that despite the seeming objectivity imputed to genomic technologies, polygenic scores and associated measures remain methodologically problematic and potentially skewed in their results. Such studies can harden social biases and inequalities even as major claims are made that they could inform decisions about the just allocation of resources in schools.

Promissory politics

Beyond its biosocial sorting, this kind of polygenic scoring project can also exert other kinds of political effects. The political allure of genetic objectivity and biological authority in polygenic scoring studies appears to be growing, supported by promissory claims of the future potential of genomic technologies to further reveal genetic insights at even larger scale.

As already noted, one political implication of educational genomics research is that the results—predictions of educational outcomes from DNA—could be used as the basis for political interventions targeting children genomically predicted as at risk of underachievement. As discussed elsewhere, some authors of the study were involved in a report for the Early Intervention Foundation (a UK government “what works” centre), which made the case for genetic “screen and intervene” programs in the early years.

The collection of TEDS data from 7 year-olds in the 1990s has given these researchers tremendous bioinformational advantage to make claims to policy relevance. A main claim of the latest genetic achievement paper is that “screening for GPAΔ could eventually be a valuable early warning system, helping educators identify underachievers early in development”. From such genetic early warning signals, it seems, should flow early interventions “targeting students underachieving genomically”.

The seeming relevance of this work to policy and practice needs to be understood as deriving from political interest in the potential and promise of data-driven science, supported by the development of genomics technologies by major biotech firms. The methods section of the genomically predicted achievement paper, for example, details how “DNA for 12,500 individuals in the TEDS sample was extracted from saliva and buccal cheek swab samples and hybridized to one of two SNP microarrays (Affymetrix GeneChip 6.0 or Illumina HumanOmniExpressExome chips)”. It also involved use of the application LDPred2 to “compute GPS for all genotyped participants”, and “training” a “model to maximize prediction”. 
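
The “training” step is, in essence, model selection: candidate scores built under different settings are compared by how much variance in the outcome they explain in held-out data, and the best performer is kept. A minimal sketch of that logic in Python, using simulated numbers rather than anything from the study:

```python
import numpy as np

# Compare candidate polygenic scores by the variance in achievement (R^2)
# they explain in a held-out sample. Data are simulated; this is not the
# authors' LDPred2 pipeline.
rng = np.random.default_rng(0)
n = 1_000
achievement = rng.normal(size=n)  # standardized achievement outcome

# Two hypothetical candidates, e.g. built under different shrinkage settings:
# one carries a weak signal, the other is pure noise.
candidates = {
    "candidate_a": 0.3 * achievement + rng.normal(size=n),
    "candidate_b": rng.normal(size=n),
}

def r_squared(score, outcome):
    """Variance explained by a univariate linear predictor: the squared correlation."""
    return np.corrcoef(score, outcome)[0, 1] ** 2

# The candidate that maximizes out-of-sample R^2 would be retained.
for name, score in candidates.items():
    print(name, round(r_squared(score, achievement), 3))
```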

This existing apparatus of technologies, however, is presented as just the first step necessary to fully compute genomically expected achievement across the whole population of children, which will only become possible with increased DNA data.

GPAΔ seems impractical now because it requires DNA, genotyping, and the creation of GPS. However, the rise in direct-to-consumer DNA testing suggests a future where GPAΔ becomes more accessible. At least 27 million people have paid direct-to-consumer DNA testing companies for this service, and these companies are increasingly marketing their product to encourage parents to test their children. … Once genotyping is available by whatever means, it will be possible to create GPS for educationally relevant traits, a process that is becoming routinized.

Educational genomics articles like this one routinely invoke promissory claims of future potential, once the existing infrastructure of mass biodata storage, genotyping platforms and polygenic scoring software has been sufficiently upgraded. As this excerpt indicates, the biological authority of educational genomics depends to a significant degree on biotech firms and consumer genetics companies.

It is this promissory quality associated with technological advances that enables researchers involved in educational genomics studies to claim moral and political authority to not only understand but to improve social institutions like schooling—and likewise to criticize forms of social science and policy that do not incorporate genetic measures as ideologically irresponsible.

In other words, genomic technologies are invoked to support the political project of advancing the power and authority of the genetic sciences in social policy areas like education. A recent report by the UK Government Office for Science, for example, asked “What could genomics mean for wider government?” It highlighted how existing infrastructures of medical genomics could be capitalized on for other social policy areas, and proposed education as one key area of potential application.

Educational genomics studies, enabled by new genetic technologies, therefore support visions of future policy possibilities. The idea is that genetic testing and screening could become policy technologies, if only the necessary infrastructure upgrades are put in place.

Genoeconomic policy

The idea of genetic testing as a policy approach is obviously controversial, given the history of eugenic interventions in education. It does, however, appear to link neatly with current mainstream policy approaches. Critics have pointed out that educational genomics proposals often reinforce “technocratic” or “neoliberal” policy models that treat education as a kind of laboratory for boosting economic outcomes and social mobility, and which promise to reduce costs and save money for government agencies and taxpayers. Such promises may reduce the seeming controversy associated with the science by appealing to political expedience.

Along these lines, in the genomic achievement paper, the authors claim that “Targeting GPAΔ might also prove cost-effective because such interventions seem more likely to succeed by going with the genetic flow rather than swimming upstream, helping GPAΔ underachievers to reach their genetic potential”. Later in the paper, they add that the “findings suggest that GPAΔ can help identify underachievers in the early school years, with the rationale of maximizing their achievement by personalizing their education”.

So the policy relevance of the paper appears again to be “cost-effective” interventions in early school years, driven by the aim to increase individual achievement through “personalized” learning. Such proposals certainly look like biomedicalized neoliberal policy, where measurable individual achievement might be bumped up through the efficient genomically-targeted allocation of resources. The cost-saving argument for using genetic data for decision-making in education has also been made in the popular science book The Genetic Lottery.

As the opening sentence of the paper reads, “Underachievement in school is costly to society and to the children who fail to maximize their potential”—with a citation to a paper about the “economics of investing in disadvantaged children” by economist James Heckman. Heckman is well known for his work calculating the economic payoffs of investment in early years child development – the “economization of early life” as Zach Griffen describes it – which is central to the model of “human capital development” he promotes to policymakers.

Other papers by the same TEDS team and their collaborators invoke studies by the OECD similarly citing the importance of education to economic outcomes, in ways that appear to amount to a program of hunting for biological signals of human capital in the genome. Many other educational genomics studies are, in fact, led by economists—or self-described “genoeconomists”—who first latched on to the idea that genetic data about educational outcomes could be used to understand the genetic basis of other downstream socioeconomic outcomes. Ultimately, this work suggests political investments in genetic testing as an investment in economic outcomes, potentially diverting resources from other forms of intervention based on non-genetic analyses.

Educational genomics research and advocacy therefore suggests the emergence of genoeconometric education policy, buttressing and fortifying existing econometric tendencies in international education policy with seemingly objective data about the genetic substrates of outcomes. Whether there is genuine political (or public) appetite for this remains to be seen, but clearly the data and the proposals are being presented and circulated in ways that are intended to promote genoeconometric solutions—such as early years screen and intervene programs—to address the relationship between children’s outcomes, human capital development and economic prospects.  

Biopolitical technologies

There are several reasons to question the assertion that genomic or genoeconometric education policy based on polygenic scores would be a good idea socially, politically or ethically. They include the risks that the use of genetic information may lead to biological reductionism, discrimination, stigmatization, racism and self-fulfilling prophecies, or may distract from other forms of intervention.

Even if a genomic prediction of achievement outcomes can be made reliably, as the TEDS paper claims, it remains unclear exactly what causal biological mechanisms are associated with it. Although educational genomics research studies are increasingly high-powered in computational and data processing terms, they have very partial explanatory power and remain far from specifying the genetic mechanisms that underpin educational outcomes like achievement or attainment. Statistically speaking, the “genetic architecture” of educational outcomes may have become legible–as thousands of SNP associations–but the actual biology remains unknown.

Another major problem is the thorny issue of race and ethnicity in social and behavioural genetics research, and the eugenic legacy underpinning such science. As the TEDS authors themselves acknowledge, polygenic scores are affected by “cultural bias” because existing datasets over-represent healthy, white, well-educated, and wealthier than average individuals of European ancestry. Any intervention based on genomic data would necessarily exclude all other groups, since the data do not exist to support polygenic prediction beyond European population groups, and would therefore be politically untenable on equity grounds. The findings from such studies can also be appropriated to support racist assertions of biological superiority and inferiority in intelligence, or “function normatively to reinforce conceptions of race as an innate and immutable fact that produces racial inequalities”.

A final issue, for now, is that educational genomics studies persistently obscure the social and environmental factors that shape educational achievement, while overplaying the influence of genetic transmission. Even where social and environmental factors are considered, they may be simplified into reductive measures of socioeconomic status or family factors, rather than taking account of complex social and political structures, dynamics and their impacts. As in other studies of gene-environment interactions, social factors may even be “re-defined in terms of their molecular components”, shifting away “from efforts to understand social and environmental exposures outside the body, to quantifying their effects inside the body”.

Given these issues—unknown biology, non-representativeness, spectre of race science, and obscuring social factors—it is hard to see how the genomically predicted educational achievement findings could translate into genomically targeted educational interventions.

The study does, though, show how polygenic scores and associated genomic methods and measures can function as political technologies. They enable social and behavioural genomics scientists to claim objective, data-based biological authority, despite methodological limitations, while criticizing other forms of non-genetic investigation into the social determinants of school achievement as morally and ideologically irresponsible. The use of genomic technologies also supports particular kinds of political interventions that prioritize cost efficiency and achievement maximization according to economic “human capital” conceptions of educational purpose.

Polygenic scores support a biomarkerized model of schooling that centres the idea of genetic testing and predicting academic achievement in order to target interventions on genetic groupings of students to boost economic metrics, rather than alternative kinds of reform. They help support the solidification of economic models of schooling that have dominated education policy and politics for decades, albeit with a genetic twist that treats societal progress and human capital as embodied in the human genome.

Perhaps it is more accurate, therefore, to call polygenic scores “biopolitical” technologies–that is, techniques that enable knowledge about living processes to be produced and used as the basis for governing interventions. As biopolitical technologies used in educational genomics research, polygenic scores now support the production of knowledge about the genetic correlates of learning achievements and the potential biosocial sorting of children.

That genetic knowledge is now being promoted as the basis for proposing genetically-informed education policy interventions targeting children’s school achievement. But there remain many important reasons to question whether biopolitical technologies of early years mass genetic testing and screening should ever make the leap from the lab to school systems.

Notes

  1. To be clear, “educational genomics” is not a unique scientific field, but our name for a body of research on the genetic underpinnings of educational outcomes–and gene-environment interactions–largely carried out by scientists in fields of behaviour genetics and social science genomics (sociogenomics). Different groups and individuals do not always agree about findings, and there is particular controversy among them about the policy relevance (or not) of such work.
  2. Polygenic scores (PGS) are also sometimes referred to as genome-wide polygenic scores (GPS), polygenic risk scores (PRS), or more recently polygenic indices (PGI). Callie Burt critically discusses the recent proposal to term them PGIs, convincingly noting that ‘the shift to index potentially obscures the fact these are “rankings” (i.e., positions on a scale) of genetic associations with socially valued outcomes, whether we call them scores or indices’.
  3. A distinction is often made between “prediction” in the biostatistical sense–that a genetic measure is strongly correlated with an outcome or trait–and prediction as a way of making forecasts about the future. In the study discussed here, and elsewhere, that distinction dissolves, and genetic prediction through polygenic scores becomes “fortune telling”.
  4. Gusev has also written a thorough technical analysis of the heritability of educational attainment, where he argues that “Cultural transmission and environment is much more important than genetic transmission”, though this is often under-reported in published studies and particularly in press coverage.


Oblongification of education

Photo by Kelly Sikkema on Unsplash

According to Microsoft and Google, artificial intelligence is going to be fully integrated into teaching and learning in the very near future. In the space of just a few days, Google announced its LearnLM automated tutor running on the Gemini model, and Microsoft announced it was partnering with Khan Academy to make its Khanmigo tutorbot available for free to US schools by donating access to the Microsoft Azure OpenAI Service. But it remains very hard to know from these announcements what the integration of AI into classrooms will actually look like in practice.

The promotional videos released to support both announcements are not especially instructive. Google’s LearnLM promo video doesn’t show students interacting with the tutor at all, and the main message is about preserving the “human connection” of education.

The Microsoft promo for Khanmigo doesn’t really reveal the AI in action either, though it does feature a self-confessed “defeated” teacher watching the “miracle” bot automatically produce a lesson plan, with Khan Academy’s director of engineering suggesting it will remove some of the “stuff off of their plate to really actually humanize the classroom”.

You’re unlikely to see many more idealized representations of “humanized” school classrooms than these two videos, not least because you barely see any computers in them—except the odd glimpse of a laptop—and the AI stuff is practically invisible.

A better indication of what AI will look like when it hits schools is a promotional video, released just a week earlier, of Sal Khan showcasing the OpenAI GPT-4o model’s capacity for math tutoring. Now, this isn’t a great representation of a typical school either – it’s Sal and his son in a softly-lit lounge with an OpenAI mug on the desk, not 30 students packed into 100 square metres of classroom.

But it is revealing of how entrepreneurs like Khan—and presumably the big tech boys at Microsoft and OpenAI who are supporting and enabling his bot—envisage AI being used in schools. Sal Khan’s son interacts with an iPad, at dad’s vocal prompting, to work out a mathematical problem, with the bot making encouraging noises and prompting Khan jr when he seems to be faltering.

Sal Khan’s video clearly illustrates how AI in classrooms means students in one-to-one dialogue with a portable device, a tablet or laptop, to work on very tightly constrained tasks. Khan himself has frequently talked up the idea of every student having a “Socratic tutor” (invoking Bloom’s 2-sigma achievement effect of 1:1 tutoring in a weird mashup of classical philosophy and debunked edu-stats).

Beyond the lofty Socratic rhetoric and cherrypicked evidence, however, it’s clearly a kind of pristine “showhome” demo rather than any indication whatsoever of how such an automated tutor could operate in the actual social context of a classroom. Marc Watkins sees it exemplifying a kind of automation of learning that is imagined by its promoters to be as “frictionless” as possible, based on a highly “transactional” view of learning.

“When you reduce education to a transactional relationship and start treating learning as a commodity”, Watkins argues, “you risk turning education into a customer-service problem for AI to solve instead of a public good for society”.

Oblong professors

AI tutors are a vision of the impending “oblongification” of education (if you can forgive yet another suffixification). In Kazuo Ishiguro’s novel Klara and the Sun, a minor feature is “screen professors” who deliver lessons via “oblongs”—these are instructors who appear on a child’s portable device to offer “oblong lessons” at a distance rather than in person, in a near future where home-schooling is the norm for many children.

The oblong professors of the novel are embodied educators—one is described as perspiring heavily—but I found myself thinking of Ishiguro’s depiction of oblong professors while watching the Khan/OpenAI demo. Here, AI tutors appear to students from the oblong of a tablet or laptop—they are automated oblong professors that are imagined as always-available personal pedagogues.

Characterizing them as oblongs, after Ishiguro, rightly robs them of their promotional rhetoric. Oblong tutors aren’t “magic” or a “miracle” but mathematically defined flat 2D objects that can only operate in the idealized environment of a quiet studio space where every student has an oblong to hand.     

The Khan demo also arrived about the same time as Apple released a controversial advertisement for its new iPad. The ad, called “Crush!”, depicted all of human creativity and cultural production—musical instruments, books, art supplies, cameras—being squished into the “thinnest” iPad that Apple has ever made by a giant industrial vice. It’s a representation of the oblongification of culture itself, accurately (if inadvertently on Apple’s part) capturing the threat that many feel AI poses to any kind of cultural or knowledge production.

The ideal of the AI tutor is very similar to the Apple Crush! ad—it crushes teaching down into its flattest possible form, as a kind of transaction between the student and the tutor that can be modelled in a big computer. And enacted on an oblong.

The recent long paper released by Google DeepMind to support the LearnLM tutor similarly flattens teaching. The report aims to identify models of “good pedagogy” and use the relevant datasets for “fine-tuning” the Gemini-based tutor. Page 11 features a striking graphic, with the text caption:

Hypothetically all pedagogical behaviour can be visualised as a complex manifold lying within a high-dimensional space of all possible learning contexts (e.g. subject type, learner preferences) and pedagogical strategies and interventions.

The manifold image is a multidimensional (incomprehensible) representation of what it terms the “pedagogical value” of different “pedagogical behaviours”. In the same report the authors acknowledge that “we have not come even close to fully exploring the search space of optimal pedagogical strategies, let alone operationalising excellent pedagogy beyond the surface level into a prompt”.

Despite that, they then suggest using AI techniques of “fine-tuning” and “backpropagation to search the vast space of pedagogical possibilities” for “building high-quality gen AI tutors”. But this involved creating their own datasets, since little data exists on good pedagogy, so it is not even a model based on actual teaching.
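To give a concrete sense of what “fine-tuning” a model on a curated pedagogy dataset involves, here is a minimal, hypothetical sketch using a small open model and invented example exchanges. The model name, data fields and examples are all stand-in assumptions on my part; Google DeepMind’s actual LearnLM pipeline, data and models are proprietary and certainly far more elaborate.

```python
# Minimal, hypothetical sketch of supervised fine-tuning on a toy "good
# pedagogy" dataset. Model, fields and examples are invented stand-ins,
# not Google DeepMind's actual (proprietary) LearnLM pipeline.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The "dataset of good pedagogy" reduces to a handful of strings like these:
examples = [
    {"text": "Student: I don't get fractions.\nTutor: Let's try a pizza cut into four slices..."},
    {"text": "Student: Is my answer right?\nTutor: Walk me through your steps first."},
]

def tokenize(example):
    enc = tokenizer(example["text"], truncation=True,
                    padding="max_length", max_length=128)
    enc["labels"] = enc["input_ids"].copy()  # causal LM: learn to reproduce the exchanges
    return enc

ds = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

# Backpropagation only "searches" the space spanned by the curated examples.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="toy-tutor", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
)
trainer.train()
```

Even in this toy form, the point stands: whatever “pedagogy” the tuned model exhibits is bounded by the examples its makers chose to construct.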

The “ultimate goal may not be the creation of a new pedagogical model”, the Google DeepMind team writes, “but to enable future versions of Gemini to excel at pedagogy under the right circumstances”.

Despite the surface complexity of the report and its manifold graphic of good pedagogy, it still represents the oblongification of teaching insofar as it seeks to crush “optimal pedagogy” into a measurable model that can then be reproduced by Gemini. This is a model built from a small set of datasets constructed by the Google DeepMind team itself that it intends to place in schools, no doubt to compete with Khan/Microsoft/OpenAI.

But much about teaching and pedagogy remains outside of this flat model, and beyond the capacity of any tutor that can only interact with a student via the surface of an oblong device. Like Apple crushing culture into an iPad, Google has tried to crush good pedagogy into its device, except all it could find to put in the vice were some very limited datasets that it had created for itself.

Oblong students

As for the “humanizing” aspects of the AI tutorbots promoted by Microsoft and Google, it is worth considering what image of the “human” appears here. Their promo videos are full of humans, with a very purposeful emphasis on showing teachers interacting with students in physical classroom environments, unmediated by machines.

In a recent essay, Shannon Vallor has suggested that big AI companies and scientists have shifted conceptions of the “human” alongside their representations of “artificial general intelligence” (AGI). Vallor notes that OpenAI has recently redefined AGI as “highly autonomous systems that outperform humans at most economically valuable work”, which she argues “wipes anything that does not count as economically valuable work from the definition of intelligence”.

Such shifts, Vallor argues, not only narrow the definition of artificial intelligence, but reduce “the concept of human intelligence to what the markets will pay for”, treating humans as nothing more than “task machines executing computational scripts”. In the field of education, Vallor suggests, the “ideal of a humane process of moral and intellectual formation” is now overshadowed by AI imaginaries of “superhuman tutors” which position the student as “an underperforming machine”.  

Deficit assumptions of students as underperforming machines, which require prompting by AI to perform future economically valuable work, seem at odds with the rosy rhetoric of humanizing education with AI. AI tutors, as well as being oblongified teachers, also oblongify students—treating them as flattened-out, task-completing machines. Like iPads, but with fingers and eyes.

Oblong education

My metaphorical labouring of the “oblong” as a model of education is a fairly light way of trying to illuminate some of the limitations and constraints of current approaches to AI in education. Most obviously, despite the rhetoric of transformation, all these AI tutors really seem to promise is a one-to-one transactional model of learning where the student interacts with a device.

It’s an approach that might work OK in the staged setting of a promo video recording studio, but is likely to run up hard against the reality of busy classrooms.

AI tutors are also just models that, as the Google DeepMind report illuminates, are highly constrained because there’s simply not good enough data to build an “optimal pedagogy” engine. And that’s before you even start assessing how well a language model like Gemini performs.

These limitations and constraints are important to consider as Microsoft and Google—among many, many others—are now making concerted efforts to build flattened model teachers inside computers, then set them free in classrooms at significant scale.

Ishiguro’s notion of the “oblong professor” is useful because it helps to deflate all of the magical thinking that accompanies AI in education. It’s hard to get excited about an oblong.

Sure, AI might be useful for certain purposes, but a lot of the current promises could also lead to real problems that need serious consideration before activating autopedagogic tutors in classrooms. Currently, AI is being promoted to solve a huge range of complex issues in education.

But AI tutors are simplified models of the very complex, situated work of pedagogy. We shouldn’t expect so much from oblongs.


Edtech has an evidence problem

Edtech brokers have begun producing new evidence and measurements of the impact of technologies in schools. Photo by Alexander Grey on Unsplash

Schools spend a lot of money on edtech, and most of the time it’s a waste of their limited funds. According to the Edtech Evidence Exchange, educators estimate that “85% of edtech tools are poor fits or poorly implemented”, indicating very weak returns for the $25 billion or more annually spent on edtech in the US alone. The problem is that school procurement of edtech is rarely based on rigorous or independent evidence. The Edtech Evidence Exchange is one example of a new type of organization in education that is aiming to address this problem, by constructing an evidence base to support edtech spending decisions.

In a new paper just published in Research in Education, Carlos Ortegon, Mathias Decuypere and I conceptualize these new edtech evidence intermediary organizations as edtech brokers. Edtech brokers perform roles such as guiding local schools in “evidence-based” procurement, adoption, and pedagogical use of edtech, with a mission to support teachers and school authorities in modernizing in safe, reliable, and cost-effective ways. Edtech brokers are appearing around the world, yet they have so far captured little critical attention. We kicked off our project on edtech brokers a couple of years ago, with Carlos Ortegon taking the lead for his doctoral research and lead-authoring the paper entitled “Mediating educational technologies: Edtech brokering between schools, academia, governance and industry” as the first major output.

Edtech brokers are significant emerging actors in education because they are gaining the authority and capacity to shape the future direction of edtech in schools, at a time of rapid digitalization of the schooling sector in many countries around the world. They can also be powerful catalyzers of the edtech market. As expenditure in edtech from governments, companies, and consumers has increased in the past decade and as the edtech industry continues to seek new market opportunities, such as the application of AI, edtech brokers play a role by connecting technical products to the specific social and political contingencies of different local settings.

Edtech brokers

In the paper we identify three distinctive kinds of edtech brokers:

Edtech ambassador brokers, which act as representatives (or ambassadors) of specific edtech brands. Edtech ambassador brokers encourage the procurement of their products and promote their educational potential. Ambassador brokers are a global phenomenon, as the growing number of Google and Microsoft specialized organization partners across different countries makes clear, and they usually offer services such as streamlined procurement and professional development for teachers.

Edtech search engine brokers operate as search portals that focus on providing on-demand evidence about “what works” in edtech, thereby shaping procurement and usage from a wide range of market providers. They place strong emphasis on providing “bias-free advice” and “evidence-based recommendations” that can prevent problems of over-expenditure, as the Edtech Evidence Exchange puts it. Edtech search engine brokers often combine multi-sector mixtures of academic, industry, policy, and philanthropic expertise, though some are commercial companies and others directly government-funded.

Edtech data brokers support schools in managing, regulating, and analyzing their digital data. Edtech data brokers are gatekeepers of the data produced by schools when using edtech, whose core activity is securing data flows between schools and vendors. Data brokers offer distinct tools for schools to analyze their data, facilitating school-level educational decisions. 

Though they are relatively unknown in the digital education landscape, edtech brokers are becoming important figures that make claims to expertise in edtech effectiveness, filter purchasing options, shape edtech procurement decisions, manage data flows, and lead the professional development of teachers in schools.

Beyond this seemingly straightforward definition of their role, we also see edtech brokers as strategically mediating between schools, industry, evidence and policy settings. In this mediating role, edtech brokers construct relations between a variety of different constituents. For example, they connect vendors to schools, act as relays of evidence produced in research centres, and strengthen policy agendas on evidence-based edtech. They also act as transmitters and brokers of normative ideas about tech-enabled transformation and reform, assisting the circulation of powerful imaginaries and expectations of educational futures into the attention of school decision makers. One initiative even brokers relations between startups, learning scientists and investors for evidence-based edtech financing.

But this means edtech brokers also have some capacity to affect each of the constituents they connect. First and foremost, edtech brokers take up powerful positions in determining which edtech is used in schools, and how, according to particular standards of evidence. This means, second, that edtech brokers can influence edtech markets, shaping the financial prospects of startups and incumbents, as they either promote or devalue specific products, and thus affect the procurement decisions of schools. And third, they can influence policy settings and priorities, by positioning themselves as arbiters of “what works” and thus amplifying policy attention on certain affordances and functionalities.

Mediating edtech

In the paper we highlight the mediating practices of edtech brokers and their implications. The first set of mediating practices we refer to as infrastructure building. In their documents and promotional discourse, edtech brokers frequently invoke the idea of school modernization, and of using evidence-based edtech to update and upgrade schools’ digital infrastructures for teaching and learning. In the case of ambassador brokers, this updating of digital infrastructure also involves synchronizing schools and teachers’ pedagogic practices with the broader digital ecosystems of big companies like Google and Microsoft. Edtech data brokers emphasize interoperability and the synchronization of student data flows across different edtech applications. More than merely offering technical products and support, these efforts shape the digital architecture of the school through the promotion of rapid, easy, and safe processes of transformation.

The second key brokering practice is evidence making. Edtech brokers use different evidentiary mechanisms and instruments to produce evidence of “impact” and “efficacy”. By doing so, edtech search engine brokers in particular guide the adoption and usage of edtech in schools, ultimately mediating and shaping the production of “what works” evidence and its circulation into school decision-making sites. One edtech search engine broker studied in the paper, for example, operates as a kind of database of edtech products that are ranked and promoted in terms of online reviews provided by teachers. The broker calls this kind of evidence “social proof”, with its legitimacy derived from front-line teachers’ active participation in its production, though it is also shaped and constrained by a series of specific criteria the organization has derived for “assessing impact”.

Another search broker, by contrast, rates edtech according to specific variables and measurement instruments, enabling schools to define their needs and receive contextualized recommendations through a “matching” program. As such, edtech brokers reinforce the political ideal that “what works” can be repeated in diverse settings, by incorporating educators themselves into the evidence-making process and by producing locally contextualized guidance via new instruments. Edtech brokers’ evidence is not neutral but imprinted by specific assumptions and interests.
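As a purely hypothetical illustration of the kind of “matching” logic such a broker might run, the sketch below scores products against school-defined needs as a weighted sum. The product names, variables and weights are all invented; no real broker’s instrument is being described. The point is that the broker’s choice of criteria and weightings silently shapes what counts as an “evidence-based” recommendation.

```python
# Hypothetical broker-style "matching" recommender. Products, variables and
# ratings are invented; the broker's chosen criteria and weights determine
# which product comes out as the "evidence-based" match.

products = {
    "MathApp":   {"evidence_strength": 0.4, "teacher_reviews": 0.9, "data_privacy": 0.6},
    "ReadTutor": {"evidence_strength": 0.8, "teacher_reviews": 0.5, "data_privacy": 0.9},
}

def match(school_needs: dict[str, float]) -> list[tuple[str, float]]:
    """Rank products by the weighted sum of their ratings against stated needs."""
    scores = {
        name: sum(school_needs.get(var, 0.0) * rating for var, rating in ratings.items())
        for name, ratings in products.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Two schools with different priorities receive different "contextualized"
# recommendations from the very same evidence base.
print(match({"evidence_strength": 1.0, "data_privacy": 2.0}))  # favours ReadTutor
print(match({"teacher_reviews": 2.0}))                         # favours MathApp
```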

The final practice of brokers is professionality shaping through professional development and training programs. By mediating between edtech vendors and pedagogic practice, brokers aim to transform teachers into knowledgeable edtech users, while simultaneously extending edtech vendors’ reach into everyday professional routines. Edtech brokers therefore project a particular normative image of the digitally-competent teacher who, armed with evidence and training, can capably choose the right edtech for the job at hand and deploy it to beneficial effect in the classroom.

Examining edtech brokers

The article is now the basis for ongoing empirical work with edtech brokers across Europe. They are mediating edtech into schools, and while doing so laying claim to expertise in edtech evidence and practice. This makes them significantly powerful yet little-studied actors in shaping how and which digital technology is promoted to schools, how schools make procurement decisions, and how teachers incorporate edtech into their routine pedagogic practices.

In turn, these brokering practices open up important questions: about the nature and production of evidence of edtech impact; about the role of little-known intermediary organizations in shaping the future of edtech use in classrooms; about the interests, assumptions, and financial and industrial support underpinning their judgements; and about their capacity to affect the market prospects of edtech startups. Edtech brokers may be putting efforts into solving the evidence problem in edtech, but by doing so they are also positioning themselves as powerful influences on the digital future of schooling.

The full paper, “Mediating educational technologies: Edtech brokering between schools, academia, governance and industry”, is available (paywalled) from Research in Education, or as an open access version.


AI in education is a public problem

Photo by Mick Haupt on Unsplash

Over the past year or so, a narrative that AI will inevitably transform education has become widespread. You can find it in the pronouncements of investors, tech industry figures, educational entrepreneurs, and academic thought leaders. If you are looking for arguments in favour of AI in education, you can find them in dedicated journals, special issues, handbooks and conferences, in policy texts and guidance, as well as on social media and in the educational and technology press.

Others, however, have argued against AI in education. They have probed at some of the significant problems that such technologies could cause or exacerbate, and deliberately centred those issues for public deliberation rather than assuming AI in education is either inevitable or necessary. (Of course, there are also many attempts to balance different views, such as a recent UK Parliament POSTnote.)

The recent critiques of AI in education resonate with Mike Ananny’s call to treat generative AI as a ‘public problem’:

we need to see it as a fast-emerging language that people are using to learn, make sense of their worlds, and communicate with others. In other words, it needs to be seen as a public problem. … Public problems are collectively debated, accounted for, and managed; they are not the purview of private companies or self-identified caretakers who work on their own timelines with proprietary knowledge. Truly public problems are never outsourced to private interests or charismatic authorities.

Schools and universities are by no means pristine institutions to be protected from change. However, they are part of the social infrastructure of societies, with purposes that include the cultivation of knowledgeable, informed citizens and publics. Efforts to transform them with AI should therefore be seen as a public problem.

In this post I surface a series of 21 arguments about AI in education. These started as notes for an interview I was asked to do, and are based on working with the National Education Policy Center on a report on AI and K-12 schools. In that report we accept AI may prove beneficial in some well-defined circumstances in schools, but we also caution against its uptake by school teachers and leaders until its outstanding problems have been adequately addressed and sufficient mechanisms for ensuring public oversight are put in place. This post is more like an accessible, scrollable list of problems and issues from monitoring recent debates, media and scholarship on the topic: a kind of micro-primer on critiques of AI in education, though it will no doubt be incomplete.

21 arguments against AI in education

Definitional obscurity. The term ‘artificial intelligence’ lacks clarity, mystifies the actual operations of technologies, and implies much more capability and ‘magic’ than most products warrant. In education it is important to separate the different forms of AI that have appeared over the last half-century. At the current time, most discussion about AI in education concerns data systems that collect information about students for analysis and prediction, often previously referred to as ‘learning analytics’ using ‘big data’; and ‘generative AI’ applications like chatbot tutors that are intended to support students’ learning through automated dialogue and prompts. These technologies have their own histories, contexts of production and modes of operation that should be foregrounded over generalized claims that obscure the actual workings and effects of AI applications, in order for their potential, limitations, and implications for education to be accurately assessed.

Falling for the (critical) hype. Promotion of AI for schools is frequently supported by hype. This takes two forms: first, industry hype is used to attract policy interest and capture the attention of teachers and leaders, positioning AI as a technical solution for complex educational problems. It also serves the purpose of attracting investors’ attention, as AI requires significant funding. Second, AI in education can be characterized by ‘critical hype’—forms of critique that implicitly accept what the hype says AI can do, and inadvertently boost the credibility of those promoting it. The risk of both forms of hype is schools assume a very powerful technology exists that they must urgently address, while remaining unaware of its very real limitations, instabilities and faults or the complex ethical problems associated with data-driven technologies in education.

Unproven benefits. AI in education is characterized by lots of edtech industry sales pitches, but little independent evidence. While AIED researchers suggest some benefits based on small scale studies and meta-analyses, most cannot be generalized, and the majority are based on studies in specific higher education contexts. Schools remain unprotected against marketing rhetoric from edtech companies, and even big tech companies, who promise significant benefits for schools without supplying evidence that their product ‘works’ in the claimed ways. They may just exacerbate the worst existing aspects of schooling.

Contextlessness. AI applications promoted to schools are routinely considered as if context will not affect their uptake or use. Like all technologies, social, political and institutional contexts will affect how AI is used (or not) in schools. Different policy contexts will shape AI’s use in education systems, often reflecting particular political priorities. How AI is then used in schools, or not, will also be context specific, reflecting institutional factors as mundane as budgetary availability, leadership vision, parental anxiety, and teacher capacity, as well as how schools interpret and enact external policy guidance and demands. AI in schools will not be context-free, but shaped by a variety of national and local factors, and inflected by the varied ways different stakeholders construct and understand AI as a technology with educational relevance.

Guru authority. AI discourse centres AI ‘gurus’ as experts of education, who emphasize narrow understandings of learning and education. Big names use platforms like TED talks to speculate that AI will boost students’ scores on achievement tests through individualized forms of automated instruction. Such claims often neglect critical questions about purposes, values and pedagogical practices of education, or the sociocultural factors that shape achievement in schools, emphasizing instead how engineering expertise can optimize schools for better measurable outcomes. 

Operational opacity. AI systems are ‘black boxes’, often unexplainable either for technical or proprietary reasons, uninterpretable to either school staff or students, and hard to challenge or contest when they go wrong. This bureaucratic opacity will limit schools’ and students’ ability to hold accountable any actors that insert AI into their administrative or pedagogic processes. If AI provides false information based on a large language model produced by a big tech company, and this results in student misunderstanding with high-stakes implications, who is accountable, and how can redress for mistakes or errors be possible?

Curriculum misinfo. Generative AI can make up facts, garble information, fail to cite sources or discriminate between authoritative and bad sources, and amplify racial and gender stereotypes. While some edtech companies are seeking to create applications based only on existing educational materials, others warn users to double check responses and sources. The risk is that widespread use of AI will pollute the informational environment of the school, and proffer ‘alternative facts’ to those contained in official curriculum material and teaching content.

Knowledge gatekeeping. AI systems are gatekeepers of knowledge that could become powerful determinants of which knowledge students are permitted or prohibited from encountering. This can happen in two ways: personalized learning systems prescribing (or proscribing) content based on calculations of its appropriateness in terms of students’ measurable progress and ‘mastery’; or students accessing AI-generated search engine results during inquiry-based lessons, where the model combines sources to produce content that appears to match a student’s query. In these ways, commercial tech systems can substitute for social and political institutions in determining which knowledge to hand down to the next generation.

Irresponsible development. The development of AI in education does not routinely follow ‘responsible AI’ frameworks. Many AIED researchers have remained complacent about the impacts of the technologies they are developing, emphasizing engineering problems over the social, ethical and political issues that ‘responsible’ development would foreground.

Privacy and protection problems. Adding AI to education enhances the risk of privacy violations in several ways. Various analytics systems used in education depend on the continuous collection and monitoring of student data, rendering students subjects of ongoing surveillance and profiling. AI inputs such as student data can risk privacy as data are transported and processed in unknown locations. Data breaches, ransomware and hacks of school systems are also on the rise, raising the risk that as AI systems require increased data collection, student privacy will become even more vulnerable.

Mental diminishment. Reliance on AI for producing tailored content could lead to a diminishment of students’ cognitive processes, problem solving abilities and critical thinking. It could also lead to a further devaluation of the intrinsic value of studying and learning, as AI amplifies instrumentalist processes and extrinsic outcomes such as completing assignments, gaining grades and obtaining credits in the most efficient ways possible—including through adopting automation.

Commercial infrastructuralization. Introducing AI into schools signifies the proliferation of edtech and big tech industry applications into existing infrastructures of public education. Schools now work with a patchwork of edtech platforms, often interoperable with administrative and pedagogic infrastructures like learning management and student information systems. Many of these platforms now feature AI, in both the forms of student data processing and generative AI applications, and are powered by the underlying facilities provided by big tech operators like AWS, Microsoft, Google and OpenAI. By becoming infrastructural to schools, private tech operators can penetrate more deeply into the everyday routines and practices of public education systems.

Value generation. AI aimed at schools is treated by the industry and its investors as a highly valuable market opportunity following the post-Covid slump in technology value. The value of AI derives from schools paying for licenses and subscriptions to access AI applications embedded in edtech products (often at a high rate to defray the high costs of AI computing), and the re-use of the data collected from its use for further product refinement or new product development by companies. These are called economic rent and data rent, with schools paying both through their use of AI. As such, AI in schools signifies the enhanced extraction of value from schools.

Business fragility. Though AI is promoted as a transformative force for the long term, the business models that support it may be much more fragile than they appear. AI companies spend more money to develop and run their models than they make back, even with premium subscriptions, API plug-ins for third parties and enterprise licenses. While investors view AI favourably and are injecting capital into its accelerated development across various sectors, enterprise customers and consumers appear to be losing interest, with long-term implications for the viability of many AI applications. The risk here is that schools could buy in to AI systems that prove to be highly volatile, technically speaking, and also vulnerable to collapse if the model provider’s business value crashes.

Individualization. AI applications aimed at schools often treat learning as a narrow individual cognitive process that can be modelled by computers. While much research on AI in education has focused on its use to support collaboration, the dominant industry vision is of personalized and individualized education—a process experienced by an individual interacting with a computer that responds to their data and/or their textual prompts and queries via an interface. In other contexts, students have shown their dissatisfaction with the model of automated individualized instruction by protesting against their schools and their private technology backers.

Replacing labour. For most educators the risk of technological unemployment by AI remains low; precariously employed educators may, however, risk being replaced by cost-saving AI. In a context where many educational institutions are seeking cost savings and efficiencies, AI is likely to be an attractive proposition in strategies to reduce or eliminate the cost of teaching labour.

Standardized labour. If teachers aren’t replaced by automation then their labour will be required to work with AI to ensure its operation. The issue here is that AI and the platforms it is plugged in to will make new demands on teachers’ pedagogic professionalism, shaping their practices to ensure the AI operates as intended. Teachers’ work is already shaped by various forms of task automation and automated decision-making via edtech and school management platforms, in tandem with political demands of measurable performance improvement and accountability. The result of adding further AI to such systems may be increased standardization and intensification of teachers’ work as they are expected to perform alongside AI to boost performance towards measurable targets.

Automated administrative progressivism. AI reproduces the historical  emphasis on efficiency and measurable results/outcomes, so-called administrative progressivism, that has characterized school systems for decades. New forms of automated administrative progressivism will amplify bureaucracy, reduce transparency, and increase the opacity of decision-making in schools by delegating analysis, reporting and decisions to AI.

Outsourcing responsibility. The introduction of AI into pedagogic or instructional routines represents the offloading of responsible human judgment, framed by educational values and purposes, to calculations performed by computers. Teachers’ pedagogic autonomy and responsibility is therefore compromised by AI, as important decisions about how to teach, what content to teach, and how to adapt to students’ various needs are outsourced to efficient technologies that, it is claimed, can take on the roles of planning lessons, preparing materials and marking on behalf of teachers.

Bias and discrimination. In educational data and administrative systems, past data used to make predictions and interventions about present students can amplify historical forms of bias and discrimination. Problems of bias and discrimination in AI in general could lead to life-changing consequences in a sector like education. Moreover, racial and gender stereotypes are a widespread problem in generative AI applications; some generative AI applications produced by right wing groups can also generate overtly racist content and disinformation narratives, raising the risk of young people accessing political propaganda.

Environmental impact. AI, and particularly generative AI, is highly energy-intensive and poses a threat to environmental sustainability. Visions of millions of students worldwide using AI regularly to support their studies, while schools deploy AI for pedagogic and administrative purposes, are likely to exact a heavy environmental toll. Given today’s students will have to live with the consequences of ongoing environmental degradation, with many highly conscious of the dangers of climate change, education systems may wish to reduce rather than increase their use of energy-intensive educational technologies. Rather than rewiring edtech with AI applications, the emphasis should be on ‘rewilding edtech’ for more sustainable edtech practices.

These 21 arguments against AI in education demonstrate how AI cannot be considered inevitable, beneficial or transformative in any straightforward way. You do not even need to take a strongly normative perspective either way to see that AI in education is highly contested and controversial. It is, in other words, a public problem that requires public deliberation and ongoing oversight if any possible benefits are to be realized and its substantial risks addressed. Perhaps these 21 critical points can serve as the basis for some of the ongoing public deliberation required as a contrast to narratives of AI inevitability and technologically deterministic visions of educational transformation.  


Saliva samples and social policy

Genetic data and genomic technologies are used to produce new knowledge about the biological underpinnings of educational outcomes. Photo by National Cancer Institute on Unsplash

Claims that genetic data could be used to inform educational policy or practice have been growing for the last decade. Studies examining the connections between genetics and educational outcomes have captured media and public attention, as well as leading to significant criticism. Two UK reports on the potential of social and behavioural genomics indicate growing interest in the possibility of using genetic data as the basis for certain forms of policy intervention. As we discuss in a new research article, this raises important questions about the potential for genetic explanations to become authoritative in debates about the appropriate policy approaches to tackling long-standing educational problems.

Genetically-informed policy

One of the reports, Genetics and early intervention: Exploring ethical and policy questions, was published by the Early Intervention Foundation (part of the UK government’s ‘What Works Network’). It suggests that as genetic science is advancing rapidly, ‘it is increasingly possible to identify at birth children who have an elevated likelihood of outcomes such as struggling at school or being diagnosed with a learning, behaviour or mental health condition’. In the other, Genomics Beyond Health: What could genomics mean for wider government?, the UK Government Office for Science considers the potential implications of social and behavioural genetics research findings for education. The report highlights how scientists have produced ‘insight into the biological architecture of learning and education processes’, and suggests its potential to ‘inform more beneficial interventions to improve pupils’ educational outcomes’.

A synthesis of both reports for the Royal Society’s Open Science journal suggests ‘cause for optimism that behavioural genomic research may be able to offer policy-makers a new “genetic lens” … and provide information that could make a useful contribution to evidence-based decision-making’ in social policy areas like education. While the reports are optimistic, they are also cautious in their claims, considering a wide range of ethical issues and problems that would need resolving prior to any form of policy intervention. These issues, as well as the ugly history of eugenic attempts to deploy genetics in educational research and policy, are more fully detailed in a special report, The Ethical Implications of Social and Behavioral Genomics, subsequently published by the Hastings Center, a US bioethics research and policy institute.

Despite the ethical cautions and caveats, these reports indicate incremental movement towards a possible scenario where saliva samples could be used as the basis for social policy, particularly for early years screening and intervention. But getting from saliva samples to social policy would not be a simple process. It would involve mass sampling of children using spit swabs or blood samples (one of the EIF recommendations is increasing the collection of genetic data through longitudinal cohort studies). It would also require a complex scientific network of multi-disciplinary specialism, technologies for translating ‘wet’ samples into ‘dry’ samples for computer analysis, and funders to support the necessary studies.

In short, a whole scientific infrastructure of investigation would be needed to generate and examine the genetic data for genetically-informed policy and interventions in educational practice.

Educational genomics in formation

In a newly published open access paper, we show how an international infrastructure for ‘educational genomics’ has formed over the past 15 years. The term ‘educational genomics’ does not designate a specific bounded field of research or a distinctive discipline. Rather, educational genomics refers to an emerging set of scientific practices and knowledge that, for some scientists involved in such studies, can be characterized as a ‘genomic revolution for education research and policy’. The promise of genomics in education relies on the infrastructure-building that has been undertaken to make such studies possible and seemingly desirable.

The paper, ‘Infrastructuring educational genomics,’ is an outcome of a research project grant awarded by the Leverhulme Trust, which funds me and my colleagues Dimitra Kotouza, Martyn Pickersgill and Jessica Pykett to investigate how data science and biology are converging in research on education. In this part of the study we focused particularly on the range of actors, concepts and technologies that together make educational genomics possible as a domain of investigation and knowledge production. Some scientists suggest their aim is to ‘open up the black box of the genome’ to explain educational outcomes; our aim was to open up the black box of educational genomics itself. As a ‘science-in-the-making’, we found that educational genomics is currently being constituted by complex interorganizational and interpersonal relationships; a shared way of conceiving of educational outcomes in terms of their molecular biological underpinnings; and by the deployment of bioinformatics technologies and bioinformational storage facilities that mediate and shape scientists’ knowledge work. We refer to these as the network associations, epistemic architecture and technoscientific apparatus that comprise the infrastructure of educational genomics.

The associations of educational genomics include large-scale international consortia and regional research networks, as well as satellite institutes and members, mostly identifying as interdisciplinary ‘sociogenomics’, ‘behavioural genetics’ and ‘genoeconomics’ specialists. Their work is bound together and ‘harmonized’ by large-scale databases of curated bioinformation. Such associations and their operations resemble ‘big biology’ far more than conventional educational research, bringing highly diverse disciplinary specialists, including economists, psychologists, political scientists and sociologists, together with bioinformaticians, technicians and new data-scientific methods for ‘big data’ genomic analysis.

The associations practising educational genomics research subscribe to a particular conceptual framework that we refer to as the epistemic architecture of such work. This framework is guided by a so-called ‘law’ of social and behavioural genomics, which understands complex human traits, behaviours and other observable phenotypes as being influenced by thousands of genetic variants of minuscule individual effect, acting in highly ‘polygenic’ combination and in interaction with environmental factors. There is no search for a monogenic ‘gene for x’ explanation in social and behavioural genomics or educational genomics studies. Instead, studies are guided by the search for thousands of polygenic associations that might together explain a statistical portion of educational outcomes. The end result of such studies is to produce a ‘polygenic score’ as a summative statistic to predict one’s genetic propensity for outcomes such as educational attainment. The surveying of masses of genetic bioinformation required to calculate these polygenic scores requires a complex apparatus of technologies and scientific methodologies.
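In its standard form in the genomics literature (the general formula, not one specific to the studies we analysed), a polygenic score is simply a weighted sum over measured genetic variants:

\[ \mathrm{PGS}_j = \sum_{i=1}^{m} \hat{\beta}_i \, x_{ij} \]

where \(x_{ij} \in \{0, 1, 2\}\) counts the copies of the effect allele that individual \(j\) carries at variant \(i\), and \(\hat{\beta}_i\) is the effect size estimated for that variant in a genome-wide association study. Each individual weight is tiny; only summed across hundreds of thousands of variants do they yield the single summative statistic.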

The technoscientific apparatus of educational genomics includes a range of technologies and methods developed both for medical genomics research and by consumer genetics companies. The educational genomics studies with the largest samples, for example, could not have been completed without contracts for data access with the UK Biobank, a publicly funded medical genetics databank, and the Silicon Valley consumer genetics company 23andMe, the owner and operator of the world’s largest private biobank. Other studies rely on longitudinal cohort data, including original data collection efforts to gather saliva samples from children at scale. In turn, the collection of the samples in the biobanks or cohort studies often depends on contracts with major biotechnology companies for access to devices like microarrays and laboratory scanning robots.

Analysing the digital data from these biobanks requires educational genomics consortia to use a range of bioinformatics technologies and data science methods. These include genomic data-mining instruments and applications for calculating polygenic scores from digital bioinformation. As the authors of one educational genomics paper contend, ‘molecular genetic research, particularly recent cutting-edge advances in DNA-based methods, has furthered our knowledge and understanding of cognitive ability, academic performance and their association’.   

It is only through the complex infrastructuring work of pulling together these interorganizational associations, epistemic architecture and technoscientific apparatus that it has become possible to conceive of saliva samples, and their translation into digital genetic data for analysis, as the basis for educational policy and intervention.

Data-centric educational genomics

The potential application of genetics in educational research and policy of course raises significant issues, both ethical and scientific. These include concerns about the non-representativeness of the data, its potential to be appropriated for ideological ends, the possibility of biological discrimination or determinism, lack of causal biological explanation, and the reduction of complex socioeconomic problems to apparently embodied genetic influences as well as the simplification of environmental influences to family or neighbourhood factors. In our analysis, we take a slightly different line of critique, focusing on the consequences of ‘infrastructuring’.

Infrastructuring highlights how building new social and technical systems affects scientific practice and knowledge production, as scientific investigation and knowledge become ‘inextricably bound up with the technical, social, and organizational practices of large-scale computer-enabled information infrastructures’. By foregrounding the ongoing process of infrastructuring and the forms of investigation and knowledge it makes possible in educational genomics, we illuminate how choices about selecting and curating the data, the setup of the biobanks, the collection of the cohort samples, the processing of digital bioinformation through software applications, and the forms of data scientific analysis that are employed, all format, mediate and shape how educational outcomes and other relevant behaviours are understood and explained by educational genomics.

Proponents of such studies claim they are providing a biologically realist understanding of the genetic substrates of educational outcomes. Educational genomics is a gene-centric endeavour. We claim, however, that its gene-centricity might be better understood in terms of what Sabina Leonelli has described as ‘data-centric biology’, where vast digital databases of genomic bioinformation and data mining methods have become central to producing understandings of biological structures and processes. Infrastructures consisting of databases, analysis software and associated methods, Leonelli argues, ‘have come to play a crucial role in defining what counts as knowledge of organisms in the postgenomic era’.

Through its ongoing infrastructuring, data-centric educational genomics is formatting a bioinformational rendering of educational outcomes. It defines what counts as knowledge about the biological processes that enable or inhibit student achievements. And its well-publicized findings support the emerging biological authority of educational genomics as a source of explanation for educational outcomes and student behaviours, potentially closing out other forms of non-genetic explanation.

Attending to the infrastructural orchestration through which such results are fabricated can better help us appreciate the longer-term implications of educational genomics amidst growing interest in the incorporation of genetic data in education. It can also help surface the limitations of a data-mining approach to biology in education—for example, its privileging of correlational associations lacks mechanistic explanation of the pathways from somatic substance to social outcomes. The biological mechanisms that lead to educational achievements remain black boxes, obscured behind all the correlational associations that polygenic scores represent in simplified summative form.

Rather than opening up the black box of the genetic substrates of student achievement, or offering clear explanations for how saliva samples could become the basis of social policies, educational genomics constructs a black-boxed bioinformational substitute of the student out of algorithmic associations. While many advocates of educational genomics research remain cautious about prescribing policy implications, the construction of an infrastructure of knowledge production nonetheless advances the possibility of bioinformational accounts of student outcomes being used to inform educational interventions.

The full paper, ‘Infrastructuring educational genomics: associations, architectures and apparatuses’ is available open access.  


The power of edtech investors in education

Venture capital edtech investors are influencing the future directions of education while seeking return on investment. Photo by Andre Taissin on Unsplash

Venture capital investors are increasingly powerful in education, with the wealth and resources to shape schooling and universities in the future. Edtech has become a major presence in education over the last decade or so, promoted through ideals of transforming pedagogy, curriculum, assessment and management processes. And it is edtech investors that, to a significant degree, have supported the edtech industry’s expansion. Research on edtech investors is scarce, but understanding how they operate, and the ideas they promote, is important as they are influential in shaping how technological developments like artificial intelligence may be financed, integrated into edtech products and marketed to schools in months and years to come.

In the last couple of years, colleagues and I have been working on a series of case studies of edtech investment organizations, and this post offers a brief summary of observations from four publications: two pieces by Janja Komljenovic and myself, and two others by Janja and me with Rebecca Eynon and Huw Davies. Together, these pieces make a case for detailed analysis of edtech investors as powerful and influential organizations in education, with operations that expand beyond the simple financial allocation of funds for edtech startup companies.

In our most recent collaborative article analyzing two US-based edtech investors, “When public policy ‘fails’ and venture capital ‘saves’ education”, we argued that edtech investors are not only economic actors injecting capital into startups, but political actors making consequential decisions about the future of education. In a previous short open access chapter investigating “the financial power brokers behind edtech”, we highlighted how investors’ investment decisions ultimately determine what products and services are funded into existence or not, and are therefore consequential in shaping edtech markets. And in two pieces examining a UK-based edtech investor, Janja and I looked at the ways investors are—as our titles indicated—both “capitalising the future of higher education” and “investing in imagined digital futures” through a variety of techno-financial methods and techniques.

All these pieces take their cue from sociological analyses of markets and economics. Such studies insist on close empirical investigation of how economic and market activities and practices, such as those of investors, are performed, and what specific devices and techniques are involved. We therefore treat edtech investors as particular kinds of “capitalization professionals” that work in distinctive contexts and relations with others by deploying a variety of “techno-economic” instruments and social and financial practices. Approaching edtech investors in this way reveals the range of ways and means by which they are intervening in education and its possible futures.

Investing in scale

Most obviously, edtech investors inject capital into companies. There are many kinds of investors, including private equity and SPACs, but venture capital investors are crucial because they make high-risk investments in startups. Some of the biggest and most highly-valued edtech companies today, like Coursera, Byju’s, ClassDojo and 2U, owe their existence to exuberant venture capital support in their early startup days.

But what does it take for an edtech VC firm to invest in an edtech startup? In our work we have seen how edtech investors actually shape the edtech industry by influencing startups’ business models and priorities. Investors, of course, make decisions on the basis of calculations or valuations of future return on investment. Their funding decisions are contingent on business models and valuations signalling promising growth and revenue potential. That means VCs often prioritize models such as “platforms” with high “network effects” potential and prospects to “scale up” to vast user numbers.

An example of highly scalable edtech is the idea of “weapons of mass instruction”: “rapidly scaling education technology companies, reaching tens of millions of learners at warp speed due to the mass availability of mobile devices and internet access”. Scalable edtech can achieve rapid growth leading to “big paydays”, such as when an investee undergoes a high-value initial public offering (IPO) or trade sale, thus securing a big ROI for the investment fund, the founders and their partners.

For investors, then, it is imperative that startups’ business models demonstrate potential scalability, as ROI is most likely when income can be secured from very large numbers of users subscribing to a platform. And the active role in investees’ operations doesn’t stop once funds have been allocated, as edtech investors often take up formal roles on startup boards, and  support their portfolio companies to grow and scale post investment too. These are relationships founded on valuation practices designed to discern signals of future earnings from uncertain futures.

Making relations

Edtech investors network. To be successful, they have to build a lot of relations. They have to acquire their own funding from limited partners—the huge wealth funds and individuals who commit capital to an investment fund with their own ROI interests. They also construct networks with organizations and individuals from across the finance, technology, philanthropy, government and education sectors. Job roles such as “network chair” and “head of platform”, as well as discursive references to “community”, signify investors’ painstaking work to connect and coordinate various sectoral actors towards shared aims for educational change.

These relationship-making activities span from dinner parties and drinks with select groups to massive industry summits where tech gurus, celebrities, big tech corporations and politicians come together with eager startups to celebrate the power of edtech investing to solve the greatest challenges of education.

Edtech investors therefore do a lot of work lubricating relations. They speak on each other’s platforms, they lay on parties, they make sure their channels are open to journalists, and they communicate publicly through platforms like Medium, Substack or via other organizations to share their messages with wider networks of interested parties. These are not secondary activities to the main business of raising and allocating funds, but constitute the complex and laborious everyday work of edtech investing.

Pedagogic advice

Besides funding edtech startups to solve or transform education, investors develop pedagogic guidance programs for edtech startups too. They are not just sources of finance, but industry teachers who offer advice and lessons for startups to achieve scale and high valuations. Programs like “Product-Market Fit Academy”, blog posts, instructional videos, and other online training courses represent a venture capital pedagogy and curriculum that is designed to hustle the edtech startup scene into alignment with investor priorities.

These pedagogic interventions in the startup sector provide business advice, but also a discursive framing of the problems and opportunities to be pursued. For many edtech investors, current institutions of education cannot cope with contemporary social and technological trends: they are too slow to change, stuck in outdated models of transmissive pedagogy, and “failing” to offer the kinds of learning and training that digital learners demand. Startups are exhorted to address these problems if they are to find product-market fit.

Edtech VC firms also test startups’ ability to meet their requirements. They assess startups using various metrics. They also run competitions and awards to “amplify, highlight, and support the most innovative and impactful startups”, with startups rated using market criteria like the “5Ps” of “People, Product, Potential, Predictability, and Purpose”. In other words, edtech investors teach and test startups on how to perform in the market and assess their capacity to secure valuable ROI.

Futuring practice

For edtech investors, the future is an economic opportunity—but only if the future turns out how they imagine it could or should be. That means edtech investors do a lot of laborious work to construct future expectations about education, and the economic returns available, and to produce conviction in others that these expectations are desirable and attainable. What we call “investor futuring” involves both the discursive construction of narratives of future change, and calculations of future cashflows made with the use of techno-economic instruments.  
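A canonical calculative instrument of this kind, offered here purely as an illustration since our analyses do not specify which formula any particular investor uses, is the discounted cash flow valuation, which converts an imagined stream of future revenues into a present-day price:

    V_0 = \sum_{t=1}^{T} \frac{CF_t}{(1 + r)^t}

where CF_t is the cash flow projected for year t and r is a discount rate. Every term on the right-hand side is an expectation about a future that has not yet happened, which is why the narrative work of conviction-building matters: the more believable the projected cash flows, the higher the present valuation.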

Investors’ futuring techniques make it possible for them to make claims such as: “We believe there is a digital revolution rapidly unfolding in education … . This revolution is creating a historic opportunity to invest in companies that are disrupting and improving the over $6 trillion global education market”. They lay claim to expertise and authority on the “megatrends” in society and technology that “create tailwinds for investment”. These megatrends are accompanied by mega-numbers and graphical visualizations that appear to support their expectations, such as the $6 trillion market opportunity, and manifestoes to address the “$8.5 trillion dollar skills gap”.

Edtech investors therefore render the future legible in discourses, infographics and market valuations. These are painstaking accomplishments that involve imaginative scenario production, data analysis and visualization, and narrative storytelling to convince others that their vision of the future, and the rewards it promises, are just over the horizon.   

Social impact scoring

Edtech investors are very keen to communicate their social conscience, responsibility, and impact. Their desire is not only to support highly scalable platforms, but to achieve scalable social impact, as evident in how they position themselves as empowering startups to increase diversity, equity, and inclusion, and democratise access to learning.

This framing of their actions in terms of social impact is captured, for example, in the idea of “Return on Education”, or low-cost mission-driven edtech. It is not accidental that ROE resonates with ROI: “RoE is a key part of our investment thesis and is directly correlated to ROI; the higher the RoE, the better the investment outcome”.

Socially responsible impact through democratized access to edtech at massive scale in low-income contexts is therefore an opportunity for maximizing economic return. As one edtech investment partner puts it, “if you’re serving the 1%, you’re by definition serving a much more limited market”. Scalable edtech platforms are thus positioned as responsible technologies, with investors as virtuous and moral actors undertaking the socially responsible task of allocating capital to good causes. They are “investing for good” as well as for economic gain.

Edtech investors as political actors

As we wrote in the conclusion of our most recent paper, edtech investors require further research attention because they are responsible for reshaping education by structuring the edtech industry. Edtech VC investors are becoming political as well as economic actors in the education sector through a variety of practices and techniques.

Edtech investors imagine particular futures of education, and seek to materialise them through investment in selected startup companies and products. They structure the edtech industry not only by providing capital but also through pedagogic advice and curricula intended to engineer product-market fit. Edtech investors also position themselves as expert and moral actors through narratives of social good and associated evaluative tools, and they engage in the political work of constructing network relations to produce broader consensus and conviction in their visions of the future of education.

The imaginative and investment practices of edtech VC by no means determine educational futures. However, edtech investors are catalysing action in the present towards the realisation of a selected future that appears to offer the best prospects for long-term cash flows. There is a significant need for more research to attend to these increasingly powerful actors in education, and we hope our initial case study examples and analyses catalyse further research into their social and political practices, their narratives and discourses, and their techno-economic instruments and metrics of valuation and market-making.


Degenerative AI in education

AI applications are already being tested in schools, but issues such as the impact of AI on teachers’ work, students’ learning, and schools’ financial sustainability remain to be addressed. Photo by freestocks on Unsplash

After months of hype about the potential of artificial intelligence in education, signs are appearing of how technologies like language models may actually integrate into teaching and learning practices. While dreams of automation in education have a very long history, the current wave of “generative AI” applications – automated text and image producers – has led to an outpouring of enthusiastic commentary, policy activity, accelerator programs and edtech investment speculation, plus primers, pedagogic guides and training courses for educators. Generative AI in education, all this activity suggests, will itself be generative of new pedagogic practices, where the teacher could have a “co-pilot” assistant guiding their pedagogic routines and students could engage in new modes of studying and learning supported by automated “tutorbots”.

“Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful,” wrote venture capital investor Marc Andreessen in one of the most hyperbolic examples of recent AI boosterism. “The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.”

But what if, instead of being generative of educational transformations, AI in education proves to be degenerative—deteriorating rather than improving classroom practices, educational relations and wider systems of schooling? The history of technology tells us that no technology ever works out entirely as intended, and neither is it just a neutral tool to achieve beneficial ends. AI, like all tech, has longer historical roots that shape its development and the tasks it is set to perform, and often leads to unanticipated and sometimes deleterious consequences. Current instantiations of AI are infused with a kind of politics that applies technical and market solutions to all social problems.

While I appreciate some of the current creative and critically-informed thinking about generative AI in education, questions remain to be addressed about its possible long-term impacts. Important work on the ethics, harms and social and environmental implications of AI in education has already appeared, and it should remain a topic of urgent deliberation rather than one constrained by narrow claims of inevitability. Here I explore the idea of degenerative AI in education from three directions: degenerative AI as (1) extractive experimental systems, (2) monstrous systems, and (3) scalable rent-seeking systems.

Extractive experimental systems

Many generative AI applications in education remain speculative, but one prototypical example of an automated tutorbot has already started being introduced into schools. The online learning non-profit Khan Academy has built a personalized learning chatbot called Khanmigo, which it claims can act as a tutor for students and an assistant for teachers. The New York Times journalist Natasha Singer reported on early pilot tests of Khanmigo in a US school district, suggesting the district “has essentially volunteered to be a guinea pig for public schools across the country that are trying to distinguish the practical use of new A.I.-assisted tutoring bots from their marketing promises”. The results appear mixed.

The framing of the Khanmigo trials as pilot tests and students as guinea pigs for the AI is instructive. Recently, Carlo Perrotta said educators appear to be joining a “social experiment” in which the codified systems of education — pedagogy, curriculum and assessment — are all being reconfigured by AI, demanding laborious efforts by educators to adjust their professional practices. This pedagogic labour, Perrotta suggested, primarily means teachers helping generative AI to function smoothly.

The Khanmigo experiment exemplifies the laborious efforts demanded of teachers to support generative AI. Teachers in the NYT report kept encountering problems — such as the tutorbot providing direct answers when teachers wanted students to work out problems — with Khan Academy responding that they had patched or fixed these issues (wrongly in at least one instance).

This raises two issues. The first is the possible cognitive offloading and reduction of mental effort or problem-solving skill that generative AI may entail. AI may exert degenerative effects on learning itself. More prosaically, AI is likely to reproduce the worst aspects of schooling – the standardized essay is already highly constrained by the demands of assessment regimes, and language models tend to reproduce it in format and content.

The second issue is what Perrotta described in terms of a “division of learning”: a term from Shoshana Zuboff denoting a distinction between AI organizations with the “material infrastructure and expert brainpower” to learn from data and fine-tune models and processes, and the unpaid efforts of everyday users whose interactions with systems flow back into their ongoing development. Elsewhere, Jenna Burrell and Marion Fourcade have differentiated between the “coding elite”, a new occupational class of technical expertise, and a newly marginalized or unpaid workforce, the “cybertariat”, from which it extracts labour. In the Khanmigo case, Khan Academy’s engineers and executives are a new coding elite of AI in education, extracting the labour of a cybertariat of teachers and students in the classroom.

AI may in the longer term put further degenerative pressure on classroom practices and relations. It both demands teachers’ additional unpaid labour and extracts value from it. AI and other predictive technologies may also, as Sun-ha Hong argues, extract professionals’ discretionary power, reshaping or even diminishing their decision-making and scope for professional judgment. In the experimental case of Khanmigo, the teacher’s discretionary power is at least partially extracted too, or at the very least complicated by the presence of a tutorbot.

Monstrous systems

The Khan Academy experiment is especially significant because Khanmigo has been constructed through an integration with OpenAI’s GPT-4. Ultimately this means the tutorbot is an interface to OpenAI’s language model, which enables the bot to generate personalized responses to students. There are even reports the partnership could be the basis of OpenAI’s plans to develop an entire OpenAI Academy—a GPT-based alternative to public education. 
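Since Khanmigo’s code is not public, the following is no more than a minimal sketch of what an “interface to OpenAI’s language model” typically means in practice, using OpenAI’s chat completions API in Python. The system prompt, model name and parameters here are my own illustrative assumptions, not Khan Academy’s implementation.

    # Hypothetical sketch of a tutorbot as a thin wrapper around a hosted
    # language model. Illustrative only; not Khan Academy's actual code.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # An assumed "Socratic" instruction: steer the model towards hints
    # rather than direct answers, the behaviour teachers in the NYT
    # report found difficult to guarantee.
    SYSTEM_PROMPT = (
        "You are a patient maths tutor. Never give the final answer. "
        "Respond with guiding questions and hints only."
    )

    def tutor_reply(student_message: str) -> str:
        """Send a student's query to the hosted model and return its reply."""
        response = client.chat.completions.create(
            model="gpt-4",  # the hosted model the bot is an interface to
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": student_message},
            ],
            temperature=0.3,  # assumed: low randomness for tutoring
        )
        return response.choices[0].message.content

    print(tutor_reply("What is 3/4 + 1/8? Just tell me the answer."))

Two features of this architecture matter for the arguments that follow: the pedagogic behaviour lives almost entirely in a revisable prompt, which is much of what “patching” a tutorbot amounts to, and every student query is metered API usage billed to whoever operates the bot.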

OpenAI’s Sam Altman and Salman Khan both tend to justify their interests in personalized learning tutoring applications by reference to an influential education research study published in 1984—Benjamin Bloom’s 2 sigma problem. This is based on the observation that students who receive one-to-one tutoring score on average 2 standard deviations higher on achievement measures than those in a conventional group class. The problem is how to achieve the same results when one-to-one tutoring is “too costly for most societies to bear on a large scale”. Bloom himself suggested “computer learning courses” might “enable sizeable proportions of students to achieve the 2 sigma achievement effect”.
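As a purely illustrative gloss on what a 2 sigma effect means, assume normally distributed test scores with a class mean of 50 and a standard deviation of 10 (numbers invented for the example):

    \mu_{\text{tutored}} = \mu + 2\sigma = 50 + 2 \times 10 = 70, \qquad \Phi(2) \approx 0.977

In other words, the average tutored student would outscore roughly 98% of conventionally taught students, which is why the finding holds such enduring appeal for advocates of automated tutoring.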

At a recent talk on the “transformative” potential of new AI applications for education, Sam Altman claimed:

The difference between classroom education and one-on-one tutoring is like two standard deviations – unbelievable difference. Most people just can’t afford one-on-one tutoring… If we can combine one-on-one tutoring to every child with the things that only a human teacher can provide, the sort of support, I think that combination is just going to be incredible for education.

This mirrored Khan’s even more explicit invocation of Bloom’s model in a recent TED Talk on using AI to “save education”, where he also announced Khanmigo.

Bloom’s model emphasizes an approach to education termed “mastery learning”, a staged approach that includes rounds of instruction, formative assessment, and feedback in order to ensure students have mastered key concepts before moving on to the next topic. For entrepreneurs like Altman and Khan, the 2 sigma achievement effect of mastery learning is considered achievable with personalized learning tutorbots that can provide the required one-to-one tuition at low cost and large scale.
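Part of mastery learning’s appeal to tutorbot builders is that it already has the shape of a software routine: a loop that repeats instruction and assessment until a threshold statistic is cleared. Here is a minimal sketch, with an invented 0.8 threshold and stubbed-out functions rather than any vendor’s actual logic:

    import random

    MASTERY_THRESHOLD = 0.8  # invented cut-off standing in for "mastery"

    def instruct(topic: str) -> None:
        # Stand-in for a round of instruction, e.g. a tutorbot exchange.
        print(f"Instruction round on: {topic}")

    def formative_assessment(topic: str) -> float:
        # Stand-in for a formative assessment; returns a score in [0, 1].
        return random.random()

    def teach_to_mastery(topics: list[str]) -> None:
        for topic in topics:
            score = 0.0
            while score < MASTERY_THRESHOLD:
                instruct(topic)
                score = formative_assessment(topic)
                print(f"Feedback on {topic}: score {score:.2f}")
            # Advance only once the current topic is "mastered".

    teach_to_mastery(["fractions", "decimals", "percentages"])

The sketch also makes the critique below visible: everything pedagogically meaningful is hidden inside instruct() and formative_assessment(), and “mastery” is whatever the threshold statistic says it is.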

Marie Heath and colleagues have recently argued that educational technology is overly dominated by psychological conceptions of individual learning, and therefore fails to address the social determinants of educational outcomes or student experiences. This individual psychological approach to learning is only exacerbated by AI-based personalized learning systems based on notions of mastery and its statistical measurement. The aim to achieve the 2 sigma effect also reflects the AI industry assumption that human intelligence is an individual capacity, which can therefore be improved with technical solutions — like tutorbots — rather than something shaped by educational policies and institutions.

Moreover, the personalized learning bots imagined by Khan, Altman, and many others are unlikely to function in the neat streamlined way they suggest. That’s because every time they make an API call to a language model for content in response to a student query, they are drawing on vast reserves of information that are very likely polluted by past misinformation or biased and discriminatory material, and may become even more so as automated content floods the web. As Singer put it in the NYT report,

Proponents contend that classroom chatbots could democratize the idea of tutoring by automatically customizing responses to students, allowing them to work on lessons at their own pace. Critics warn that the bots, which are trained on vast databases of texts, can fabricate plausible-sounding misinformation — making them a risky bet for schools.

Like all language models, tutorbots would be “plagiarism engines” scooping up past texts into new formations, and possibly serving up misinformation as a substitute for authoritative curriculum materials.

Perhaps more dramatically, the sci-fi writer Bruce Sterling has described language models as “beasts”. “Large Language Models are remarkably similar to Mary Shelley’s Frankenstein monster”, Sterling wrote, “because they’re a big, stitched-up gathering of many little dead bits and pieces, with some voltage put through them, that can sit up on the slab and talk”.

These monstrous misinfo systems could lead to what Matthew Kirschenbaum has termed a “textpocalypse” — a future overrun by AI-generated text in which human authorship is diminished and all forms of authority and meaning are put at risk. “It is easy now to imagine a setup wherein machines could prompt other machines to put out text ad infinitum, flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word,” he warned.

It’s hard to imagine any meaningful solution to Bloom’s 2 sigma problem when mastering a concept could be jeopardized by the grey goo spat back at a student by a personalized learning tutorbot. This may even become worse as language models themselves degenerate through “data poisoning” by machine-generated content. AI is, however, potentially valuable in other terms.

Scalable rent-seeking systems

One of the most eye-opening parts of Natasha Singer’s report on Khanmigo tests in schools is that while the schools currently trialing it are doing so without having to pay a fee, that will not remain so after the pilot test. “Participating districts that want to pilot test Khanmigo for the upcoming school year will pay an additional fee of $60 per student”, Singer reported, noting Khan Academy said that “computing costs for the A.I. models were ‘significant’”.
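To get a feel for the scale of that fee, take a hypothetical mid-sized district of 10,000 students (my number, not Singer’s):

    10{,}000 \;\text{students} \times \$60 = \$600{,}000 \;\text{per year}

a recurring cost that lands on a public budget for as long as the subscription runs.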

There appear to be symmetries here with OpenAI’s business model. OpenAI sought first-mover advantage in the generative AI race by launching ChatGPT for free to the public late in 2022, before later introducing subscription payments. Sam Altman described its operating costs as “eye-watering”. Another monetization strategy is integrating its AI models with third party platforms and services. With Khanmigo integrated with GPT-4, it appears its “significant” computing costs are going to be covered by charging public schools a $60 per-student annual subscription fee.

In other words, Khan Academy appears to be developing a rent-seeking business model for Khanmigo where schools help defray the operating costs of AI models. It reflects the growing tendency in the education technology industry towards rentiership models of capitalization, where companies exact economic rent from educational institutions in the shape of ongoing subscriptions for digital services, and can derive further value from extracting data about usage too.

However, schools paying rent for Khanmigo, according to Singer’s report, may not be a viable long-term strategy.

Whether schools will be able to afford A.I.-assisted tutorbots remains to be seen. … [T]he financial hurdles suggest that A.I.-enhanced classroom chatbots are unlikely to democratize tutoring any time soon.

Mr. Nellegar, Newark’s ed tech director, said his district was looking for outside funding to help cover the cost of Khanmigo this fall.

“The long-term cost of A.I. is a concern for us,” he said.

In a current context where schools seem increasingly compelled to try out new automated services, institutions paying edtech rent could lead to new financial and fundraising challenges within the schooling sector, even as it bolsters the market value of AI in education. The Khanmigo example seems to indicate considerable diversion of public funds to defray AI computing costs, potentially shifting schools’ funding commitments towards technological solutions at the expense of other services or programs. In this sense, AI in education could affect schools’ capacity for other spending at a time when many face conditions of austerity and underfunding.

There is a paradox between these financial arrangements and the personalized learning approach inspired by Bloom. As Benjamin Bloom himself noted, one-to-one tutoring at scale is too expensive for school systems to afford. AI enthusiasts have routinely suggested personalized learning platforms, like Khan Academy, could solve this scalability problem at low cost. But as Khanmigo makes clear, scalable personalized tutorbots may themselves be a drain on the public financial resources of schools. As such, AI may not solve the cost problem of large-scale individual tutoring, but reproduce and amplify it due to the computing costs of AI scaling. Putting pressure on the finances of underfunded schools is, as Singer pointed out, unlikely to democratize tutoring, but it may be a route to long-term income for AI companies.

Resistant and sustainable systems

Generative AI may yet prove useful in some educational contexts and for certain educational tasks and purposes. Schools may be right to try it out — but cautiously so given it has only predicted rather than proven benefits. It remains important to pay attention to its actual and potential degenerative effects too. Besides the degenerative effects it may exert on teachers’ professional conditions, on learning content, and on schools’ financial sustainability, as I’ve outlined, AI also has degenerative environmental effects and impacts on the working conditions of the “hidden” workers in the Global South who help train generative models.

This amounts to what Dan McQuillan calls “extractive violence”, as “the demand for low-paid workers to label data or massage outputs maps onto colonial relations of power, while its calculations demand eye-watering levels of computation and the consequent carbon emissions” that threaten to worsen planetary degradation. Within the educational context, similarly, “any enthusiasms for the increased use of AI in education have to reckon with the materiality of this technology, and its deleterious consequences for the planet” and the workers hidden behind the machines.

The hype over AI applications such as language models “is stress-testing the possibility of real critique, as academia is submerged by a swirling tide of self-doubt about whether to reluctantly assimilate them”, continues Dan McQuillan. Resignation to AI as an inevitable feature of the future of education is dangerous, as it is likely to lock education institutions, staff and students into technical systems that could exacerbate rather than ameliorate existing social problems, like teacher over-work, degradation of learning opportunities, and school underfunding.

McQuillan’s suggestion is to resist current formations of AI. “Researchers and social movements can’t restrict themselves to reacting to the machinery of AI but need to generate social laboratories that prefigure alternative socio-technical forms”. With the AI models of OpenAI currently being tested in classroom laboratories through Khan Academy, what kind of alternative social laboratories could educators create in order to construct more resistant and sustainable education systems and futures than those being configured by degenerative AI?
