Emergency edtech

Ben Williamson


The education technology industry has positioned itself as an emergency response to the coronavirus pandemic. Photo by Markus Spiske on Unsplash

Education institutions around the world are switching to ‘remote’ teaching and learning, and the education technology industry is generously offering its products to support them in the current emergency. To a significant extent, the emergency edtech response is providing much-needed services that help educators maintain some continuity of study and learning for their students. But the edtech sector has been preparing for remote education for years, and has built up a marketplace of products that could radically alter how education is organized long after the world has recovered from the public health crisis.

Pandemic markets
The novel coronavirus pandemic is a health emergency, a political emergency, an economic emergency, and an educational emergency. Since its effects on education systems first became apparent in southeast Asia early this year, education companies and technology businesses have ramped up their marketing of products to support online learning, seeing the public health crisis and the quarantining of students partly as an opportunity to prove the benefits of edtech.

Coronavirus may also prove financially beneficial for the edtech industry. Early in March, the investment bank BMO Capital Markets predicted a spike in edtech stocks. ‘While we are uncomfortable citing “winners” in the coronavirus situation, some companies may be positioned better than others,’ it claimed. ‘Specifically, those that specialize in online education could see increased interest should the situation worsen’. BMO Capital Markets singled out major market leaders including K12 and Pearson as potential for-profit beneficiaries of mass education closures and population quarantining measures. These companies have already created the technologies to support ‘remote’ forms of teaching and learning across both the schooling and higher education sectors.

To take one of these companies as an example, the multinational, multibillion-dollar edu-business Pearson has been seeking to reshape education as a remote process as part of a ‘digital transformation’ and corporate restructuring stretching back nearly a decade. In the past few years, Pearson has adopted a ‘digital first’ strategy, begun ditching its production of textbooks, and embraced new forms of ‘platform’ delivery. It has also reconceived its customers as ‘Gen Z’ student-consumers who prefer ‘on-demand streaming’ content to conventional educational delivery, and developed a ‘Global Learning Platform’ to position itself as the ‘Netflix of education’.

At the same time, Pearson has significantly increased its emphasis on online learning for higher education, with a strategic focus on growing its Online Program Management (OPM) market share specifically in the US and UK. OPM models are attractive to universities as they provide the infrastructure necessary for institutions to deliver distance courses and thereby increase their share of the international student market. Institutions across the US and UK have signed 10-year deals with the company, where Pearson provides the back-end systems to host courses and then takes a 50% cut of the fees when students enrol.

Pearson’s Global Learning Platform and Online Program Management services are not just technical developments but ‘market devices’ that have enabled the company to create new markets for its products, and establish itself as the market leader in edtech as part of its corporate vision of education. It is both reaching out to students themselves as remote customers of streaming education services, and partnering up with universities to deliver remote courses. As Anna Hogan and Sam Sellar have argued in relation to Pearson’s vision of education in 2025, the company is seeking to create disruptive changes to the educational profession, deliver personalized learning as a private service, and generate huge quantities of student data for further analysis and product development.

These are not changes that Pearson and its competitors are simply offering up, opportunistically, in response to sudden coronavirus measures. Instead, they are part of a concerted long-term strategy by the edtech industry to actively reorganize public education as a market for its products, platforms and services. As Pearson’s 2018 corporate strategy document stated, the company aimed not only to shape the future of education, but to lead and shape the market too.

Edtech companies, exemplified by Pearson, wish to make ‘remote learning’ the new normal mode of education. ‘Remote’ may not even mean students being geographically distant from their schools or campuses, but simply that edtech platforms act as intermediaries between educational institutions and their students, acting at a distance to shape the possibilities of teaching and learning. The global pandemic has appeared as an opportunity to rapidly grow market share, generate competitive advantage, and boost stock market valuation, with a view to long-term consolidation of market advantage and to reshaping public education at the same time.

Pandemic experiments
The global coronavirus pandemic is also an opportunity to produce very large quantities of student data, as students are forced online into data-intensive digital learning environments at unprecedented scale. For researchers and organizations invested in data scientific forms of analysis in education, as Jonathan Zimmerman put it in The Chronicle of Higher Education, coronavirus is an opportunity for a ‘great online learning experiment’.

Coronavirus … has created a set of unprecedented natural experiments. For the first time, entire student bodies have been compelled to take all of their classes online. So we can examine how they perform in these courses compared to the face-to-face kind, without worrying about the bias of self-selection. It might be hard to get good data if the online instruction only lasts a few weeks. But at institutions that have moved to online-only for the rest of the semester, we should be able to measure how much students learn in that medium compared to the face-to-face instruction they received earlier.

The working assumption here is that coronavirus is a natural experimental opportunity for education data scientists–both academic education researchers and analysts working in edtech companies and other edubusinesses–to demonstrate the effectiveness of online education over face-to-face teaching. Zimmerman even argued that it should be considered a kind of moral responsibility for universities to use the chance to figure out if online education outperforms in-person teaching, even though, he said, ‘if students showed more gains from online instruction, professors who teach face-to-face classes–like I do–might find their own jobs in peril’.

The Chronicle article is fraught with methodological and ethical problems. Clearly any analysis of the data of populations of online students affected by pandemic conditions could not be meaningfully compared with other data from face-to-face teaching under other conditions. Treating a pandemic as an experiment in online learning reduces human suffering, fear and uncertainty to mere ‘noise’ to be controlled in the laboratory, as if there is a statistical method for controlling for such exceptional contextual variables. Yet the data scientific dream of measuring learning at scale in order to develop a precise understanding of the benefits of remote instruction is clearly animating part of the effort by edtech businesses and associated researchers to utilize the coronavirus emergency as a mass data-gathering and analysis opportunity. And this might ultimately, as Zimmerman suggested, lead to a consolidation of online instruction and to further worker precarity for educators.

Eventually, emergency edtech won’t be needed to help educators and students through the pandemic. But for the edtech industry, education has always been fabricated as a site of crisis and emergency anyway. An ‘education is broken, tech can fix it’ narrative can be traced back decades. The current pandemic is being used as an experimental opportunity for edtech to demonstrate its benefits not just in an emergency, but as a normal mode of education into the future.

A full paper on Pearson’s market-making activities in higher education is published in Critical Studies in Education, and is also available at ResearchGate.

Re-engineering education

The Chan Zuckerberg Initiative, for-profit philanthropy and experimental precision education

Ben Williamson


The Chan Zuckerberg Initiative is developing experimental new approaches to measurement and intervention in education. Photo by chuttersnap on Unsplash

Many new parents announce the birth of a child on Facebook. Mark Zuckerberg took it a step further, announcing in a December 2015 ‘letter to our daughter’ that he and Priscilla Chan would give 99% of their Facebook shares during their lifetimes (estimated then at around US$45 billion) to causes including education, science and social justice. The vehicle would be the Chan Zuckerberg Initiative (CZI), a ‘new kind of philanthropy’ focused on ‘personalized learning, curing disease, connecting people and building strong communities.’

Four years on, as Chan and Zuckerberg’s child approaches school age, what kind of influence has CZI had on education? ‘Our experience with personalized learning, internet access, and community education and health has shaped our philosophy,’ they wrote in their letter to their newborn daughter. ‘Your generation,’ they continued, will ‘have technology that understands how you learn best and where you need to focus. You’ll advance quickly in subjects that interest you most, and get as much help as you need in your most challenging areas. You’ll explore topics that aren’t even offered in schools today. Your teachers will also have better tools and data to help you achieve your goals. Even better, students around the world will be able to use personalized learning tools over the internet, even if they don’t live near good schools.’

Personalized learning supported by technology tools and data is clearly CZI’s priority, not just within the USA but around the globe. This is a long-term project, as Zuckerberg’s letter stated. But by looking closely at its existing portfolio of grants and investments, and at its peculiar organizational structure and status, it is possible to gain some insights into how it is trying to instantiate its vision–and to speculate on its effects.

Grants and investments
In its early days, CZI faced criticism for its lack of transparency. By 2018 it had already spent $300 million on education-related projects, but it took digging by journalists to reveal what the money was supporting. Since then it has maintained an open grants and investments database. Its grants database–retroactive to January 2018–lists over 400 awards across its three key mission areas, alongside a ventures list of 15 major investments.

The investments include Byju’s (the highly successful learning app based in India), AltSchool (a Silicon Valley startup school chain that folded in 2019 to become the edtech software company Altitude), Panorama (a platform for schools to gather social-emotional learning data), Brightwheel (an early years management platform), and Handshake (a platform to match college graduates to careers). CZI’s ambitions in education therefore stretch from the early years through higher education and on into graduate destinations, as well as beyond the US borders into new models of online learning at huge global scale. In just a few years, CZI has become a major player in an expanding ‘global education industry‘.

Besides its investments, some of CZI’s education grants are enormous. Most notable is the $23 million awarded to Summit Schools since 2018 alone–though this does not include any previous grants to the charter school chain, or the in-kind donation of a 50-person engineering team from Facebook to build its personalized learning platform. CZI also granted $2 million to TLP (Teachers, Learning & Partners in Education), the partnership established to roll out the Summit Learning Platform nationally. The deployment of engineers to Summit is typical of CZI’s technology-based approach as a self-proclaimed ‘new kind of philanthropy focused on engineering change at scale.’

Among its 88 listed education grants, CZI has also made awards to a range of charter school chains, as well as to initiatives broadly focused on personalized education, social-emotional learning, and school innovation. Technological solutions, data and evidence feature significantly across these and other programs in its Education Initiative:

We build tools that help teachers tailor learning experiences to the needs of every student, with an emphasis on using evidence-based practices from the fields of learning science and human development … We believe in a data-driven approach … [and] that students need to learn more in school than what is measured on standardized tests. Our tools help students set and track progress towards short- and long-term goals, make plans, demonstrate mastery when ready, and reflect on their learning.

CZI is in some ways a very ‘hands-on’ organization, giving gifts with a view to adding engineering solutions to the problems that its grantees are seeking to address. Even prior to CZI, Zuckerberg had joined up with the Gates Foundation to fund the EducationSuperHighway program to connect all US schools to broadband internet. Zuckerberg and Gates have helped lay the infrastructural cable network to enable digital learning in US schools, and to create the conditions necessary for personalized learning across the system.

For-profit philanthropy
Although it has a major record of grant-giving, CZI is not a typical philanthropic foundation. Instead, it was established as a Limited Liability Company (LLC). LLCs are legally-defined entities which, in contrast with conventional non-profit, tax-exempt private foundations, are free to engage in grantmaking, investment, and political action with few restrictions. The LLC structure also provides enhanced personal control for its founders.

The legal scholar Dana Brakman Reiser suggests that LLCs such as CZI represent a new form of ‘disruptive philanthropy’ that is distinct from traditional philanthropies (Rockefeller, Carnegie) or even recent ‘venture philanthropies’ (Gates, Broad). Instead LLC philanthropy models–‘philanthropy 3.0’–have become increasingly common among Silicon Valley entrepreneurs. eBay co-founder Pierre Omidyar’s Omidyar Network has LLC status, as does Laurene Powell Jobs’ Emerson Collective and ex-Google chair Eric Schmidt’s Schmidt Futures. These ‘disruptive philanthropic vehicles,’ Reiser argues, ‘can both unleash tremendous capital for solving society’s most challenging problems and magnify the influence of its most powerful elites.’ CZI is not so much a philanthropic organization, but a ‘philanthrocapitalist’ one with huge financial, political, and technical power.

In practice, being an LLC means CZI can act as a charitable grant-giving organization, while also making investments in for-profit companies, engaging in ‘impact investing’–where financial returns can be made from programs with measurably beneficial social results–and carrying out significant political work too. CZI’s leadership gives it significant political clout. Zuckerberg himself is connected to a range of political, legal, financial and media networks. Rachel Moran compellingly describes him as a ‘network switcher.’ CZI also made senior hires from Uber, Microsoft, Amazon, Google, Virgin America, Rockefeller University, the Gates Foundation, the US Department of Education, the White House, and various Silicon Valley law firms. This gives CZI the power, through its advocacy program, to ‘support policy change strategies,’ as well as to ‘shape policies’ and engage in ‘changing laws.’

To be fair, many of CZI’s advocacy efforts are targeted at causes such as addressing systemic inequality and injustice. The problem is that ‘philanthrocapitalism’ casts these as issues that can only be solved through programs that also legitimate and deliver personal profit. As Linsey McGoey has argued, philanthrocapitalism ‘resonates with long-held economic assumptions of the moral advantages of capitalism.’ However, ‘what is most novel about the new philanthrocapitalism is the openness of personally profiting from charitable initiatives, an openness that deliberately collapses the distinction between public and private interests in order to justify increasingly concentrated levels of private gain.’

Philanthrocapitalism, or ‘venture philanthropy’, has been strongly associated with foundations such as the Gates Foundation. But foundations such as Gates do continue to operate as non-profits. As an LLC, CZI is subtly different, and engages much more overtly in for-profit activities where social benefit and financial return are treated as reciprocal outcomes. Ken Saltman, for example, has raised a ‘serious question as to whether CZI functions philanthropically at all or whether its activities are only profit seeking and “philanthropy” is a label intended to project an image of “corporate social responsibility.”’

Experimental precision science
Although personalized learning is CZI’s most overt focus area in its Education Initiative, perhaps more significant is its dedication to ‘learning science.’ It is through its learning science program, grants and investments that CZI’s vision for the future of education becomes most clear.

CZI’s learning science page states that ‘The best learning experiences are grounded in the science of how people learn and develop. We enable educators, researchers, education technology developers, and communities to use the latest learning science,’ and it emphasizes ‘learning measurement’–the ‘development, collection, evaluation, and use of high-quality evidence’–in order to ‘apply knowledge of how people learn’ and ‘develop solutions to challenges educators face in classrooms.’

To achieve this goal, it announced a $5 million fund for ‘teams of schools, support organizations, and researchers who want to apply the science of learning and human development to improve existing school-based practices.’ A further partnership with the Gates Foundation began to explore the science of ‘executive function’ and the neural substrates of learning, leading to a ‘consensus’ report and a blueprint for further research and development. That in turn catalysed a joint Gates/CZI $50 million fund for the 5-year EF+Math Program, designed to support basic and applied research in executive function, led by educational neuroscientists at the University of California San Francisco.

The program lead of EF+Math is also the Director of Education at Neuroscape at UCSF, a brain imaging centre which together with BrainLENS (Laboratory for Educational Neuroscience, also at UCSF) was awarded a further $2.9 million by CZI in 2018 to develop ‘a free mobile tool to measure child and adult progress in executive functioning skills such as working memory, attention, problem solving, and goal setting’. Together, Neuroscape and BrainLENS are developing new computational approaches to brain and genetic analysis applied to education. Neuroscape and BrainLENS are also partners of the University of California’s multi-institutional Precision Learning Center, which focuses on the use of neuroscience, psychology and biomedical data to improve learning experiences and outcomes.

Given CZI’s Science Initiative emphasis on ‘precision medicine’–the use of big data and predictive algorithms for healthcare–its learning science efforts suggest it is positioning itself as a centre of expertise and authority in ‘precision education.’ CZI’s director of learning science, Bror Saxberg, has made the link between precision medicine and precision education explicit in his advocacy for ‘learning engineering.’ Saxberg, a high-profile learning scientist within the education technology industry, describes learning engineering as a multidisciplinary blend of the learning sciences, instructional design and learning analytics:

getting the most from learning analytics has to be an interdisciplinary effort: computer science, linguistics, education, measurement science, cognitive science, motivational and social psychology, machine learning, cognitive neuroscience among others. These different domains will need to be combined to build out an effective evidence-grounded ‘learning engineering’ version of learning analytics.

These learning engineering approaches, including data gathering and modelling, says Saxberg, ‘ultimately can allow for personalization to interests, capabilities, identity, social-emotional state, and motivation states for individual learners’, by using evidence ‘at multiple levels, from clickstreams, motion position data, speech streams, gaze data, biometric and brain sensing, to more abstracted feature sets from all this evidence.’ The use of this evidence across ‘multiple dimensions’, he adds, will allow examination of ‘longitudinal and multidimensional trajectories’ and clusters and patterns of ‘learner change.’ Such analyses, finally, will help to identify ‘new opportunities for targeted intervention’ and ‘precise action’ that are analogous to data-scientific ‘precision medicine.’
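To give a concrete flavour of what clustering ‘longitudinal and multidimensional trajectories’ of learner change might involve, here is a minimal sketch using simulated data. It is purely illustrative–a toy stand-in, not CZI’s or Saxberg’s actual tooling–and the scores, cluster count, and variable names are all invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy longitudinal data: one row per learner, columns are weekly
# mastery scores. A real learning engineering pipeline would draw on
# clickstreams, gaze data, biometrics and more; these numbers are
# simulated purely for illustration.
rng = np.random.default_rng(0)
steady = np.cumsum(rng.normal(1.0, 0.2, (40, 10)), axis=1)   # steady gains
stalled = np.cumsum(rng.normal(0.1, 0.2, (40, 10)), axis=1)  # flat progress
trajectories = np.vstack([steady, stalled])

# Cluster learners by the shape of their trajectories; a 'targeted
# intervention' would then be aimed at whichever cluster is stalling.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trajectories)
for k in range(2):
    mean_final = trajectories[labels == k, -1].mean()
    print(f"cluster {k}: mean final score {mean_final:.1f}")
```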

As such, through Saxberg and its learning science grants, CZI is promoting learning engineering as an educational parallel to precision medicine–the experimental use of multiple sources of biomedical, neuroscientific, cognitive and psychological data for personalized diagnosis and intervention.

Re-engineering education
The Chan Zuckerberg Initiative may not yet have the reach and influence of the Gates Foundation, but it is fast becoming one of the most significant funders of educational technology development and scientific research into learning and child development. This positions it to become a powerful source of authority in the shaping of education in multiple ways.

Through support for Summit and other charter school operations it is continuing the longstanding project of philanthropic advocacy for alternatives to public education, albeit now in the for-profit mode of disruptive philanthropy. Its personalized learning projects are extending adaptive, data-driven software beyond the charter chains where they have been developed and tested and out into schools and colleges at very large scale. And by funding computationally-powered research and development in learning science and learning engineering, CZI is advancing experimental new ‘precision’ understandings of the human brain and cognition into applied teaching practices. It is in other words championing a new model of personalized, precision education that brings together the Silicon Valley culture of disruption, commercial technology, personalized learning advocacy, and new scientific practices modeled on those of precision medicine.

By creating CZI as an LLC, Chan and Zuckerberg also maintain powerful control over their spending and the direction of the organization. This gives them unprecedented power to shape the direction of research and development in education, by selecting and investing in programs that fit their personal vision. These efforts amount to an attempt to experiment on and re-engineer education into the form that Mark Zuckerberg and his networks find desirable, and that they believe can and ought to be pursued and attained. CZI is re-engineering education at scale.


Platform teachers

Ben Williamson


Amazon has launched a new service allowing teachers to sell and buy education resources through its platform. Image by Guillaume Bolduc on Unsplash: https://unsplash.com/photos/uBe2mknURG4

The massive multinational platform company Amazon has announced a new service allowing teachers to sell lesson plans and classroom resources to other teachers. The service, Amazon Ignite, is moving into a space where Teachers Pay Teachers and TES Teaching Resources have already established markets for the selling and buying of teaching materials. These services have reimagined the teacher as an online content producer, and Amazon has previously dabbled in this area with its Amazon Inspire ‘open educational resources’ service for free resource-sharing. But Amazon Ignite much more fully captures the teaching profession as a commercial opportunity.

The operating model of Amazon Ignite is very simple. Teachers can produce content, such as lesson plans, worksheets, study guides, games, and classroom resources, and upload them as Word, PowerPoint or PDF files using the dedicated Amazon Ignite platform. Amazon then checks the resources to ensure they don’t infringe any copyrights before they appear in the marketplace. In these ways, Amazon is now in the business of ‘shipping’ educational content across the education sector in ways that mirror its wider online commerce model.

Amazon claims the Ignite platform offers a way for teachers to ‘earn money for work you’re already doing’ by paying users 70% royalties on the resources they sell. The company itself will take 30% of the sales, plus a transaction fee of 30 cents for items under $2.99, though it also has discretion to change the price of resources including by discounting the cost to customers. This makes Amazon Ignite potentially lucrative for Amazon as well as for successful vendors on the platform.
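To make the arithmetic concrete, here is a minimal sketch of how a seller’s payout might work under the terms described above. It is illustrative only: the function is hypothetical, and it assumes (which the terms quoted here don’t spell out) that the transaction fee is deducted from the seller’s royalty.

```python
def estimated_seller_payout(price: float) -> float:
    """Estimate a seller's earnings on one Amazon Ignite sale.

    Based on the publicly stated terms: a 70% royalty, with a $0.30
    transaction fee on items priced under $2.99. Assumption (not
    confirmed in the terms described above): the fee comes out of
    the seller's royalty rather than being charged to the buyer.
    """
    ROYALTY_RATE = 0.70
    TRANSACTION_FEE = 0.30
    FEE_THRESHOLD = 2.99

    payout = price * ROYALTY_RATE
    if price < FEE_THRESHOLD:
        payout -= TRANSACTION_FEE
    return round(payout, 2)

# A $1.99 worksheet nets the seller roughly $1.09; a $4.99 study
# guide, which attracts no transaction fee, nets roughly $3.49.
print(estimated_seller_payout(1.99))  # 1.09
print(estimated_seller_payout(4.99))  # 3.49
```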

Although Ignite is available only in the US in the first instance, the platform exemplifies the current expansion of major multinational tech companies and their platforms into the education sector. The extension of the commercial technology industry into education at all levels and across the globe is set to influence the role of the teacher and the practices of the classroom considerably over coming years.

Teacher brand ambassadors
The edtech industry, and the wider technology sector, are strongly involved in defining the characteristics and qualities of a ‘good teacher’ for the 2020s. While commercial businesses have long sought access to schools, the National Education Policy Center (NEPC) in the US recently launched a report on teachers as ‘brand ambassadors’:

Corporate firms, particularly those with education technology products, have contracted with teachers to become so-called brand ambassadors. A brand ambassador is an individual who receives some form of compensation or perk in exchange for the endorsement of a product. Unlike celebrity endorsers, teachers can be thought of as ‘micro-influencers’ who give firms access to their network of social influence.

Teacher brand ambassadors, as well as ‘product mentors’, ‘champions’ and ‘evangelists’, have become significant edtech marketing figures. They often use social media, including Twitter, Facebook, and Instagram, to promote and model the use of specific educational technologies. They might even be involved in the development and testing of new software features and upgrades, as well as taking expenses-paid trips to conferences, summits and trade events where they are expected to attend as representatives of the brand.

The NEPC reported that teacher brand ambassador programs raise significant ethical issues and conflicts of interest, while delivering return on investment to producers when their product is introduced into classrooms and students are exposed to their brand.

As the big tech firms have closed in on education, they have begun to merge the marketing role of the brand ambassador into a professional development role–such as Google’s Certified Educator program. Amazon’s AWS Educate program enables whole institutions to become AWS Educate members, in effect bringing them into its branded environment. The ‘perks’ include giving educators access to AWS technology, open source content for their courses, training resources, and a community of cloud evangelists, while also providing students with credits for hands-on experience with AWS technology, training, and content.

Platform gig teachers
Amazon Ignite, however, represents the next-stage instantiation of the brand ambassador and the teacher as micro-influencer. On Amazon Ignite, teachers are not contracted as platform ambassadors, but invited to become self-branded sellers in a competitive marketplace, setting up shop as micro-edubusinesses within Amazon’s global platform business. Without becoming official brand ambassadors, teachers become gig workers engaging in market exchanges mediated by Amazon’s platform. This in turn requires them to become micro-influencers of their own brands.

So who are the teachers who participate in the Amazon Ignite educational gig economy? Amazon Ignite is ‘invitation-only’, and as such Amazon makes highly consequential decisions over the kinds of content and resources that can be purchased and used. This might be understood as high-tech ‘hidden curriculum’ work, with Amazon employees working behind the scenes to make selections about what counts as worthwhile resources and knowledge to make available to the market.


The list of ‘featured educators’ on Amazon Digital Education Resources. Image from:  https://www.amazon.com/b/ref=dervurl?node=17987895011

It is not really clear that Amazon Ignite will even empower existing classroom teachers to become content producers and sellers. A brief review of the current ‘featured educators’ on Amazon’s Digital Education Resources page gives an indication of the kind of invited participants who might thrive on Ignite. Most of these appear as established micro-edubusinesses with well-developed brands and product ranges to sell. Amazon offers extensive advice to potential vendors about how to package and present their resources to customers.

The featured educator Blue Brain Teacher, for example, is the branded identity of a former private education curriculum adviser and Montessori-certified educator, who focuses strongly on ‘brain-based’ approaches including ‘Right-Brain training’. An established vendor on Teachers Pay Teachers, the Blue Brain Teacher also has a presence on Facebook, Instagram and Pinterest, is a Google Certified Educator, and officially certified to offer training on Adobe products.

Another featured educator, Brainwaves Instruction, also has a glossy website and existing web store of printable resources, a blog featuring thoughts and lesson ideas on mindfulness, growth mindset, and the adolescent brain, and all the social media accounts to amplify the brand.

These and many of the other featured educators on the Amazon Digital Education Resources store give some indication of how the Amazon Ignite market will appear. Many are existing TpT users, active and prolific on social media, have their own well-designed and maintained websites, write blogs, and are highly attentive to their brand identity. Some, such as Education with an Apron, are not limited to the selling of educational resources, but have their own teacher-themed fashion lines such as T-shirts and tote bags (‘I’m the Beyonce of the classroom’). These are teacher gig workers in an increasingly platformized education sector.

Amazon Ignite, at least at this early stage, also seems to be overwhelmingly feminized. Most of its featured educators present themselves through the aesthetics of lifestyle media and family values, as examples such as The Classroom Nook indicate. It suggests the reproduction of a specifically gendered construction of the teacher.

This is balanced, in many cases, with sophisticated social media-style iconography, and significant investment in various technology industry programs. Erintegration, for example, shares resources, lesson plans, reviews, and tips for using iPads, Google Apps, and other devices ‘to engage digital learners in all curriculum areas’, and is already involved in other Amazon programs:

Erintegration is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com.

Erintegration is sometimes provided free services, goods, affiliate links and/or compensations in exchange for an honest review.  All thoughts and options are my own and are not influenced by the company or its affiliates.

Not all the featured educators are single individuals either. Clark Creative Education is a team of educators, authors, designers and editors, whose founder is a ‘top-milestone author on Teachers Pay Teachers’. Amazon Ignite is, then, not simply empowering practising teachers to ‘earn money for work you’re already doing’ but is actively incentivizing the expansion of a market of educational startup content producers.

Children can even be content providers. According to the Terms and Conditions, ‘A parent or guardian of a minor can open a Program account and submit the minor’s Resource-Related Content as the Content Provider’. Given the role of young celebrity micro-influencers on social media, it is possible to speculate here that school children could also establish positions as ‘edu-preneurial’ content producers.

Platform classrooms
All in all, Amazon Ignite is encouraging teachers to see themselves as empowered and branded-up personal edubusinesses operating inside Amazon’s commerce platform. It is easy to see the attraction in the context of underfunded schools and low teacher pay. But it also brings teachers into the precarious conditions of the gig economy. These educators are gig workers and small-scale edu-startup businesses who will need to compete to turn a profit. Rather than making select teachers into brand ambassadors for its platform, Amazon is bringing teacher-producers and education startups on to its platform as content producers doing the labour of making, uploading and marketing resources for royalty payments. It expands platform capitalism to the production, circulation and provision of classroom resources, and positions Amazon as an intermediary between the producers and consumers in a new educational market.

By making selections about which educators or businesses can contribute to Ignite, Amazon is also making highly significant and opaque decisions about the kind of educational content made available to the teacher market. The criteria for inclusion on Amazon Ignite are unclear. What kind of educational standards, values, or assumptions underpin these choices? Curriculum scholars have long talked about the ways aspects of culture and knowledge are selected for inclusion in school syllabi, textbooks and resources. Amazon is now performing this function at a distance through its selection of educational content creators and market vendors.

Over time, Amazon Ignite is likely to produce hierarchies of vendors, since Amazon claims the Ignite resources will show up in search results. This raises the prospect of algorithmic recommendations based on a combination of vendor popularity and users’ existing purchases—a ‘recommended for you’ list tailored to teachers’ search and purchase histories. The Terms and Conditions specify that Amazon ‘will have sole discretion in determining all marketing and promotions related to the sale of your Resources through the Program and may, without limitation, market and promote your Resources by permitting prospective customers to see excerpts of your Resources in response to search queries’.
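As a hypothetical illustration of the kind of ranking this raises the prospect of, a ‘recommended for you’ score might blend vendor popularity with a teacher’s purchase history. Nothing in the sketch below reflects Amazon’s actual systems; the weighting, field names and function are invented.

```python
from collections import Counter

def recommend_for_teacher(resources, purchase_history, top_n=5):
    """Rank marketplace resources for one teacher (hypothetical).

    Score = normalized vendor popularity plus a bonus for topic tags
    that overlap with the teacher's previous purchases. The 0.5 weight
    is arbitrary; Amazon's real ranking logic is not public.
    """
    topic_counts = Counter(
        tag for item in purchase_history for tag in item["tags"]
    )
    max_sales = max(r["sales"] for r in resources) or 1

    def score(resource):
        popularity = resource["sales"] / max_sales               # 0..1
        affinity = sum(topic_counts[tag] for tag in resource["tags"])
        return popularity + 0.5 * affinity                       # arbitrary blend

    return sorted(resources, key=score, reverse=True)[:top_n]

# Example: a teacher who has bought fractions resources sees
# fraction-tagged items boosted above merely popular ones.
resources = [
    {"title": "Grammar pack", "tags": ["grammar"], "sales": 900},
    {"title": "Fractions bundle", "tags": ["math", "fractions"], "sales": 600},
]
history = [{"title": "Fraction games", "tags": ["fractions"]}]
print([r["title"] for r in recommend_for_teacher(resources, history)])
# -> ['Fractions bundle', 'Grammar pack']
```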

Moreover, Amazon claims ‘sole ownership and control of all data obtained from customers and prospective customers in connection with the Program’, thereby gaining the advantage of using buyer and seller data to potentially further maximize its platform profitability.

Amazon Ignite anticipates an increasingly close alignment of classrooms and platforms in coming years. ‘As with social media platforms in the 2000s, educational platform providers will be working to expand the scope of their “walled gardens” to encompass as many user practices as possible’, argue the authors of a recent article outlining likely trends in education technology in the 2020s. Along with Amazon’s ongoing attempts to embed its Alexa voice assistant in schools and universities, Amazon Ignite has now further expanded the walls of Amazon’s huge commerce platform to enclose the education sector. Amazon is inciting educators to become platform teachers whose labour in platform classrooms is a source of profit under platform capitalism.


Psychodata

Ben Williamson


‘Social emotional learning’ centres on the capture of psychological data from children. Photo by Annie Spratt on Unsplash

‘Social and emotional learning’ (SEL) has become one of the most active topics in education policy and practice over the last few years. At an international scale, the OECD is about to run its new Study on Social and Emotional Skills for the first time this month, in its bid to produce policy-relevant comparative data on different nations’ ‘non-cognitive’ capacity. Nationally and regionally, government education departments have begun to endorse SEL as a key priority. At classroom level, teachers are using SEL-based edtech devices like ClassDojo, Panorama and HeroK12 to observe students’ social-emotional learning, twinned with tasks such as ‘emotion diaries’, ‘managing your emotions’ posters, and self-assessment scales for children to rate their emotions.

How should we understand this SEL explosion? In a new research article entitled ‘Psychodata’, just published in the Journal of Education Policy, I argue that SEL is a good case of a ‘policy infrastructure’ that is currently in-the-making, and that its main objective is the construction of ‘data infrastructure’ for the measurement of students’ social-emotional skills. The article presents my attempt to ‘disassemble’ the statistical, psychological and economic infrastructure of social-emotional learning into some of its main constituent parts.

Policy & data infrastructures
By policy infrastructure I mean all the various organizations, forms of expert knowledge, concepts, techniques and technologies that have to be brought together to make any policy area operational. Psychology, economics and statistics–which include people, knowledge, devices, practices and techniques–are key aspects of SEL policy infrastructure. And by data infrastructure I mean the technologies, modes of quantification, actors and desires that have to be assembled together for large-scale measurement–the system of data collection, analysis and presentation. In fact, I argue that the construction of data infrastructure is making social-emotional learning possible to conceive and enact as a key policy area. A policy infrastructure, in this sense, to a large extent is its data system.

Social-emotional learning sounds like a progressive, child-centred agenda, but behind the scenes it’s primarily concerned with new forms of child measurement. As the OECD noted in a 2015 report proposing its study on social-emotional skills, ‘While everyone acknowledges the importance of social and emotional skills, there is insufficient awareness of “what works” to enhance these skills and efforts to measure and foster them.’ Many other SEL advocates talk of the importance of building a ‘psychometric evidence base’ to truly demonstrate the policy-relevance of social-emotional learning, and to consolidate SEL as a coherent ‘policy field’. As a result, the construction of data infrastructure has become the central focus of many SEL organizations, from transnational governance organizations like the OECD to edtech companies, philanthropies, think tanks, campaign coalitions, edu-businesses, and many others. The enumeration of student emotions as evidence for policymaking is the central agenda of SEL advocates.

This is not to suggest that we necessarily see a coherent data infrastructure for the quantification of SEL. That is perhaps the ultimate objective, but in practice SEL measurement is being done in myriad ways, involving multiple different conceptualizations of SEL, different political positions, and different sectoral interests. The OECD’s study is clearly an attempt to create a global measurement standard for SEL—but its use of personality theory and the Big Five personality testing method is not entirely consistent with SEL frameworks derived from the positive psychology and youth development literatures deployed by other SEL organizations and coalitions. The article is an attempt to identify continuities and relations across the diverse SEL field, as well as to highlight inconsistencies and incoherence.

Psycho-economic expertise
I make six main points in the paper. First, SEL needs to be understood as the product of a ‘psycho-economic’ fusion of psychological and economics expertise. Long-standing collaboration between the positive psychologist Angela (‘Grit’) Duckworth and the economist James Heckman in the measurement of social-emotional learning and related ‘non-cognitive’ qualities illustrates this interdisciplinary combination. These psycho-economic experts have attained remarkable transnational promiscuity as authorities on social-emotional learning and its measurement.

But this psycho-economic fusion also illustrates a wider political context where psychology and economics have become dominant forms of expertise in contemporary governance. This is not necessarily novel, but as big data have become available it has become increasingly possible to gather behavioural and other psychological data from populations, which may be embraced by authorities (governmental or otherwise) in economic forecasting and political management. Heckman, Duckworth and other SEL authorities embody a political economy in which human psychological qualities are translated into psychometric data as quantitative measures of potential economic value, and behavioural data has become a source for governmental ‘nudging’ and control.

Policy mobility
The second key point is about ‘policy mobility’ and the sets of moving relations among think tanks, philanthropies and campaigning coalitions which have been central to establishing SEL as an emerging policy field. Big players in the US include CASEL, the Aspen Institute and the Templeton Foundation. They, like the OECD, are forming relations with experts and packaging up SEL in glossy brochures, meta-analyses, evidence digests, and summaries of existing psychometric data, in order to attract policy commitment. They are, in other words, involved in the painstaking work of assembling diverse sources and resources into actionable policy-relevant knowledge.

Rather than a project of central governments, then, SEL is the product of networked governance involving organizations from across sectors and working from diverse perspectives and interests. Yet despite considerable heterogeneity, these organizations are slowly translating their different interests into shared objectives, forming coalitions, and producing ‘consensus’ statements that seek to stabilize social-emotional learning as a coherent area of policy development.

Money moves
Third, SEL is a site of considerable movement of money. There’s a lot of investment in SEL programs, SEL-based edtech products, and philanthropic funding of SEL organizations. For example, both the Gates Foundation and the Chan Zuckerberg Initiative have generously funded some of the key SEL organizations mentioned above. A statistical algorithm has been devised to calculate the economic value of social and emotional learning, and prediction of substantial return on investment has stimulated a very active impact investing sector. Government departments are also funding SEL through, for example, grants for schools.

As such, SEL is thoroughly entangled with financial mechanisms which show how education policy has become inseparable from market logics. Money is flowing into businesses from investors, into schools from governments, and into classroom practices through impact investment, all of which is making SEL appear practicable while also contributing to the production of ‘evidence’ about ‘what works’ for further policy influence. The beneficial social ‘return’ of SEL is also generating lucrative returns for investors, as financial investment has begun to prefigure official policy intervention.

Policy machinery
The fourth point is that a huge industry of SEL products, consultancy and technologies has emerged, which has allowed SEL practices to proliferate through schools. Edtech platforms, with reach into thousands of schools globally, may even be understood as new producers of policy-relevant knowledge, by generating large-scale SEL data in ‘real time’ and an extensive evidence base at the kind of scale and speed that bureaucratic international organizations or state departments of education cannot match. They act as practical relays of the commercial aims of SEL edtech providers into the spaces and practices of pedagogy at scales exceeding the national or local boundaries of education systems.

We might think of such edtech devices as policy machinery in their own right. SEL is building momentum through teacher resources and edtech markets, as well as through the work of consultants and in-service professional development providers. The policy infrastructure of SEL is, then, populated by people doing new kinds of policy work but also by nonhuman policy machines that are active in school practices and in the quantification of student affects.

Glocal policy
Fifth, while much SEL activity is working in mobile ways across national borders, its enactment is also contingent on local, regional and national priorities. In the UK, for example, the Department for Education has focused on ‘character education’, partly as a result of advocacy by the Templeton Foundation-funded Jubilee Centre. In California, ‘growth mindset’ measurement is being tied to school accountability mechanisms.

At the same time, however, how SEL is locally enacted is dependent upon the global markets of resources and technologies available—which allows a device such as ClassDojo to participate in classrooms globally, directly through the fingertips and observations of teachers. As such, SEL exemplifies the increasingly ‘glocal’ character of education policy, with flows of transnational influence on local practices and local priorities sometimes scaling back up to the global. Edtech SEL products emanating from Silicon Valley, for example, travel globally and bring concepts such as growth mindset–which originated at Stanford University–into schools thousands of miles distant from the culture of entrepreneurial self-improvement in the tech sector.

Global metrics
The sixth and final main point is about the OECD’s effort to create a standardized global metric for SEL. The OECD overtly brings together psychology and economics with the test positioned as a way of calculating the contribution of social-emotional skills to ‘human capital’. Directly informed by the economist James Heckman and by the personality theorist Oliver John, the OECD test uses the Big Five personality testing method and labour market calculations to connect up students’ socio-emotional qualities to quantitative socio-economic outcomes. In this way, the OECD test shows how students’ psychological qualities have been ‘economized’.

The test represents a significant shift in focus for the OECD. As the OECD’s Andreas Schleicher has argued, it is shifting its emphasis from ‘literacy and numeracy skills for employment, towards empowering all citizens with the cognitive, social and emotional capabilities and values to contribute to the success of tomorrow’s world’. It is also increasingly emphasizing the new ‘sciences of learning’ emerging from psychology, neuroscience and biomedical fields. As such, the OECD’s SSES (Study on Social and Emotional Skills) test exemplifies how education policy influencers are increasingly turning to the human sciences as sources of policy-relevant insights for education. In the case of SSES specifically, it involves the use of personality testing as a way of calculating economic competitiveness, and entails that subsequent policy interventions would focus on modifying student personality characteristics for economic advantage.

Psychoeconomic governance
Overall, what I’ve tried to show in the article is that SEL is a policy field in-the-making and that it remains inchoate and in some ways incoherent. We can understand it as a policy infrastructure that is being assembled from highly diverse elements, and that is centrally focused on the production of ‘psychodata’. In fact, the potential of a SEL policy infrastructure depends to a great extent on the creation of the data infrastructure required to produce policy-relevant knowledge. In other words, the generation of psycho-economic calculations is at the very core of current international policy interest in social-emotional learning, which is already relaying into classroom practices globally, governing teachers’ practices, and shaping the priorities of education systems to be focused on the enumeration of student emotions.

Psychodata: disassembling the psychological, economic, and statistical infrastructure of ‘social-emotional learning’ is published in the Journal of Education Policy. An accessible version is also available at ResearchGate.

EdTech Resistance

Ben Williamson

Prepared for EdTech KnowHow conference, Stavanger, Norway, 26 September 2019

One month ago I set a Twitter mob against a team of young researchers working on a new education technology prototype at a major university in the United States.

Here’s how I did it.

One of my current research interests is in how technical advances in brain science and human genetics are leading to new ways of understanding learning and education. So I’m gathering a lot of material together from companies and from research labs to scope out the state of the art in the science of neurotechnology and bioinformatics.


The MIT Media Lab project AttentivU

That’s how I came across this prototype MIT Media Lab project called AttentivU. It’s building a pair of wearable, ‘socially acceptable’ glasses with in-built electroencephalogram (EEG) detectors that can ‘sense’ from brainwave signals escaping the skull when a student is losing attention. The glasses then emit ‘bone-conducted sound’ to ‘nudge’ the student to pay attention.
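The basic feedback loop is simple to caricature in code. The sketch below is entirely hypothetical–a toy threshold check, not the MIT team’s method–and the attention scores and cut-off value are invented.

```python
ATTENTION_THRESHOLD = 0.4  # arbitrary cut-off, purely illustrative

def nudge_decisions(attention_estimates):
    """Return True wherever such a device might 'nudge' the wearer.

    attention_estimates: per-interval attention scores (0..1) that a
    real device would derive from EEG signals; here, just numbers.
    """
    return [score < ATTENTION_THRESHOLD for score in attention_estimates]

# A student whose estimated attention drifts downward triggers
# bone-conducted 'nudges' in the later intervals.
print(nudge_decisions([0.9, 0.8, 0.7, 0.35, 0.2]))
# -> [False, False, False, True, True]
```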

Having written at length about the potential effects of neurotechnology in education before, I thought these seemed potentially very concerning, and definitely worth posting to Twitter.


Tweet on AttentivU triggered accusations of ‘eugenics’ and ‘torture’

‘Check out these cool brain-reading glasses from MIT Media Lab’, I tweeted. In retrospect I should have put scare quotes around ‘cool’, even though I thought they self-evidently were not–I mean, look at them!

By that evening my Twitter notifications were buzzing constantly with outrage. These glasses were not ‘cool’, respondents insisted, but a ‘torture device’ straight out of Stanley Kubrick’s film of A Clockwork Orange–especially for neurodiverse populations and young people labelled with attention deficit and hyperactivity disorder.

By the next morning, I was being called a ‘eugenicist’ and ‘human garbage’. Some people thought it was my project; most thought I was amplifying it. Doubtless the sense of outrage was pumped high because of the Media Lab’s association with Jeffrey Epstein.

Others recognized it was in fact the project of a team of young postdoctoral researchers. Two days after I posted the original tweet I started seeing a steady stream of tweets from them clarifying its aims and scope. Twitter outrage had found them and demanded they shut down the project.


The ‘techlash’ is reflected in critical books on the social and political consequences of technology

The ‘techlash’
Now I still don’t like these brain goggles very much—the criticisms on Twitter reflected my own critical views about targeting students for automated ‘nudges’ to the skull based on simplified brainwave readings. I don’t much like the way Twitter turned this into a ‘torture device’ either–I think we need to read these innovations more closely and develop more careful critiques.

But it has been educational for me to be on the harsh end of what I see as a recent and emerging trend—edtech resistance and pushback. Twitter outrage is its most extreme expression–but there are also good reasons to pay attention to edtech pushback.

Edtech pushback is our sector’s symptom of a much wider public backlash against ‘big tech’—or a ‘techlash’ as some are calling it.

By now we recognize how exploitation of user data for targeted advertising, online misinformation, social media bots, ‘deepfakes’ and so on, have got us to a situation where, some argue, democracy has been hacked and modern capitalism has come to depend on surveillance and behaviour control.

The techlash is a response to these data controversies from the public, the media, the charity sector, and even some government ministers and policymakers. In some cases it is even leading to large commercial fines, government committee summonses, and calls for much greater tech regulation.


News media has begun to report critically on edtech

Edtech resistance, or perhaps an ‘edtechlash’, is also gathering strength. Anyone developing, researching, or teaching with educational technologies should be paying attention to it–not least because news journalists are increasingly reporting on controversial edtech-related stories.

There are some common resistance themes emerging—such as edtech privacy, security, and data protection; concerns over artificial intelligence in schools; and the role of multinational commercial companies.

In this talk I want to raise some specific edtech resistance examples and take from these a few key lessons. As a university researcher I’m just trying to document the steady build-up of edtech pushback. For those in the edtech industry this resistance should be informing your thinking as you look to the next decade of product development, and for educators or decision-makers, these tensions should be in mind when thinking about the kinds of education systems and practices you want to develop for the future.

EdTech activists
First up, I think anyone involved in making or using edtech needs to be paying close attention to a growing number of ‘anti-edtech activists’—journalists, educators, parents, or simply concerned members of the public who feel moved to challenge the current direction of edtech development.

These activists are doing their own forensic research into edtech, its links to commercial interests, and the critical issues it raises regarding privacy, private sector influence over public education, and the challenges that are emerging for educators and students. The work of Audrey Watters at Hack Education is exemplary on these points.


Audrey Watters’ Hack Education site is a popular source of critical edtech commentary

These anti-edtech activists are actively disseminating their arguments via blogging and social media, and gaining public attention. Charitable groups focused on children’s digital rights are moving to a more activist mode regarding edtech too. DefendDigitalMe and the 5Rights group in the UK are already exploring the legal, ethical and regulatory challenges of technologies that collect and process student data.

The lesson we can take here is that activists are increasingly expressing outrage over private exploitation of public education and students’ personal data. Look what happened when data privacy activists got organized against the Gates Foundation’s $100 million inBloom platform for educational data-sharing, learning apps and curricula in 2013–it collapsed within a year of launch amid growing public alarm over personal data exploitation and misuse. Monica Bulger and colleagues commented,

The beginnings of a national awareness of the volume of personal data generated by everyday use of credit cards, digital devices, and the internet were coupled with emerging fears and uncertainty. The inBloom initiative also contended with a history of school data used as punitive measures of education reform rather than constructive resources for teachers and students. InBloom therefore served as an unfortunate test case for emerging concerns about data privacy coupled with entrenched suspicion of education data and reform.

Diversity challenges
Then there’s the FemEdTech movement, mostly consisting of academics, edtech software developers, and STEM education ambassadors who, inspired by feminist theory and activism, are pushing for greater representation and involvement of women and other excluded and disadvantaged groups in both the development of and critical scholarship on educational technologies.

femedtech_white-1024x341

The FemEdTech network challenges the lack of diversity in the edtech sector

The FemEdTech network is:

alive to the specific ways that technology and education are gendered, and to how injustices and inequalities play out in these spaces (which are also industries, corporations, and institutions). We also want to celebrate and extend the opportunities offered by education in/and/with technology – to women, and to all people who might otherwise be disadvantaged or excluded.

The lesson I take from FemEdTech is that industry needs to act on the lack of diversity in the edtech sector, and educators need to be more aware of the potentially ‘gendered’ and ‘racialized’ nature of edtech software. We already know that education serves to reproduce inequalities and disadvantages of many kinds–the risk is that edtech worsens them. It might be claimed, for example, that the model of ‘personalized learning’ favoured by the edtech sector reflects the mythology of the self-taught white male programmer. The introduction of computer science and programming in the National Curriculum in England has failed to appeal to girls or children from poorer backgrounds, with the result that England now has fewer girls than ever studying a computer-based subject–not a great way to build up diversity in STEM areas or in the technology workforce.

Student protests
Probably the most publicized act of edtech resistance in the last year or so was the series of student walkouts and parent protests at the Mark Zuckerberg-funded Summit Schools charter chain in the US. Personalized learning through adaptive technology is at the core of the Summit approach, using a platform built with engineering assistance from Facebook.

As students from New York wrote in a public letter to Zuckerberg, they were deeply concerned about exploitation of their personal data, and the possibility of it being shared with third parties, but also rejected the model of computer-based, individualized learning which, they claimed, was boring, easy to cheat, failed to prepare them for assessments, and eliminated the ‘human interaction, teacher support, and discussion and debate with our peers that we need in order to improve our critical thinking’.

Summit news coverage

Student and parent protests about Summit Schools generated newspaper headlines

There were controversies too about the curriculum content in the Summit Personalized Learning Platform—students in some cases were being pointed to the UK tabloid the Daily Mail, which reportedly ‘showed racy ads with bikini-clad women’. Reports also surfaced of Summit curriculum developers working at such speed to create content for the platform that they barely had time to check the adequacy of the sources.

Our lesson from this is about students’ distrust of engineering solutions to schooling. Personalized learning appears as an ‘efficiency’ model of education, using opaque technologies to streamline students’ progress through school while shaving off the interactions and space for thinking that students need to engage meaningfully with knowledge and develop lasting understanding. Zuckerberg is now providing the funding to enable the Summit platform to roll out across US schools, through the new non-profit Teachers, Learning & Partners in Education. For educators this raises important questions about whether we want technology-based models like this at the centre of our curricula and pedagogies–because this is what’s coming, and it’s being pushed hard by a tech sector with huge financial resources to help it succeed.

Investor scepticism
Edtech resistance comes not only from activists and students, but sometimes from within its own industry.

Many of you will know AltSchool, the ‘startup charter school chain’ launched by ex-Googler Max Ventilla, which quickly attracted almost $174 million in venture capital funding and then almost as quickly ‘pivoted’ to reveal that its main business model was not running schools after all but product-testing a personalized learning platform for release to the wider schools market.

There has been strong resistance to AltSchool throughout its short lifecycle. It has been seen as a template for ‘surveillance schooling’, treating its young students as ‘guinea pigs’ in a live personalized learning experiment. It even called its key sites ‘lab schools’.

Altschool tweet

A critical tweet triggered a venture capitalist backlash

Earlier this summer, though, resistance came from edtech venture capitalist Jason Palmer, who claimed AltSchool had always been a terrible idea, especially as, he tweeted, ‘edtech is all about partnering w/existing districts, schools and educators (not just “product”)’.

And that tweet, in turn, attracted a torrent of criticism from other technology investors who accused Palmer of ‘toxic behaviour’ and made fairly aggressive threats about his future prospects in edtech investment.

When the New York Times ran a piece on this a couple of weeks ago, it focused on the ‘Silicon Valley positivity machine’—a kind of secret code of upbeat marketing that refuses to engage publicly with failure, or even in critical debates about the social consequences of technical innovation. AltSchool has now announced it is rebranding as Altitude and will sell to schools the personalized learning product it has been engineering and testing in its experimental lab school settings for several years.

If there is any lesson to learn here, it’s not just that edtech is about partnerships rather than product. It’s that the edtech industry needs to wake up to critical debate about its ideas and products, and that educators and activists need to keep pushing back against bad ideas, armed with the evidence from failed experiments–otherwise those ideas will just happen again, under a different brand name. As Audrey Watters commented:

Jason Palmer was absolutely right. AltSchool was a terrible idea. It was obviously a bad investment. Its founder had no idea how to design or run a school. He had no experience in education — just connections to a powerful network of investors who similarly had no damn clue and wouldn’t have known the right questions to ask if someone printed them out in cheery, bubble-balloon lettering. It’s offensive that AltSchool raised almost $175 million.

Without this kind of critical engagement, and proper reflection on failure and bad ideas, the danger is that even more intrusive forms of surveillance and monitoring–powered by the techno-optimism and hype of the tech sector positivity machine–become normalized and rolled out across schools and colleges.

Regulation
And of course data-based surveillance has become perhaps the most critical issue in contemporary education technology. One high-profile case is the high school in Sweden that was fined under GDPR rules just last month for the unlawful introduction of facial recognition technology to document student attendance.

The high school board claimed that the data was consensually collected, but the Swedish Data Protection Authority found that it was still unlawful to gather and process the students’ biometric data ‘given the clear imbalance between the data subject and the controller’.

Sweden facial recognition ban

Sweden has issued a major GDPR fine for trials of facial recognition in a school

Sweden has now moved to ban facial recognition in education outright, and the case is catalyzing efforts within the European Union to impose ‘strict limits on the use of facial recognition technology in an attempt to stamp out creeping public surveillance of European citizens … as part of an overhaul in the way Europe regulates artificial intelligence’.

This example shows us growing legal and regulatory resistance to intrusive and invasive surveillance. In fact, with its core emphasis on the power imbalance underlying ‘consent’, the case could raise wider debates about students’ rights to ‘opt out’ of the very technological systems that their schools and colleges now depend on. It also raises the issue that schools themselves might bear the financial burden of GDPR fines if the technologies they buy breach its rules.

Flawed algorithms
Students’, educators’ and regulators’ critical resistance to edtech is likely to grow as we learn more about the ways it works, how it treats data, and in some cases how dysfunctional it is.

Just this summer, an investigation of automated essay-grading technology found it disproportionately discriminates against certain groups of students. This is because:

Essay-scoring engines don’t actually analyze the quality of writing. They’re trained on sets of hundreds of example essays to recognize patterns that correlate with higher or lower human-assigned grades. They then predict what score a human would assign an essay, based on those patterns.

The developers of the software in question openly acknowledged that this problem stretches back across 20 years of product development. Each time they tweak the system, different groups of students end up disadvantaged: there is systematic and, so far, irremediable bias in the essay-scoring software.

The examination of essay-scoring engines also found that these technologies will give good grades to ‘well-structured gibberish’. The algorithms cannot distinguish between genuine student insight and meaningless sentences strung together in ways that resemble well-written English.
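
To make that concrete, here is a minimal sketch of pattern-based scoring. The essays, grades and model choices are all invented for illustration–real engines are proprietary and far more elaborate–but the underlying point holds: a model trained only to correlate surface patterns with human grades can be persuaded by text that reuses those patterns.

```python
# A toy sketch of pattern-based essay scoring. The essays, grades and model
# below are invented for illustration; real scoring engines are proprietary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

train_essays = [
    "The evidence clearly demonstrates that renewable energy reduces emissions.",
    "Furthermore, the data suggests a significant correlation between policy and outcomes.",
    "Renewable energy is good because it is good for the planet and stuff.",
    "I think energy is important and people should just use it.",
]
human_grades = [5.0, 5.0, 2.0, 2.0]  # hypothetical human-assigned scores

# The model learns surface patterns that correlate with grades --
# it has no representation of whether an argument makes sense.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = Ridge().fit(vectorizer.fit_transform(train_essays), human_grades)

# 'Well-structured gibberish' reuses the surface patterns of high-scoring
# essays, so a purely pattern-based scorer can rate it highly.
gibberish = "Furthermore, the evidence clearly demonstrates a significant correlation."
plain_insight = "People should just use less energy."
for text in (gibberish, plain_insight):
    score = model.predict(vectorizer.transform([text]))[0]
    print(f"predicted grade {score:.1f}: {text}")
```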

Increasingly, journalists are on to edtech, and are feeding the growing sense of frustration and resistance by demonstrating that these technologies don’t even do what they claim to do, let alone do it fairly. These investigations teach us to be dubious of the claims of algorithmic accuracy used to promote new AI-based edtech products. We shouldn’t presume algorithms do a better job than educators, but should insist on forensic, independent and impartial studies of their intended outcomes and unintended effects. Cases like this force educators to confront new technologies with scepticism. In the name of educational innovation, or efficiency, are we ceding responsibility to algorithms that neither care nor even do their job effectively?

Political algorithms
But edtech flaws and resistance can get even more serious.

Five years ago, the UK government Home Office launched an investigation into claims of systematic cheating in English language tests for international students. The assessment developer, Educational Testing Service (ETS), was called in to do a biometric voice-matching analysis of 66,500 spoken test recordings to determine whether candidates had cheated by getting someone else to take the test for them.

Its finding was that 58% had cheated by employing a proxy test-taker, and that a further 39% of results were questionable. Over 33,000 students had their visas revoked. More than 2,500 have been forcibly deported, while another 7,000 left voluntarily after being told they faced detention and removal if they stayed. In all, it is believed that over 10,000 students left the country as a result of the test.

But a later investigation found the voice-matching algorithm may have been wrong in up to 20% of cases–on those figures, potentially more than 6,000 of the 33,000-plus students whose visas were revoked. Thousands of international students were wrongly accused of cheating, wrongly had their visas revoked, and were wrongly ordered to leave the country. Multiple news outlets picked up the story as evidence of problematic governmental reliance on algorithmic systems.
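
To illustrate why error rates like this are baked into the approach, here is a toy simulation–not ETS’s actual system, whose scores, thresholds and distributions are not public, so every number below is invented. Whenever the similarity scores of genuine speakers and proxy takers overlap, any decision threshold wrongly accuses some share of honest candidates.

```python
# A toy simulation of threshold-based voice matching -- not ETS's system;
# the distributions and threshold here are entirely hypothetical.
import random

random.seed(0)
# Invented similarity-score distributions: genuine same-speaker pairs
# cluster high, proxy (impostor) pairs cluster lower, but they overlap.
genuine = [random.gauss(0.75, 0.12) for _ in range(10_000)]
proxies = [random.gauss(0.45, 0.12) for _ in range(10_000)]

threshold = 0.60  # below this, the recording is judged not to be the candidate
false_accusations = sum(s < threshold for s in genuine) / len(genuine)
missed_proxies = sum(s >= threshold for s in proxies) / len(proxies)
print(f"honest candidates flagged as cheats: {false_accusations:.1%}")
print(f"proxy test-takers passed as genuine: {missed_proxies:.1%}")
```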

In response to the emerging scandal, the UK’s official investigative organization, the National Audit Office, conducted an investigation earlier this year, and the whole fiasco has become a political scandal–the result is that literally thousands of court cases are now proceeding against the Home Office. Some 12,500 appeals have already been heard, and over 3,000 have been won. According to the National Audit Office investigation:

It is difficult to estimate accurately how many innocent people may have been wrongly identified as cheating. Voice recognition technology is new, and it had not been used before with TOEIC tests. The degree of error is difficult to determine accurately because there was no piloting or control group established for TOEIC tests.

A parliamentary inquiry has since been launched. One properly shocking part of this is that the Home Office has spent over £21 million dealing with the fallout, while ETS has made an estimated £11.4 million and only £1.6 million has been reclaimed for the taxpayer. More shocking still, individual students themselves are reported to be paying many thousands of pounds to have their appeals heard–with many others unable to afford it. The inquiry reported its findings last week, heavily criticizing the Home Office for rushing ‘to penalise students without establishing whether ETS was involved in fraud or if it had reliable evidence of people cheating’.

What lessons can we draw from this? This is not just a case of resistance to educational technologies. It is a shocking example of how untested software can have huge consequences for people’s lives. It’s about how those consequences can lead to court cases with massive cost implications for individuals. It’s about the cost to the public of government outsourcing to private contractors. And it’s about the outsourcing of human expertise and sensitivity to the mechanical efficiency of algorithms.

It also teaches us that technology is not neutral. The deployment of this voice matching software was loaded with politics—the voice matching algorithm reproduced UK government ‘hostile environment’ policy by efficiently optimizing the deportation process.

Body contact
So finally, what can we learn from the edtech pushback I experienced first-hand on Twitter in relation to the Media Lab’s brain glasses?

Here we can see how proposed experiments on students’ bodies and brains can generate extremely strong reactions. In the last few years, interest in brain science, wearable biometrics and even genetic testing in education has grown substantially.

Experiments are underway with wearable neural interfaces to detect brainwave signals of student attention, and studies are being conducted in behavioural genetics that could in coming years bring about the possibility of DNA testing young children for future achievement, attainment and intelligence.

The potential here, according to behavioural geneticists, is to personalize education around a student’s genetic scores and associated predictions. Maybe consumer genetics companies like 23andMe will move to create a bio-edtech market, just as educational neuroscience companies are already creating a new neuro-edtech market. One educational neurotechnology company, BrainCo, has just announced a partnership with the edtech company Progrentis on a ‘fully neuro-optimized education platform’ combining brainwave reading with personalized, adaptive learning technologies.

BrainCo Progrentis

BrainCo and Progrentis have partnered to create a ‘neuro-optimised education platform’

We’re moving into a deeply controversial and ethically grey area here. No wonder Twitter exploded on me with accusations of eugenics and forcible mental manipulation when I shared MIT Media Lab’s brain glasses.

These new educational developments in brain technologies and genetics raise huge ethical challenges which must be resolved before such innovations are rolled out—or which may yet stop them in their tracks, as Sweden has moved to do in relation to facial recognition in education. Bioethicists and scientists themselves are increasingly calling for new human rights amendments to protect the human body and the brain from intrusion and extraction in all but necessary medical cases. The UK’s Royal Society has just launched a report on the need for regulation of neurotechnology as developments in neural interfaces accelerate, with unknown consequences for human life itself. Yet in education we’re not having these discussions at all–and the result is more and more projects like MIT’s brain glasses, which treat education as an experimental playground for all sorts of potentially outrageous technological innovations.

Conclusion
So, there is a rising wave of edtech resistance from a wide variety of perspectives—from activists to students, journalists to regulators, and legal experts to ethicists.

If these are signals of an emerging edtechlash, then educators, decision-makers and the edtech industry would benefit from being engaged in the key issues that are now emerging, namely that:

  • private sector influence and outsourcing is perceived to be detrimental to public education
  • lack of edtech diversity may reproduce the pedagogic assumptions of engineers
  • students distrust engineering solutions and continue to trust human interactions as central to education
  • there may be bad science behind positive industry and investor PR
  • new data protection regulations question how easily student ‘consent’ can be assumed when the balance of power is unequal
  • algorithmic ‘accuracy’ is being exposed as deeply flawed and full of biases
  • algorithmic flaws can lead to devastating consequences at huge costs to individuals, the public, and institutions
  • increasingly invasive surveillance proposals raise new ethical and human rights issues that are likely to be acted upon in coming years.

We should not and cannot ignore these tensions and challenges. They are early signals of resistance ahead for edtech which need to be engaged with before they turn to public outrage. By paying attention to and acting on edtech resistances it may be possible to create education systems, curricula and practices that are fair and trustworthy. It is important not to allow edtech resistance to metamorphose into resistance to education itself.


Automating mistrust

Ben Williamson

Exam by Xavi

Turnitin can now analyse students’ individual writing styles to tackle ‘contract cheating’. Image by Xavi

The acquisition of plagiarism detection company Turnitin for US$1.75 billion, due to be completed later this year, demonstrates how higher education has become a profitable market for education technology companies. As concern grows about student plagiarism and ‘contract cheating’, Turnitin is making ‘academic fraud’ into a market opportunity to extend its automated detection software further. It is monetizing students’ writing while manufacturing mistrust between universities and students, and is generating some perverse side effects.

Cheating software
Turnitin’s acquisition is one of the biggest deals ever signed in the edtech field. Its new owner, Advance Publications, is a global media conglomerate with a portfolio that includes Condé Nast. With traditional media forms losing audiences, the deal indicates how technology and media businesses have begun to view education as a potentially valuable investment market.

The profitability of Turnitin, and its attraction to Advance, derive from the assignments that students provide for free to its platform. Its plagiarism detection algorithm is constantly fine-tuned as millions of essays are added, analysed and cross-checked against each other and other sources. The ‘world’s largest comparison database’ of student writing, it consists of 600+ million student papers, 155,000+ published works and 60+ billion web pages. Much as social media companies profit from user-generated content, value for Turnitin comes from analysing students’ uploaded essays against that database, and securing purchases from universities based on the analysis.
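
The general principle, if not Turnitin’s proprietary ‘Similarity Score’ algorithm, can be sketched in a few lines: treat the score as the share of a submission’s word sequences that also appear somewhere in the reference corpus. Every essay added to the database enlarges that corpus, which is why scale is the company’s central asset. All texts below are invented.

```python
# A toy n-gram fingerprinting sketch of similarity scoring -- the general
# idea only, not Turnitin's proprietary algorithm. The score is the share
# of a submission's 5-word sequences found anywhere in a reference corpus.
def ngrams(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(submission, corpus):
    sub = ngrams(submission)
    if not sub:
        return 0.0
    # Every document added to the corpus enlarges the set of known n-grams.
    corpus_grams = set().union(*(ngrams(doc) for doc in corpus))
    return len(sub & corpus_grams) / len(sub)

# Invented example texts.
corpus = ["the industrial revolution transformed patterns of work and family life across europe"]
submission = "historians agree the industrial revolution transformed patterns of work and family life"
print(f"similarity: {similarity_score(submission, corpus):.0%}")
```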

Students can even pay to upload their essays to Turnitin’s WriteCheck service prior to submission, in order to check for similar sentences and phrases, missing or inaccurate citations, and spelling or grammatical inaccuracies. WriteCheck uses the same techniques as common standardized English language tests, and offers an online Professional Tutor Service through a partnership with Pearson.

The company has had years to grow and finesse its services and its ‘Similarity Score’ algorithm. In the UK, the original version of Turnitin, then known as iParadigms, was first paid for on behalf of the HE sector by Jisc (the digital learning agency) from 2002 to 2005, giving it an inbuilt cost advantage over competitors. It also gained an inbuilt data advantage by training its plagiarism detection algorithm on a very large population of students’ assignments. Nonetheless, studies have repeatedly shown its plagiarism detection software to be inaccurate. It mistakenly brands some students as cheats while completely missing other clear instances of plagiarism, with an error rate suggesting its automated plagiarism reports should be trusted less than its commercial valuation and market penetration indicate.

With the announcement of its acquisition by Advance, critics say the $1.75bn deal also amounts to the exploitation of students’ intellectual property. ‘This is a pretty common end game for tech companies, especially ones that traffic in human data’, commented Jesse Stommel of the University of Mary Washington. Turnitin’s business model, he added, is to ‘create a large base of users, collect their data, monetize that data in ways that help assess its value, [and] leverage that valuation in an acquisition deal’.

The tension between students’ intellectual property and Turnitin’s profit-making is not new. In many universities, it is compulsory for all student assignments to be submitted to Turnitin, with their intellectual effort then contributing to its growing commercial valuation without their informed knowledge. Ten years ago, four US college students tried to sue Turnitin for taking their assignments against their will and then profiting from them.

Manufacturing mistrust
Beyond its monetization strategy, Turnitin is also reshaping relationships between universities and students. Students are treated by default as potential essay cheats by its plagiarism detection algorithm. This is not a new concern. Ten years ago Sean Zwagerman argued that plagiarism detection software is a ‘surveillance technology’ that ‘treats writing as a product, grounds the student-teacher relationship in mistrust, and requires students to actively comply with a system that marks them as untrustworthy’. Turnitin’s continued profitability depends on manufacturing and maintaining mistrust between students and academic staff, while also foregrounding its automated algorithm over teachers’ professional expertise.

In the book Why They Can’t Write, John Warner argues that students’ writing abilities have been eroded by decades of standardized curriculum and assessment reforms. Turnitin is yet another technology that treats writing as a rule-based game. ‘It signals to students that the writing is a game meant to please an algorithm rather than an attempt to convey an idea to an interested audience’, Warner has noted. ‘It incentivizes assignments which can be checked by the algorithm, which harms motivation’.

Turnitin also changes how students practise academic writing. One of the leading critical researchers of Turnitin, Lucas Introna, argues that it results in the ‘algorithmic governance’ of students’ academic writing practices. Moreover, he suggests that ‘what the algorithms often detect is the difference between skilful copiers and unskilful copiers’, and that, as a result, it privileges students ‘who conceive of “good” writing practice as the composition of undetectable texts’.

The new deal will open opportunities for Turnitin to develop and promote new features that intervene further in students’ writing. One is its new service to scan essays to detect an individual’s unique writing style, launched to the HE market in March just a week after the acquisition was announced. This could then be used to identify ‘ghostwriting’—when students hire someone else to write their essays or purchase made-to-order assignments.

Turnitin contract cheating

Turnitin has published expert guidance for universities to identify and combat contract cheating

The new Authorship Investigate service extends Turnitin from the analysis of plagiarism to the analysis of students’ writing ability, using students’ past assignments, document metadata, forensic linguistic analysis, machine learning algorithms and Natural Language Processing to identify whether a student has submitted work written by someone else. It reinforces the idea that the originality, value and quality of student writing should first be assessed according to the criteria of the detection algorithm, and treats all student writing as potential academic piracy. It is also likely to require students to submit extensive writing samples to train the algorithm to make reliable assessments of their writing style, thereby further enhancing Turnitin’s monopoly hold over data about student writing.
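
For a hedged illustration of what stylometric checking involves–this is a textbook-style sketch, emphatically not Turnitin’s actual pipeline, which is proprietary–one can compare the frequency profile of common function words in a questioned essay against a student’s past work, since such habits are hard for a ghostwriter to imitate. All names and texts here are invented.

```python
# A crude stylometric comparison of the kind the forensic linguistics
# literature describes -- not Turnitin's actual pipeline. It compares
# frequency profiles of common function words across two texts.
from collections import Counter
from math import sqrt

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "it",
                  "for", "with", "as", "but", "however", "therefore"]

def style_profile(text):
    words = text.lower().replace(",", " ").replace(".", " ").split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norms if norms else 0.0

# Invented texts standing in for a student's past work and a new submission.
past_work = "However, the evidence suggests that the reforms were, therefore, effective in the main."
new_essay = "The data is clear and it shows that the policy worked for the economy."
# A low similarity would flag the essay for human review; the thresholds
# real systems use, and their error rates, are not public.
print(f"style similarity: {cosine(style_profile(past_work), style_profile(new_essay)):.2f}")
```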

Turnitin has bred suspicion and mistrust between students and academics, while affecting how students value and practise academic writing. Yet this mistrust is itself a market opportunity, as the company seeks to offer more solutions to the perceived problem of increased student plagiarism and contract cheating. As suspicions about student cheating have continued to grow since its launch nearly 20 years ago, Turnitin has been able to capitalize, with dramatically profitable results. Its ghostwriter detection service, of course, is a solution to one of the very problems Turnitin created–because plagiarism has become so detectable, a huge essay mills industry has emerged to produce original on-demand content for students to order. As a result, Turnitin is automating mistrust as it erodes relationships between students and universities, devalues teacher judgment, and reduces student motivation.

Plagiarism police
However damaging and inaccurate it may be, the Advance acquisition will enable Turnitin to further expand its market share and product portfolio. For Turnitin, the timing is ideal, as universities and HE policymakers are collectively beginning to address the rise of online ‘essay mills’ and their erosion of ‘academic integrity’. Government education departments in the UK and Australia have begun to tackle contract cheating more seriously, including through advocating increased use of innovative plagiarism detection software.

In a speech to the Universities UK International higher education forum in March, universities minister Chris Skidmore identified essay mills as one of the issues that needed to be tackled to protect and improve the quality of higher education in England and ensure that it retained its reputation for excellence.

UK academic leaders, HE agencies and ministers have already asked PayPal to stop processing payments to essay mills, and Google and YouTube to block online ads, in an effort to close down the $1 billion annual market in made-to-order assignments. These moves to prevent contract cheating also affect university students and graduates in Kenya, a ‘hotspot‘ for essay mill companies and writers, who rely on contract academic writing as a major source of income. So while Turnitin is set to profit from the detection of contract cheating in Global North contexts, it is disrupting a significant source of employment in specific Global South contexts. In Kenya, for example, where unemployment is high, ‘participants think of their jobs as providing a service of value, not as helping people to cheat. They see themselves as working as academic writers.’

Turnitin’s website now prominently markets its ghostwriter detection service along with a series of free-to-download ebooks to help universities identify contract cheating and develop strategies and tactics to combat it. It’s positioning itself not just as a technical solutions vendor, but as an expert source of insight and authority on ‘upholding academic integrity’. At the same time, Authorship Investigate will allow Turnitin to become the market leader in the fight against essay mills.

The launch of Authorship Investigate has coincided with a Times Higher Education report on the ‘surprising level of support’ among academics for contract cheating services to be made illegal and for ‘the criminalising of student use of these services’. This would appear to raise the prospect of algorithmic identification of students for criminal prosecution. Though there’s nothing to indicate quite such a hard punitive line being taken, the UK Department for Education has urged universities to address the problem, commenting to the THE, ‘universities should also be taking steps to tackle this issue, by investing in detection software and educating students on the severe consequences they face if caught cheating’.

Turnitin is the clear market leader positioned to solve the essay mills problem that the department has now called on universities to tackle. Its technical solution, however, does not address the wider reasons—social, institutional, psychological, financial or pedagogic—for student cheating, or encourage universities to work proactively with students to resolve them. Instead, it acts as a kind of automated ‘plagiarism police force’ to enforce academic integrity, while at the same time further disadvantaging young people in countries such as Kenya, where preparing academic texts for UK and US students is seen as a legitimate and lucrative service by students and graduates.

Robotizing higher education
Like many other technology organizations in education, Turnitin is increasing automation in the sector. Despite huge financial pressures, universities are investing in Turnitin to automate plagiarism and ghostwriting detection as a way of combating academic fraud. The problem of essay mills that politicians are now fixated upon is the ideal market opportunity for Turnitin to grow its business and its authority over student writing even further. In so doing, it also risks standardizing students’ writing practices to conform to the rules of the algorithm–ultimately contributing to the algorithmic governance, and even ‘robotization’, of academic writing.

The real problem is that universities are being motivated to invest in these robotized, data-crunching edtech products for multiple complex reasons. As universities have to seek larger student enrolments for their financial security, algorithmic services become efficient ways of handling huge numbers of student assignments. They satisfy government demands for action to be taken to raise standards, boost student performance, and preserve academic integrity. But automated software is a weak, robotic, and error-prone substitute for the long-term development of trusting pedagogic relationships between teachers and students.

A version of this post was previously published on Research Professional with the title ‘Manufacturing mistrust‘ on 12 June 2019.

Learning from surveillance capitalism

Ben Williamson

Fraction collector

Surveillance capitalism combines data analytics, business strategy, and human behavioural experimentation. Image: “Fraction collector” by proteinbiochemist

‘Surveillance capitalism’ has become a defining concept for the current era of smart machines and Silicon Valley expansionism. With educational institutions and practices increasingly focused on data collection and outsourcing to technology providers, key points from Shoshana Zuboff’s The Age of Surveillance Capitalism can help explore the consequences for the field of education. Mindful of the need for much more careful studies of the intersections of education with commercially-driven data-analytic strategies of ‘rendition’ and ‘behavioural modification’, here I simply outline a few implications of surveillance capitalism for how we think about education policy and about learning.

Data, science and surveillance
Zuboff’s core argument is that tech businesses such as Google, Microsoft, Facebook and so on have attained unprecedented power to monitor, predict, and control human behaviour through the mass-scale extraction and use of personal data. These aren’t especially novel insights—Evgeny Morozov has written a 16,000-word essay on the book’s analytical and stylistic shortcomings—but Zuboff’s strengths lie in the careful conceptualization and documentation of some of the key dynamics that have made surveillance capitalism possible and practical. As James Bridle argued in his review of the book, ‘Zuboff has written what may prove to be the first definitive account of the economic – and thus social and political – condition of our age’.

Terms such as ‘behavioural surplus’, ‘prediction products’, ‘behavioural futures markets’, and ‘instrumentarian power’ provide a useful critical language for decoding what surveillance capitalism is, what it does, and at what cost. Some of the most interesting documentary material Zuboff presents includes precedents such as the radical behaviourism of BF Skinner and the ‘social physics’ of MIT Media Lab pioneer Sandy Pentland. For Pentland, quoted by Zuboff, ‘a mathematical, predictive science of society … has the potential to dramatically change the way government officials, industry managers, and citizens think and act’ (Zuboff, 2019, 433) through ‘tuning the network’ (435). Surveillance capitalism is not and was never simply a commercial and technical project, but is deeply rooted in human psychological research and social experimentation and engineering. This combination of tech, science and business has enabled digital companies to create ‘new machine processes for the rendition of all aspects of human experience into behavioural data … and guarantee behavioural outcomes’ (339).

Zuboff has nothing to say about education specifically, but it’s tempting straight away to see a whole range of educational platforms and apps as condensed forms of surveillance capitalism (though we might just as easily invoke ‘platform capitalism’). The classroom behaviour monitoring app ClassDojo, for example, is a paradigmatic example of a successful Silicon Valley edtech business, with vast collections of student behavioural data that it is monetizing by selling premium features for use at home and offering behaviour reports to subscribing parents. With its emphasis on positive behavioural reinforcement through reward points, it represents a marriage of Silicon Valley design with Skinner’s aspiration to create ‘technologies of behaviour’. ClassDojo amply illustrates the combination of behavioural data extraction, behaviourist psychology and monetization strategies that underpin surveillance capitalism as Zuboff presents it.

Perhaps more pressingly from the perspective of education, however, Zuboff makes a number of interesting observations about ‘learning’ that are worth unpacking and exploring.

Learning divided
The first point is about the ‘division of learning in society’ (the subject of chapter 6, and drawing on her earlier work on the digital transformation of work practices). By this term Zuboff means to demarcate a shift in the ‘ordering principles’ of the workplace from the ‘division of labour’ to a ‘division of learning’ as workers are forced to adapt to an ‘information-rich environment’. Only those workers able to develop their intellectual skills are able to thrive in the new digitally-mediated workplace. Some workers are enabled (and are able) to learn to adapt to changing roles, tasks and responsibilities, while others are not. The division of learning, Zuboff argues, raises questions about (1) the distribution of knowledge and whether one is included or excluded from the opportunity to learn; (2) about which people, institutions or processes have the authority to determine who is included in learning, what they are able to learn, and how they are able to act on their knowledge; and (3) about what is the source of power that undergirds the authority to share or withhold knowledge (181).

But this division of learning, according to Zuboff, has now spilled out of the workplace to society at large. The elite experts of surveillance capitalism have given themselves authority to know and learn about society through data. Because surveillance capitalism has access to both the ‘material infrastructure and expert brainpower’ (187) to transform human experience into data and wealth, it has created huge asymmetries in knowledge, learning and power. A narrow band of ‘privately employed computational specialists, their privately owned machines, and the economic interests for whose sake they learn’ (190) has ultimately been authorized as the key source of knowledge over human affairs, and empowered to learn from the data in order to intervene in society in new ways.

Sociology of education researchers have, of course, asked these kinds of questions for decades. They are ultimately questions about the reproduction of knowledge and power. But in the context of surveillance capitalism such questions may need readdressing, as authority over what constitutes valuable and worthwhile knowledge for learning passes to elite computational specialists, the commercial companies they work for, and even to smart machines. As data-driven knowledge about individuals grows in predictive power, decisions about what kinds of knowledge an individual learner should receive may even be largely decided by ‘personalized learning platforms’–as current developments in learning analytics and adaptive learning already illustrate. The prospect of smart machines as educational engines of social reproduction should be the subject of serious future interrogation.

Learning collectives
The second key point is about the ‘policies’ of smart machines as a model for human learning (detailed in chapter 14). Here Zuboff draws on a speech by a senior Microsoft executive talking about the power of combined cloud and Internet of Things technologies for advanced manufacturing and construction. In this context, Zuboff explains, ‘human and machine behaviours are tuned to pre-established parameters determined by superiors and referred to as “policies”’ (409). These ‘policies’ are algorithmic rules that

substitute for social functions such as supervision, negotiation, communication and problem solving. Each person and piece of equipment takes a place among an equivalence of objects, each one “recognizable” to the “system” through the AI devices distributed across the site. (409)

In this example, the ‘policy’ is then a set of algorithmic rules and a template for collective action between people and machines to operate in unison to achieve maximum efficiency and optimal outcomes. Those ‘superiors’ with the authority to determine the policies, of course, are those same computational experts and machines that have benefitted from the division of learning. This gives them unprecedented powers to ‘apply policies’ to people, objects, processes and activities alike, resulting in a ‘grand confluence in which machines and humans are united as objects in the cloud, all instrumented and orchestrated in accordance with the “policies” … that appear on the scene as guaranteed outcomes to be automatically imposed, monitored and maintained by the “system”’ (410). These new human-machine learning collectives represent the future for many forms of work and labour under surveillance capitalism, according to Zuboff.
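
The logic can be made concrete in a deliberately reductive sketch, with invented names and parameters throughout: people and equipment appear as equivalent objects, each automatically checked against the same pre-set ‘policy’, with supervision and negotiation replaced by a rule.

```python
# A deliberately reductive sketch of a machine 'policy' in Zuboff's sense:
# workers and machines are treated as equivalent objects, each checked
# against the same pre-set parameters. All names and numbers are invented.
from dataclasses import dataclass

POLICY = {"min_units_per_hour": 10.0}  # set by 'superiors', not negotiated

@dataclass
class SiteObject:
    name: str
    kind: str    # "worker" or "machine" -- the system treats both alike
    pace: float  # observed units per hour

def apply_policy(obj: SiteObject) -> str:
    # Supervision, negotiation and problem-solving replaced by one rule.
    if obj.pace < POLICY["min_units_per_hour"]:
        return f"{obj.name} ({obj.kind}): out of policy -> corrective action"
    return f"{obj.name} ({obj.kind}): compliant"

site = [SiteObject("crane-7", "machine", 12.3),
        SiteObject("A. Worker", "worker", 8.9)]
for obj in site:
    print(apply_policy(obj))
```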

Zuboff then goes beyond human-machine confluences in the workplace to consider the instrumentation and orchestration of other types of human behaviour. Drawing parallels with the behaviourism of Skinner, she argues that digitally-enforced forms of ‘behavioral modification’ can operate ‘just beyond the threshold of human awareness to induce, reward, goad, punish, and reinforce behaviour consistent with “correct policies”’, where ‘corporate objectives define the “policies” toward which confluent behaviour harmoniously streams’ (413). Under conditions of surveillance capitalism, Skinner’s behaviourism and Pentland’s social physics spill out of the lab into homes, workplaces, and all the public and private spaces of everyday life–ultimately turning the world into a gigantic data science lab for social and behavioural experimentation, tuning and engineering.

And the final point she makes here is that humans need to become more machine-like to maximize such confluences. This is because machines connected to the IoT and the cloud work through collective action by each learning what they all learn, sharing the same understanding and ‘operating in unison with maximum efficiency to achieve the same outcomes’ (413). This model of collective learning, according to surveillance capitalists, can learn faster than people, and ‘empower us to better learn from the experiences of others’:

The machine world and the social world operate in harmony within and across ‘species’ as humans emulate the superior learning processes of the smart machines. … [H]uman interaction mirrors the relations of the smart machines as individuals learn to think and act by emulating one another…. In this way, the machine hive becomes the role model for a new human hive in which we march in peaceful unison toward the same direction based on the same ‘correct’ understanding in order to construct a world free of mistakes, accidents, and random messes. (414)

For surveillance capitalists human learning is inferior to machine learning, and urgently needs to be improved by gathering together humans and machines into symbiotic systems of behavioural control and management.

Learning in, from, or for surveillance capitalism?
These key points from The Age of Surveillance Capitalism offer some provocative starting places for further investigations into the future shape of education and learning amid the smart machines and their smart computational operatives. Three key points stand out.

1) Cultures of computational learning. One line of inquiry might be into the cultures of learning of those computational experts who have gained from the division of learning. And I mean this in two ways. How are they educated? How are they selected into the right programs? What kinds of ongoing training confer the privilege of learning about society through mass-scale behavioural data? These are questions about new and elite forms of workforce preparation and professional education. How, in short, are these experts educated, qualified and socialized to do data analytics and behaviour modification—if that is indeed what they do? In other words, how is one educated to become a surveillance capitalist?

The other way of approaching this concerns what is actually involved in ‘learning’ about society through its data. This is both a pedagogic and a curricular question. Pedagogically, education research would benefit from a much better understanding of the kinds of workplace education programmes underway inside the institutions of surveillance capitalism. From a curricular perspective, this would also require an engagement with the kinds of knowledge assumptions and practices that flow through such spaces. As mentioned earlier, sociology of education has long been concerned with how aspects of culture are ‘selected’ for reproduction by transmission through education. As tech companies and related academic labs become increasingly influential, they are producing new ‘social facts’ that might affect how people both within and outside those organizations come to understand the world. They are building new knowledge based on a computational, mathematical, and predictive style of thinking. What, then, are the dynamics of knowledge production that generate these new facts, and how do they circulate to affect what is taught and learnt within these organizations? As Zuboff notes, pioneers such as Sandy Pentland have built successful academic teaching programs at institutes like MIT Media Lab to reproduce knowledge practices such as ‘social physics’.

2) Human-machine learning confluences. The second key issue is what it means to be a learner working in unison with the Internet of Things. Which individuals are included in the kind of learning that is involved in becoming part of this ‘collective intelligence’? When smart machines and human workers are orchestrated together into ‘confluence’, and human learning is supposed to emulate machine learning, how do our existing theories and models of human learning hold up? Machine learning and human learning are not obviously comparable, and the tech firms surveyed by Zuboff appear to hold quite robotic notions of what constitutes learning. Yet if the logic of extreme instrumentation of working environments develops as Zuboff anticipates, this still raises significant questions about how one learns to adapt to work in unison with the smart machines, who gets included in this learning, who gets excluded, how those choices and decisions are made, and what kinds of knowledge and skills are gained from inclusion. Automation is likely to lead to both further divisions in learning and more collective learning at the same time–with some individuals able to exercise considerable autonomy over the networks they’re part of, and others performing the tasks that cannot yet be automated.

In the context of concerns about the role of education in relation to automation, intergovernmental organizations such as the OECD and World Economic Forum have begun encouraging governments to focus on ‘noncognitive skills’ and ‘social-emotional learning’ in order to pair human emotional intelligence with the artificial cognitive intelligence of smart machines. Those unique human qualities, so the argument goes, cannot be quantified whereas routine cognitive tasks can. Classroom behaviour monitoring platforms such as ClassCraft have emerged to measure those noncognitive skills and offer ‘gamified’ positive reinforcement for the kind of ‘prosocial behaviours’ that may enable students to thrive in a future of increased automation. Being emotionally intelligent, by these accounts, would seem to allow students to enter into ‘confluent’ relations with smart machines. Rather than competing with automation, they would complement it as collective intelligence. ‘Human capital’ is no longer a sufficient economic goal to pursue through education—it needs to produce ‘human-computer capital’ too.

3) Programmable policies. A third line of inquiry would be into the idea of ‘policies’. Education policy studies have long engaged critically with the ways government policies circumscribe ‘correct’ forms of educational activity, progress, and behaviour. With the advance of AI-based technologies into schools and universities, policy researchers may need to start interrogating the policies encoded in software as well as the policies inscribed in government texts. These new programmable policies potentially have a much more direct influence on ‘correct’ behaviours and maximum outcomes by instrumenting and orchestrating activities, tasks and behaviours in educational institutions.

Moreover, researchers might shift their attention to the kind of programmable policies that are enacted in the instrumented workplaces where, increasingly, much learning happens. Tech companies have long bemoaned the inadequacy of school curricula and university degrees for delivering the labour market skills they require. With the so-called ‘unbundling’ of the university in particular, higher education may be moving further towards ‘demand driven’ forms of professional learning and on-the-job industry training provided by private companies. When education moves into the smart workplace, learning becomes part of the confluence of humans and machines, where all are equally orchestrated by the policies encoded in the relevant systems. Platforms and apps using predictive analytics and talent matching algorithms are already emerging to link graduates to employers and job descriptions. The next step, if we accept the likely direction of travel of surveillance capitalism, might be to match students directly to smart machines on-demand as part of the collective human-machine intelligence required to achieve maximum efficiency and optimized outcomes for capital accumulation. In this scenario, the computer program would be the dominant policy framework for graduate employability, actively intervening in professional learning by sorting individuals into appropriate networks of collective learning and then tuning those networks to achieve best effects.

All of this raises one final question, and a caveat. First the caveat. It’s not clear that ‘surveillance capitalism’ will endure as an adequate explanation for the current trajectories of high-tech societies. Zuboff’s account is not uncontested, and it’s in danger of becoming an explanatory shortcut for deployment anywhere that data analytics and business interests intersect (much as ‘neoliberalism’ is sometimes invoked as a shortcut for privatization and deregulation). The current direction of travel and future potential described by Zuboff are certainly not desirable, and should not be accepted as inevitable. If we do accept Zuboff’s account of surveillance capitalism, though, the remaining question is whether we should be addressing the challenges of learning in surveillance capitalism, or the potential for whole education systems to learn from surveillance capitalism and adapt to fit its template. Learning in surveillance capitalism at least assumes a formal separation of education from these technological, political and economic conditions. Learning from it, however, suggests a future where education has been reformatted to fit the model of surveillance capitalism–indeed, where a key purpose of education is for surveillance capitalism.

Zuboff, S. 2019. The Age of Surveillance Capitalism: The fight for a human future at the new frontier of power. London: Profile.

Education for the robot economy

Ben Williamson

Robot by Saundra Castaneda

Robotization is driving coding and emotional skills development in education. Image by Saundra Castaneda

Automation, coding, data and emotions are the new keywords of contemporary education in an emerging ‘robot economy’. Critical research on education technology and education policy over the last two decades has unpacked the connections of education to the so-called ‘knowledge economy’, particularly as it was projected globally into education policy agendas by international organizations including the OECD, World Economic Forum and World Bank. These organizations, and others, are now shifting the focus to artificial intelligence and the challenges of automation, and pushing for changes in education systems to maximize the new economic opportunities of robotization.

Humans & robots as sources of capital
In the language of the knowledge economy, the keywords were globalization, innovation, networks, creativity, flexibility, multitasking and multiskilling—what social theorists variously called ‘NewLiberalSpeak’ and the ‘new spirit of capitalism’. With knowledge a new source of capital, education in the knowledge economy was therefore oriented towards the socialization of students into the practices of ICT, communication, and teamwork that were seen as the necessary requirements of the new ‘knowledge worker’.

In the knowledge economy, learners were encouraged to see themselves as lifelong learners, constantly upskilling and upgrading themselves, and developing metacognitive capacities and the ability to learn how to learn in order to adapt to changing economic circumstances and occupations. Education policy became increasingly concerned with cultivating the human resources or ‘human capital’ necessary for national competitive advantage in the globalizing economy. Organizations such as the OECD provided the international large-scale assessment PISA to enable national systems to measure and compare their progress in the global knowledge economy, treating young people’s test scores as indicators of human capital development.

The steady shift of the knowledge economy into a robot economy, characterized by machine learning, artificial intelligence, automation and data analytics, is now bringing about changes in the ways that many influential organizations conceptualize education moving towards the 2020s. Although this is not an epochal or decisive shift in economic conditions, but rather a slow metamorphosis involving machine intelligence in the production of capital, it is bringing about fresh concerns with rethinking the purposes and aims of education as global competition is increasingly linked to robot capital rather than human capital alone.

Automation
According to many influential organizations, it is now inevitable that automated technologies, artificial intelligence, robotization and so on will pose a major threat to many occupations in coming years. Although the evidence of automation causing widespread technological unemployment is contested, many readings of this evidence adopt a particularly determinist perspective. The robots are coming, the threat of technology is real and unstoppable, and young people are going to be hit hardest because education is largely still socializing them for occupations that the robots will replace.

The OECD has produced findings reporting on the skill areas that automation could replace. A PricewaterhouseCoopers report concluded that ‘less well educated workers could be particularly exposed to automation, emphasising the importance of increased investment in lifelong learning and retraining’. Pearson and Nesta, too, collaborated on a project to map the ‘future skills’ that education needs to promote to prepare nations for further automation, globalization, population ageing and increased urbanization over the next 10 years. The think tank Brookings has explicitly stated, ‘To develop a workforce prepared for the changes that are coming, educational institutions must de-emphasize rote skills and stress education that helps humans to work better with machines—and do what machines can’t’.

For most of these organizations, the solution is not to challenge the encroachment of automation on jobs, livelihoods and professional communities. Instead, the robot economy can be even further optimized by enhancing human capabilities through reformed institutions and practices of education. As such, education is now being positioned to maximize the massive economic opportunities of robotization.

Two main conclusions flow from the assumption that young people’s future jobs and labour market prospects are under threat, and that the future prospects of the economy are therefore uncertain, unless education adapts to the new reality of automation. The first is that education needs to de-emphasize rote skills of the kind that are easy for computers to replace and stress instead more digital upskilling, coding and computer science. The second is that humans must be educated to do things that computerization cannot replace, particularly by upgrading their ‘social-emotional skills’.

Coding
Learning to code, programming and computer science have become the key focus for education policy and curriculum reform around the world. Major computing corporations such as Google and Oracle have invested in coding programs alongside venture capitalists and technology philanthropists, while governments have increasingly emphasized new computing curricula and encouraged the involvement of both ed-tech coding products and not-for-profit coding organizations in schools.

The logic of encouraging coding and computer science education in the robot economy is to maximize the productivity potential of the shift to automation and artificial intelligence. In the UK, for example, artificial intelligence development is at the centre of the government’s industrial strategy, which made computer programming in schools an area for major investment. Doing computer science in schools, it is argued, equips young people not just with technical coding skills, but also new forms of computational thinking and problem-solving that will allow them to program and instruct the machines to work on their behalf.

This emphasis on coding is also linked to wider ideas about digital citizenship and entrepreneurship, with the focus on preparing children to cope with uncertainty in an AI age. A recent OECD podcast on AI and education, for example, put coding, entrepreneurship and digital literacy together with concerns over well-being and ‘learning to learn’. Coding our way out of technological unemployment, by upskilling young people to program, work with, and problem-solve with machines, then, is only one of the proposed solutions for education in the robot economy.

Emotions
The other solution is ‘social-emotional skills’. Social-emotional learning and skills development is a fast-growing policy agenda with significant buy-in by international organizations. The World Economic Forum has projected a future vision for education that includes the development and assessment of social-emotional learning through advanced technologies. Similarly, the World Bank has launched a program of international teacher assessment that measures the quality of instruction in socioemotional skills.

The OECD has perhaps invested the most in social-emotional learning and skills, as part of its long-term ‘Skills for Social Progress’ project and its Education 2030 framework. The OECD’s Andreas Schleicher is especially explicit about the perceived strategic importance of cultivating social-emotional skills to work with artificial intelligence, writing that ‘the kinds of things that are easy to teach have become easy to digitise and automate. The future is about pairing the artificial intelligence of computers with the cognitive, social and emotional skills, and values of human beings’.

Moreover, he casts this in clearly economic terms, noting that ‘humans are in danger of losing their economic value, as biological and computer engineering make many forms of human activity redundant and decouple intelligence from consciousness’. As such, human emotional intelligence is seen as complementary to computerized artificial intelligence, as both possess complementary economic value. Indeed, by pairing human and machine intelligence, economic potential would be maximized.

Intuitively, it makes sense for schools to focus on the social and emotional aspects of education, rather than wholly on academic performance. Yet this seemingly humanistic emphasis needs to be understood as part of the globalizing move by the OECD and others to yet again reshape the educational agenda to support economic goals.

Data
The fourth keyword is data, and it refers primarily to how education must be ever more comprehensively measured to assess progress in relation to the economy. Just as the OECD’s PISA has become central to measuring progress in the knowledge economy, the OECD’s latest international survey, the Study of Social and Emotional Skills—a computer-based test for 10- and 15-year-olds that will report its first findings in 2020—will allow nations and cities to assess how well their ‘human capital’ is equipped to complement the ‘robot capital’ of automated intelligent machines.

If the knowledge economy demanded schools help produce measurable quantities of human capital, in the robot economy schools are made responsible for helping the production of ‘human-computer capital’–the value to be derived from hybridizing human emotional life with AI. The OECD has prepared the test to measure and compare data on how well countries and cities are progressing towards this goal.

While, then, automation does not immediately pose a threat to teachers–unless we see AI-based personalized learning software as a source of technological unemployment in the education sector–it is likely to affect the shape and direction of education systems in more subtle ways in years to come. The keywords of the knowledge economy have been replaced by the keywords of the robot economy. Even if robotization does not pose an immediate threat to the future jobs and labour market prospects of students today, education systems are being pressured to change in anticipation of this economic transformation.

The knowledge economy presented urgent challenges for research; its metamorphosis into an emergent robot economy, driving policy demands for upskilling students with coding skills and upgraded emotional competencies, demands much further research attention too.


Learning lessons from data controversies

Ben Williamson

This is a talk delivered at OEB2018 in Berlin on 7 December 2018, with links to key sources. A video recording is also available (from about the 51-minute mark).

Ten years ago ‘big data’ was going to change everything and solve every problem—in health, business, politics, and of course education. But, a decade later, we’re now learning some hard lessons from the rapid expansion of data analytics, algorithms, and AI across society.

Data controversies became the subject of international government attention in 2018

Data doesn’t seem quite so ‘cool’ now that it’s at the centre of some of society’s most controversial events. By ‘controversy’ here I mean those moments when science and technical innovation come into conflict with public or political concerns.

Internationally, politicians have already begun to ask hard questions, and are looking for answers to recent data controversies. The current level of concern about companies like Facebook, Google, Uber, Huawei, Amazon and so on is now so acute that some commentators say we’re witnessing a ‘tech-lash’—a backlash of public opinion and political sentiment to the technology sector.

The tech sector is taking this on board: the Center for Humane Technology, for example, is seeking to stop tech from ‘hijacking our minds and society’. Universities that nurture the main tech talent, such as MIT, have begun to recognize their wider social responsibility and are teaching their students about the power of future technologies, and their potentially controversial effects. The AI Now research institute just launched a new report on the risks of algorithms, AI and analytics, calling for tougher regulation.

Print article on AI & robotization in teaching, from the Times Educational Supplement, 26 May 2017

We’re already seeing indications in the education media of a growing concern that AI and algorithms are ‘gonna get you’—as the teachers’ magazine the Times Educational Supplement put it last year.

In the US, the FBI even issued a public service announcement warning that the collection of sensitive data by ‘edtech’ could result in ‘social engineering, bullying, tracking, identity theft, or other means for targeting children’. An ‘edtech-lash’ has begun.

The UK Children’s Commissioner has also warned of the risks of ‘datafying children’ both at home and at school. ‘We simply do not know what the consequences of all this information about our children will be,’ she argued, ‘so let’s take action now to understand and control who knows what about our children’.

And books like Weapons of Math Destruction and The Tyranny of Metrics have become surprise non-fiction successes, both drawing attention to the damaging effects of data use in schools and universities.

So, I want to share some lessons from data controversies in education in the last couple of years—things we can learn from to avoid damaging effects in the future.

Software can’t ‘solve’ educational ‘problems’ 
One recent moment of data controversy was the protest by US students against the Mark Zuckerberg-supported Summit Public Schools model of ‘personalized learning’. Summit originated as a charter school chain; its adaptive learning platform—partly built by Facebook engineers—has been scaled up across many high school sites in the US.

But in November, students staged walkouts in protest at the educational limitations and data privacy implications of the personalized learning platform. Student protestors even wrote a letter to Mark Zuckerberg in The Washington Post, claiming assignments on the Summit Learning Platform required hours alone at a computer and didn’t prepare them for exams.

They also raised flags about the huge range of personal information the Summit program collected without their knowledge or consent.

‘Why weren’t we asked about this before you and Summit invaded our privacy in this way?’ they asked Zuckerberg. ‘Most importantly’, they wrote, ‘the entire program eliminates much of the human interaction, teacher support, and discussion and debate with our peers that we need in order to improve our critical thinking…. It’s severely damaged our education.’

So our first lesson is that education is not entirely reducible to a ‘math problem’, nor can it be ‘solved’ with software—it exceeds whatever data can be captured from teaching and learning processes. For many educators and students alike, education is more than the numbers in an adaptive, personalized learning platform, and includes non-quantifiable relationships, interactions, discussion, and thinking.

Global edtech influence raises public concern
Google, too, has become a controversial data company in education. Earlier this year it launched its Be Internet Awesome resources for digital citizenship and online safety. But the New York Times questioned whether the public should accept Google as a ‘role model’ for digital citizenship and good online conduct when it is seriously embattled by major data controversies.

The New York Times questioned Google positioning itself as a trusted authority in schools

Through its education services, it’s also a major tracker of student data and is shaping its users as lifelong Google customers, said the Times. Being ‘Internet Awesome’ is also about buying into Google as a user and consumer.

In fact, Google was a key target of a whole series of Times articles last year revealing Silicon Valley influence in public education. Silicon Valley firms, it appears, have become new kinds of ‘global education ministries’—providing hardware and software infrastructure, online resources and apps, curricular materials and data analytics services to make public education more digital and data-driven.

This is what we might call ‘global policymaking by digital proxy’, as the tech sector influences public education at a speed and international scale that conventional policy approaches cannot achieve.

The lesson here is that students, the media and public may have ideas, perceptions and feelings about technology, and the companies behind it, that are different to companies’ aspirations—claims of social responsibility compete with feelings of ‘creepiness’ about commercial tracking and concern about private sector influence in public education.

Data leaks break public trust
Data security and privacy is perhaps the most obvious topic for a data controversy lesson—but it remains an urgent one as educational institutions and companies are increasingly threatened by cybersecurity attacks, hacks, and data breaches.

The K-12 Cyber Incident Map has catalogued hundreds of school data security incidents

The K-12 Cyber Incident Map is doing great work in the US to catalogue school hacks and attacks, importantly raising awareness in order to prompt better protection. And then there’s the alarming news of really huge data leaks from the likes of Edmodo and Schoolzilla—raising fears that this is surely only going to get worse as more data is collected and shared about students.

The key lesson here is that data breaches and student privacy leaks also break students’, parents’, and the public’s trust in education companies. This huge increase in data security threats risks exposing the edtech industry to media and government attack. We’re supposed to protect children, critics might say, but we’re exposing their information to the dark web instead!

Algorithmic mistakes & encoded politics cause social consequences 
Then there’s the problem of educational algorithms being wrong. Earlier this year, the Educational Testing Service (ETS) revealed results from a check of whether international students had cheated on an English language proficiency test. To discover how many students had cheated, ETS used voice biometrics to analyze tens of thousands of recorded oral tests, looking for repeated voices.

What did it find? According to reports, the algorithm got the voice matching wrong 20% of the time. That’s a huge error rate, with massive consequences.

Around 5,000 international students in the UK wrongly had their visas revoked and were threatened with deportation, all related to the UK’s ‘hostile environment’ immigration policy. Many have subsequently launched legal challenges, and many have won.
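A back-of-envelope calculation shows how quickly an error rate like that compounds into mass harm. The volume figure below is a hypothetical placeholder, not ETS’s actual data:

```python
# Back-of-envelope: how a 20% voice-matching error rate scales into harm.
# The volume figure below is a hypothetical placeholder, not ETS's data.

tests_flagged_as_cheating = 35_000   # hypothetical number of flagged tests
error_rate = 0.20                    # reported share of wrong voice matches

wrongly_accused = tests_flagged_as_cheating * error_rate
print(f"Expected wrongly accused students: {wrongly_accused:,.0f}")
# At this scale a '20% error' is roughly 7,000 people wrongly accused,
# each facing visa revocation and possible deportation.
```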

Data lesson 4, then, is that poor-quality algorithms and data can lead to life-changing outcomes and consequences for students—even raising the possibility of legal challenges to algorithmic decision-making. This example also shows the problem with ascribing too much objectivity and accuracy to data and algorithms: in reality, they’re the products of ‘humans in the room’, whose assumptions, potential biases and mistakes can be coded into the software that’s used to make life-changing decisions.

Let’s not forget, either, that the check wouldn’t even have existed had the UK government not been seeking to root out and deport unwanted immigrants—the algorithm was programmed with some nasty politics.

Transparency, not algorithmic opacity, is key to building trust with users
The next lesson is about secrecy and transparency. The UK government’s Nudge Unit, for example, revealed this time last year that it had piloted a school-evaluating algorithm for school inspection, which could identify where a school might be failing from its existing data.

Many headteachers and staff are already fearful of the human school inspector. An automated school-inspecting algorithm secretly crawling around in their servers and spreadsheets, if not their corridors, offices and classrooms, hasn’t made them any less concerned, especially as it can only rate their performance from the numbers, rather than qualitatively assessing how local context shapes how they perform.
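The Nudge Unit hasn’t published its model, but a data-led school-risk classifier of the general kind described can be sketched in a few lines. Everything below, from the features to the training labels, is hypothetical:

```python
# Minimal sketch of a data-led school-risk classifier -- NOT the Nudge Unit's
# actual model. All features, data and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per school: [attainment score, attendance rate,
# teacher turnover rate, budget deficit flag]
X_train = np.array([
    [62, 0.95, 0.10, 0],
    [48, 0.88, 0.35, 1],
    [71, 0.97, 0.05, 0],
    [51, 0.90, 0.28, 1],
])
y_train = np.array([0, 1, 0, 1])  # 1 = later judged 'failing' by inspectors

model = LogisticRegression().fit(X_train, y_train)

# Rate a new school purely 'from the numbers' -- no visit, no context
new_school = np.array([[55, 0.91, 0.22, 1]])
print(f"Predicted probability of failing: {model.predict_proba(new_school)[0, 1]:.2f}")
```

A model like this knows nothing that isn’t in its training spreadsheet, which is precisely the objection.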

A spokesperson for the National Association of Headteachers said to BBC News, ‘We need to move away from a data-led approach to school inspection. It is important that the whole process is transparent and that schools can understand and learn from any assessment. Leaders and teachers need absolute confidence that the inspection system will treat teachers and leaders fairly’.

The lesson to take from the Nudge Unit experiment is that secrecy and lack of transparency in the use of data analytics and algorithms do not win trust in the education sector—teacher unions and the education press are likely to reject AI and algorithmic assistance if it is not believed to be transparent, fair, or context-sensitive.

Psychological surveillance raises fears of emotional manipulation
My last three lessons focus on educational data controversies that are still emerging. These relate to the idea that the ‘Internet of Bodies’ has arrived, in the shape of devices for tracking the ‘intimate data’ of your body, emotions and brain.

For example, ‘emotion AI’ is emerging as a potential focus of educational innovation—such as biometric engagement sensors, emotion learning analytics, and facial vision algorithms that can determine students’ emotional response to teaching styles, materials, subjects, and different teachers.

Emotion AI is being developed for use in education, according to EdSurge

Among others, EdSurge and the World Economic Forum have endorsed systems to run facial analytics and wearable biometrics of students’ emotional engagement, legitimizing the idea that invisible signals of learning can be detected through skin.
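To make concrete what such systems involve, here is an illustrative sketch of an engagement-analytics pipeline. The face detection uses a standard OpenCV classifier; the ‘emotion’ scoring is a deliberately crude placeholder, since vendors’ actual models are proprietary, and nothing here represents any particular product:

```python
# Illustrative 'engagement analytics' pipeline -- not any vendor's product.
# Face detection is standard OpenCV; the scoring step is a crude placeholder.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def engagement_scores(frame_bgr):
    """Detect faces in a classroom frame and attach a pseudo 'engagement' score."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    scores = []
    for (x, y, w, h) in faces:
        face_crop = gray[y:y + h, x:x + w]
        # Placeholder: real systems feed the crop to a trained emotion model;
        # either way, a student's face is reduced to a number on a dashboard.
        score = float(face_crop.mean()) / 255.0
        scores.append(((x, y, w, h), score))
    return scores
```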

Emotion AI is likely to be controversial because it prioritizes the idea of constant psychological surveillance—the monitoring of intimate feelings and perhaps intervening to modify those emotions. Remember when Facebook got in trouble for its ‘emotional contagion’ study? Fears of emotional manipulation inevitably follow from emotion AI–and the latest AI Now report highlighted this as a key area of concern.

Facial coding and engagement biometrics with emotion AI could even be seen to treat teaching and learning as ‘infotainment’—pressuring teachers to ‘entertain’ and students to appear ‘engaged’ when the camera is recording or the biometric patch is attached.

‘Reading the brain’ poses risks to human rights 
The penultimate lesson is about brain-scanning with neurotechnology. Educational neurotechnologies are already beginning to appear—for example, the BrainCo Focus One brainwave-sensing neuroheadset and application spun out of Harvard University.

Such educational neurotechnologies are based on the idea that the brain has become ‘readable’ through wearable headsets that can detect neural signals of brain activity, then convert those signals into digital data for storage, comparison, analysis and visualization via the teacher’s brain-data dashboard. It’s a way of seeing through the thick protective barrier of the skull to the most intimate interior of the individual.

The BrainCo Focus One neuroheadset reads EEG signals of learning and presents them on a dashboard
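It is worth seeing how little may lie behind a dashboard ‘attention’ number. A common approach with consumer EEG devices is to compute band power from the raw signal; the sketch below illustrates that generic technique, not BrainCo’s actual processing:

```python
# Generic sketch of turning raw EEG into a dashboard 'attention' number --
# a common band-power approach, not BrainCo's actual code.
import numpy as np

def attention_index(eeg_signal, sample_rate=256):
    """Crude attention proxy: share of beta-band power relative to alpha + beta."""
    freqs = np.fft.rfftfreq(len(eeg_signal), d=1.0 / sample_rate)
    power = np.abs(np.fft.rfft(eeg_signal)) ** 2
    alpha = power[(freqs >= 8) & (freqs < 13)].sum()   # relaxed wakefulness
    beta = power[(freqs >= 13) & (freqs < 30)].sum()   # active concentration
    return beta / (alpha + beta)

# One second of fake EEG: the dashboard reduces the learner to this number
rng = np.random.default_rng(0)
print(f"Attention index: {attention_index(rng.standard_normal(256)):.2f}")
```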

But ‘brain surveillance’ is just the first step as ambitions advance to not only read from the brain but to ‘write back’ into it or ‘stimulate’ its ‘plastic’ neural pathways for more optimal learning capacity.

Neurotechnology is going to be extraordinarily controversial, especially as it is applied to scanning and sculpting the plastic learning brain. ‘Reading’ the brain for signals, or seeking to ‘write back’ into the plastic learning brain, raises huge ethical and human rights challenges—‘brain leaks’, neural security, cognitive freedom, neural modification—with prominent neuroscientists, neurotechnologists and neuroethics councils already calling for new frameworks to protect the readable and writable brain.

Genetic datafication could lead to dangerous ‘Eugenics 2.0’
I’ve saved the biggest controversy for last: genetics, and the possibility of predicting a child’s educational achievement, attainment, cognitive ability, and even intelligence from DNA. Researchers of human genomics now have access to massive DNA datasets in the shape of ‘biobanks’ of genetic material and information collected from hundreds of thousands of individuals.

The clearest sign of the growing power of genetics in education was the recent publication of a huge, million-sample study of educational attainment which concluded the number of years you spend in education can be partly predicted genetically.
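The technique behind such predictions is the polygenic score: a weighted sum of an individual’s allele counts, with weights taken from genome-wide association studies. A minimal sketch, using entirely invented numbers, looks like this:

```python
# Minimal polygenic score sketch. Effect sizes and genotypes are invented;
# real scores sum across many thousands of SNPs and explain only a modest
# share of the variance in educational attainment.
import numpy as np

# Per-SNP effect sizes estimated by a genome-wide association study (GWAS)
effect_sizes = np.array([0.021, -0.013, 0.008, 0.017])

# One individual's allele counts (0, 1 or 2 copies of each 'effect' allele)
genotype = np.array([2, 0, 1, 1])

polygenic_score = float(effect_sizes @ genotype)
print(f"Polygenic score: {polygenic_score:.3f}")
```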

The study of the ‘new genetics of intelligence’, based on very large sample studies and incredibly advanced biotechnologies, is also already leading to ever-stronger claims of the associations between genes, achievement and intelligence. And these associations are already raising the possibility of new kinds of markets of genetic IQ testing of children’s mental abilities.

Many of you will also have heard the news last week that a scientist claimed to have created the first ever genetically edited babies, raising a massive debate about re-programming human life itself.

Basically, it is becoming more and more possible to study digital biodata related to education, to develop genetic tests to measure students’ ‘mental rating’, and perhaps even to recode, edit or rewrite the instructions for human learning.

It doesn’t get more controversial than genetics in education. So what data lesson can we learn? Genetic biodata risks reproducing dangerous ideas about the biologically determined basis of achievement, while genetic ‘intelligence’ tests are a step towards genetic selection, brain-rating, and gene-editing for ‘smarter kids’—raising risks of genetic discrimination, or ‘Eugenics 2.0’.

Preventing data controversies 
So why are these data lessons important? They’re important because governments are increasingly anxious to sort out the messes that overenthusiastic data use and misuse have got societies into.

In the UK we have a new government centre for data ethics, and a current inquiry and call for evidence on data ethics in education. Politicians are now asking hard questions about algorithmic bias in edtech, accuracy of data models, risk of data breaches in analytics systems, and the ethics of surveillance of students.

Data and its controversies are under the microscope in 2018 for reasons that were unimaginable during the big data hype of 2008. Data in education is already proving controversial too.

In Edinburgh, we are trying to figure out how to build productive collaborations between social science researchers of data, learning scientists, education technology developers, and policymakers—in order to pre-empt the kind of controversies that are now prompting politicians to begin asking those hard questions.

By learning lessons from past controversies with data in education, and anticipating the controversies to come, we can ensure we have good answers to these hard questions. We can also ensure that good, ethical data practices are built into educational technologies, hopefully preventing problems before they become full-blown public data controversies.


The app store for higher education

Ben Williamson

A government competition aims to make choosing a degree as easy as swiping a smartphone. Image by Garry Knight

App stores are among the most significant sites of contemporary digital culture. Commercial environments where consumers choose digital products, they are also important spaces where app producers and platform businesses first come into contact with users. As the shopping centres of platform capitalism, app stores enable users to become sources of data collection and value extraction.

Apps for higher education have become a key focus of government investment, and have the potential to become significant intermediaries bringing students, applicants and other publics into contact with HE data. This post continues ongoing research documenting the expanding data infrastructure of HE in the UK, which has already explored the policy context, data-led regulatory approach, data-centred sector agencies, and involvement of data-driven edu-businesses. New apps for shaping student choice bring small businesses, edtech startups, and the not-for-profit sector into the expanding infrastructure, and are introducing the idea that student choice can be shaped (or ‘nudged’) through the interactive presentation of data on apps, price-comparison websites, and social media-style services that indicate the quality of a provider’s performance.

An ‘information revolution’ in student choice
Universities Minister Sam Gyimah announced a competition in summer 2018 for small businesses to create new apps or online services to assist young people in making choices about going to university. Controversially to many in the sector, he claimed the competition would allow tech companies to use graduate earnings data—taken from the Longitudinal Educational Outcomes (LEO) dataset—to ‘create a MoneySuperMarket for students, giving them real power to make the right choice’.

A budget of £125,000 was allocated to support the winning entrants, which were expected to produce working prototypes during September and October. A few months later he announced five shortlisted companies, an additional £300,000 investment for two of the products, and the release of ‘half a million cells of data showing graduate outcomes for every university–more than has ever been published before’.

‘This is the start of an information transformation for students, which will revolutionise how students choose the right university for them’, said Gyimah. ‘I want this to pave the way for a greater use of technology in higher education, with more tools being made available to boost students’ choices and prospects’.

In other words, the competition is just a prototype of what is still to come–a government-backed marketplace of apps, platforms and other products and services to enable applicants, students and graduates to produce, interact with, and use HE data. Elsewhere, Gyimah was reported saying there is ‘clearly a market opportunity’ for services like this, even for those not awarded part of the £300,000 funding from the Department for Education.

Although the competition at this stage has only generated prototypes–only two of which will be more fully developed–all of the companies have already developed a web presence for their apps and products. A Department for Education video tweeted from the official finalists’ event also offers some glimpses of these prototype products. This allows us to see how an expanding ‘app store’ for student choice might extend the data infrastructure in new ways.

MyEd UniPlaces app
MyEd is an existing provider of services designed to enhance choice of education institutions.

MyEd provides educational choice-enhancement services. Image from https://myed.com/

MyEd already runs services supporting parent choice in nurseries, schools, colleges and universities, in particular by aggregating key data and previous reviews to enable easy user comparison and shortlisting of providers. According to its website:

Our unique reviews process is an intelligence data analysis system that has been designed to provide our users with the most relevant and digestible information to help them make the best decisions on their investment in education.

For the competition, MyEd proposed a UniPlaces app, which it pitched as a ‘web-based compatibility checker’ to assist applicants in making HE choices. Driven by a questionnaire capturing students’ achievements and preferences, the app then seeks to match them to HE options that are linked to certain job prospects.
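MyEd has not published how the checker works, but a questionnaire-to-ranking pipeline of this kind typically reduces to a weighted scoring function. The sketch below is purely illustrative; the course data, weights and field names are all invented:

```python
# Illustrative 'compatibility checker' -- not MyEd's actual system.
# Course data, weights and field names are invented.

courses = [
    {"name": "Course A", "entry_tariff": 120, "median_salary": 27000, "campus": True},
    {"name": "Course B", "entry_tariff": 144, "median_salary": 32000, "campus": False},
]

def compatibility(student, course):
    """Score a course against a student's questionnaire answers (0 to 1)."""
    score = 0.0
    if student["tariff_points"] >= course["entry_tariff"]:
        score += 0.5                                    # grades clear the bar
    if student["prefers_campus"] == course["campus"]:
        score += 0.2                                    # preference match
    score += 0.3 * min(course["median_salary"] / 40000, 1.0)  # earnings weight
    return score

student = {"tariff_points": 128, "prefers_campus": True}
ranked = sorted(courses, key=lambda c: compatibility(student, c), reverse=True)
print([c["name"] for c in ranked])
```

Whatever the real weights, the point is that the ranking encodes value judgments in code: in this invented example, earnings carry a fixed 30% of the score.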

As an established company, MyEd already compiles information from a range of sources, including institutions, government departments, published performance tables, and agencies such as HESA and the QAA. In these ways, it is emblematic of the shift toward marketized education and choice across all sectors–from early years to HE–in recent education policy.

Uni4U
The unique aspect of the Uni4U proposal is that it was designed by students, though the organization was founded by an entrepreneur with support from the NatWest Business Accelerator.

Uni4U is gathering additional data by surveying students and school children online. Image from http://uni4u.co.uk/

Like the other apps, Uni4U supports HE choice through the graphical presentation of data about universities, including their location, campus facilities, and graduate earnings.

While in prototype phase, Uni4U produced a website featuring two online surveys to gather further data from future students and current students. It invites future students to identify what would most help them make university choices, and current students to rate the quality of their existing provider and the support they gained in making their initial choice.

Coursematch
Coursematch presents itself on its website as a fully functioning app available via the Apple App Store and Google Play, with a claimed 25,000 users. It was upgraded to its current form in May 2018 and has been marketing itself on social media as ‘The #1 social network to help find your perfect university course and meet future friends!’

Coursematch is a social network for university choice, already available on app stores. Image from https://coursematch.io/

Perhaps the most notable aspect of Coursematch is its claim to use machine learning to make the most effective matches between students and courses, twinned with a ‘swipeable’ interface design adopted from dating apps.

‘Our new look app is going to make it easier than ever to browse University courses, and find your perfect course!’ read a recent promotional Coursematch tweet. ‘We are bringing in AI techniques to recommend a selection of courses right for you, to browse through with just a simple swipe’.

Potential students are provided with projected possible earnings based on the average lower quartile, median and upper quartile for particular courses, and can also interact through the app with existing students on those courses. Coursematch is already supported by Jisc, the HE digital learning agency, and Santander Universities.
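The quartile earnings display itself is trivial to reproduce. A minimal sketch, using invented salary records in place of the LEO-style data such apps would draw on:

```python
# Sketch of a quartile earnings display, with invented graduate salaries
# standing in for LEO-style data.
import numpy as np

salaries = np.array([19500, 22000, 24500, 26000, 28500, 31000, 36000])
lq, median, uq = np.percentile(salaries, [25, 50, 75])
print(f"Projected earnings: £{lq:,.0f} (LQ) / £{median:,.0f} (median) / £{uq:,.0f} (UQ)")
```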

AccessEd–ThinkUni app
The ThinkUni app comes from the not-for-profit sector, with AccessEd aiming to ‘increase access to university for young people from under-served backgrounds globally. We are creating a global network of partner organisations committed to this mission, sharing with them our expertise, resources and support’.

AccessEd supports access to university for young people from under-served backgrounds. Image from https://access-ed.ngo/

Pitched as a ‘personalized careers assistance’ service that is easy for students to use on their smartphones, ThinkUni builds on AccessEd’s previous university access work–including its ‘Brilliant Club’, the UK’s largest university access programme for 11-18 year olds.

According to the co-founder and executive chair of AccessEd, existing sources such as UCAS are huge databases and glorified spreadsheets that make decision-making difficult. With ThinkUni, students can instead access details such as which universities they could choose based on their school exam grades, and how long it would take to pay back their student loan based on a projected graduate salary.

The Profs—That’s Life
That’s Life is the most distinctive of the competition finalists–it’s an education and careers simulator produced by The Profs, a successful private HE tutoring company.

The Profs is a successful HE private tutoring company. Image from https://www.theprofs.co.uk/

The idea for the service is that it provides a ‘gamified’ simulation of the outcomes of making certain kinds of decisions, presenting projected data such as students’ future levels of happiness, work-life balance and income, and showing them the impact of their life and course choices, including not going to university at all.
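A toy sketch of what such a simulator reduces to, with entirely invented outcome figures, shows how life choices become comparable rows of projected numbers:

```python
# Toy 'life simulator' sketch -- all outcome figures are invented; a real app
# would draw projections from LEO-style datasets and survey data.
pathways = {
    "University: Computer Science":  {"income": 34000, "happiness": 7.1},
    "University: History":           {"income": 26000, "happiness": 7.4},
    "No university: apprenticeship": {"income": 24000, "happiness": 7.2},
}

for choice, outcome in pathways.items():
    print(f"{choice}: projected income £{outcome['income']:,}, "
          f"happiness {outcome['happiness']}/10")
```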

The gamification and simulation aspects of That’s Life demonstrate how the logics of video games could be employed to enhance student choice, notably by offering students opportunities to experiment with different pathways and problem-solving strategies. But the app’s origins in the private HE tutoring sector are also indicative of how private sector and alternative providers are being actively welcomed into public university service provision.

Scaling up the prototype
Whether apps such as those supported by government–or the earnings potential they present–actually influence student choice remains for now an empirical question. Another question is whether initial government investment will enable these app producers to scale their products. In a way, Sam Gyimah is acting like a Silicon Valley venture capitalist, seed-funding early-stage prototypes that bear a high risk of failure.

However, one existing example of a HE-facing app suggests that appetite for real venture capital investment in such products may be growing. Debut is a smartphone app for talent-matching graduates to corporate employers and labour markets. Graduate users create a profile—as with other social media platforms—and complete a psychometric personality test which can then be used for automated push notifications of appropriate jobs. Partnering corporate employers can even ‘talent spot’ and target individual users directly without requiring an application form or CV.

Debut is a machine-learning-based talent-matching app. Image from http://debut.careers/

But Debut is also a direct challenge to universities and the status of the academic degree. ‘We want to unbundle that and turn our user base into a behaviour- and competency-based user base,’ its founder says. ‘The strength would be the person’s competency as opposed to academic success’. Instead, it emphasizes graduates’ ‘cognitive psychometric intelligence’, behavioural traits and competencies. ‘We have everything on students, from their cognitive background, social background, to how well they perform in a selection process’—data it is using to train machine learning algorithms ‘to make personalized recommendations and predictions’.

Debut therefore instantiates the entry of automated predictive talent analytics into UK HE, inciting students to cultivate their marketable personality and behavioural skills above their academic credentials. Users of the platform generate training data for its machine learning algorithm to tune and refine its subsequent job-matches and recommendations. In summer 2018 Debut also received £5 million in venture capital investment led by James Caan, the entrepreneur from the TV show Dragons’ Den, and it already has 60 corporate clients, including Google, Apple and Barclays, that pay an annual subscription to sort and organize the graduate data.
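Debut’s algorithm is proprietary, but competency-based talent matching is typically some form of profile similarity. A generic cosine-similarity sketch, with hypothetical competency vectors, illustrates the technique:

```python
# Generic competency-matching sketch -- not Debut's actual algorithm.
# Competency dimensions and scores are hypothetical.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vectors over e.g. [numeracy, teamwork, resilience, communication], 0 to 1
graduate = np.array([0.9, 0.6, 0.7, 0.8])
jobs = {
    "Analyst role":       np.array([0.95, 0.5, 0.6, 0.7]),
    "Client-facing role": np.array([0.5, 0.9, 0.7, 0.95]),
}

ranked = sorted(jobs, key=lambda j: cosine(graduate, jobs[j]), reverse=True)
print(ranked)  # automated push notifications would follow this ranking
```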

Student-powered & metrics-powered HE
As an established product, Debut is well positioned in the emerging app store of services and products that help shape students’ choices. As the DfE competition demonstrates, apps are emerging to match prospective applicants to courses based on graduate earnings data from LEO, while Debut can later link them to employers based on a training set of graduate competency profiles and successful labour market matches.

The finalists of the DfE competition represent the governmental recognition of the potential of data presented on apps to shape choices and decisions. The prototypical app store for HE choice is, therefore, a significant extension of ongoing upgrades to the data infrastructure of HE. It raises some key issues:

  • It exemplifies government ambitions to ‘unbundle‘ and open up HE to new market providers of technologies, entrepreneurs, the private sector, and other business interests, with government itself acting as a market catalyst and seed-fund investor
  • It brings the logic of ‘swipeable’ apps and social media platforms into HE, importing the business model of platform capitalism and the extraction of value from student data into higher education
  • It utilizes persuasive design and behavioural science insights to design interfaces and visualizations that might ‘hook’ attention, ‘trigger’ behaviours, and ‘nudge’ decisions according to the ‘choice architecture’ provided
  • It continues to treat students as calculative consumers, investing in HE with the expectation of ROI in the shape of graduate outcomes and earnings, and puts pressure on institutions to focus on labour market outcomes as the main purpose of HE
  • It incites prospective and current students to see and think about HE in primarily quantitative and evaluative terms, as represented in metrics and market-like performance rankings and ratings
  • It anticipates potential long-term and real-time data monitoring of students in HE institutions, through a digital surveillance assemblage of apps, platforms and infrastructural connections, thereby making students into data transmitters of institutional qualities as well as consumers of institutional data
  • It instantiates the increasing role of algorithms, machine learning and automation into applicants’, students’ and graduates’ decision-making, with Debut even seeking to short-circuit the job application process and automatically talent-match graduate competency profiles to corporate job descriptions
  • It raises questions about the uses of student data to reinforce pre-existing governmental ideology, with the DfE recently reprimanded by statistical authorities for prioritising political messaging ahead of its statistical evidence–could student apps be designed otherwise, rather than to conform to market models of cost-benefit calculation?

By releasing a huge trove of LEO data, it also demonstrates how HE is being made increasingly measurable, computable, and comparable as a competitive, market-driven sector, with Gyimah noting that ‘these new digital tools will highlight which universities and courses will help people to reach the top of their field, and shine a light on ones lagging behind’.

The governmental focus on calculating which universities are ‘lagging’ or even ‘failing’ from their data is itself a huge sector concern, with Michael Barber, chair of the Office for Students, writing in The Telegraph that ‘While student choice should drive innovation, diversity and improvement, we recognise this won’t always be enough. So where market mechanisms are not sufficient, we will regulate’. The piece, entitled ‘We should allow bad universities to fail, as long as we protect their students’, followed another Telegraph article titled ‘If the higher education market is to succeed, bad universities must be allowed to go bust’.

In this highly conservative political and media context, further amplified by think tanks such as Reform, HE is being driven both by the supposed ‘empowerment’ of students and by metrics of market performance. The first perspective sees data as central to a ‘student-powered’ sector characterized by choice, value for money, and market competitiveness. The other takes a ‘metrics-powered’ perspective on universities as comparable market actors with winners and failures, as calculated by applicants’ choices of where to attend, indicator data on provider performance, and LEO or other data on graduate outcomes and earnings.

These two perspectives are, however, binocular rather than oppositional. Barber’s emphasis on ‘bad universities’ and Gyimah’s enthusiasm for student-facing apps are part of the same project, with data from and about students treated as key performance indicators for both policy officials and university applicants to assess. As Barber noted, ‘With more information at their disposal on the quality of courses and associated salary outcomes, [students] will rightly be thinking carefully about such choices. That places an onus on universities to plan realistically and respond quickly where demand is higher–or lower–than expected’.

The emerging, prototypical HE app store instantiates these demands in software. It reveals to students the best-performing universities in terms of degree awards and graduate earnings, but it also reveals the ‘bad universities’ and discourages students from ‘investing’ in these institutions and their courses. In these ways, the HE app store threatens to exert dangerously performative effects. By presenting university providers as a market, these apps will shape students’ choices away from certain institutions, or prompt institutions to drop courses that don’t promise a high percentage of positive graduate outcomes, while privileging elite institutions with stronger existing performance records. The app store will speed up the ‘market failure’ of those providers presented in the data as ‘bad universities’.
