Social media and public pedagogies of political mis-education

Ben Williamson


Over the past few months the close-knit relationship of education with software and data has become a defining feature of political life in democratic societies. In a year that has seen ‘post-truth’ named as word of the year by Oxford Dictionaries, social media fueled by big data has been blamed for creating deep political polarization. At the same time, the organization of formal education has itself been accused of increasing inequalities and widening a gap in worldviews between young people who leave education with high-status qualifications and those who do not. What is the link?

The education gap
Both the UK’s Brexit referendum and the US election have raised significant questions about education. One question was about why, on average, people with fewer educational qualifications had tended to vote for the UK to leave the EU, or for Trump to take the presidency despite his lack of political experience, while those with more qualifications tended to vote the other way. A new ‘education gap’ has emerged as an apparent determinant of people’s political preference. This education gap has begun to raise concerns about divisions in democracy itself, as the political scientist David Runciman has argued:

The possibility that education has become a fundamental divide in democracy—with the educated on one side and the less educated on another—is an alarming prospect. It points to a deep alienation that cuts both ways. The less educated fear they are being governed by intellectual snobs who know nothing of their lives and experiences. The educated fear their fate may be decided by know-nothings who are ignorant of how the world really works.

Of course, plenty of wealthy, educated people in the UK voted to leave the EU, and plenty voted for Trump in the US. But statistics from both votes did indicate that educational qualification, in combination with a range of other social factors, made a significant difference to voting patterns.

Significantly, the statistics from the EU referendum indicate that the vote for leaving the EU was concentrated in geographical areas already most affected by growing economic, cultural and social inequalities, as well as by physical pain and mental ill-health and rising mortality rates. The sociologists Mike Savage and Niall Cunningham have vividly articulated the consequences of growing inequalities for citizens’ political participation:

There is ample evidence that political dynamics are being increasingly driven by the dramatic spiraling of escalating inequalities. To put this another way, growing economic inequalities are spilling over into all aspects of social, cultural, and political life, and that there are powerful feedback loops between these different spheres which are generating highly worrying trends.

Education, of course, is itself highly unequally distributed in terms of how well children achieve in schools, in ways that reproduce all sorts of social, cultural and economic inequalities. The increasing separation of children from more or less affluent backgrounds, and according to geographical locales and social and cultural contexts, is part of the dramatic spiralling of inequalities observed by sociologists. The kind of political polarization that materialized during both Brexit and the US election is the result of the related dynamics of education, geography, economics, and cultural and social networks, and the feedback loops between them.

It would be naive to suggest that those people with fewer qualifications are somehow to blame for not being critically aware of how their perspectives were being sculpted by populist propaganda during these campaigns. Anxiety among highly educated elites about the consequences of a lack of political awareness is far from novel. Moreover, the challenge here is to reconcile the polarizing interests of both those who are highly educated and those who are less educated. As Savage and Cunningham concluded, ‘the way that the wealthy elite are increasingly culturally and socially cocooned, and the extent to which large numbers of disadvantaged groups are outside their purview is deeply worrying.’ In their view, a kind of educated ignorance is the problem.

In the EU referendum and the US presidential election alike, neither side appeared to have any deep awareness of the other or of the deep-seated social issues that led to such distinctive and divided patterns of voting, as David Runciman explained:

Social media now enhances these patterns. Friendship groups of like-minded individuals reinforce each other’s worldviews. Facebook’s news feed is designed to deliver information that users are more inclined to ‘like’. Much of the shock that followed the Brexit result in educated circles came from the fact that few people had been exposed to arguments that did not match their preferences. Education does not provide any protection against these social media effects. It reinforces them. … [T]he gap between the educated and the less educated is going to become more entrenched over time, because it … represents a gulf in mutual understanding.

This point raises the other question, one couched much less explicitly in terms of education: the role of social media in filtering how people learned about the issues on which they were being invited to vote.

Personalized political learning
The issue of how social media has participated in filtering people’s exposure to diverse political perspectives has become one of the defining debates in the wake of Brexit and the US election. An article in the tech culture magazine Wired on the day of the US election even asked readers, uncharacteristically, to consider the ‘dark side of tech’:

Even as the internet has made it easier to spread information and knowledge, it’s made it just as easy to undermine the truth. On the internet, all ideas appear equal, even when they’re lies. … Social media exacerbates this problem, allowing people to fall easily into echo chambers that circulate their own versions of the truth. … Both Facebook and Twitter are now grappling with how to stem the spread of disinformation on their platforms, without becoming the sole arbiters of truth on the internet.

The involvement of social media in the spread of ‘post-truth politics’ points to how it is leading citizens into informational enclaves designed to feed them news and knowledge that has been filtered to match their interests, based on data analysis of their previous online habits, what they have ‘liked’ or watched, what news sources they prefer, who they follow and what social networks they belong to.

‘Platforms like Twitter and Facebook now provide a structure for our political lives,’ Phil Howard, a sociologist of information and international affairs, has argued. He claims that social algorithms allow ‘large volumes of fake news stories, false factoids, and absurd claims’ to be ‘passed over social media networks, often by Twitter’s highly automated accounts and Facebook’s algorithms.’

Since the US election, it has been revealed that Trump’s campaign team worked closely with Facebook data to generate audience lists and targeted social media campaigns. Added to this, other, more politically activist sites such as Breitbart and Infowars have actively disseminated right-wing political agendas, reaching audiences that count in the tens of millions, as Alex Krasodomski-Jones has detailed. ‘Computational propaganda’ involving automated bots spreading sensationalist political memes across social media networks has further compounded the problematic polarization of news consumption. Facebook and Twitter now accelerate the spread of fake news or sensationalized political bias through mechanisms such as trending topics and moments, which are engineered to be personalized to users’ preferences.

Clearly there are important implications here for how young people access and evaluate information. Jamie Bartlett and Carl Miller of the think tank Demos wrote a report 5 years ago that highlighted a need to teach young people critical thinking and scepticism online to ‘allow them to better identify outright lies, scams, hoaxes, selective half-truths, and mistakes.’

But the debate is not just about how to protect young people from online trolling, propagandist bias and fake news. Just as with the debate about the education gap, it’s important to note that people from across the political spectrum, whether highly educated or not, are all increasingly ‘socially and culturally cocooned,’ as Mike Savage and Niall Cunningham phrased it. Education and social media are both involved in producing these cocooning effects.

The sociologist of social media Tarleton Gillespie wrote a few years ago about how big data-driven social media creates not just ‘networked publics’ who cohere together online around shared tastes and preferences, but ‘calculated publics’: algorithmically produced snapshots of the ‘public’ around us and what most concerns it. He argued that search engines, recommendation systems, algorithms on social networking sites, and ‘trend’ identification algorithms not only help us find information, but provide a means to know what there is to know and to participate in social and political discourse.

Algorithmic calculations are now at the very centre of how people learn to take part in political and democratic life: they filter, curate and shape the information and news we consume based on calculations of what most concerns and engages us. The logic of social media personalization now applies to political life. In other words, we are living in a period of personalized political learning, in which our existing political preferences are reinforced by the news and information we consume via social media and by our participation in calculated, networked publics, while alternative perspectives are systematically curated out of our feeds and out of our minds.
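To make that logic concrete, a minimal sketch of engagement-based feed ranking might look like the following. The fields, weights and scoring function are illustrative assumptions, not any platform’s actual code; the point is simply that items resembling what a user has already ‘liked’ rise to the top, while everything else sinks out of view.

```python
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    topics: set          # e.g. {"brexit", "economy"}
    popularity: float    # platform-wide engagement signal, scaled 0..1

def predicted_engagement(item: Item, user_interests: dict) -> float:
    """Estimate how likely this user is to engage, from topic affinity and popularity."""
    affinity = sum(user_interests.get(topic, 0.0) for topic in item.topics)
    return 0.7 * affinity + 0.3 * item.popularity  # illustrative weights

def personalised_feed(items, user_interests, k=10):
    """Return the k items the user is most likely to 'like' -- the filtering step
    that curates alternative perspectives out of the feed."""
    ranked = sorted(items, key=lambda i: predicted_engagement(i, user_interests), reverse=True)
    return ranked[:k]
```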

So seriously is this problem being taken that, in the fallout from the US election, a team of ‘renegade’ Facebook employees has reportedly established itself to deal with fake news and misinformation, even though Mark Zuckerberg has denied that fake news on Facebook influenced the result. The web entrepreneur Tim O’Reilly has suggested that rather than reinstating human editors (whose alleged political bias was itself the centre of a major controversy not so long ago), Facebook should design more intelligent techniques for separating information from sensationalist misinformation:

The answer is not for Facebook to put journalists to work weeding out the good from the bad. It is to understand, in the same way that they’ve so successfully divined the features that lead to higher engagement, how to build algorithms that take into account ‘truth’ as well as popularity.

Expect the quest for truth-divining algorithms to become a dominant feature of technical development in the social media field over the coming years. Google in Europe, for example, has already announced support for a startup company that is developing automated, real-time fact-checking software (called RoboCheck) for online news. The appeal of apparently objective, impartial and unbiased truth-seeking algorithms in post-truth times is obvious, though as recent work in digital sociology and geography has repeatedly shown, algorithms are always dependent on the choices and decisions of their designers and engineers. The problem posed by the ‘social power of algorithms’ such as Facebook’s to intervene in political life may not be easily resolved by new algorithms.
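As a rough illustration of why designers’ choices persist, a ‘truth-aware’ ranking could be sketched as nothing more than a weighted blend of an engagement signal and a credibility estimate; the weighting parameter, which decides how much ‘truth’ outweighs popularity, is itself a design decision. The function and values below are hypothetical, not any company’s actual method.

```python
def truth_aware_score(engagement: float, credibility: float, alpha: float = 0.5) -> float:
    """Blend predicted engagement (0..1) with an estimated credibility score (0..1).

    The choice of alpha -- how much 'truth' outweighs popularity -- is made by
    the system's designers, which is exactly where the social power lies.
    """
    return alpha * credibility + (1 - alpha) * engagement

# Example: a highly engaging but low-credibility story loses to a moderately
# engaging, well-sourced one only when alpha is set to favour credibility.
assert truth_aware_score(0.9, 0.1, alpha=0.7) < truth_aware_score(0.5, 0.9, alpha=0.7)
```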

Public pedagogies of political mis-education
The post-truth spread of misinformation twinned with the magnification of political and social polarization via social media platforms and algorithms is at the core of a new public pedagogy of political mis-education. Public pedagogy is a term used to refer to the lessons that are taught outside of formal educational institutions by popular culture, informal institutions and public spaces, dominant cultural discourses, and public intellectualism and social activism. Big data and social media are fast becoming the most successful sources of public pedagogy in the everyday lives of millions around the world. They are educating people by sealing them off into filter bubbles and echo chambers, where access to information, culture, news, and intellectual and activist discourse is being curated algorithmically.

The filter bubbles or echo chambers that calculated publics inhabit when they spend time on the web are consequential because they appear to close off access to alternative perspectives, and potentially lead people to think that everyone thinks like they do, shares their political sentiments, their aspirations, their fears. These effects are related to, and are reproduced and exacerbated by, social inequalities in education, economics and cultural access. Doing well in formal education or not now appears to be a determinant of which kinds of social networks and calculated publics you belong to. ‘The educational divide that is opening up in our politics is not really between knowledge and ignorance,’ David Runciman argues. ‘It is a clash between one worldview and another.’

In an age where highly educated people and less educated people are being sharply divided both by social media and by their experience of education, serious issues are raised for the future of education as a social institution and the part it plays in supporting democratic processes. Existing educational inequalities and the experience of being part of calculated publics in social media networks are now in a dynamic feedback loop. The public pedagogies of social media are becoming mis-educational in their effects, polarizing public opinion along different axes but most especially between the highly educated and the less educated.

Forms of measurement using data have long been at the core of how governments know and manage populations, as the sociologist David Beer has demonstrated in his work on ‘metric power.’ Today, the measurement of people’s interests, preferences and sentiments via social media, and the use of that information to feed back content that people will like and that matches their existing preferences, is leading to a form of calculating governance that is exacerbating divisive politics and eroding democratic cohesion. Via social media data, people are being educated and governed according to measurements that indicate their existing worldview, and then provided access to more of the same.

As Brexit and the US election indicate, increasingly people in the UK and US are being governed as two separate publics, with many of the less-educated incited to support political campaigns that the more-educated find alien and incomprehensible, and vice versa. The philosopher Bruno Latour has described them as ‘two bubbles of unrealism,’ one clinging to an imagined future of globalization and the other retreating to the imagined ‘old countries of the past,’ or ‘a utopia of the future confronting a utopia of the past’:

For now, the utopia of the past has won out. But there’s little reason to think that the situation would be much better and more sustainable had the utopia of the future triumphed instead. … If the horizon of ‘globalization’ can no longer attract the masses, it is because everyone now understands more or less clearly that there is no real, material world in the offing corresponding to that vision of a promised land. … Nor can we count any longer on returning to the old countries of the past.

Education has long reinforced these utopias of unrealism — we’ve been teaching and learning in ‘post-truth’ times for years. Contradictory policy demands over the last two decades have pointed simultaneously towards an education for the future of a high-skills, globalized knowledge economy (as reinforced by global policy actors like the OECD), and an education of the past which emphasizes traditional values, national legacy, social order and authority. Social media algorithms and architectures have further enabled these utopias of unrealism to embed themselves across the US and Europe.

The mis-education of democratic society by the public pedagogies of big data and social media is being enabled by algorithmic techniques that are designed to optimize and personalize people’s everyday experiences in digital environments. But in the name of personalization and optimization, the same techniques are leading to post-truth forms of political mis-education and democratic polarization.

Sociologists have begun asking hard questions about the capacity of their field to address the new problems surfaced by Brexit and Trump. The field of education needs to involve itself too in this new problem space, in order to probe how young people are measured and known through traces of their data from an early age; how their tastes and preferences are formed through the dynamics between imagined utopias and social media feedback loops; how these relate to entrenched patterns of educational and other social inequalities; and how their sense of their place and their futures in democratic societies is formed as they encounter the public pedagogies of big data and social media in their everyday lives. How, in short, should we approach education in post-truth times?


Pearson, IBM Watson and cognitive enhancement technologies in education

Ben Williamson


The world’s largest edu-business, Pearson, partnered with one of the world’s largest computing companies, IBM, at the end of October 2016 to develop new approaches to education in the ‘cognitive era.’ Their partnership was anticipated earlier in the year when both organizations produced reports about the future trajectories of cognitive computing and artificial intelligence for personalizing learning. I wrote a piece highlighting the key claims of both at the time, and have previously published some articles tracing both Pearson’s interests in big data and IBM’s development of cognitive systems for learning. The announcement of their partnership is the next step in their efforts to install new machine intelligences and cognitive systems into educational institutions and processes.

At first sight, it might seem surprising that IBM and Pearson have partnered together. Their reports would suggest they were competing to produce a new educational market for artificially intelligent or cognitive systems applications. Pearson, however, has had a bad couple of years, with falling revenue and reputational decline, which appears to have resulted in the closure of its own in-house Center for Digital Data, Analytics and Adaptive Learning. IBM, meanwhile, has been marketing its cognitive computing systems furiously for use in business, government, healthcare, education, and other sectors. The key to the partnership is that, despite its business troubles, Pearson retains massive penetration into schools and colleges through its digital courseware, while IBM has spent years developing and refining its cognitive systems. A mutually beneficial strategic business plan underpins their partnership.

The Pearson-IBM partnership also taps into current enthusiasm and interest in new forms of machine-based intelligence. This is reflected, for example, in the recent establishment of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, a White House report on preparing for the future of artificial intelligence, and a Partnership on AI established by Facebook, Amazon, Google, IBM and Microsoft. The central tenet of the Partnership on AI is that ‘artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education.’

Together, these developments point to a growing contemporary concern with forms of machine intelligence that are sometimes described as ‘weak’ or ‘narrow’ forms of AI. Weak or narrow AI includes techniques such as cognitive computing, deep learning, genetic algorithms, machine learning, and other automated, algorithmic processes, rather than aspiring to ‘strong’ or ‘general’ models of AI which assume computers might become autonomous superintelligences. A recent report on future computing produced by the Human Brain Project noted that:

The power of these innovations has been increased by the development of data mining and machine learning techniques, that give computers the capacity to learn from their ‘experience’ without being specifically programmed, constructing algorithms, making predictions, and then improving those predictions by learning from their results, either in supervised or unsupervised regimes. In these and other ways, developments in ICT and robotics are reshaping human interactions, in economic activities, in consumption and in our most intimate relations.

Ultimately, such technologies can be described as cognitive or intelligent because they have been built to learn and adapt in ways that are inspired by the human brain. Neuroscientific insights into the plasticity of the brain, how it adapts to input and stimuli from the social environment, have been at the centre of the current resurgence of interest in machine intelligence.

So what is education likely to look like if the glossy imaginary projected by Pearson and IBM of learning in the cognitive era materializes in the future?

Learning machines

The key technology underpinning their ambitions is Watson, IBM’s highly-publicized cognitive supercomputing system. The IBM webpages describe Watson as ‘a cognitive technology that can think like a human,’ and which has the capacity to:

  • Understand: With Watson, you can analyze and interpret all of your data, including unstructured text, images, audio and video.
  • Reason: With Watson, you can provide personalized recommendations by understanding a user’s personality, tone, and emotion.
  • Learn: With Watson, you can utilize machine learning to grow the subject matter expertise in your apps and systems.
  • Interact: With Watson, you can create chat bots that can engage in dialog.

Key to the way IBM is marketing Watson is its extraordinary flexibility: Watson APIs and starter code are provided to allow organizations to build their own apps and products.
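In schematic terms, an application built on a cloud-hosted cognitive service of this kind typically posts data to an API and receives back machine-generated classifications. The sketch below is a generic illustration under that assumption; the endpoint, payload and response fields are placeholders, not IBM’s actual API.

```python
import requests

def analyse_student_message(text: str, api_key: str) -> dict:
    """Send a piece of student text to a hypothetical cognitive-analysis endpoint
    and return its machine-generated classifications (e.g. tone, concepts)."""
    response = requests.post(
        "https://example-cognitive-service.invalid/v1/analyze",  # placeholder URL, not IBM's
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text, "features": ["tone", "concepts"]},   # illustrative payload
        timeout=10,
    )
    response.raise_for_status()
    # Illustrative response shape: {"tone": "frustrated", "concepts": ["photosynthesis"]}
    return response.json()
```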

Though IBM has been promoting cognitive computing in education for a few years—in 2013 it produced a glossy visualization of the classroom in five years’ time, a ‘classroom that will learn you’—it is now firmly seeking to establish Watson in the educational landscape. IBM Watson Education, it claims, ‘is bringing education into the cognitive era’:

We are transforming the learning experience through personalization. Cognitive solutions that understand, reason and learn help educators gain insights into learning styles, preferences, and aptitude of every student. The results are holistic learning paths, for every learner, through their lifelong learning journey.

One of the key applications IBM has developed is a data-based performance tracking tool for schools and colleges called IBM Watson Element for Educators:

Watson Element is designed to transform the classroom by providing critical insights about each student – demographics, strengths, challenges, optimal learning styles, and more – which the educator can use to create targeted instructional plans, in real-time. Gone are the days of paper-based performance tracking, which means educators have more face time with students, and immediate feedback to guide instructional decisions.

Designed for use on an iPad so it can be employed directly in the classroom, Element can capture conventional performance information, but also student interests and other contextual information, which it can feed into detailed student profiles. This is student data mining that goes beyond test performance to social context (demographics) and psychological classification (learning styles). It can also be used to track whole classes, and automatically generates alerts and notifications if any students are off-track and need further intervention.
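A minimal sketch of how such an alerting rule could work is given below, assuming a simple threshold on recent scores; the fields, threshold and notification format are illustrative guesses rather than Watson Element’s actual logic.

```python
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    name: str
    scores: list = field(default_factory=list)     # recent assessment scores, 0..100
    interests: list = field(default_factory=list)  # contextual data the tool also captures

def off_track(profile: StudentProfile, threshold: float = 60.0, window: int = 3) -> bool:
    """Flag a student whose average over the last `window` scores falls below the threshold."""
    recent = profile.scores[-window:]
    return bool(recent) and sum(recent) / len(recent) < threshold

def generate_alerts(class_profiles):
    """Yield a notification for any student the rule flags as needing intervention."""
    for profile in class_profiles:
        if off_track(profile):
            yield f"Alert: {profile.name} appears off-track; consider a targeted intervention."
```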

Another, complementary application is IBM Watson Enlight for Educators. Enlight is designed as a tool to support teachers to personalize their instructional techniques and content:

IBM Watson Enlight embodies three guiding principles: 1. Know Me: empower teachers with a comprehensive view of relevant data to help understand each student’s strengths and areas of growth 2. Guide Me: provide teachers with guidance as to how best to support each student 3. Help Me: support teachers with curated, personalized learning content and activities aligned with each student’s needs.

The application is marketed as a support system for understanding a class and the individual students in it, and for generating ‘actionable insights’ to ‘target learning experiences.’ ‘Teachers can optimize their time and impact throughout the year using actionable, on-demand insights about their students,’ it claims, and then ‘craft targeted learning experiences on-the-fly from content they trust.’ In many ways, these applications are extraordinarily similar to those being promoted for schools by companies like Facebook, with its Summit Personalized Learning platform, or AltSchool’s Playlist and Progression tools.

The partnership with Pearson will allow Watson to penetrate into educational institutions at a much bigger scale than it could on its own, thanks to the massive reach of Pearson’s courseware products. Specifically, the partnership is focusing on the higher education sector (though time will tell whether it further migrates into the schools sector). The press release issued by Pearson stated that its new global education partnership would ‘make Watson’s cognitive capabilities available to millions of college students and professors’:

Pearson and IBM are innovating with Watson APIs, education-specific diagnostics and remediation capabilities. Watson will be able to search through an expanded set of education resources to retrieve relevant information to answer student questions, show how the new knowledge they gain relates to their own existing knowledge and, finally, ask them questions to check their understanding.

Strikingly, it proposes that Watson will act as a:

flexible virtual tutor that college students can access when they need it. With the combination of Watson and Pearson, students will be able to get the specific help they need in real time, ask questions and be able to recognize areas in which they still need help from an instructor.

The press release issued by IBM added that Watson would be ‘embedded in the Pearson courseware’:

Watson has already read the Pearson courseware content and is ready to spot patterns and generate insights. Serving as a digital resource, Watson will assess the student’s responses to guide them with hints, feedback, explanations and help identify common misconceptions, working with the student at their pace to help them master the topic.

What Watson will do, then, is commit the entirety of Pearson’s content to its computer memory, and then, by constantly monitoring each individual student, cognitively calculate the precise content or structure of a learning experience that would best suit or support that individual.
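In outline, this is an adaptive content-selection loop: estimate mastery per topic from continuous monitoring, then serve remediation content for the weakest unmastered topic. The toy sketch below illustrates that general pattern under stated assumptions; it is not the Pearson-IBM implementation, and the threshold and data shapes are invented for illustration.

```python
def next_content(mastery: dict, content_index: dict, mastery_threshold: float = 0.8):
    """Pick remediation content for the lowest-mastery topic below the threshold.

    mastery: per-topic estimates from monitoring, e.g. {"fractions": 0.45, "decimals": 0.9}
    content_index: available items per topic, e.g. {"fractions": ["video_12", "quiz_3"]}
    """
    gaps = {topic: level for topic, level in mastery.items() if level < mastery_threshold}
    if not gaps:
        return None  # nothing flagged; hand the student back to the instructor
    weakest = min(gaps, key=gaps.get)
    return content_index.get(weakest, [None])[0]

# Example: with fractions weakest, the system recommends the first fractions item.
print(next_content({"fractions": 0.45, "decimals": 0.9}, {"fractions": ["video_12", "quiz_3"]}))
```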

The partnership is ultimately the material operationalization of a shared imaginary of machine intelligences in education that both IBM and Pearson have been projecting for some time. But this imaginary is slowly moving out of the institutional enclosures of these organizations to become more widely perceived as desirable and attainable in the future, and it is beginning to animate policy ideas as well as technical projects. The White House report on AI, for example, specifically advocates the development of AI digital tutors for use in education, and has suggested the need for a new technical agency within the US Department of Education modelled on the defense research agency DARPA. The think tank the Center for Data Innovation has also produced a report on ‘the promise of AI’ that admiringly promotes Watson applications such as its automated Teacher Advisor.

Cognitive enhancement technologies

Underpinning these efforts is a shared vision of how machine intelligence might act as cognitive-enhancement technologies in educational settings, though we clearly need to be cautious about the extent to which the technology will live up to its futuristic hype. As educational technology critic Audrey Watters has recently argued, ‘the best way to predict the future is to issue a press release.’ IBM and Pearson are both busily marketing their vision of the cognitive future of education because their businesses depend on it. For them, it’s necessary to suggest that people today are at a cognitive deficit when faced with the complexities of the technologized era, so they can sell products offering cognitive enhancement.

The promise of cognitive computing for IBM, as stated in its recent white paper on ‘Computing, cognition and the future of knowing,’ is not just of more ‘natural systems’ with ‘human qualities,’ but a fundamental reimagining of the ‘next generation of human cognition, in which we think and reason in new and powerful ways’:

It’s true that cognitive systems are machines that are inspired by the human brain. But it’s also true that these machines will inspire the human brain, increase our capacity for reason and rewire the ways in which we learn.

These are extraordinary claims that put companies like IBM and Pearson in the cognitive-enhancement business. They have positioned themselves at the vanguard of the generation of hybrid ‘more-than-human’ cognition, learning and thinking.

Clearly there may be consequences of the development of cognitive enhancement technologies and machine intelligences in education. These technologies could ultimately become responsible for establishing the educational pathway and progress of millions of students. They could ‘learn’ some bad habits, like Microsoft’s infamous AI chatbot. They could be found to discriminate against certain groups of students, and reinforce and reproduce existing social inequalities. Privacy and data protection are obvious issues as supposedly clever technologies ingest all the intimate details of individual students and store them in vast databanks on the IBM cloud. If Watson scales across Pearson’s content and courseware, it is ultimately going to be able to collect and data-mine huge amounts of information about potentially millions of students worldwide.

Moreover, access to these technologies won’t be cheap for institutions. This could lead to competitive cognitive advantage for students from wealthy institutions, whose learning and development may be supported by cognitive enhancement technologies. A new form of hybrid cognitive capital may become available for students at institutions that invest in these cognitive systems. Given that Pearson’s own global databank of country performance, the Learning Curve, compares education systems according to students’ ‘cognitive skills,’ measuring national cognitive capital as a comparative advantage in the ‘global race’ could also become attractive to government agencies.

Regarding this last point, IBM and Pearson also anticipate the development of real-time adaptive forms of governance in education. Both Pearson and IBM are trying to bypass the cumbersome bureaucratic systems of testing and assessment by creating real-time analytics that perform constant diagnostics and adaptive, personalized intervention on the individual. Pearson’s previous report on AI in education spells this out clearly:

Once we put the tools of AIEd in place as described above, we will have new and powerful ways to measure system level achievement. … AIEd will be able to provide analysis about teaching and learning at every level, whether that is a particular subject, class, college, district, or country. This will mean that evidence about country performance will be available from AIEd analysis, calling into question the need for international testing.

Although the current partnership with IBM is focused on college students, then, this is just part of a serious aspiration to govern the entire infrastructure of education systems through real-time analytics and machine intelligences, rather than through the infrastructure of test-based accountability that currently dominates schools and colleges.

Educational institutions are by now well used to accountability systems that involve collecting and processing test scores to produce performance measures, comparisons and ratings. IBM and Pearson are proposing to make cognitive systems orchestrate this infrastructure of accountability. As Adrian Mackenzie has put it, ‘cognitive infrastructures’ such as Watson ‘present problems of seeing, hearing, checking and comparing as no longer the province of human operators, experts, professionals or workers … but as challenges set for an often almost Cyclopean cognition to reorganise and optimise.’ IBM and Pearson are seeking to sink a cognitive infrastructure of accountability into the background of education, one which is intended not just to measure and compare performance, but to reorganize and optimize whole systems, institutions and individuals alike.


Assembling ClassDojo

A sociotechnical survey of a public sphere platform

Ben Williamson


The world’s most successful educational technology is ClassDojo. Originally developed as a smartphone app for teachers to reward ‘positive behaviour’ in classrooms, it has recently extended significantly to become a communication channel between teachers and parents, a school-wide reporting and communication platform, an educational video channel, and a platform for schoolchildren to collect and present digital portfolios of their class work.

In a previous post I began sketching out a critical approach to the ClassDojo app. In this follow-up (note that it’s a long read, more a working paper than a post) I want to explore ClassDojo as a more extensive platform, and to consider it as a sociotechnical ‘assemblage’ of many moving parts. It is, I argue, simultaneously composed of technical components, people, policies, funding arrangements, expert knowledge and discourse, all of which combine and work together as a hybrid product of human and nonhuman actors to enable the functioning of the platform. Each of these components has been assembled together over time to make ClassDojo what it is today. The purpose of the post is twofold: to help generate greater public understanding and awareness of ClassDojo among teachers and parents, and also to scope out the contours of the platform for further detailed research.

Education in the ‘platform society’
When it was first launched as a beta product in 2011, ClassDojo was a simple app designed for use on mobile devices. It has subsequently become a much more extensive platform, spreading rapidly across the US and around the world. As new features have been added over its five-year lifespan to date, ClassDojo has become much more like a social media platform for schools. It allows teachers to award points for behaviour, somewhat akin to pressing the ‘like’ button on Facebook; permits text and video communication between teachers and parents, as many social media platforms do; acts as a channel for video content; and also has capacity for schoolchildren to create digital portfolios of their work. It has also extended to become a ‘schoolwide’ platform, whereby all teachers, school leaders and pupils are signed up to the platform and school leaders can take an overview of everything occurring on it.

Given its expansion beyond its original design as an app, ClassDojo needs to be understood in relation to emerging critical research on digital platforms, where ‘platform’ refers to internet-based applications such as social media sites that process information and communication. Jose van Dijck and Thomas Poell have argued that ‘over the past decade, social media platforms have penetrated deeply into the mechanics of everyday life, affecting people’s informal interactions, as well as institutional structures and professional routines.’ More recently, van Dijck has suggested that we are entering a new kind of ‘platform society’ in which ‘social, economic and interpersonal traffic is largely channeled by an (overwhelmingly corporate) global online infrastructure that is driven by algorithms and fueled by data.’ This emerging platform society is gradually interfering with more and more aspects of everyday life, including key public institutions of society such as health and education. Van Dijck calls such platforms ‘public sphere platforms’: they promise to contribute to the public good in areas which are underfunded by governments, but they are owned and structured by private actors and networks.

ClassDojo is prototypical of a public sphere platform for education, one that is designed to contribute to the public good by supporting teachers to manage children’s classroom behaviour and allowing parents to communicate with schools at a time when schools are increasingly under pressure. Before detailing its various dimensions as a platform, however, it is important to note that any platform ultimately consists of multiple moving parts, both human and nonhuman, that have to be assembled together. Putting it simply, social researchers have recently begun to attend to the messy ‘assemblages’ of digital technologies such as online platforms, while education researchers have begun to acknowledge that their objects of study—classrooms, tests, policies, or educational technologies—are in fact assemblies of myriad things. In recent work on ‘critical data studies,’ Rob Kitchin and Tracey Lauriault have described a ‘data assemblage’ as:

a complex socio-technical system, composed of many apparatuses and elements that are thoroughly entwined, whose central concern is the production of data. A data assemblage consists of more than the data system/infrastructure itself, such as a big data system, an open data repository, or a data archive, to include all of the technological, political, social and economic apparatuses that frames their nature, operation and work.

An assemblage such as a digital platform, then, needs to be understood in terms of the ways that all its moving parts—whether human and social or nonhuman, material or technical—come together to form a relatively coherent and stable whole. For Kitchin and Lauriault, researching such an assemblage would therefore involve an investigation of its technical and material components; the people that inhabit it and the practices they undertake; the organizations and institutions that are part of it; the marketplaces and financial techniques that enable it; the policies and frameworks that govern it; and the knowledges and discourses that promote and support it.

Importantly, they—like others working with the assemblage concept—acknowledge that assemblages are contingent and mutable rather than fixed entities:

Data assemblages evolve and mutate as new ideas and knowledges emerge, technologies are invented, organisations change, business models are created, the political economy alters, regulations and laws introduced and repealed, skill sets develop, debates take place, and markets grow or shrink.

Utilizing the concept of a sociotechnical assemblage, in what follows I aim to detail how ClassDojo has been assembled over time as a mutating and evolving public sphere platform for education that consists of many human and nonhuman moving parts. I have arranged this as a kind of sociotechnical survey of the elements that constitute the ClassDojo assemblage.

Technicalities & materialities
As a technical platform ClassDojo consists of a mobile app and an online platform. Teachers can access and use the app on a smartphone or tablet in the classroom, and open up the online platform on any other computing device or display hardware for pupils to view. The platform allows class teachers to set their own behavioural categories, though it comes pre-loaded with a series of behaviours that teachers can use to award or deduct feedback points. Each child in the system is represented by a cute avatar, a dojo monster, which can be customized by the user. Behavioural targets can be set for both individuals and groups to achieve positive goals, and teachers can also deduct points. Children’s points are represented as a ‘doughnut’ of green positive points and red ‘needs work’ deductions. Teachers are able, if they choose, to display each child’s aggregate points to their entire class as a kind of league table of behaviour, and school leaders can access each child’s profile to monitor their behavioural progress.
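The underlying data model need not be complicated. A simplified sketch, assuming plausible fields rather than ClassDojo’s actual schema, might record each award or deduction as an event against a pupil and derive both the ‘doughnut’ summary and the class league table from those events:

```python
from collections import defaultdict

class PointsLedger:
    """Toy model of a classroom points system of the kind described above."""

    def __init__(self):
        self.events = defaultdict(list)  # pupil_id -> list of (behaviour, value)

    def award(self, pupil_id: str, behaviour: str, value: int):
        """value > 0 for positive behaviours, value < 0 for 'needs work' deductions."""
        self.events[pupil_id].append((behaviour, value))

    def doughnut(self, pupil_id: str) -> dict:
        """Aggregate a pupil's record into the green/red summary displayed in class."""
        values = [v for _, v in self.events[pupil_id]]
        return {"positive": sum(v for v in values if v > 0),
                "needs_work": -sum(v for v in values if v < 0)}

    def league_table(self):
        """Rank pupils by net points -- the class-wide display teachers can choose to show."""
        totals = ((pid, sum(v for _, v in evs)) for pid, evs in self.events.items())
        return sorted(totals, key=lambda item: item[1], reverse=True)
```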

Launched in 2016, its ‘school-wide’ features allow whole schools, not just individual teachers, to sign up for accounts, enabling ‘teachers and school leaders to safely share photos, videos, and messages with all parents connected to the school at once, replacing cumbersome school websites, group email threads, newsletters, and paper flyers.’ At the same time that ClassDojo is expanding in scope to encompass new technical innovations and serve other practical and social functions, it is therefore obsolescing existing school technologies and materials. The new school-wide application of ClassDojo also makes it easier for the platform to be used by administrators, and means that a child’s individual profile remains persistent over time as that child moves between classes. Teachers can also create ‘Student Stories’ for each child in a class, where digital portfolios of their class work can be uploaded and maintained.

The public ClassDojo website acts as a glossy front door and public face to the platform and the company behind it. It presents the brand through highly attractive visual graphics, high-production promotional video content and carefully crafted text copy. The website also features an ‘Idea Board’ where ideas about the use of the platform in the classroom can be submitted by teachers to be shared publicly, plus a blogging area for teachers and an engineering blog where the technical details of the platform are discussed and shared by its engineers. For parents assigned a login, it is possible to access the ‘Class Story’ area where teachers can share messages and video with all parents of children in a specific class, and individual teachers and parents can also exchange short text messages.

Less visibly, ClassDojo consists of technical standards relating to network security, data storage, interoperability, and communication protocols. All of the technical aspects of ClassDojo also need to be written in the code and algorithms that make the platform function. The ClassDojo engineering blog details some of the complexity of the code and algorithms that have been used or designed to make all the different elements of the platform function. Much of its source code is available to view on the ClassDojo area of the GitHub code repository. GitHub is therefore part of the assemblage of ClassDojo, a resource that both contains the code and algorithms used in the platform and a resource used by its engineers to locate existing re-useable code.

As a cloud-based service, ClassDojo has all of its data servers and analytics hosted externally. For this it employs Amazon Web Services. The safety and security page of the ClassDojo website notes that the web servers of Amazon Web Services ‘are physically located in high-security data centers – the same data centers used to hold secure financial information. … Our database provider uses the same https security connections used by banks and government departments to store and transfer the most sensitive data.’ (Unfortunately, at the time of writing the link provided on the ClassDojo website to the ‘security measures’ provided by AWS does not work.) Any interaction with the ClassDojo platform therefore takes place via Amazon’s vast global infrastructure of cloud technologies, with its data physically stored in Amazon’s data centres. ClassDojo is, in other words, physically, financially and technically located within one of the key global organisations that orchestrate the emerging ‘platform society.’

As well as being a technical platform, ClassDojo consists of a variety of material artefacts under the ‘Resources’ section of the website. These include teacher resources to support the use of ClassDojo in the classroom and lesson planning, training resources such as powerpoint presentations to enable school leaders to train staff, and a variety of glossy printable posters and other display materials that can be used to decorate the classroom. In addition to this, the website provides resources for parents such as introductory letters that can be distributed by schools to explain the platform, detailed parent guides as downloadable PDF files, and simple video content that can be used in the classroom to help young children understand it too.

ClassDojo also extends into other platforms. It has its own Facebook page and a popular @ClassDojo account on Twitter with 61,000 followers. Much of its initial word-of-mouth marketing worked through these platforms, allowing ClassDojo to rapidly extend to new users as enthusiastic early adopters recommended it to friends and colleagues via social media. Facebook and Twitter are part of the ClassDojo community, enabling its vast user base to engage with the organization and other community members. User-generated materials such as lesson plans and classroom resources to support the use of ClassDojo are made available for sharing by teacher advocates of the platform on teaching websites and other public sharing sites such as Pinterest, thus extending it beyond the enclosures of its own technical infrastructure to other platforms and material resources. Via other platforms, teachers have created and shared, for example, ‘Dojo dollars,’ ‘reward coupons’ and ‘vouchers,’ created their own incentives and rewards systems and displays, posted ‘points tracker’ posters and sets of ‘Dojo goals’ for data folders, and suggested the use of ‘prize centres’ where physical prizes are displayed for pupils that top the ClassDojo league tables.

As this survey of the technical aspects of ClassDojo demonstrates, it consists of myriad technologies, materials, standards and so on; but these technical elements all need to be orchestrated by human hands.

People & organizations
Who makes ClassDojo? Critical studies of software code and algorithms have demonstrated that their function cannot be separated from their designers. As Tarleton Gillespie has phrased it, ‘algorithms are full of people.’ Humans make decisions about what algorithms do, their goals and purposes and the tasks to which they are put. Likewise, any system of data collection or online communication platform has to be programmed to perform its tasks according to particular objectives and business plans, and within financial and regulatory constraints.

ClassDojo depends on a vast network of people and organizations. It was founded in 2011 by two young British entrepreneurs, Liam Don and Sam Chaudhary. Don was educated as a computer scientist and Chaudhary as an economist—with experience of working for the consultancy McKinsey in its education division in London—before both moved to Silicon Valley after successfully applying to the education technology ‘incubator’ program Imagine K-12. Imagine K-12’s founder Tim Brady was the very first investor in ClassDojo and continues to sit on its board; he has been described by ClassDojo’s founders as a key mentor and influence in the early days of its development. Brady himself was one of the very first employees at Yahoo! in the 1990s, where he acted as Chief Product Officer for 8 years. Considerable Silicon Valley experience is therefore represented on the ClassDojo board.

In addition to its founders, ClassDojo is staffed by a variety of software engineers, designers, product managers, communications and marketing officers, privacy, encryption and security experts and human-computer interaction designers. Notably, none of ClassDojo’s staff are listed as educational experts, but instead are all drawn from the culture of software development, many of them with experience in other Silicon Valley technology companies, social media organizations and consultancies. Founders Don and Chaudhary themselves have some limited educational experience of working with schools in the UK prior to moving to Silicon Valley.


Through external partnerships, ClassDojo employs three independent privacy experts to guide it in relation to data privacy regulation in North America and Europe, and works with a team of security researchers to continually test ClassDojo’s security practices for vulnerabilities. Its board consists primarily of its major investors (detailed more below under funding and finance). ClassDojo also works with over 20 essential third-party service providers, which primarily support the platform with specific technical services, including data storage, video encoding, photo uploading, server performance, data visualization, web analytics, performance metrics, A/B testing of different versions of the website, and management of real-time communication data. The third-party service providers include Amazon Web Services, which hosts ClassDojo’s servers and data analytics, Google Analytics, which provides analytics on its website, and many others, without which the platform could not function.

Support for ClassDojo has been confirmed through the award of a number of prizes. The business magazine FastCompany listed ClassDojo as one of the 10 most innovative education companies in 2013, and in 2015 it won the Crunchie award for best education startup from the TechCrunch awards while its founders were featured in the ’30 under 30’ list of Inc magazine. These prizes and recognitions have helped ClassDojo and its founders to consolidate their reputations and brand, both as a successful classroom tool and an entrepreneurial business.

It is important to note that, as a sociotechnical assemblage, ClassDojo functions through the involvement of its users. Users are configured by ClassDojo, in the sense that it makes new practices possible, but they can also reshape ClassDojo to their own purposes. The basic reward mechanism at the heart of the ClassDojo behaviour management application can be customized by any signed-up teacher. These reward categories then shape the ways in which points are awarded in classrooms, changing both the practices of the staff employing it and the experience of the pupils who are its subjects. With the announcement of school-wide features in 2016, entire schools can be signed up to ClassDojo, ultimately becoming institutional network nodes of the platform. By mid-2016 the ClassDojo website reported that the platform was in use in 180 countries, with over 3 million subscribing teachers serving over 35 million pupils. ClassDojo is, in other words, constituted partly through the practices of a vast global constellation of users.

Teachers using ClassDojo are repositioned by the platform, which confers new responsibilities on them. Huw Davies suggests teachers are transformed into data entry clerical workers by the platform, becoming responsible for data collection in the classroom that will ultimately contribute to big datasets that could be analysed and then ‘sold’ back to school leaders as premium features. Although ClassDojo does not market itself as a big data company, its access to behavioural data on millions of children gives it tremendous capacity to report detailed and comparative analyses that could be used to measure teachers’ and schools’ records on the management of pupil behaviour.

Policy, regulation & governance
The way that the technical platform of ClassDojo operates, and the work of the people who build and use it, is all governed by particular forms of regulation and policy. Data privacy is an area that the ClassDojo organization is especially keen to promote, not least following a critical article in the New York Times in 2014, which the ClassDojo company vigorously countered in an open letter entitled ‘What the NYTimes got wrong.’ Its website features an extensive privacy policy, the product of its privacy advisers. The policy is regularly updated and organized on the website to detail exactly what information the platform collects, its student data protection policy, and available opt-outs. Notably, ClassDojo claims that it deletes all pupils’ feedback points after 12 months, unless students or parents create accounts. Where schools or individual teachers have set up accounts that parents have then subscribed to, a persistent record of the child’s personal information will be retained.

ClassDojo claims it is completely compliant with North American data privacy regulatory frameworks such as FERPA (Family Educational Rights and Privacy Act) and COPPA (Children’s Online Privacy Protection Act). FERPA is a federal law that protects the privacy of student education records, while the primary goal of COPPA is to place parents in control over what information is collected from their young children online. ClassDojo’s ‘privacy center’ displays ‘iKeepSafe’ privacy seals for both FERPA and COPPA, alongside a badge proclaiming it a signatory of the Student Privacy Pledge. iKeepSafe (Internet Keep Safe Coalition) is itself a nonprofit international alliance of more than 100 policy leaders, educators, law enforcement members, technology experts, public health experts and advocates, and acts to ensure that both FERPA and COPPA are enforced.

ClassDojo is additionally compliant with the US-EU Safe Harbor framework set forth by the US Department of Commerce regarding the collection, use, and retention of personal data from European Union member countries. The European Court of Justice, however, declared this agreement invalid in 2015, meaning there is a grey area in terms of data protection for children whose data is logged on ClassDojo outside the US. Its commitment to data privacy would seem to depend on specific agreements made between the EU and the cloud service provider hosting its data, in this case Amazon Web Services. This seems to put pressure on schools to make sense of complex international data protection policies. Schools making use of ClassDojo in the UK, for example, might need to ensure they are familiar with the Information Commissioner’s Office code of practice and checklists for data sharing. This code covers such activities as ‘a school providing information about pupils to a research organisation’ and would arguably extend to the provision of information about pupil behaviour to an organization like ClassDojo (and stored by Amazon Web Services), not least as the data may be used to construct behavioural profiles of individuals and classes.

ClassDojo also subscribes to the principles of ‘privacy by design,’ an approach which encourages the embedding of privacy frameworks into a company’s products or services. ClassDojo’s Sam Chaudhary has co-authored an article on privacy by design with the Future of Privacy Forum, a Washington DC-based think tank dedicated to promoting responsible data practices through lobbying government leaders, and acting as a sounding board for companies proposing new products and services. The founders of ClassDojo have therefore situated themselves among a network of data privacy expertise and lobbying groups in order to ensure their compliance with federal law and to be seen as a leading data privacy organization in relation to education and young children.

How ClassDojo operates in relation to data protection and privacy is therefore circumscribed by federal regulatory frameworks which govern how and why ClassDojo can collect, process and store users’ data and what rights children and their parents have to withdraw their consent for its collection or request its deletion. Privacy regulation is ‘designed-in’ to its architecture, though inevitably some concerns persist, not least about ClassDojo’s admission that, if it experienced a ‘change of control,’ all users’ personal information would be transferred to its new owner, with only 30 days for parents to request deletion of their children’s data.

Besides privacy policy and regulation, ClassDojo is also shaped by education policy, although less directly. A distinctive policy discourse of ‘character’ education, ‘positive behaviour support’ and ‘social-emotional learning’ frames ClassDojo, shaping the way in which the organization presents the platform. For example, ClassDojo’s founders present the platform through the language of character development and positive behaviour management. This is entirely compatible with US Department of Education policy documents and initiatives which, in the wake of a softening of the dominant test-based policy emphasis, have begun to emphasize concepts such as ‘character,’ ‘grit,’ ‘perseverance,’ ‘personal qualities’ and other ‘non-cognitive’ dimensions of ‘social-emotional learning’—the most prominent example being the 2013 US Department of Education, Office of Educational Technology report Promoting grit, tenacity and perseverance. ClassDojo is directly promoted in the report as ‘a classroom management tool that helps teachers maintain a supportive learning environment and keep students persisting on task in the classroom,’ allowing ‘teachers to track and reinforce good behaviors for individual students, and get instant reports to share with parents or administrators.’

As a consequence of the ‘grit’ report, controversial attempts have been made to turn the measurement of the ‘personal qualities’ of non-cognitive and social-emotional learning into school accountability mechanisms in the US. The prominent think tank the Brookings Institution has described these new school accountability systems as compatible with the Every Student Succeeds Act, the US law governing K-12 education signed in late 2015. The act requires states to include at least one non-academic measure when monitoring school performance. It therefore permits states to focus to a greater degree than previous acts on concepts such as competency-based and personalized learning, and promotes the role of the educational technology sector in supporting such changes. ClassDojo has been described in one commentary as an ideal educational technology to support the new law.

The ClassDojo website also suggests that its behaviour points system can be aligned with PBIS. PBIS stands for Positive Behavior Interventions and Supports, an initiative of the US Department of Education, Office of Special Education Programs. Its aim is to support the adoption of the ‘applied science’ of Positive Behavior Support in schools, and it emphasizes social, emotional and academic outcomes for students. Through both its connections with the non-cognitive learning policy agenda and PBIS, ClassDojo has been positioned, and has located itself, in relation to major political agendas about school priorities. It is in this sense an indirect technology of government that can help schools to support students’ non-cognitive learning. In turn, those schools are increasingly being held accountable for the development and effective measurement of those qualities.

ClassDojo is, in other words, a bit-part player in emerging policy networks that are changing the priorities of education policy to focus on the management and measurement of children’s personal qualities rather than academic attainment alone. Such changes are being brought about through processes of ‘fast policy,’ as Jamie Peck and Nik Theodore describe it, where policy is a thoroughly distributed achievement of ‘sprawling networks of human and nonhuman actors’ that include web sites, practitioner communities, guru performances, evaluation scientists, think tanks, consultants, blogs, and media channels and sources, as well as the more hierarchical influence of centres of political authority. As both an organization and a platform, ClassDojo acts indirectly as a voice and a technology of networked fast policy in the educational domain, particularly as a catalyst and an accelerant that translates the priorities of government around non-cognitive learning and character development into classroom practice.

Markets, finances & investment
ClassDojo is part of a significant and growing marketplace of educational technologies. The new Every Student Succeeds Act gives US states much more flexibility to spend on ed-tech, a sector which has been growing at extraordinary rates in recent years. Some enthusiastic assessments suggest that global education technology spending was $67.8bn in 2015, part of a global e-learning market worth $165bn and estimated to reach $243.8bn by 2022.

This marketplace is being supported vigorously in Silicon Valley, where most investments are made, particularly through networks of venture capital firms and entrepreneurs and through business ‘incubator’ and ‘accelerator’ programs dedicated to helping ed-tech startups go to scale. ClassDojo was developed as a working product through the Imagine K12 accelerator program for education technology startups in Silicon Valley. When ClassDojo emerged from its beta phase in 2013, it announced that it had secured a further $1.6 million in investment from Silicon Valley venture capital sources. It raised another $21 million in venture funding in spring 2016. Its investors include over 20 venture capital companies and entrepreneurial individuals, including Tim Brady from Imagine K12 (now merged with Y Combinator, a leading Silicon Valley startup accelerator), General Catalyst Partners, GSV Capital and Learn Capital, ‘a venture capital firm focused exclusively on funding entrepreneurs with a vision for better and smarter learning.’ Learn Capital has invested in a large number of ed-tech products in recent years and is a key catalyst of the sector’s growth; its biggest limited partner is Pearson, the world’s biggest edu-business, which links ClassDojo firmly into the global ed-tech market. Many of ClassDojo’s investors also sit on its board.

Investment in ClassDojo has followed the standard model for startup funding in Silicon Valley. It first received seed funding from Imagine K12 and others, before securing Series A investment in 2013 and Series B in 2016. While seed funding refers to financial support for startup ideas, Series A funding is used to optimize a product and secure its user base, and Series B is about funding the business development, technology, support, and other people required for taking a business beyond its development stage. Sometime after 2016, ClassDojo will look to scale fast and wide through Series C funding—investment at this stage can reach hundreds of millions of dollars.

The ClassDojo success story for classroom practitioners and school leaders is therefore reflected and enabled by its success as a desirable product of venture capital funding, all of it framed by a buoyant marketplace of ed-tech development and finance. This marketplace is also itself framed and supported by specific kinds of discourses of technological disruption and solutionism. Many Silicon Valley companies and entrepreneurs have latched on to the education sector in recent years, seeing it in terms of problems that might be solved through technological developments and applications. Greg Ferenstein has noted that many Silicon Valley startup founders and their investors ‘believe that the solution to nearly every problem is more innovation, conversation or education,’ and therefore ‘believe in massive investments in education because they see it as a panacea for nearly all problems in society.’ The marketplace in which ClassDojo is located, therefore, is framed by a discourse that emphasizes the importance of fixing education systems and institutions in order to make them into effective mechanisms for the production of innovative problem-solving people.

Expert knowledge & discourse
As already noted above in relation to ClassDojo’s connections to education policy agendas, an emerging educational discourse is that of personal qualities and character education. ‘Education goes beyond just a test score to developing who the student is as a person—including all the character strengths like curiosity, creativity, teamwork and persistence,’ its co-founder and chief executive Sam Chaudhary has said. ‘There’s so much research showing that if you focus on building students’ character and persistence early on, that creates a 3 to 5 times multiplier on education results, graduation rates, health outcomes. It’s pretty intuitive. We shouldn’t just reduce people to how much content they know; we have to develop them as individuals.’

Underpinning the policy shift to character development, in which ClassDojo plays a small bit-part, are particular forms of expertise and disciplinary knowledge. The forms of expertise to which ClassDojo is attached are those of the psychological sciences, neuroscience and the behavioural sciences, particularly as they have been translated into the discourse of character education, grit, resilience and so on. One of the key voices in this emerging discourse is Paul Tough, author of a book about educating children with ‘grit,’ who has mapped out some of the networks of psychological, neuroscientific and economics experts contributing their knowledge and understandings to this field, including names such as Angela Duckworth and Carol Dweck.

Duckworth and Dweck are both directly cited by ClassDojo’s founders as key influences, alongside other ‘thought leaders’ such as James Heckman, and Doug Lemov. Heckman is a Nobel prize-winning economist noted for his work on building character. Lemov is a former free-market advocate of the charter schools movement and author of the popular Teach Like a Champion. Duckworth has her own named psychological lab where she researches ‘personal qualities’ such as ‘grit’ and ‘self-control’ as dimensions of human character. The relationship between ClassDojo and Carol Dweck’s concept of ‘growth mindsets’ is the most pronounced. In January 2016, ClassDojo announced a partnership with the Project for Education Research That Scales (PERTS), an applied research center at Stanford University led by Dweck that has become the intellectual home of the theory of growth mindsets.

Dweck has argued that teachers can ‘engender a growth mind-set in children by praising them for their persistence or strategies (rather than for their intelligence), by telling success stories that emphasize hard work and love of learning, and by teaching them about the brain as a learning machine.’ Notably, Dweck’s PERTS lab itself has a close relationship with Silicon Valley, where the growth mindsets concept has been popularized as part of a recent trend in behavior-change training programs designed to enable Valley workers to ‘fix personal problems.’ Dweck herself has presented the concept at Google, and other PERTS staff have advisory roles in Silicon Valley companies. The growth mindset concept is, therefore, closely aligned with the wider governmental behaviour change agenda associated with behavioural economics. Governments have long sought to use psychological and behavioural insights into citizens’ behaviours as the basis for designing policies and services intended to modify their future behaviours. ClassDojo seeks to accomplish this goal within schools by nudging children to change their behaviours at exactly the same time that schools are being encouraged to measure students’ non-cognitive social-emotional skills.

The partnership between ClassDojo and PERTS takes the form of a series of short animations in the ‘Big Ideas’ section of the ClassDojo website that help explain the growth mindsets idea to teachers and learners themselves. They present the brain as a malleable ‘muscle’ that can constantly grow and adapt as it is put to the task of addressing challenging problems. The presentation of the brain as a muscle in ClassDojo is part of the popularization of recent neuroscience concepts of ‘neuroplasticity,’ in which the brain is seen as constantly adapting to the social environment. Rather than being seen as a structurally static organ, the brain has been reconceived as dynamic, with new neural pathways constantly forming through adaptation to environmental stimuli. The videos are essentially high-production updates of instructional resources previously developed by Dweck and disseminated through her Mindset Works spin-out company. ClassDojo approached Dweck about adapting these materials, and the videos were produced by ClassDojo with input from PERTS. The ClassDojo website claims that ’15 million students are now building a growth mindset’–a figure presumably based on web analytics of the number of schools in which the videos have been viewed–while at the time of writing in September 2016 the ClassDojo Facebook page was promoting ‘Growth Mindset Month.’

ClassDojo is increasingly aligned with psychological and behavioral norms associated with growth mindsets, both by teaching children about growth mindsets through its Big Ideas videos and, through the app, by nudging children to conduct themselves in ways appropriate to the development of such a growth-oriented character. In this sense, ClassDojo is perfectly aligned with the controversial recent federal law which allows states to measure the performance of schools on the basis of ‘non-academic’ measures, such as students’ non-cognitive social-emotional skills, personal qualities, and growth mindsets. This governmental agenda sees children themselves as a problem to be fixed through schooling. Its logic is that if children’s non-cognitive personal qualities, such as character, mindset and grit, can be nudged and configured to the new measurable norm, then many of the problems facing contemporary schools will be solved.

The close relationship between ClassDojo, psychological expertise and government policy is indicative of the extent to which the ‘psy-sciences’ are involved in establishing the norms by which children are measured and governed in schools—a relationship which is by no means new, as Nikolas Rose has shown, but is now rapidly being accelerated by psy-based educational technologies such as ClassDojo. A science of mental measurement infuses ClassDojo, as operationalized by its behavioural points system, but it is also dedicated to an applied science of mental modification, involved in the current pursuit of the development of children as characters with grit and growth mindsets. By changing the language of learning to that of growth mindsets and other personal qualities, ClassDojo and the forms of expertise with which it is associated are changing the ways in which children may be understood and acted upon in the name of personal improvement and optimization.

^^^
ClassDojo is prototypical of how education is being reshaped in a ‘platform society.’ This sociotechnical survey of the ClassDojo assemblage provides some sense of its messy complexity as an emerging public sphere platform that has attained substantial success and popularity in education. Approached as a sociotechnical assemblage, ClassDojo is simultaneously a technical platform that serves a variety of practical, pedagogical and social functions; an organizational mosaic of engineers, marketers, product managers and other third party providers and partners; the subject of a wider regulatory environment and also a bit-part actor in new policy networks; the serious object for financial investment in the ed-tech marketplace; and a mediator of diverse expert psychological, neuroscientific and behavioural scientific knowledges and discourses pertaining to contemporary schooling and learning.

Like any digital assemblage, ClassDojo is mutating and evolving in response to the various elements that co-constitute it. As policy discourse shifts, ClassDojo follows suit–as its embrace of growth mindsets and its positioning in relation to policy discourses of character and positive behaviour support demonstrate. It is benefiting financially from a currently optimistic ed-tech marketplace, which is itself now being supported politically via the Every Student Succeeds Act. Its engineering blog also demonstrates how the technical platform of ClassDojo is changing as new code and algorithms become available, while its privacy policies are constantly being updated as data privacy regulation pertaining to children becomes an increasing priority and concern–as demonstrated by its response to a critical New York Times article in 2014. ClassDojo is not being ‘scaled up’ in a simple and linear manner, but messily and contingently, through a relational interweaving of human actions and nonhuman technologies, materials, policies, and technical standards.

Given its rapid proliferation globally into the practices of over 3 million teachers and the classroom experiences of over 35 million children in 180 countries, ClassDojo can accurately be described as a public sphere platform that is interfering in how teaching and learning take place. It is doing so according to psychological forms of expertise and governmental priorities, supported by financial instruments and organizations, and is being enacted through a technical infrastructure of devices and platforms and a human infrastructure of entrepreneurs, engineers, managers, and other experts, as well as the users who incorporate it into their own practices and extend it through the creation of user-generated content and materials. As it continues to scale and mutate, it deserves to be the focus of much further in-depth analysis. This work-in-progress has surveyed ClassDojo to point to possible future lines of inquiry into the reshaping of education in a platform society.

Images from ClassDojo media assets

Super-fast education policy fantasies

Ben Williamson

Image: data server, by CSCW

In recent years the pace of education policy has begun to pick up speed. As new kinds of policy influencers such as international organizations, businesses, consultancies and think tanks have entered into educational debates and decision-making processes, the production of evidence and data to support policy development has become more spatially distributed across sectors and institutions, and invested with more temporal urgency too. The increasing availability of digital data that can be generated in real time is now catalysing dreams of an even greater acceleration in policy analysis, decision-making and action. A fantasy of real-time policy action is being ushered into material existence, particularly through the advocacy of the global edu-business Pearson and the international organizations OECD (Organisation for Economic Co-operation and Development) and WEF (World Economic Forum). At the same time, the variety of digital data available about aspects of education means that these policy influencers are focusing attention on the possible measurement of previously unrecorded activities and processes.

Fast policy
Education policy processes are undergoing a transformation. A spatial redistribution of policy processes is underway whereby government departments are becoming parts of ‘policy networks’ that also include consultants, think tanks, policy labs, businesses, and international non-governmental organizations.

In their recent book Fast Policy, policy geographers Jamie Peck and Nik Theodore argue that:

The modern policymaking process may still be focused on centers of political authority, but networks of policy advocacy and activism now exhibit a precociously transnational reach; policy decisions made in one jurisdiction increasingly echo and influence those made elsewhere; and global policy ‘models’ often exert normative power across significant distances. Today, sources, channels, and sites of policy advice encompass sprawling networks of human and nonhuman actors/actants, including consultants, web sites, practitioner communities, norm-setting models, conferences, guru performances, evaluation scientists, think tanks, blogs, global policy institutes, and best-practice peddlers, not to mention the more ‘hierarchical’ influence of multilateral agencies, international development funds, powerful trading partners, and occupying powers.

These policy networks sometimes do the job of the state through outsourced contracts, commissioned evidence-collection and analysis, and the production of policy consultancy for government. They often also act as channels for the production of policy influence, bringing new agendas, new possibilities, and new solutions to perceived problems into the view of national government departments and policymakers. Policy is, therefore, becoming more socially and spatially distributed across varied sites, across public, private and third sectors, and increasingly involves the hybridization of methods drawn from all the actors involved in it, particularly in relation to the production and circulation of evidence that might support a change in policy.

The socially and spatially networked nature of the contemporary education policy environment is leading to a temporal quickening in the production and communication of evidence. In the term ‘fast policy’, Peck and Theodore describe a new condition of accelerated policy production, circulation and translation that is characterized not just by its velocity but also ‘by the intensified and instantaneous connectivity of sites, channels, arenas, and nodes of policy development, evolution, and reproduction.’ Fast policy refers to the increasing porosity between policymaking locales; the transnationalization of policy discourses and communities; global deference to models of ‘what works’ and ‘best practices’; compressed R&D time in policy design and roll-out; new shared policy experimentality and evaluation practices; and the expansion of a ‘soft infrastructure’ of expert conferences, resource banks, learning networks, case-study manuals, and web-based materials, populated by intermediaries, advocates, and experts.

Fast policy is becoming a feature of education policy production and circulation. As Steven Lewis and Anna Hogan have argued,

actors work within complex policy networks to produce and promote evidence tailored to policymakers, meaning they orchestrate rather than produce research knowledge in order to influence policy production. These actors tend to construct simplified and definitive solutions of best practice, and their reports are generally short, easy-to-read and glossy productions.

As a consequence, they claim, the desire for policy solutions and new forms of evidence and expertise is ultimately leading to the ‘speeding up’ of policy:

This ‘speeding up’ of policy, or ‘fast policy’ … is characterized not only by the codification of best practice and ‘ideas that work’ but also, significantly, by the increasing rate and reach of such policy diffusion, from sites of policy development and innovation to local sites of policy uptake and, if not adoption, translation.

In other words, policies are becoming more fast-moving, both in their production and in their translation into action, as well as more transnational in uptake and implementation, more focused on quick-fix ‘best practice’ or ‘what works’ solutions, and more pacey and attractive to read thanks to being packaged up as short glossy handbooks and reports, websites and interactive data visualizations.

For Lewis and Hogan, the development of fast policy in education is exemplified by the work of the education business Pearson and the international organization OECD. In their specific example of fast policy in action, they observe how ‘so-called best practices travel from their point of origin (to the extent that this can ever be definitively fixed) at the OECD to their uptake and development by an international edu-business (Pearson),’ and how they are from there translated into more ‘localized’ concerns with improving state-level schooling performance within national systems. In particular they show how OECD data collected as part of the global PISA testing program have been translated into Pearson’s Learning Curve Databank, itself a public data resource intended to inform ‘evidence-based’ educational policymaking around the world, and from there mobilized in the specification of local policy problems and solutions. The concern with evidence-based policymaking, they show, involves the use of best practice models and learning from ‘examples’:

We see the dominance of fast policy approaches, and hence their broad appeal across policy domains such as schooling, as directly emanating from the promotion of decontextualised best practices that can, so it is alleged, transcend the specific requirements of local contexts. This is despite ‘evidence-based’ policymaking being an inherently political and contingent process, insofar as it is always mediated by judgements, priorities and professional values specific to the people, moments and places in which such policies are to be enacted.

Additionally, in the fast policy approaches that are developing in education through the work of OECD and Pearson, quantitative data have become especially significant for evidence-based practices, as measurement, metrics, ranking and comparison all help to create new continuities and flows that can overcome physical distance in an increasingly interconnected and accelerating digital world. Numbers and examples form the evidential flow of fast policy, enabling complex social, political and economic problems to be rendered in easy-to-understand tables, diagrams and graphs, and their solutions to be narrated and marketed through exemplar best practice case studies.

Real-time policy action
Pearson and OECD are additionally seeking to develop new computer-based data analytics techniques that can be used to generate evidence to inform education policy. Pearson, for example, has proposed a ‘renaissance in assessment’ that will involve a shift to new computer-based assessment systems for the continuous tracking and monitoring of ‘streaming data’ through real-time analytics, rather than the collection of data through discrete temporal assessment events. Its report promotes using ‘intelligent software and a range of devices that facilitate unobtrusive classroom data collection in real time’ to ‘track learning and teaching at the individual student and lesson level every day in order to personalise and thus optimise learning.’ Much of the data analytic and adaptive technology required for this vision is in development at Pearson’s own Center for Data Analytics and Adaptive Learning, its in-house centre for educational big data research and development.

Moreover, the authors of the renaissance in assessment report argue for a revolution in education policy, shifting the focus from the governance of education through the institution of the school to ‘the student as the focus of educational policy and concerted attention to personalising learning.’ The report clearly represents an emerging educational imaginary where policy is to concentrate on the real-time tracking of the individual rather than the planned and sequenced longitudinal measurement of the institution or system. Along these lines, its authors note that the OECD itself is moving towards new forms of machine learning in its international assessments technologies, with a proposal to assess collaborative problem solving through ‘a fully computer-based assessment in which a student interacts with a simulated collaborator or “avatar” in order to solve a complex problem.’ Such systems, for Pearson and OECD, can speed up the process of providing feedback to students, but are, importantly, also adaptive, meaning that the content adapts to the progress of the student in real time.

The potential promise of such computer-based adaptive systems, for the experts of Pearson and OECD, is a further acceleration in policy development to real-time speed. Instead of policy based on the long time-scales of temporally discrete assessment events, data analytics platforms appear to make it possible to perform a constant automated analysis of the digital timestream of student activities and tasks. Such systems can then adapt to the student in ways that are synchronized with their learning processes. This process appears to make it feasible to squeeze out conventional standardized assessments and tests, with their association with bureaucratic processes of data collection by governmental centres of political authority, and replace them with computer-adaptive systems.
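To make the imagined shift in tempo concrete, the sketch below is illustrative Python only, with invented event data, an invented update rule and invented thresholds rather than anything drawn from an actual Pearson or OECD system. It shows the basic shape of such a loop: each new activity in a student’s timestream updates a running estimate and immediately changes what the system does next.

```python
# Illustrative sketch only: a toy adaptive loop over a student's activity
# 'timestream'. The event data, update rule and thresholds are invented,
# not taken from any actual Pearson or OECD system.

from dataclasses import dataclass

@dataclass
class Event:
    skill: str
    correct: bool

def update_mastery(mastery: float, correct: bool, rate: float = 0.2) -> float:
    """Nudge a running mastery estimate towards the latest observation."""
    target = 1.0 if correct else 0.0
    return mastery + rate * (target - mastery)

def next_task(mastery: float) -> str:
    """Adapt the next item to the current estimate (toy thresholds)."""
    if mastery < 0.4:
        return "remedial practice"
    if mastery < 0.8:
        return "standard practice"
    return "extension task"

timestream = [Event("fractions", True), Event("fractions", False),
              Event("fractions", True), Event("fractions", True)]

mastery = 0.5  # neutral starting estimate
for event in timestream:
    mastery = update_mastery(mastery, event.correct)
    print(f"{event.skill}: mastery={mastery:.2f} -> {next_task(mastery)}")
```

The point of the toy example is the loop itself: analysis, judgement and adjustment all happen inside each iteration, at machine speed, rather than at the end of a scheduled assessment or reporting cycle.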

These proposals imagine a super-fast policy process that is at least partly automated, and certainly accelerated beyond the temporal threshold of human capacities of data analysis and expert professional judgment. Heather Roberts-Mahoney and colleagues have analysed US documents advocating the use of real-time data analytics for personalized learning, and conclude that they transform teachers into ‘data collectors’ who ‘no longer have to make pedagogical decisions, but rather manage the technology that will make instructional decisions for them,’ since ‘curriculum decisions, as well as instructional practices, are reduced to algorithms and determined by adaptive computer-based systems that create “personalized learning,” thereby allowing decision-making to take place externally to the classroom.’ The role of policymakers is changed by such systems too, turning them into awarders of contracts to data processing companies and technological vendors of adaptive personalized learning products. It is through such technical platforms and the instructions coded into them that decisions about intervention will be made at the individual level, rather than through bureaucratic decision-making at national or state system scale.

The use of real-time systems in education is therefore part of ‘a reconfiguring of intensities, or “speeds”, of institutional life’ as it is ‘now “plugged into” information networks,’ as Greg Thompson has argued. It makes the collection, analysis and feedback from student data into a synchronous loop that functions at extreme velocity through systems that are hosted by organizations external to the school but are also networked into the pedagogic routines of the adaptive, personalized classroom. In short, real-time data-driven systems are ideal fast policy technologies.

Affective policy
Importantly, these fast policy influencers are also pursuing the possibility of measuring non-academic aspects of learning such as social and emotional skills. The OECD has launched its Education and Social Progress project to develop specific measurement instruments for ‘social and emotional skills such as perseverance, resilience and agreeableness,’ ‘using the evidence collected, for policy-makers, school administrators, practitioners and parents to help children achieve their full potential, improve their life prospects and contribute to societal progress.’

The World Economic Forum, another major international organization that works in policy networks to influence education policy, has similarly produced a report on fostering social and emotional learning through technology. It promotes the development of biosensor technologies, wearable devices and other applications that can be used to ‘provide a minute-by-minute record of someone’s emotional state’ and ‘to help students manage their emotions.’ It even advocates educational applications of ‘affective computing’:

Affective computing comprises an emerging set of innovations that allow systems to recognize, interpret and simulate human emotions. While current applications mainly focus on capturing and analysing emotional reactions to improve the efficiency and effectiveness of product or media testing, this technology holds great promise for developing social and emotional skills such as greater empathy, improved self-awareness and stronger relationships.

The affective analytics of education being proposed by both the OECD and WEF make the emotional life of the school child into the subject of fast policy experimentation. They are seeking to synchronize children’s emotional state, measured as a ‘minute-by-minute record,’ with societal progress, rendering students’ emotions as real-time digital timestreams of data that can be monitored and then used as evidence in the evaluation of various practices and policies. Timestreams of data about how students feel are being positioned by policy influencers such as the OECD and WEF as a new form of evidence at a time of accelerating policy experimentation. These proposals are making sentiment analysis into a key fast policy technology, enabling policy interventions and associated practices to be evaluated in terms of the feelings they generate–a way of measuring not just the effects of policy action but its production of affect too.
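As a purely illustrative sketch of what a ‘minute-by-minute record’ might amount to in practice, the Python fragment below (with invented emotion labels, a made-up scoring scheme and fictional data, not taken from any OECD or WEF instrument) turns a per-minute stream of affective labels into the kind of compact summary that could then circulate as policy ‘evidence.’

```python
# Illustrative sketch only: turning an invented per-minute stream of emotion
# labels into the kind of compact summary that could circulate as 'evidence'.
# The labels, scoring scheme and data are hypothetical.

from statistics import mean

EMOTION_SCORE = {"engaged": 1.0, "neutral": 0.0, "frustrated": -1.0}

# One lesson's minute-by-minute record for a single (fictional) student.
minute_stream = ["engaged", "engaged", "neutral", "frustrated",
                 "frustrated", "neutral", "engaged"]

scores = [EMOTION_SCORE[label] for label in minute_stream]
summary = {
    "minutes_recorded": len(scores),
    "mean_affect": round(mean(scores), 2),
    "share_frustrated": round(minute_stream.count("frustrated") / len(scores), 2),
}
print(summary)  # {'minutes_recorded': 7, 'mean_affect': 0.14, 'share_frustrated': 0.29}
```

However crude, it is in a form like this, aggregated, comparable and rankable, that feelings become evidence.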

Following super-fast policy prototypes
Writing about fast policy in an earlier paper prefacing their recent book, Jamie Peck and Nik Theodore have described ‘policy prototypes that are moving through mutating policy networks’ and which connect ‘distant policy-making sites in complex webs of experimentation-emulation-evolution.’ They describe the methodological challenges of ‘following the policy’ in the context of spatially distributed policy networks and temporally accelerated modes of policy development where specific policies are in a constant state of movement, translation and transformation. For them:

Policy designs, technologies, and frames are … complex and evolving social constructions rather than … concretely fixed objects. In fact, these are very often the means and the media through which relations between distant policy-making sites are actively made and remade.

A research focus on the kind of super-fast policy prototypes being developed by Pearson, the WEF and the OECD would likewise need to focus, methodologically, on the technologies and the designs of computer-based approaches as socially created devices. It would need to follow these policy prototypes through processes of experimentation, emulation and mutation, as they are diversely developed, taken up or resisted, and modified and amended through interaction with other organizations, actors, discourses and agendas. As with Peck and Theodore’s focus on fast policy, researching the super-fast policy prototypes proposed for education by the OECD, WEF and Pearson would investigate the ‘social life’ of the production of new technologies of computer-adaptive assessment, personalized learning, affective computing and so on, but also attend to their social productivity as they change the ways in which education systems, institutions, and the individuals within them perform.


Performing data

‘Performance information’ in the Scottish Government national improvement plan for education

Ben Williamson

Image: ScotGov plan

At the end of June 2016 the Scottish Government published a major national delivery plan for improving Scottish education over the next few years. Drafted in response to a recent independent review of Scottish education carried out by the OECD, the delivery plan is part of a National Improvement Framework with ambitious plans to raise attainment and achieve equity.

It is the delivery plan’s relentless focus on the use of performance measurement, metrics and evidence gathering to drive forward these improvements that is especially arresting. In a striking line from the introduction it is stated that:

As the OECD review highlighted, current … arrangements do not provide sufficiently robust information across the system to support policy and improvement. We must move from a culture of judgement to a system of judgement.

A ‘system of judgment’: right from the start, it is clear that the delivery plan is based on the understanding—imported from the OECD via its recommendation that new ‘metrics’ be devised to measure Scottish education—that data can be used to drive forward performance improvement and for the purposes of disciplining under-performance.

Productive measurement
In a series of articles, the sociologist David Beer has been writing about the socially productive power of metrics in a variety of sectors and institutions of society:

We often think of measurement as in some way capturing the properties of the world we live in. This might be the case, but we can also suggest that the way that we are measured produces certain outcomes. We adapt to the systems of measurement that we are living within.

Metrics and measurements are not simply descriptive of the world, then, but play a part in reshaping it in particular ways, affecting how people behave, how they understand things, and how they act differently as a result. As Beer elaborates:

The measurements themselves matter, but it is knowing or expecting how we will be measured that is really powerful. Systems of measurement then have productive powers in our lives, both in terms of how we respond to them and how they inform the judgments and decisions that impact upon us.

Performance measurement techniques, of the kind to be implemented through the Scottish Government’s proposed ‘system of judgement’, can similarly be understood as productive measures that will be used to attach evaluative numbers to practices and institutions in ways that are intended to change how the system performs overall. This is likely to affect how school teachers, leaders, and maybe even pupils themselves and their parents act and perform their roles, as they expect to be measured, judged, and acted upon as a result.

‘Performance information’ is one of the key ‘drivers of improvement’ listed in the plan, and clearly shows how a range of ‘measures’ are to be collected:

We will pull together all the information and data we need to support improvement. Evidence suggests … we must ensure we build a sound understanding of the range of factors that contribute to a successful education system. This is supported by international evidence which confirms that there is no specific measure that will provide a picture of performance. We want to use a balanced range of measures to evaluate Scottish education and take action to improve further.

Scanning through the plan and the improvement framework, it becomes clear just how extensive this new focus on performance measurement will become. The plan emphasizes:

  • the use of standardized assessment to gather attainment data
  • the gathering of diverse data about the academic progress and well-being of pupils at all stages
  • pre-inspection questionnaires, school inspection and local authority self-evaluation reports
  • the production of key performance indicators on employability skills
  • greater performance measurement of schools
  • new standards and evaluation frameworks for schools
  • information on teacher induction, teacher views, and opportunities for professional learning
  • evidence on the impact of parents in helping schools to improve
  • regular publication of individual school data
  • the use of visual data dashboards to make school data transparent
  • training for ‘data literacy’ among teachers
  • comparison with international evidence

All of this is in addition to system-wide national benchmarking, international comparison, the definition and monitoring of standards, and quality assurance, and it is all to be overseen by an international council of expert business and reform advisers who will guide and evaluate its implementation.

Performative numbers
The delivery plan makes for quite a cascade of new and productive measures–an ‘avalanche of numbers’–though Scottish schools are unlikely to be terribly surprised by the emphasis in the delivery plan on performance information, targets, performance indicators and timelines. (In England the emphasis on performance data has been even more pronounced, with Paul Morris claiming ‘the purposes of schooling and what it means to be educated are effectively being redefined by the metrics by which we evaluate schools and pupils.’)

Since 2014, all Scottish schools have been encouraged by the Scottish Government to make use of Insight, an online benchmarking tool ‘designed for use by secondary schools and local authorities to identify success and areas where improvements can be made, with the ultimate aim of making a positive difference for pupils’. It provides data on ‘four national measures, including post-school destinations and attainment in literacy and numeracy as well as information on a number of local measures designed to help users take a closer look at their curriculum, subjects and courses’. It features data dashboards that allow schools to view an overall picture of the data from their school and compare it with the national measures presented on the national dashboard.

A notable feature of Insight is the ‘Virtual Comparator’ which allows users to see how the performance of their pupils compares to a similar group of pupils from across Scotland. The Virtual Comparator feature takes the characteristics of pupils in a school and matches them to similar pupils from across Scotland to create a ‘virtual school’ against which a ‘real’ school may benchmark its progress.
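The public documentation does not set out the matching method in technical detail, but a minimal sketch (in illustrative Python, with invented pupil characteristics and attainment figures) conveys the underlying logic of benchmarking a ‘real’ school against a ‘virtual’ one assembled from similar pupils elsewhere.

```python
# Illustrative sketch only: benchmarking a school against a 'virtual school'
# assembled from similar pupils elsewhere. Pupil records and the matching
# characteristics are invented; this is not the Insight algorithm itself.

def matches(pupil, other):
    """Treat pupils as 'similar' if key characteristics agree (toy rule)."""
    keys = ("deprivation_band", "additional_support", "stage")
    return all(pupil[k] == other[k] for k in keys)

def virtual_comparator(school_pupils, national_pupils, measure="attainment"):
    """Average the measure over nationally matched pupils, pupil by pupil."""
    matched_averages = []
    for pupil in school_pupils:
        similar = [p[measure] for p in national_pupils if matches(pupil, p)]
        if similar:
            matched_averages.append(sum(similar) / len(similar))
    return sum(matched_averages) / len(matched_averages)

school = [{"deprivation_band": 2, "additional_support": False, "stage": "S4",
           "attainment": 55}]
national = [
    {"deprivation_band": 2, "additional_support": False, "stage": "S4", "attainment": 58},
    {"deprivation_band": 2, "additional_support": False, "stage": "S4", "attainment": 66},
    {"deprivation_band": 5, "additional_support": True, "stage": "S4", "attainment": 71},
]

real = sum(p["attainment"] for p in school) / len(school)
print(f"real school: {real:.1f}, virtual comparator: {virtual_comparator(school, national):.1f}")
# real school: 55.0, virtual comparator: 62.0
```

Even in this toy version, the productive power of measurement that Beer describes is visible: the ‘virtual school’ against which staff will be judged is itself an artefact of which pupil characteristics are chosen for matching.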

The relentless focus by the Scottish Government on performance information, inspection, comparison, measurement and evidence demonstrates how education systems, organizations and individuals are now subject to increasing demands to produce data.

As the concept of ‘productive measures’ reminds us, though, performance measurement is not simply descriptive. It also brings the matter it describes into being. As captured in the term ‘performativity,’ education systems and institutions, and even individuals themselves, are changing their practices to ensure the best possible measures of performance. Closely linked to this is the notion of accountability, that is, the production of evidence that proves the effectiveness—in terms of measurable results—of whatever has been performed in the name of improvement and enhancement. As Stephen Ball phrases it:

Performativity is … a regime of accountability that employs judgements, comparisons and displays as a means of control, attrition and change. The performance of individuals and organizations serve as measures of productivity or output … [and] stand for, encapsulate or represent the worth, quality or value of an individual or organization within a field of judgement.

In other words, performativity makes the question of what counts as worthwhile activity in education into the question of what can be counted and of what account can be given for it. It reorients institutions and individuals to focus on those things that can be counted and accounted for with evidence of their delivery, results and positive outcomes, and de-emphasises any activities that cannot be easily and effectively measured.

In practical terms, performativity depends on databases, audits, inspections, reviews, reports, and the regular publication of results, and tends to prioritize the practices and judgements of accountants, lawyers and managers who subject practitioners to constant processes of target-setting, measurement, comparison and evaluation. The appointment of an international council of experts to oversee the collection and analysis of all the performance information required by the improvement and delivery plans is ample illustration of how Scottish education will be subject to a system of expert techniques and judgement.

Political analytics
It is hard, then, to see the Scottish Government delivery plan as anything other than a series of policy instruments that via specific data-driven techniques and particular technical tools will reinforce performativity and accountability, all under the aspiration of closing attainment gaps and achieving equity.

Although no explicit mention is made of the technologies required to enact this system of judgement, it is clear that a complex data infrastructure of technologies and technical experts will also be needed to collect, store, clean, filter, analyse, visualize and communicate the vast masses of performance information. Insight and other dashboards already employed in Scottish education are existing products that doubtless anticipate a much more system-wide digital datafication of the sector. Data processing technologies are making the performance of education systems and institutions into enumerated timestreams of data by which they might be measured, evaluated and assessed, held up to both political and public scrutiny, and then made to account for their actions and decisions, and either rewarded or disciplined accordingly. A new kind of political analytics that prioritizes digitized forms of data collection and analysis is likely to play a powerful role in the governance of Scottish education in coming years.

Data technologies of various kinds are the enablers of performativity and accountability, and translate the numerical logics of the technologies into the material and practical realities of professional life. As a data-driven ‘system of judgement’, Scotland’s delivery plan for education will, in other words, usher in more and more ‘productive measures’ into Scottish education, reconfiguring it and those who work and learn in it in ways that will need to be studied closely for many years to come.

 


Critical questions for big data in education

Ben Williamson

Image: data center (https://flic.kr/p/bnZvFX)

Big data has arrived in education. Educational data science, learning analytics, computer adaptive testing, assessment analytics, educational data mining, adaptive learning platforms, new cognitive systems for learning and even educational applications based on artificial intelligence are fast becoming parts of the educational landscape, in schools, colleges and universities, as well as in the networked spaces of online courses.

As part of a recent conversation about the Shadow of the Smart Machine work on machine learning algorithms being undertaken by Nesta, I was asked what I thought were some of the most critical questions about big data and machine learning in education. This reminded me of the highly influential paper ‘Critical questions for big data’ by danah boyd and Kate Crawford, in which they ‘ask critical questions about what all this data means, who gets access to what data, how data analysis is deployed, and to what ends.’

With that in mind, here are some preliminary (work-in-progress) critical questions to ask about big data in education.

How is ‘big data’ being conceptualized in relation to education?
Large-scale data collection has been at the centre of the statistical measurement, comparison and evaluation of the performance of education systems, policies, institutions, staff and students since the mid-1800s. Does big data constitute a novel way of enumerating education? The sociologist David Beer has suggested we need to think about the ways in which big data as both a concept and a material phenomenon has appeared as part of a history of statistical thinking, and in relation to the rise of the data analytics industry—he suggests social science still needs to understand ‘the concept itself, where it came from, how it is used, what it is used for, how it lends authority, validates, justifies, and makes promises.’ Within education specifically, how is big data being conceptualized, thought about, and used to animate specific kinds of projects and technical developments? Where did it come from–data science, computer science–and who are its promoters and sponsors in education? What promises are attached to the concept of big data as it is discussed within the domain of education? We might wish to think about a ‘big data imaginary’ in education—a certain way of thinking about, envisaging and visioning the future of education through the conceptual lens of big data—that is now animating specific technical projects, becoming embedded in the material reality of educational spaces and enacted in practice.

What theories of learning underpin big data-driven educational technologies?
Big data-driven platforms such as learning analytics aim to ‘optimize learning’ but is it always clear what is meant by ‘learning’ by the organizations and actors that build, promote and evaluate them? Much of the emerging field of ‘educational data science’—which encompasses much educational data mining, learning analytics and adaptive learning software R&D—is informed by conceptualizations of learning that are rooted in cognitive science and cognitive neuroscience. These disciplines tend to focus on learning as an ‘information-processing’ event—to treat learning as something that can be monitored and optimized like a computer program—and pay less attention to the social, cultural, political and economic factors that structure education and individuals’ experiences of learning.

Given the statistical basis of big data, it’s perhaps also not surprising that many actors involved in educational big data analyses are deeply informed by the disciplinary practices and assumptions of psychometrics and its techniques of psychological measurement of knowledge, skills, personality and so on. Aspects of behaviourist theories of learning even persist in behaviour management technologies that are used to collect data on students’ observed behaviours and distribute rewards to reinforce desirable conduct. There is an emerging tension between the strongly psychological, neuroscientific and computational ways of conceptualizing and theorizing learning that dominate big data development in education, and more social scientific critiques of the limitations of such theories.

How are machine learning systems used in education being ‘trained’ and ‘taught’?
The machine learning algorithms that underpin much educational data mining, learning analytics and adaptive learning platforms need to be trained, and constantly tweaked, adjusted and optimized, to ensure the accuracy of their results–such as predictions about future events. This requires ‘training data,’ a corpus of historical data with which the algorithms can be ‘taught’ before being used to find patterns in data ‘in the wild.’ Who selects the training data? How do we know if it is appropriate, reliable and accurate? What if the historical data is in some ways biased, incomplete or inaccurate? Does this risk generating ‘statistical discrimination’ of the sort produced by ‘predictive policing,’ which has in some cases been found to disproportionately predict that black men will commit crime? Educational research has long asked questions about the selection of the knowledge included in school curricula to be taught to students—we may now need to ask about the selection of the data included in the training corpus of machine learning platforms, as these data could be consequential for learners’ subsequent educational experience.
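A deliberately simplified sketch can illustrate the underlying concern. In the illustrative Python below (with an invented ‘historical’ dataset and a toy model standing in for far more sophisticated systems), a predictor trained on records in which one group was flagged as ‘at risk’ far more often than another simply learns and reproduces that skew.

```python
# Illustrative sketch only: a toy 'at risk' predictor trained on an invented
# historical corpus in which pupils in group A were flagged far more often
# than pupils in group B. The 'model' simply learns the historical flag rate
# per group, so the skew in the training data reappears in its predictions.

from collections import defaultdict

# Hypothetical training records: (group, flagged_at_risk_in_the_past)
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 2 + [("B", False)] * 8)

def train(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [times flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {group: flagged / total for group, (flagged, total) in counts.items()}

def predict(model, group, threshold=0.5):
    return model.get(group, 0.0) >= threshold

model = train(history)
print(model)               # {'A': 0.8, 'B': 0.2}
print(predict(model, "A")) # True: pupils in group A inherit the historical flag rate
print(predict(model, "B")) # False
```

Real systems are of course far more elaborate, but the dependence on the selected corpus is the same: whatever patterns the training data contain, whether legitimate or discriminatory, are what the algorithm learns to reproduce ‘in the wild.’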

Moreover, we might need to ask questions about the nature of the ‘learning’ being experienced by machine learning algorithms, particularly as enthusiastic advocates in places like IBM are beginning to propose that advanced machine learning is more ‘natural,’ with ‘human qualities,’ based on computational models of aspects of human brain functioning and cognition. To what extent do such claims appear to conflate understandings of the biological neural networks of the human brain that are mapped by neuroscientists with the artificial neural networks designed by computer scientists? Does this reinforce computational information-processing conceptualizations of learning, and risk addressing young human minds and the ‘learning brain’ as computable devices that can be debugged and rewired?

Who ‘owns’ educational big data?
The sociologist Evelyn Ruppert has asked ‘who owns big data?’, noting that numerous people, technologies, practices and actions are involved in how data are shaped, made and captured. The technical systems for conducting educational big data collection, analysis and knowledge production are expensive to build. Specialist technical staff are required to program and maintain them, to design their algorithms, and to produce their interfaces. Commercial organizations see educational data as a potentially lucrative market, and ‘own’ the systems that are now being used to see, know and make sense of education and learning processes. Many of their systems are proprietary, wrapped in IP and patents which make it impossible for other parties to understand how they collect data, what analyses they conduct, or how robust their big data samples are. Specific commercial and political ambitions may also be animating the development of educational data analytics platforms, particularly those associated with Silicon Valley, where ed-tech funding for data-driven applications is soaring and tech entrepreneurs are rapidly developing data-driven educational software and even new institutions.

In this sense, we need to ask critical questions about how educational big data are made, analysed and circulated within specific social, disciplinary and institutional contexts that often involve powerful actors that possess significant economic capital in the shape of funding and resourcing, cultural capital in terms of the production of new specialist knowledge, and social capital through wider networks of affiliations, partnerships and connections. The question of the ownership of educational big data needs to be located in relation to these forms of capital and the networks where they circulate.

Who can ‘afford’ educational big data?
Not all schools, colleges or universities can necessarily afford to purchase a learning analytics or adaptive software platform—or to partner with platform providers. This risks certain wealthy institutions being able to benefit from real-time insights into learning practices and processes that such analytics afford, while other institutions will remain restricted to the more bureaucratic analysis of temporally discrete assessment events.

Can educational big data provide a real-time alternative to temporally discrete assessment techniques and bureaucratic policymaking?
Policy makers in recent years have depended on large-scale assessment data to help inform decision-making and drive reform—particularly the use of large-scale international comparative data such as the datasets collected by OECD testing instruments. Educational data mining and analytics can provide a real-time stream of data about learners’ progress, as well as automated real-time personalization of learning content appropriate to each individual learner. To some extent this changes the speed and scale of educational change—removing the need for cumbersome assessment and country comparison and distancing the requirement for policy intervention. But it potentially places commercial organizations (such as the global education business Pearson) in a powerful new role in education, with the capacity to predict outcomes and shape educational practices at timescales that government intervention cannot emulate.

Is there algorithmic accountability in educational analytics?
Learning analytics is focused on the optimization of learning, and one of its main claims is the early identification of students at risk of failure. What happens if, despite being enrolled on a learning analytics system that has personalized the learning experience for the individual, that individual still fails? Will the teacher and institution be accountable, or can the machine learning algorithms (and the platform organizations that designed them) be held accountable for the failure? Simon Buckingham Shum has written about the need to address algorithmic accountability in the learning analytics field, and noted that ‘making the algorithms underpinning analytics intelligible’ is one way of at least making them more transparent and less opaque.
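One modest reading of what making the algorithms ‘intelligible’ might mean is sketched below, in illustrative Python with invented features, weights and thresholds rather than any vendor’s actual model: an at-risk score whose weights can be published and whose per-student contributions can be inspected and contested.

```python
# Illustrative sketch only: a transparent at-risk score whose weights can be
# published and whose per-student contributions can be inspected. Features,
# weights and thresholds are invented, not any vendor's actual model.

WEIGHTS = {"late_submissions": 0.5, "vle_logins_per_week": -0.3, "absences": 0.4}
BIAS = -0.5
THRESHOLD = 0.0

def risk_score(student):
    """Return the total score and the contribution of each factor."""
    contributions = {f: WEIGHTS[f] * student[f] for f in WEIGHTS}
    return sum(contributions.values()) + BIAS, contributions

student = {"late_submissions": 3, "vle_logins_per_week": 2, "absences": 1}
score, contributions = risk_score(student)
print(f"flagged at risk: {score > THRESHOLD} (score={score:.1f})")
for factor, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {factor}: {value:+.1f}")
# flagged at risk: True (score=0.8)
#   late_submissions: +1.5
#   vle_logins_per_week: -0.6
#   absences: +0.4
```

Whether such transparency settles the question of accountability, that is, who answers when an intelligible model is simply wrong, remains open.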

Is student data replacing student voice?
Data are sometimes said to ‘speak for themselves,’ but education has a long history of encouraging learners to speak for themselves too. Is the history of pupil voice initiatives being overwritten by the potential of pupil data, which proposes a more reliable, accurate, objective and impartial view of the individual’s learning process unencumbered by personal bias? Or can student data become the basis for a data-dialogic form of student voice, one in which teachers and their students are able to develop meaningful and caring relationships through mutual understanding and discussion of student data?

Do teachers need ‘data literacy’?
Many teachers and school leaders possess little detailed understanding of the data systems that they are using, or required to use. As glossy educational technologies like ClassDojo are taken up enthusiastically by millions of teachers worldwide, might it be useful to ensure that teachers can ask important questions about data ethics, data privacy, data protection, and be able to engage with educational data in an informed way? Despite calls in the US to ensure that data literacy become the focus for teachers’ pre-service training, there appears little sign that the provision of data literacy education for educational practitioners is being developed in the UK.

What ethical frameworks are required for educational big data analysis and data science studies?
The UK government recently published an ethical framework for policymakers to use when planning data science projects. Similar ethical frameworks are needed to guide the design of educational big data platforms and education data science projects.

Some of these questions clearly need further refinement, but together, I think, they make clear the need for more work that critically interrogates big data in education.


Artificial intelligence, cognitive systems and biosocial spaces of education

By Ben Williamson

Image: telephone cable model of corpus callosum by Brewbooks

Recently, new ideas about ‘artificial intelligence’ and ‘cognitive computing systems’ in education have been advanced by major computing and educational businesses. How might these ideas and the technical developments and business ambitions behind them impact on educational institutions such as schools, and on the role of human actors such as teachers and learners, in the near future? More particularly, what understandings of the human teacher and the learner are assumed in the development of such systems, and with what potential effects?

The focus here is on the education business Pearson, which published a report entitled Intelligence Unleashed: An argument for AI in education in February 2016, and the computing company IBM, which launched Personalized Education: from curriculum to career with cognitive systems in May 2016. Pearson’s interest in AI reflects its growing profile as an organization using advanced forms of data analytics to measure educational institutions and practices, while IBM’s report on cognitive systems makes a case for extending its existing R&D around cognitive computing into the education sector.

AI has been the subject of serious concern recently, with warnings from high-profile figures including Stephen Hawking, Bill Gates and Elon Musk, while awareness of cognitive computing has been fuelled by widespread media coverage of Google’s AlphaGo system, which beat one of the world’s leading Go players back in March. Commenting on these events, the philosopher Luciano Floridi has noted that contemporary AI and cognitive computing cannot be characterized in monolithic terms as some kind of ‘ultraintelligence’; instead, they manifest themselves in far more mundane ways through an ‘infosphere’ of ‘ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster’:

The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge. Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analysing their own output as input for the next operations.

Contemporary algorithmic forms of AI that learn from the vast memory-banks of big data do not constitute either an apocalyptic or benevolent future of AI or cognitive systems, but, for Floridi, reflect human ambitions and problems.

So why are companies like Pearson and IBM advancing claims for their benefits in education, and to address which ambitions and problems? Extending from my recent work on both Pearson’s digital methods and IBM’s cognitive systems R&D programs (all part of an effort to map out the emerging field of ‘educational data science’), I suggest these developments can be understood in terms of growing recognition of the connections between computer technologies, social environments, and embodied human experience.

Pearson intelligence
Pearson has been promoting itself as a new source of expertise in educational big data analysis since establishing its Center for Digital Data, Analytics and Adaptive Learning in 2012. Its ambition in educational data analytics is to make sense of the masses of data becoming available as educational activities increasingly occur via digital media, and to use these data and the patterns extracted from them to derive new theories of learning processes, cognitive development, and non-academic social and emotional learning. It has also begun publishing reports under its ‘Open Ideas’ theme, which aims to make its research publicly available. It is under the Open Ideas banner that Pearson has published Intelligence Unleashed (authored by Rose Luckin and Wayne Holmes of the London Knowledge Lab at University College London).

Pearson’s report proposes that artificial intelligence can transform teaching and learning. Its authors state that:

Although some might find the concept of AIEd alienating, the algorithms and models that comprise AIEd form the basis of an essentially human endeavour. AIEd offers the possibility of learning that is more personalised, flexible, inclusive, and engaging. It can provide teachers and learners with the tools that allow us to respond not only to what is being learnt, but also to how it is being learnt, and how the student feels.

Rather than seeking to construct a monolithic AI system, Pearson is proposing that a ‘marketplace’ of thousands of AI components will eventually combine to ‘enable system-level data collation and analysis that help us learn much more about learning itself and how to improve it.’

Underpinning its vision of AIEd is a particular concern with ‘the most significant social challenge that AI has already brought – the steady replacement of jobs and occupations with clever algorithms and robots’:

It is our view that this phenomena provides a new innovation imperative in education, which can be expressed simply: as humans live and work alongside increasingly smart machines, our education systems will need to achieve at levels that none have managed to date.

In other words, in the Pearson view, a marketplace of AI applications will both be able to provide detailed real-time data analytics on education and learning, and also lead to far greater levels of achievement by both individuals and whole education systems. Its vision is of augmented educational systems, spaces and practices where humans and machines work symbiotically.

In technical terms, the AIEd that Pearson envisages relies on a particular form of AI. This is not the sentient AI of sci-fi imaginings, but AI reimagined through the lens of big data and data analytics techniques–the ‘ordinary artefacts’ of machine learning systems. Notably, the report refers to advances in machine learning algorithms, computer modelling, statistics, artificial neural networks and neuroscience, since ‘AI involves computer software that has been programmed to interact with the world in ways normally requiring human intelligence. This means that AI depends both on knowledge about the world, and algorithms to intelligently process that knowledge.’

To achieve this, Pearson’s brand of AIEd requires, importantly, the development of sophisticated computational models. These include models of the learner, models of effective pedagogy, and models of the knowledge domain to be learned, as well as models that represent the social, emotional, and meta-cognitive aspects of learning:

Learner models are ways of representing the interactions that happen between the computer and the learner. The interactions represented in the model (such as the student’s current activities, previous achievements, emotional state, and whether or not they followed feedback) can then be used by the domain and pedagogy components of an AIEd programme to infer the success of the learner (and teacher). The domain and pedagogy models also use this information to determine the next most appropriate interaction (learning materials or learning activities). Importantly, the learner’s activities are continually fed back into the learner model, making the model richer and more complete, and the system ‘smarter’.
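Purely as an illustration of the loop the report describes, the sketch below shows a learner model being enriched by each interaction and a stand-in ‘pedagogy model’ using it to select the next activity. The class, fields and selection rule are invented assumptions for the sake of clarity, not a description of Pearson’s actual software.

```python
# Hypothetical sketch of the learner/pedagogy/domain loop described above.
# Class names, fields and the selection rule are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    """Represents what the system currently 'knows' about one learner."""
    mastery: dict = field(default_factory=dict)   # topic -> estimated mastery (0..1)
    followed_feedback: int = 0
    ignored_feedback: int = 0

    def update(self, topic, correct, used_feedback):
        """Feed each interaction back into the model, making it 'richer'."""
        current = self.mastery.get(topic, 0.5)
        # simple exponential update towards the observed outcome
        self.mastery[topic] = 0.8 * current + 0.2 * (1.0 if correct else 0.0)
        if used_feedback:
            self.followed_feedback += 1
        else:
            self.ignored_feedback += 1

DOMAIN = ["fractions", "decimals", "percentages"]   # a crude stand-in 'domain model'

def next_activity(learner: LearnerModel):
    """A stand-in 'pedagogy model': pick the topic with the lowest estimated mastery."""
    return min(DOMAIN, key=lambda topic: learner.mastery.get(topic, 0.5))

learner = LearnerModel()
learner.update("fractions", correct=False, used_feedback=True)
learner.update("decimals", correct=True, used_feedback=True)
print(next_activity(learner))   # -> "fractions", the weakest topic, is selected next
```

The point of the sketch is simply that ‘the learner model’ here is a data structure continually updated by behavioural traces, and ‘pedagogy’ a rule operating over it.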

Based on the combination of these models with data analytics and machine learning processes, Pearson’s proposed vision of AIEd includes the development of Intelligent Tutoring Systems (ITS) which ‘use AI techniques to simulate one-to-one human tutoring, delivering learning activities best matched to a learner’s cognitive needs and providing targeted and timely feedback, all without an individual teacher having to be present.’ It also promises intelligent support for collaborative working—such as AI agents that can integrate into teamwork—and intelligent virtual reality environments that simulate authentic contexts for learning tasks. Its vision is of teachers supported by their own AIEd teaching assistants and AIEd-led professional development.

These techniques and applications are seen as contributors to a whole-scale reform of education systems:

Once we put the tools of AIEd in place as described above, we will have new and powerful ways to measure system level achievement. … AIEd will be able to provide analysis about teaching and learning at every level, whether that is a particular subject, class, college, district, or country. This will mean that evidence about country performance will be available from AIEd analysis, calling into question the need for international testing.

In other words, Pearson is proposing to bypass the cumbersome bureaucracy of mass standardized testing and assessment, and instead focus on real-time intelligent analytics conducted up-close within the pedagogic routines of the AI-enhanced classroom. This will rely on a detailed and intimate analytics of individual performance, which will be gained from detailed modelling of learners through their data.

Pearson’s vision of intelligent, personalized learning environments is therefore based on its new understandings of ‘how to blend human and machine intelligence effectively.’ Specific kinds of understandings of human intelligence and cognition are assumed here. As Pearson’s AIEd report acknowledges,

AIEd will continue to leverage new insights in disciplines such as psychology and educational neuroscience to better understand the learning process, and so build more accurate models that are better able to predict – and influence – a learner’s progress, motivation, and perseverance. … Increased collaboration between education neuroscience and AIEd developers will provide technologies that can offer better information, and support specific learning difficulties that might be standing in the way of a child’s progress.

These points highlight how the design of AIEd systems will embody neuroscientific insights into learning processes–insights that will then be translated into models that can be used to predict and intervene in individuals’ learning processes. This reflects the recent and growing interest in neuroscience in education, and the adoption of neuroscientific insights for ‘brain-targeted‘ teaching and learning. Such practices target the brain for educational intervention based on neuroscientific knowledge. IBM has taken inspiration from neuroscience even further in its cognitive computing systems for education.

IBM cognition
One of the world’s most successful computing companies, IBM has recently turned its attention to educational data analytics. According to its paper on ‘the future of learning’:

Analytics translates volumes of data into insights for policy makers, administrators and educators alike so they can identify which academic practices and programs work best and where investments should be directed. By turning masses of data into useful intelligence, educational institutions can create smarter schools for now and for the future.

An emerging development in IBM’s data analytic approach to education is ‘cognitive learning systems’ based on neuroscientific methodological innovations, technical developments in brain-inspired computing, and artificial neural network algorithms. Over the last decade, IBM has positioned itself as a dominant research centre in cognitive computing, with huge teams of engineers and computer scientists working on both basic and applied research in this area. Its own ‘Brain Lab’ has provided the neuroscientific insight for these developments, leading to R&D in a variety of areas. Its work has proceeded through neuroscience and neuroanatomy to supercomputing, to a new computer architecture, to a new programming language, to artificial neural network algorithms, and finally to cognitive system applications, all underpinned by its understanding of the human brain’s synaptic structures and functions.

IBM itself is not seeking to build an artificial brain but a computer inspired by the brain and certain neural structures and functions. It claims that cognitive computing aims to ‘emulate the human brain’s abilities for perception, action and cognition,’ and has dedicated extensive R&D to the production of ‘neurosynaptic brain chips’ and scalable ‘neuromorphic systems,’ as well as its cognitive supercomputing system Watson. Based on this program of work, IBM defines cognitive systems as ‘a category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally to extend and magnify human expertise and cognition.’
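The ‘artificial neural network algorithms’ invoked here are, at base, routines that adjust numerical connection weights in response to examples. As a deliberately toy illustration (which says nothing about IBM’s actual neurosynaptic chips, neuromorphic systems or Watson), the sketch below trains a single artificial ‘neuron’ to reproduce the logical AND function by repeatedly strengthening or weakening its weights:

```python
# A toy 'artificial neural network': one unit whose connection weights are
# strengthened or weakened in response to examples, loosely analogous to
# synaptic adjustment. Purely illustrative; not IBM's technology.
def step(z):
    """Threshold activation: the unit either 'fires' (1) or does not (0)."""
    return 1 if z > 0 else 0

def train_perceptron(examples, lr=0.1, epochs=20):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = step(weights[0] * x1 + weights[1] * x2 + bias)
            error = target - output
            # adjust the 'synaptic' weights in proportion to the error
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

# learn the logical AND function from its four-row truth table
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(examples)
for (x1, x2), _ in examples:
    print(x1, x2, "->", step(weights[0] * x1 + weights[1] * x2 + bias))
```

Scaled up to millions of units and trained on big data rather than a four-row truth table, this weight-adjustment principle is what underlies the ‘learning’ in brain-inspired computing.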

To apply its cognitive computing applications in education, IBM has developed a specific Cognitive Computing for Education program. Its program director has presented its intelligent, interactive systems that combine neuroscientific insights into cognitive learning processes with neurotechnologies that can:

learn and interact with humans in more natural ways. At the same time, advances in neuroscience, driven in part by progress in using supercomputers to model aspects of the brain … promise to bring us closer to a deeper understanding of some cognitive processes such as learning. At the intersection of cognitive neuroscience and cognitive computing lies an extraordinary opportunity … to refine cognitive theories of learning as well as derive new principles that should guide how learning content should be structured when using cognitive computing based technologies.

The prototype innovations developed by the program include automated ‘cognitive learning content’, ‘cognitive tutors’ and ‘cognitive assistants for learning’ that can understand the learner’s needs and ‘provide constant, patient, endless support and tuition personalized for the user.’ IBM has also developed an application called Codename: Watson Teacher Advisor, which is designed to observe, interpret and evaluate information in order to make informed decisions, providing guidance and mentorship to help teachers improve their teaching.

IBM’s latest report on cognitive systems in education proposes that ‘deeply immersive interactive experiences with intelligent tutoring systems can transform how we learn,’ ultimately leading to the ‘utopia of personalized learning’:

Until recently, computing was programmable – based around human defined inputs, instructions (code) and outputs. Cognitive systems are in a wholly different paradigm of systems that understand, reason and learn. In short, systems that think. What could this mean for the educators? We see cognitive systems as being able to extend the capabilities of educators by providing deep domain insights and expert assistance through the provision of information in a timely, natural and usable way. These systems will play the role of an assistant, which is complementary to and not a substitute for the art and craft of teaching. At the heart of cognitive systems are advanced analytic capabilities. In particular, cognitive systems aim to answer the questions: ‘What will happen?’ and ‘What should I do?’

Rather than being hard-programmed, cognitive computing systems are designed like the brain to learn from experience and adapt to environmental stimuli. Thus, instead of seeking to displace the teacher, IBM sees cognitive systems as optimizing and enhancing the role of the teacher, as a kind of cognitive prosthetic or machinic extension of human qualities. This is part of a historical narrative about human-computer hybridity that IBM has wrapped around its cognitive computing R&D:

Across industries and professions we believe there will be an increasing marriage of man and machine that will be complementary in nature. This man-plus-machine process started with the first industrial revolution, and today we’re merely at a different point on that continuum. At IBM, we subscribe to the view that man plus machine is greater than either on their own.

As such, for IBM,

We believe technology will help educators to improve student outcomes, but must be applied in context and under the auspices of a ‘caring human’. The teacher-to-system relationship does not, in our view, lead to a dystopian future in which the teacher plays second fiddle to an algorithm.

The promise of cognitive computing for IBM is not just of more ‘natural systems’ with ‘human qualities,’ but a fundamental reimagining of the ‘next generation of human cognition, in which we think and reason in new and powerful ways,’ as claimed in its white paper ‘Computing, cognition and the future of knowing’:

It’s true that cognitive systems are machines that are inspired by the human brain. But it’s also true that these machines will inspire the human brain, increase our capacity for reason and rewire the ways in which we learn.

A recursive relationship between machine cognition and human cognition is assumed in this statement. It sees cognitive systems as both brain-inspired and brain-inspiring, both modelled on the brain and remoulding the brain through interacting with users. The ‘caring human’ teacher mentioned in its report above is one whose capacities are not displaced by algorithms, but are algorithmically augmented and extended. Similarly, the student enrolled into a cognitive learning system is also part of a hybrid system. Perhaps the clearest illustration from IBM of how cognitive systems will penetrate into education systems is its vision of a ‘cognitive classroom.’ This is a ‘classroom that will learn you‘ through constant and symbiotic interaction between cognizing human subjects and nonhuman cognitive systems designed according to a model of the human brain.

Biosocial spaces
Some of the claims in these reports from Pearson and IBM may sound far-fetched and hyperbolic. It’s worth noting, however, that most of the technical developments underpinning them are already part of cutting-edge R&D in both the computing and neuroscience sectors. Two recent ‘foresight’ reports produced by the Human Brain Project document many of these developments and their implications. One, Future Neuroscience, details attempts to map the human brain, and ultimately understand it, through ‘big science’ techniques of data analysis and brain simulation. The other, Future Computing and Robotics, focuses on the implications of ‘machine intelligence,’ ‘human-machine integration,’ and other neurocomputational technologies that use the brain as inspiration; it states:

The power of these innovations has been increased by the development of data mining and machine learning techniques, that give computers the capacity to learn from their ‘experience’ without being specifically programmed, constructing algorithms, making predictions, and then improving those predictions by learning from their results, either in supervised or unsupervised regimes. In these and other ways, developments in ICT and robotics are reshaping human interactions, in economic activities, in consumption and in our most intimate relations.
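The ‘unsupervised regime’ mentioned in this passage can be illustrated with an equally minimal sketch: an online clustering routine that receives a stream of unlabelled observations and incrementally updates its internal ‘centres’ from each one, without being explicitly programmed with what the groups should be. The data and parameters are invented for illustration.

```python
# Toy illustration of a machine 'learning from experience' in an unsupervised
# regime: an online k-means that updates its cluster centres from each new
# observation in a data stream. All data and parameters are invented.
import random

random.seed(0)

def online_kmeans(stream, k=2):
    centres, counts = [], []
    for x in stream:
        if len(centres) < k:
            centres.append(list(x))   # the first k observations seed the centres
            counts.append(1)
            continue
        # assign the observation to its nearest centre ...
        i = min(range(k), key=lambda j: sum((a - c) ** 2 for a, c in zip(x, centres[j])))
        counts[i] += 1
        # ... and nudge that centre towards it, so the centre becomes the running
        # mean of everything assigned to it (the 'learning' step)
        eta = 1.0 / counts[i]
        centres[i] = [c + eta * (a - c) for c, a in zip(centres[i], x)]
    return centres

# two loose groups of two-dimensional points, arriving interleaved
group_a = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50)]
group_b = [(random.gauss(1, 0.1), random.gauss(1, 0.1)) for _ in range(50)]
stream = [point for pair in zip(group_a, group_b) for point in pair]

print(online_kmeans(stream))   # two centres, close to (0, 0) and (1, 1)
```

The supervised regime works in much the same way, except that each update is driven by the gap between a prediction and a known outcome.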

These reports are the product of interdisciplinary research between sociologists and neuroscientists, and are part of a growing social scientific interest in ‘biosocial’ dynamics between biology and social environments.

Biosocial studies emphasize how social environments are now understood to ‘get under the skin’ and to actually influence the biological functions of the body. In a recent introduction to a special issue on ‘biosocial matters,’ it was claimed that a key insight coming out of social scientific attention to biology is ‘the increasing understanding that the brain is a multiply connected device profoundly shaped by social influences,’ and that ‘the body bears the inscriptions of its socially and materially situated milieu.’ Concepts such as ‘neuroplasticity’ and ‘epigenetics’ are key here. Simply put, neuroplasticity recognizes that the brain is constantly adapting to external stimuli and social environments, while epigenetics acknowledges that social experience modulates the body at the genetic level. According to such work, the body and the brain are influenced by the structures and environments that constitute society, but are also the source for the creation of new kinds of structures and environments which will in turn (and recursively) shape life in the future.

As environments become increasingly inhabited by machine intelligence–albeit the machine intelligence of ordinary artefacts rather than superintelligences–computer technologies need to be considered as part of the biosocial mix. Indeed, IBM’s R&D in cognitive computing fundamentally depends on its own neuroscientific findings about neuroplasticity, and the translation of biological neural networks used in computational neuroscience into the artificial neural networks used in cognitive computing and AI research.

Media theorist N Katherine Hayles has mobilized a form of biosocial inquiry in her recent work on ‘nonconscious cognitive systems’ which increasingly permeate information and communication networks and devices. For her, cognition in some instances may be located in technical systems rather than in the mental world of an individual participant, ‘an important change from a model of cognition centered in the self.’ Her non-anthropocentric view of ‘cognition everywhere’ suggests that cognitive computing devices can employ learning processes that are modelled like those of embodied biological organisms, using their experiences to learn, achieve skills and interact with people. Therefore, when nonconscious cognitive devices penetrate into human systems, they can then potentially modify the dynamics of human behaviours through changing brain morphology and functioning. Nonhuman neurocomputational techniques based on the brain, then, have the potential to become legible as traces in the neurological circuitry of the human brain itself, and to impress themselves on the cerebral lives of both individuals and wider populations.

Biosocial explanations are beginning to be applied to education and learning. Jessica Pykett and Tom Disney have shown, for example, that:

an emphasis on the biosocial determinants of children’s learning, educational outcomes and life chances resonates with broader calls to develop hybrid accounts of social life which give adequate attention to the biological, the nonhuman, the technological, the material, … the neural and the epigenetic aspects of ‘life itself.’

In addition, Deborah Youdell‘s new work on biosocial education proposes that such conceptualizations might change our existing understandings of processes such as learning:

Learning is an interaction between a person and a thing; it is embedded in ways of being and understanding that are shared across communities; it is influenced by the social and cultural and economic conditions of lives; it involves changes to how genes are expressed in brain cells because it changes the histones that store DNA; it means that certain parts of the brain are provoked into electrochemical activity; and it relies on a person being recognised by others, and recognising themselves, as someone who learns. … These might be interacting with each other – shared meanings, gene expression, electrochemical signals, the everyday of the classroom, and a sense of self are actually all part of one phenomenon that is learning.

We can begin to understand what Pearson and IBM are proposing in the light of these emerging biosocial explanations and their application to emerging forms of neurocomputation. To some extent, Pearson and IBM are mobilizing biosocial explanations in the development of their own techniques and applications. Models of neural plasticity and epigenetics emerging from neuroscience have inspired the development of  cognitive computing systems, which are then used to activate environments such as Pearson’s AIEd intelligent learning environments or IBM’s cognitive classroom. These are reconfigured as neurocomputationally ‘brainy spaces’ in which learners are targeted for cognitive enhancement and neuro-optimization through interacting with other nonconscious cognitive agents and intelligent environments.

In brief, the biosocial process assumed by Pearson and IBM proceeds something like this:

> Neurotechnologies of brain imaging and simulation lead to new models and understandings of brain functioning and learning processes
> Models of brain functions are encoded in neural network algorithms and other cognitive and neurocomputational techniques
> Neurocomputational techniques are built-in to AIEd and cognitive systems applications for education
> AIEd and cognitive systems are embedded into the social environment of education institutions as ‘brain-targeted’ learning applications
> Educational environments are transformed into neuro-inspired, computer-augmented ‘brainy spaces’
> The brainy space of the educational environment interacts with human actors, getting ‘under the skin’ by becoming encoded in the embodied human learning brain
> Human brain functions are augmented, extended and optimized by machine intelligences

In this way, brain-based machine intelligences are proposed to meet the human brain, and, based on principles of neuroplasticity and epigenetics, to influence brain morphology and cognitive functioning. The artificially intelligent, cognitive educational environment is, in other words, translated into a hybrid, algorithmically-activated biosocial space in the visions of Pearson and IBM. Elsewhere, I’ve articulated the idea of brain/code/space–based on geographical work on technologically-mediated environments–to  describe environments that possess brain-like functions of learning and cognition performed by algorithmic processes. Pearson and IBM are proposing to turn educational environments into brain/code/spaces that are both brain-based and brain-targeted.

While we need to be cautious of the extent to which these developments might (or might not) actually occur (or be desirable), it is important to analyse them as part of a growing interest in how technologically-enhanced social environments based on the brain might interweave with the neurobiological mechanisms that underlie processes of learning and development. In other words, Pearson’s interest in AIEd and IBM’s application of cognitive systems to education need to be interpreted as biosocial matters of significant contemporary concern.

Of course, as Neil Selwyn cautions, technological changes in education should not be assumed to be inevitable or wholly beneficial. There are commercial and economic drivers behind them that do not necessarily translate smoothly into education, and most ‘technical fixes’ fail to have the impact intended by their designers and sponsors. A fuller analysis of Pearson’s aims for AIEd or IBM’s ambitions for cognitive systems in education would therefore need to acknowledge the business plans that animate them, and critically consider the visions of the future of education they are seeking to catalyse.

More pressingly, it would need to develop detailed insights into the ways that the brain is being mapped, known, understood, modelled and simulated in institutional contexts such as IBM, or how neuroscientific insights and models are being embodied in the kinds of AI applications that Pearson is promoting.  How IBM and Pearson conceive the brain is deeply consequential to the AI and cognitive systems they are developing, and to how those systems then might interact with human actors and possibly influence the cognition of those people by shaping the neural architectures of their brains. Are these models adequate approximations of human mental and cognitive functioning? Or do they treat the brain and cognition in reductive terms as a kind of computational system that can be debugged,  rewired and algorithmically optimized, in ways which reproduce the long-standing tendency by technologists and scientists to represent mental life as an information-processing computer?

Just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. … Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books … speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure. The information processing metaphor of human intelligence now dominates human thinking, both on the street and in the sciences.

To what extent, for example, are biological neural networks conflated with (or reduced to) artificial neural networks as findings and insights from computational neuroscience are translated into applied AI and cognitive systems R&D programs? A kind of biosocial enthusiasm about the plasticity of the brain and epigenetic modulation is animating the technological ambitions of Pearson and IBM, one that may be led more by computational understandings of the brain as an adaptive information-processing device than by understandings of it as a culturally and socially situated organ. Future research in this direction would need to interrogate the specific forms of neuro knowledge production they draw upon, as well as engage with social scientific insights into how environments really work to shape human embodied experience (and vice versa).

The translation of educational environments into biosocial spaces that are technologically enhanced by new forms of AI, cognitive systems and other neurocomputational applications could have significant effects on teachers and learners right down to biological and neurological levels of life itself. As Luciano Floridi has noted, these are not forms of ‘ultraintelligence’ but ‘ordinary artefacts’ that can outperform us, and that are designed for specific purposes–but could always be made otherwise, for better purposes:

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means…. We should make AI’s stupidity work for human intelligence. … And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity.

The glossy imaginaries of AIEd and cognitive systems in education projected by Pearson and IBM reveal a complex intersection of technological and scientific developments–combined with business ambitions and future visions–that require detailed examination as biosocial matters of concern for the future of education.
