Imaginaries and materialities of education data science

Image: The .edu Ocunet, by Tim Beckhardt

Ben Williamson

This is a talk I presented at the Nordic Educational Research Association conference at Aalborg University, Copenhagen, on 23 March 2017.

Education is currently being reimagined for the future. In 2016, the online educational technology magazine Bright featured a series of artistic visions of the future of education. One of them, by the artist Tim Beckhardt, imagined a vast new ‘Ocunet’ system.

The Ocunet is imagined as a decentralized educational virtual-reality streaming network using state-of-the-art Panoptic headsets to deliver a universal knowledge experience. The infrastructure of public education has been repurposed as housing for the Ocunet’s vast server network. Teachers have been relieved of the stress of child-behavior management, and instead focus their skills on managing the Ocunet—editing our vast database to keep our students fully immersed in the latest curriculum—while principals process incoming student data at all times.

The Ocunet is an artistic and imaginative vision of the future of education. I use it as an example to start here because it illustrates a current fascination with reimagining education. The future it envisages is one where education has been thoroughly digitized and datafied—the educational experience has been completely embedded in digital technology systems, and every aspect of student performance is captured and processed as digital data.

This may all sound like speculative educational science fiction. But some similar imaginative visions of the future of education are now actually catalysing real-world technical innovations, which have the potential to change education in quite radical ways.

In this talk, I want to show you how education is being imagined by advocates of a field of research and development becoming known as ‘education data science.’ And I’ll explore how the social and technical future of education it imagines—one that is digitized and datafied much like the Ocunet—is also being materialized through the design of digital data-processing programs.

The social consequences for the field of education in general are significant:

  • Education data science is beginning to impact on how schools are imagined and managed.
  • It’s influencing how learning is thought about, both cognitively and emotionally, and introducing new vocabularies for talking about learning processes.
  • Its technologies and methods, many developed in the commercial sector, are being used in educational research and to produce new knowledge about education.
  • And education data science is also seeking to influence policy, by making educational big data seem an authoritative source for accelerated evidence collection.

Big data imaginaries and algorithmic governance

Just to set the scene here, education is not the only sector of society where big data and data science are being imagined as new ways of building the future. Big data are at the centre of future visions of social media, business, shopping, government, and much more. Gernot Rieder and Judith Simon have characterized a ‘big data imaginary’ as an attempt to apply ‘mechanized objectivity to the colonization of the future’:

  • Extending the reach of automation, from data collection to storage, curation, analysis, and decision-making processes
  • Capturing massive amounts of data and focusing on correlations rather than causes, thus reducing the need for theory, models, and human expertise
  • Expanding the realm of what can be measured, in order to trace and gauge movements, actions, and behaviours in ways that were previously unimaginable
  • Aspiring to calculate what is yet to come, using smart, fast, and cheap predictive techniques to support decision making and optimize resource allocation

And here the figure of the computer algorithm is especially significant. While in computer science terms algorithms are simply step-by-step processes for getting a computer program to do something, when these algorithms start to intervene in everyday life and the social world they can be understood as part of a process of governing—or ‘algorithmic governance.’

By governing here we are working with ideas broadly inspired by Michel Foucault. This is the argument that every society is organized and managed by interconnected systems of thinking, institutions, techniques and activities that are undertaken to control, shape and regulate human conduct and action—captured in phrases such as ‘conduct of conduct’ or ‘acting upon action.’

Because the focus of much big data analysis—and especially in education—is on measuring and predicting human activity (that most data are people), then we might say we are now living under conditions of algorithmic governance where algorithms play a role in directing or shaping human acts. Antoinette Rouvroy and Thomas Berns have conceptualized algorithmic governance as ‘the automated collection, aggregation and analysis of big data, using algorithms to model, anticipate and pre-emptively affect and govern possible behaviours.’ They claim it consists of three major techniques:

  • Digital behaviourism: behavioural observation stripped of context and reduced to data
  • Automated knowledge production: data mining and algorithmic processing to identify correlations with minimal human intervention
  • Action on behaviours: application of automated knowledge to profile individuals, infer probabilistic predictions, and then anticipate or even pre-empt possible behaviours

For my purposes, what I’m trying to suggest here is that new ways of imagining education through big data appear to mean that such practices of algorithmic governance could emerge, with the various actions of schools, teachers and students all subjected to data-based forms of surveillance and acted upon via computer systems.

Schools, teachers and students alike would become the objects of surveillant observation and transformation into data; their behaviours would be recorded as knowledge generated automatically from analysing those data; and those known behaviours could then become the target for intervention and modification.

Importantly too, imaginaries don’t always remain imaginary. Sheila Jasanoff has described ‘sociotechnical imaginaries’ as models of the social and technical future that might be realized and materialized through technical invention. Imaginaries can originate in the visions of single individuals or small groups, she argues, but gather momentum through exercises of power to enter into the material conditions and practices of social life. So in this sense, sociotechnical imaginaries can be understood as catalysts for the material conditions in which we may live and learn.

The birth of education data science

One of the key things I want to stress here is that the field of education data science is imagining and seeking to materialize a ‘big data infrastructure’ for automated, algorithmic and anticipatory knowledge production, practical intervention and policy influence in education. By ‘infrastructure’ here I’m referring to the interlocking systems of people, skills, knowledge and expertise along with technologies, processes, methods and techniques required to perform big data analysis. It is such a big data infrastructure that education data science is seeking to build.

Now, education data science has, of course, come from somewhere. There is a history to its future gaze. We could go back well over a century, to the nineteenth-century Great Expositions where national education departments exhibited great displays of educational performance data. And we could certainly say that education data science has evolved from the emphasis on large-scale educational data and comparison made possible by international testing in recent years. Organizations like the OECD and Pearson have made a huge industry out of global performance data, and reframed education as a measurable matter of balancing efficient inputs with effective outputs.

But these large-scale data are different from the big data that are the focus for education data science. Educational big data can be generated continuously within the pedagogic routines of a course or the classroom, rather than through national censuses or tests, and are understood to yield insights into learning processes in ‘real-time.’

In terms of its current emphasis on big data, the social origins of education data science actually lie in academic research and development going back a decade or so, particularly at sites like Stanford University. It’s actually from one of Stanford’s reports that I take the term ‘big data infrastructure for education.’

The technical origins of such an infrastructure lie in advances in educational data mining and learning analytics. Educational data mining can be understood as the use of algorithmic techniques to find patterns and generate insights from existing large datasets. Learning analytics, on the other hand, makes the data analysis process into a more ‘real-time’ event, where the data is automatically processed to generate insights and feedback synchronously with whatever learning task is being performed. Some learning analytics applications are even described as ‘adaptive learning platforms’ because they automatically adapt—or ‘personalize’—in accordance with calculations about students’ past and predicted future progress.
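The adaptive loop described above can be sketched in a few lines of code. The following is a purely illustrative toy based on a simplified form of Bayesian Knowledge Tracing, a standard technique in educational data mining; every parameter value, threshold and activity label here is a hypothetical placeholder chosen for illustration, not a description of any real platform.

```python
# Toy sketch of the 'real-time' loop an adaptive learning platform runs:
# each student response updates a probabilistic estimate of mastery,
# which then drives what the 'playlist' serves next. This is a simplified
# Bayesian Knowledge Tracing update; all parameters are hypothetical.

def update_mastery(p_mastery, correct,
                   p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the estimated probability that the student has mastered a skill."""
    if correct:
        # A correct answer is evidence of mastery, unless it was a lucky guess.
        evidence = p_mastery * (1 - p_slip)
        p_cond = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        # An incorrect answer may still be a 'slip' by a student who has mastery.
        evidence = p_mastery * p_slip
        p_cond = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Allow for learning having occurred during the attempt itself.
    return p_cond + (1 - p_cond) * p_learn

def next_activity(p_mastery):
    """'Personalize' the playlist: pick the next task from the current estimate."""
    if p_mastery < 0.4:
        return "remedial practice"
    elif p_mastery < 0.85:
        return "standard exercise"
    return "advance to next skill"

# Simulate a short sequence of responses arriving in real time.
p = 0.3  # prior estimate of mastery
for correct in [True, True, False, True, True]:
    p = update_mastery(p, correct)
print(round(p, 2), next_activity(p))
```

Even this toy makes the logic of ‘personalization’ visible: each click becomes evidence, the evidence becomes a probability, and the probability determines what the student is offered next.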

What’s really significant is how education data science has escaped the academic lab and travelled to the commercial sector. So, for example, Pearson, the world’s largest ‘edu-business,’ set up its own Center for Digital Data, Analytics and Adaptive Learning a few years ago to focus on big data analysis and product development. Other technology companies have jumped into the field. Facebook’s Mark Zuckerberg has begun dedicating huge funds to the task of ‘personalizing learning’ through technology. IBM has begun to promote its Watson supercomputing technologies to the same purposes.

And education data science approaches are also being popularized through various publications. Learning with Big Data by Viktor Mayer-Schonberger and Kenneth Cukier, for example, makes a case for ‘datafying the learning process’ in three overlapping ways:

  • Feedback: applications that can ‘learn’ from use and ‘talk back’ to the student and teacher
  • Personalization: adaptive-learning software where materials change and adapt as data is collected, analysed and transformed into feedback in real-time; and the generation of algorithmically personalized ‘playlists’
  • Probabilistic prediction: predictive learning analytics to improve how we teach and optimize student learning

The book reimagines school as a ‘data platform,’ the ‘cornerstone of a big-data ecosystem,’ in which ‘educational materials will be algorithmically customized’ and ‘constantly improved.’

This text is perhaps a paradigmatic statement of the imaginary and ambitions of education data science, with its emphasis on feedback, systems that can ‘learn,’ ‘personalization’ through ‘adaptive’ software, predictive ‘optimization,’ and the appeal to the power of algorithms to make measurable sense of the mess of education.

Smarter, semi-automated startup schools

The imaginary of education data science is now taking material form through a range of innovations in real settings. A significant materialization of education data science is in new data-driven schools, or what I call smarter, semi-automated startup schools.

Max Ventilla is perhaps the most prominent architect of data-driven startup schools. Max’s first job was at the World Bank, before he became a successful technology entrepreneur in Silicon Valley. He eventually moved to Google, where he became head of ‘personalization’ and launched the Google+ platform. But in 2013, Max left Google to set up AltSchool. Originally established as a fee-paying chain of ‘lab schools’ in San Francisco, it now has schools dotted around Silicon Valley and across to New York. Most of its startup costs were funded by venture capital firms, with Mark Zuckerberg from Facebook investing $100 million in 2015.

Notably, only about half of AltSchool’s staff are teachers. It also employs software engineers and business staff, many recruited from Google, Uber and other successful tech companies. In fact, AltSchool is not just a private school chain, but describes itself as a ‘full-stack education company’ that provides day-to-day schooling while also engaging in serious software engineering and data analytics. The ‘full-stack’ model is much the same as Uber in the data analytics taxi business, or Airbnb in hospitality.

The two major products of AltSchool are called Progression and Playlist. In combination, Max Ventilla calls these ‘a new operating system for education.’ Progression is a data analytics ‘teacher tool’ for tracking and monitoring student progress, academically, socially and emotionally. It’s basically a ‘data dashboard’ for teachers to visualize individual student performance information. The ‘student tool’ Playlist then provides a ‘customized to-do list’ for learners, and is used for managing and documenting work completed. So, while Progression acts as the ‘learning analytics’ platform to help teachers track patterns of learning, Playlist is the ‘adaptive learning platform’ that ‘personalizes’ what happens next in the classroom for each individual student.

Recently, AltSchool began sharing its ‘operating system’ with other partner schools, and has clearly stated ambitions to move from being a startup to scaling up across the US and beyond. It also has ambitious technical plans.

Looking forward, AltSchool’s future ambitions include fitting cameras that run constantly in the classroom, capturing each child’s every facial expression, fidget, and social interaction, as well as documenting the objects that every student touches throughout the day; microphones to record every word that each person utters; and wearable devices to track children’s movements and moods through skin sensors. This is so its in-house data scientists can then search for patterns in each student’s engagement level, moods, use of classroom resources, social habits, language and vocabulary use, attention span, academic performance, and more.

The AltSchool model is illustrative of how the imaginary of education data science is being materialized in new startup schools. Others include:

  • Summit Schools, which have received substantial Facebook backing, including the production of a personalized learning platform allegedly now being used by over 20,000 students across the US
  • The Primary School, set up by Mark Zuckerberg’s wife Priscilla Chan
  • The Khan Lab School founded by Salman Khan of the online Khan Academy.

All of these schools are basically experiments in how to imagine and manage a school by using continuous big data collection and analysis.

So, as AltSchool was described in a recent piece in the Financial Times, ‘parents pay fees, hoping their kids will get a better education as guinea pigs,’ while ‘venture capitalists fund the R&D, hoping for financial returns from the technologies it develops.’

And these smarter, semi-automated startup schools are ambitiously seeking to expand the realm of what is measurable, not just test scores but also student movements, speech, emotions, and other indicators of learning.

Optimizing emotions

As indicated by AltSchool, education data science is seeking new ways to know, understand and improve both the cognitive and the social-emotional aspects of learning processes.

Roy Pea is one of the leading academic voices in education data science. Formerly the founding director of the Learning Analytics Lab at Stanford University, Pea has described techniques for measuring the ‘emotional state’ of learners. These include collecting ‘proximal indicators’ that relate to ‘non-cognitive factors’ in learning, such as academic persistence and perseverance, self-regulation, and engagement or motivation, all of which are seen to be improvable with the help of data analytics feedback.

Now, academic education data scientists and those who work in places like AltSchool are not the only people interested in data scientific ways of knowing and improving students’ social and emotional learning. The OECD has established a ‘Skills for Social Progress’ project to focus on ‘the power of social and emotional skills.’ It assumes that social and emotional skills can be measured meaningfully, and its ambition is to generate evidence about children’s emotional lives for ‘policy-makers, school administrators, practitioners and parents to help children achieve their full potential, improve their life prospects and contribute to societal progress.’

The World Economic Forum has its own New Vision for Education report which involves ‘fostering social and emotional learning through technology.’ Its vision is that social and emotional proficiency will equip students to succeed in a swiftly evolving digital economy, and that digital technologies could be used to build ‘character qualities’ and enable students to master important social and emotional skills. These are ‘valuable’ skills in quite narrowly economic terms.

Both the OECD and World Economic Forum are also seeking to make the language of social and emotional learning into a new global policy vocabulary—and there is certainly evidence of this in the UK and US already. The US Department of Education has been endorsing the measurement of non-cognitive learning for a few years, and the UK Department for Education has funded policy research in this area.

So how might education data science make measurable sense of students’ emotions? Well, according to education data scientists, it is possible to measure the emotional state of the student using webcams, facial vision technologies, speech analysis, and even wearable biometric devices.

Image: Automated teachers & augmented reality classrooms, by Josan Gonzalez

These are the kinds of ideas that have been taken up and endorsed very enthusiastically by the World Economic Forum, which strongly promotes the use of ‘affective computing’ techniques in its imaginary vision. Affective computing is the term for systems that can interpret, emulate and perhaps even influence human emotion. The WEF idea is that affective computing innovations will allow systems to recognize, interpret and simulate human emotions, using webcams, eye-tracking, databases of expressions and algorithms to capture, identify and analyse human emotions and reactions to external stimuli. ‘This technology holds great promise for developing social and emotional intelligence,’ it claims.

And it specifically identifies Affectiva as an example. Originating from R&D at MIT Media Lab, Affectiva has built what it claims to be the world’s largest emotion database, which it’s compiled by analysing the ‘micro-expressions’ of nearly 5 million faces. Affectiva uses psychological emotion scales and physiological facial metrics to measure seven categories of emotions, then utilizes algorithms trained on massive amounts of data to accurately analyse emotion from facial expressions. ‘In education,’ claims Affectiva, ‘emotion analytics can be an early indicator of student engagement, driving better learning outcomes.’

Such systems, then, would involve facial vision algorithms determining student engagement from facial expressions, and then adapting to respond to their mood. Similarly, the Silicon Valley magazine for educational technology, EdSurge, recently produced a promotional article for the role of ‘emotive computing in the classroom.’

‘Emotionally intelligent robots,’ its author claimed, ‘may actually be more useful than human [teachers] … as they are not clouded by emotion, instead using intelligent technology to detect hidden responses. … Emotionally intelligent computing systems can analyse sentiment and respond with appropriate expressions … to deliver highly-personalized content that motivates children.’

Both the World Economic Forum and EdSurge also promote a variety of wearable biometric devices to measure mood in the blood and the body of a seemingly ‘transparent child’:

  • Transdermal Optical Imaging, using cameras to measure facial blood flow information and determine student emotions where visual face cues are not obvious
  • Wearable social-emotional intelligence prosthetics, which use a small camera to analyse facial expressions and head movements and detect affect in children in real-time
  • Glove-like devices full of sensors to trace students’ arousal

This imaginary of affective or emotive computing in the classroom taps into the idea that automated, algorithmic systems are able to produce objective accounts of students’ emotional state. They can then personalize education by providing mood-optimized outputs which might actually nudge students towards more positive feelings.

In this last sense, affective computing is not just about making the emotions measurable, but about using automated systems to manipulate mood in the classroom, to make it more positive and preferable. Given that powerful organizations like the World Economic Forum and OECD are now seeking to make the language of social-emotional learning into the language of education policy, this appears to make it possible that politically preferred emotions could be engineered by the use of affective computing in education.

Cognizing systems

It is not only the non-cognitive aspects of learning that are being targeted by education data science, however. One of its other targets is cognition itself. In the last couple of years, IBM has begun to promote its ‘cognitive computing’ systems for use in a variety of sectors—finance, business and healthcare, but also education. These have been described as ‘cognitive technologies that can think like a human,’ based on neuroscientific insights into the human brain, technical developments in brain-inspired computing, and artificial ‘neural network’ algorithms. So IBM is claiming that it can, to some extent, ‘emulate the human brain’s abilities for perception, action and cognition.’

To put it simply, cognitive systems are really advanced big data processing machines that employ machine learning processes modelled on those of embrained cognition, but then far exceed human capacities. These kinds of super-advanced forms of real-time big data processing and machine learning are often called artificial intelligence these days.

IBM’s promise for education is to bring these brain-inspired technologies into the classroom, and to ‘bring education into the cognitive era.’ It is seeking to do so through a partnership with Pearson announced late in 2016, which will embed ‘cognitive tutoring capabilities’ into Pearson’s digital courseware. Though this is only going to happen in limited college courses for now, both organizations have made it quite clear they see potential to take cognitive tutoring to scale across Pearson’s e-learning catalogue of courses.

Pearson itself has produced its own report on the possibilities of artificial intelligence in education, including the creation of ‘AI teaching assistants.’ Pearson claims to be ‘leveraging new insights in disciplines such as psychology and educational neuroscience to better understand the learning process, and build more accurate models that are better able to predict—and influence—a learner’s progress.’

Neuroscience is the important influence here. In recent years brain scientists have popularized ‘neuroplasticity,’ the idea that the brain modifies itself in response to experience and the environment. The brain, then, is in a lifelong state of transformation as synaptic pathways ‘wire together.’

But the idea of brain plasticity has taken on other meanings as it has entered into popular knowledge. According to a critical social scientific book by Victoria Pitts-Taylor, the idea of neuroplasticity resonates with ideas about flexibility, multitasking and self-alteration in late capitalism. And it also underpins interventions aimed at cognitive modification and enhancement, which target the brain for ‘re-wiring.’

Tapping into the popular understanding of plasticity as the biological result of learning and experience, both IBM and Pearson view cognitive computing and artificial intelligence technologies as being based on the plastic brain. IBM’s own engineers have done a lot of R&D on ‘neuromorphic’ computing and ‘neurosynaptic chips,’ and have hired vast collaborative teams of neuroscientists, hardware engineers and algorithm designers to do so. But, they claim, cognitive and AI systems can also be potentially brain-boosting and cognition-enhancing, because they can interact with the plastic brain and ‘re-wire’ it.

The ambitions of IBM and Pearson to make classrooms into engines of cognitive enhancement are clearly put in a recent IBM white paper titled Computing, cognition and the future of knowing. The report’s author claims that:

  • Cognitive computing consists of ‘natural systems’ with ‘human qualities’
  • They learn and reason from their interactions with us and from their experiences with their environment
  • Cognitive systems are machines inspired by the human brain that will also inspire the human brain, increase our capacity for reason and rewire the ways we learn

So, Pearson and IBM are aiming to populate classrooms with artificial intelligences and cognitive systems that have been built to act like the brain and then act upon the brain to extend and magnify human cognition. Brain-inspired, but also brain-inspiring.

In some ways we shouldn’t see this as controversial. As computers get smarter, of course they might help us think differently, and act as cognitive extensions or add-ons. Just to anticipate one of our other keynotes at the conference, Katherine Hayles has written about how ‘cognitive systems’ are now becoming so embedded in our environments that we can say there is ‘cognition everywhere.’ We live and learn in extended cognitive networks. So, says Hayles, cognitive computing devices can employ learning processes that are modelled on those of embodied biological organisms, using their experiences to learn, achieve skills and interact with people. Therefore, when cognitive devices penetrate into human systems, they can potentially modify human cognitive functioning and behaviours through manipulating and changing the plastic brain.

As IBM and Pearson push such systems into colleges and schools, maybe they will make students cognitively smarter by re-wiring their brains. But the question here is smarter how? My concern is that students may be treated as if they too can be reduced to ‘machine learning’ processes. The history of cognitive psychology over the past half-century has been dogged by criticisms that it treats cognition like the functions of a computer. The brain has been viewed as hardware; the mind as software; memory as data retrieval; cognition as information-processing.

With this new turn to brain-enhancing cognitive systems, maybe cognition is being viewed as big data processing; the brain as neuromorphic hardware; mind as neural network algorithms and so on. As IBM’s report indicates, ‘where code and data go, cognition can now follow.’

Owning the means of knowledge production

So we’ve seen how education data science is seeking to embed its systems into schools, and how its aims are to modify students’ non-cognitive learning and embrained cognition alike. I want now to raise a couple of issues that I think will be relevant and important for all researchers of education, not just the few of us looking at this big data business.

The first is the issue of knowledge production. As I showed at the start, big data systems are making knowledge production into a more automated process. That doesn’t mean there are no engineers and analysts involved—clearly education data science involves education data scientists. But what it does mean is that knowledge is now being produced about education through the kinds of advanced technical systems that only a very few specialist education data science researchers can access or use.

What’s more, as many of my examples have shown, education data science is primarily being practiced outside of academic education research. It’s being done inside of AltSchool and by Pearson and IBM. And these organizations have business plans, investors and bank accounts to look after. AltSchool’s ‘operating system for education,’ as we saw, is being turned into a commercial offering, while IBM and Pearson are seeking to make cognitive tutoring into marketable products for schools and colleges to buy.

These products are also proprietorial, wrapped up in intellectual property and patents law. So education data science is now producing knowledge about education through proprietorial systems designed, managed and marketed by commercial for-profit organizations. These companies have the means for knowledge production in data-driven educational research. We could say they ‘own’ educational big data, since companies that own the data and the tools to mine it—the data infrastructure—possess great power to understand and predict the world.

De-politicized real-time policy analytics

And finally, there are policy implications here too, with big data being positioned as an accelerated and efficient source of evidence about education. One of these implications is spelled out clearly by Pearson, in its artificial intelligence report. It states that:

  • AIEd will be able to provide analysis about teaching & learning at every level, whether a subject, class, college, district, or country
  • Evidence about country performance will be available from AIEd analysis, calling into question the need for international testing

So in this imaginary, AI is seen as a way of measuring school system performance via automated, real-time data mining of students rather than discrete testing at long temporal intervals.

This cuts out the need for either national or international testing. And since much national education policymaking has been decided on the basis of test-based systems in recent decades, then we can see how policy processes might be short-circuited or even circumvented altogether. When you have real-time data systems tracking, predicting and pre-empting students, then you don’t need cumbersome policy processes.

These technologies also appear de-politicized, because they generate knowledge about education from seemingly objective data, without the bias of the researcher or the policymaker. The decisions these technologies make are based not on politicized debates or ideologies, it is claimed, but on algorithmic calculations.

A few years ago Dorothea Anagnostopoulos and colleagues edited an excellent book about the data infrastructure of test-based performance measurement in education. They made the key claim that test-based performance data was not just the product of governments but of a complex network of technology companies, technical experts, policies, computer systems, databases and software programs. They therefore argued that education is subject to a form of ‘informatic power.’

Informatic power, they argued, depends on the knowledge, use, production of, and control over measurement and computing technologies to produce performance measures that appear as transparent and accurate representations of the complex processes of teaching, learning, and schooling. And as they define who counts as ‘good’ teachers, students, and schools, these performance metrics shape how we practice, value and think about education.

If test-based data gives way to real-time big data, then perhaps we can say that informatic power is now mutating into algorithmic power. This new form of algorithmic power in education:

  • Relies on a big data infrastructure of real-time surveillance, predictive and prescriptive technologies
  • Depends on control over knowledge, expertise and technologies to monitor, measure, know and intervene in possible behaviours
  • Is changing the ways in which cognitive and non-cognitive aspects of learning may be understood and acted upon in policy and practice
  • Is concentrated in a limited number of well-resourced academic education data science labs, and in commercial settings where it is protected by IP, patents and proprietorial systems.

This form of algorithmic power, or algorithmic governance as we encountered it earlier, is not just about performance measurement but about active performance management of possible behaviours, and it opens up possibilities for more predictive and pre-emptive education policy.

Conclusion

Although many applications of big data in education may still be quite imaginary, the examples I’ve shown you today hopefully indicate something of the direction of travel. We’re not teaching and learning in the Ocunet just yet, but its imaginary of greater digitization and datafication is already being materialized.

As educators and researchers of education, we do urgently need to understand how a big data imaginary is animating new developments, and how this may be leading to new forms of algorithmic governance in education.

We need more studies of the sites where education data science is being developed and deployed, of the psychological and neuroscientific assumptions they rely on, of the power of education data science to define how education is known and understood, and of its emerging influence in educational policy.


Psychological surveillance and psycho-informatics in the classroom

Ben Williamson


Psychology has long played a role in education by providing the expert knowledge and survey instruments required to monitor students’ attitudes, dispositions and habits of mind. Today, though, psychology is coming to play an increasingly prevalent role in schools through intertwined developments in digital technology and education policy. New technologies of psychological surveillance, affective computing, and big data-driven psycho-informatics are being developed to conduct new forms of mood-monitoring and psychological experimentation within the classroom, supported by policy agendas that emphasize the emotional aspects of schooling.

Psycho-policy
A significant emerging area of education policy development focuses on the measurement and management of students’ ‘social-emotional learning.’ A number of related terms and psychological concepts have been used to describe social-emotional learning, such as non-cognitive learning, non-academic learning, character development, personal qualities, self-control, resilience, growth mindsets, mindfulness and grit. In the US an influential report entitled ‘Promoting Grit, Tenacity and Perseverance’ was published in 2013 by the Department of Education, followed in 2015 by the Every Student Succeeds Act, a federal law requiring all states to collect information on at least one ‘non-cognitive’ or ‘non-academic’ aspect of learning.

Major international organizations have begun to promote the development and measurement of social and emotional skills, particularly through technological means. The World Economic Forum published its report ‘New Vision for Education: Fostering Social and Emotional Learning through Technology’ in 2016. Likewise, the Organisation for Economic Co-operation and Development (OECD) has established its Education and Social Progress project to develop specific measurement instruments for social and emotional skills and ‘better understand how school-aged children’s skills progressively develop over time through investments from families, schools and communities.’ The project is intended to generate evidence about children’s emotional lives ‘for policy-makers, school administrators, practitioners and parents to help children achieve their full potential, improve their life prospects and contribute to societal progress.’

As a result of such developments, it has been suggested (controversially) that instruments to measure social-emotional learning, including national standardized tests, could even become new accountability mechanisms, used to judge schools’ performance in how effectively they have developed students’ non-academic personal qualities.

Psycho-surveillance
In a new article just published in Learning, Media and Technology, I have argued that the emerging social-emotional policy agenda is being introduced into schools indirectly through popular classroom apps such as ClassDojo. ClassDojo is a free mobile app that allows teachers to award ‘positive points’ for individual children’s behaviour and participation in the classroom. According to its website, by 2016 it was being used by over 3 million teachers and 35 million children in 180 countries, primarily in elementary and primary schools, with the stated aim of creating happier students and developing qualities such as character, perseverance and grit.

In that respect, ClassDojo reinforces emerging governmental ambitions around the measurement and modification of children’s social and emotional learning in schools. It enacts these ambitions by facilitating psychological surveillance, that is, by requiring teachers to monitor and collect student data that relate to new measurable psychological concepts such as character development, growth mindsets and other personal qualities.

The developers of ClassDojo claim they have been inspired by prominent psychologists such as Angela Duckworth, director of The Character Lab, which aims to ‘advance the science and practice of character development’; James Heckman, the behavioural economist best known for his work on the economic benefits of ‘investing in the early and equal development of human potential’; and Carol Dweck, the psychologist responsible for the theory of growth mindsets. ClassDojo has even entered into partnership with Carol Dweck, and was strongly supported in the US ‘grit’ report.

ClassDojo therefore demonstrates how emerging policies about promoting and measuring social-emotional learning are being indirectly ushered into schools via new technologies designed to capture information about the non-academic aspects of learning, as defined by contemporary psychological expertise. It represents the introduction of ‘psycho-policies’ into schools. In a study of governmental adoption of psychology and behavioural economics in other aspects of public policy, Lynne Friedli and Robert Stearn have documented the emergence of state strategies of ‘psycho-compulsion, defined as the imposition of psychological explanations … together with mandatory activities intended to modify beliefs, attitude, disposition or personality.’

In this sense, ClassDojo exemplifies the rise of behavioural psycho-policies in schools that focus on both the surveillance of psychological characteristics and on the design of psycho-compulsion interventions intended to modify behaviours and emotions to meet specific measurable goals, particularly through the imposition of positive emotions and behavioural qualities.

Affective computing
But ClassDojo is just one early sign of much more intensive psychological surveillance in schools that will be enabled by the development of ‘mood-monitoring’ apps and even sophisticated forms of ‘affective computing.’

The World Economic Forum report on using technologies to foster social-emotional skills is indicative of future directions. One of the devices it promotes is the ‘Embrace watch,’ a wearable ‘engagement pedometer’ that can be used to measure students’ affective responses to learning situations. ‘The Embrace watch,’ the report claims, ‘is a wearable device that tracks physiological stress and activity. It can be programmed to vibrate when stress reaches a specific level, giving someone time to switch to a more positive response before stress gets out of control. Combining the functionality of the Embrace watch with coaching from parents and teachers may further enhance opportunities to build a child’s social and emotional intelligence.’

The WEF report also advocates the use of wearable biometric sensor devices to track physical responses to learning situations, such as fluctuations in stress and emotion, and to ‘provide a minute-by-minute record of someone’s emotional state, potentially helping to build self-awareness and even empathy, both of which are critical components of social and emotional skills.’

Even more recently, the educational technology site EdSurge has published a piece on ‘emotive computing’ in the classroom. The claims made in the piece are that emotive computing involves teaching computer-based robots ‘to recognize human emotions, based on signals, and then react appropriately based on an evaluation of how the person is feeling. Robots may actually be more useful than humans in this role, as they are not clouded by emotion, instead using intelligent technology to detect hidden responses.’

Some of the technologies profiled include facial recognition systems that use cameras to capture student responses, algorithms that identify their attention levels, and measurements of smiles, frowns and audio that are used to classify student engagement. The article describes new psychological studies that have identified more than 5,000 facial movements to help identify human emotions. These findings are now powering a range of new technical innovations, ‘each using a combination of psychology and data-mining to detect micro expressions and classify human reactions.’

In addition, the EdSurge article profiles a number of innovations in ‘affective computing’ that are being applied to education. These include:

  • Transdermal Optical Imaging, with a camera that is able to measure facial blood flow information and determine student emotions where visual cues are not obvious
  • Electroencephalogram (EEG) tests of electrical brain activity to measure students’ emotional arousal and task performance, and to provide computer mediation to individuals
  • Wearable affective technology such as a social-emotional intelligence prosthetic to detect human affects in children in real-time, which uses a small camera and analyzes facial expressions and head movements to infer the cognitive-affective state of the child
  • A glove-like device that maps students’ physiological arousal and measures the wearer’s skin conductivity, to deduce how excited a person is
  • Emotionally intelligent computing systems that can analyze sentiment and respond with appropriate expressions, enabling educators to deliver highly-personalized content that motivates children

As noted in the article, many of these innovations in emotive or affective computing originate in academic R&D settings, though commercial companies are taking them increasingly seriously and seeking to develop new tools to promote in the education technology market.

Real-time emotional feedback
All of these developments are part of ongoing attempts to make ‘real-time mood-monitoring’ and ‘real-time emotional feedback’ devices into key technologies for knowing, measuring, representing and governing human emotions in contemporary societies, as the sociologist William Davies has argued in a brilliant new article. Davies argues that positive emotions have attained a particularly privileged position in recent years, making the science of happiness, well-being indicators and ‘mood-tracking’ critical to contemporary forms of management and public policy.

‘The spread of “smart,” mobile, wearable technologies,’ Davies argues, ‘potentially allows humans to dwell in a purely “real-time” cognitive state (feeling, experiencing, responding and liking) and allowing machines to perform acts of judgment, evaluation and decision-making, at least as an ideal.’ Such forms of ‘affective capture,’ he suggests, represent new ways of ‘valuing’ the emotions, where the emotions become the object of assessment and judgment, and from there the targeted object of modification. Real-time mood-tracking devices are intended ‘to achieve a form of emotional augmentation,’ to ‘transform it’ and ‘render that emotion preferable in some way (be it more positive, more acceptable, simpler etc.), turning it into a different emotion.’

As Davies notes, psychological scales and categories pertaining to the measurement of emotions, such as the Positive and Negative Affect Schedule (PANAS), are integral to how many mood-monitoring devices operate. When wearable mood-monitoring devices are used for political purposes, such as in education and other areas of public policy, psychological scales can be used to define politically preferable forms of emotional conduct, and to impose interventions if individuals’ emotions are deemed to be at a negative deficit. In the process, students’ emotions would be changed to become more preferable, positive and acceptable.
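Scales like PANAS are trivially easy to operationalize in software, which is partly why they travel so well into mood-monitoring devices. As a purely illustrative sketch (assuming the standard 20-item PANAS, with each item rated from 1 to 5):

```python
# Illustrative sketch of PANAS scoring (standard 20-item version, each
# item rated 1 = "very slightly or not at all" to 5 = "extremely").

POSITIVE_ITEMS = [
    "interested", "excited", "strong", "enthusiastic", "proud",
    "alert", "inspired", "determined", "attentive", "active",
]
NEGATIVE_ITEMS = [
    "distressed", "upset", "guilty", "scared", "hostile",
    "irritable", "ashamed", "nervous", "jittery", "afraid",
]

def panas_scores(ratings: dict) -> dict:
    """Sum the ten positive-affect and ten negative-affect items.

    Each subscale therefore ranges from 10 to 50; a mood-monitoring
    app could flag a 'negative deficit' when the negative-affect score
    is high relative to the positive-affect score.
    """
    for item, value in ratings.items():
        if not 1 <= value <= 5:
            raise ValueError(f"rating for {item!r} out of range: {value}")
    return {
        "positive_affect": sum(ratings[i] for i in POSITIVE_ITEMS),
        "negative_affect": sum(ratings[i] for i in NEGATIVE_ITEMS),
    }
```

The arithmetic is trivial; the point is that once a psychological scale is encoded like this, a threshold on its output can quietly become the trigger for an intervention.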

Psycho-informatics in school
The forms of affective capture, mood-tracking and emotional augmentation and modification via psychological categories documented by Davies are indicative of future directions in the digitization and datafication of social-emotional learning. As both the WEF and EdSurge documents show, the alleged potential of new forms of affective computing in education is to data mine students according to psychological signals detected from the skin, face and brain. The capture and assessment of affective data can then be used to inform interventions that are ideally intended to impose socially desirable forms of positive affect on students, as defined by psychological expertise and legitimized by policy documents from both national governments and international policy influencers such as the OECD and WEF. In this way, affective computing does not just data-mine the emotions from body signals via psychological categories, but ideally attaches new emotional ways of being and conducting oneself to the body of the student.

The combination of real-time data mining with psychological experimentation is now even inspiring a new psychological subdiscipline of ‘psycho-informatics,’ which has been presented as an epochal shift in the science of psychological measurement. Psycho-informatics, it has been claimed, is ‘about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. … Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.’ Based on ‘the vision of a transparent human,’ psychological experimentation in psycho-informatics makes use of wearable sensors to track physical movements and of smartphones to trace online activities, and then deploys data mining and machine learning in order to detect, characterize and classify behavioural patterns and trends as they change over time.
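The pipeline described here (continuous sensor capture, then data mining to detect behavioural trends) can be caricatured in a few lines of code. This is a hypothetical sketch, not an implementation from the psycho-informatics literature; the window size and the decline threshold are my own illustrative assumptions:

```python
# Hypothetical sketch of a psycho-informatics pipeline: a stream of
# daily activity counts (e.g. from a wearable or a smartphone log) is
# smoothed, and a sustained decline is flagged as a behavioural trend.
# The 7-day window and 0.7 decline ratio are illustrative assumptions.

def rolling_mean(series, window=7):
    """Simple trailing moving average over `window` days."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def flag_decline(series, window=7, drop_ratio=0.7):
    """Return day indices where the smoothed signal has fallen below
    `drop_ratio` times its level one window earlier."""
    smooth = rolling_mean(series, window)
    flags = []
    for i in range(window, len(smooth)):
        if smooth[i] < drop_ratio * smooth[i - window]:
            flags.append(i)
    return flags
```

A flag like this only becomes consequential when a psychological label, such as ‘disengagement’ or ‘low mood,’ is attached to it and an intervention follows.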

The term psycho-informatics accurately captures current ambitions to apply psychology, data-mining and affective computing to the measurement and assessment of students’ social-emotional learning. It could become a key technique of government, allowing students’ emotions to be data-mined and assessed in real-time for the purposes of continuous, automated school performance measurement.

ClassDojo is prototypical of how wearable mood monitoring devices and psychological surveillance apps inspired by psycho-informatics might roll out to schools. New psycho-informatic techniques are being designed to enact and enforce new social-emotional learning policies and practices of affective measurement that value certain politically preferred forms of emotional conduct in classrooms. Psycho-informatics could become a key technique of psycho-compulsion by which schools can promote the student behaviours according to which they may in future be measured and governed. Even more technically advanced forms of psycho-informatic mood-tracking and affective computing devices driven by psychological insights into the emotional aspects of learning—and how to detect affect from the body—could become increasingly in-demand as social-emotional learning policy makes positive affect and psychological compulsion into key policy priorities, and even potentially into accountability mechanisms by which schools may be measured, assessed and judged.

Image credit: Scott Brown

Algorithms in the news–why digital media literacy matters

Ben Williamson

Much of our work on Code Acts in Education over the past few years has focused on the work that algorithms do (and what they are made to do, and by whom) in relation to learning, policy and practice. But the work of algorithms extends far beyond education, of course. The sociologist David Beer–a notable scholar of the ‘social power of algorithms’–has pointed out that there has been a startling increase in the amount of media coverage of algorithms over the last year. I thought it might be interesting to see whether different newspapers take distinctive editorial perspectives on the calculative devices that now play such a significant role in our lives. Below I have captured some results of simple Google searches using the ‘inurl:’ operator to restrict searches to the websites of some of the UK’s best-known newspapers, as a rough test to see whether any obvious patterns emerge. I’m fully aware that Google results are highly contingent on the searcher and the date, time and location of the search. I even ran a few of these searches more than once and got different results–which says a lot about how information is curated algorithmically, and why new digital media literacy approaches to news consumption and information access are going to be crucial in coming years.
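Queries of this kind are easy to script, which is worth seeing because it demystifies the method. A minimal sketch (the domains and the way the operator is combined with the keyword are illustrative assumptions, not a record of the exact queries I ran):

```python
# Illustrative sketch: composing Google queries that restrict results
# to a given newspaper's website. "inurl:" matches terms appearing in
# the result URL; the stricter "site:" operator restricts to a domain.
from urllib.parse import urlencode

PAPERS = {
    "The Guardian": "theguardian.com",
    "The Telegraph": "telegraph.co.uk",
    "The Sun": "thesun.co.uk",
}

def search_url(term, domain, operator="inurl"):
    """Build a Google search URL for `term` restricted via `operator`."""
    query = f"{term} {operator}:{domain}"
    return "https://www.google.com/search?" + urlencode({"q": query})

# One query URL per newspaper.
urls = {paper: search_url("algorithms", dom) for paper, dom in PAPERS.items()}
```

Scripting the queries does nothing to stabilize the results, of course: what Google returns remains contingent on the searcher, date, time and location.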

The Guardian
Of all the newspapers I’ve searched, The Guardian returns the most displayed results. Although Google reports over 10,000 search results, only 70 entries are actually displayed.

Image: Google search results for ‘algorithms’ on The Guardian

Notably, of those results the top return is an article by Cathy O’Neil, author of the recent book Weapons of Math Destruction and a fierce critic of how algorithm-driven big data is affecting everyday life. A review of that book is the third result. Other results seem to indicate a largely critical take on algorithms from The Guardian, with coverage on calls for greater scrutiny of algorithms and their role in spreading false information. The Guardian certainly appears to take the most critical editorial line, with an emphasis on the connections between algorithms and politics. If we wanted to get categorical, perhaps we could say The Guardian‘s editorial line is to treat the algorithm as a governor.

The Telegraph
Switching to The Telegraph, we can see quite a different set of results.

Image: Google search results for ‘algorithms’ on The Telegraph

The top result here is about the use of algorithms in cosmetics production, followed by a story about inserting algorithmic techniques into marketing strategy. Some of the other pieces focus on lie-detection algorithms, ‘anti-elitism’ school selection algorithms, and a quantum computer that can solve algorithms. The search apparently returned over 5,000 results, though only 56 were displayed. Just from these results, it looks as though The Telegraph treats the algorithm as a useful scientist whose expertise is helping society.

The Sun
The Sun returned far fewer results, at around 500 (of which 50 were displayed), though the vast majority of these were associated with its Striker cartoon strip.

Image: Google search results for ‘algorithms’ on The Sun

However, scanning through the first few pages of the results, The Sun has covered Angela Merkel’s critical comments about algorithms, the use of Instagram data to identify indicators of depression among users, and algorithmic surveillance techniques used by the CIA. Even so, only four of the Google results for The Sun were actually news stories about algorithms. In short, The Sun is largely uninterested in algorithms in terms of newsworthiness.

The Mirror
Like The Sun, The Mirror has very limited coverage of algorithms. Of around 500 results, only 63 were displayed, and many of these were not actually content from The Mirror site. Still, it had more news about algorithms than The Sun.

Image: Google search results for ‘algorithms’ on The Mirror

The Mirror seems at first glance to focus on algorithms as scientifically reliable techniques. One of the results returned refers to any reader interested in the maths of algorithms as a ‘brain-box’–so perhaps we could say the editorial line of The Mirror is to treat algorithms in terms of brainy expertise.

The Daily Mail
Finally, for now anyway, The Daily Mail: over 57,000 search results, of which only 66 were displayed.

Image: Google search results for ‘algorithms’ on the Daily Mail

A first look suggests generally favourable coverage of algorithms, other than in its piece about algorithmic trading: crowdsourcing maps for natural disasters, stopping suicide, and anticipating terrorism. Algorithms as problem-solvers might be one way of categorizing its editorial line.

However, when I repeated the same search about an hour later, the top results were rather different.

Image: repeated Google search results for ‘algorithms’ on the Daily Mail

Now I could learn that algorithms are making us small-minded, that AI can predict the future, and that algorithms can detect prejudice from body language. The Daily Mail is certainly not uninterested in algorithms–the result counts are pretty high compared to the tabloids, and the Mail does frequently re-post scientific content from sources like The Conversation–but by no means does it adopt the kind of critical line found in The Guardian.

Critical digital media literacy
Certainly this quick scan of the newspaper coverage on algorithms indicates that a deeper study would be worthwhile. It perhaps also gives us some clues about diverse perspectives in relation to algorithms from different parts of the news media landscape. Algorithms have been a pretty big news event in relation to Brexit and the US elections–but this social and political power of algorithms doesn’t appear very well covered except by The Guardian. Contrast that with the New York Times:

Image: Google search results for ‘algorithms’ on the New York Times

Here, there is much more emphasis on making algorithms accountable and on the involvement of algorithms in fake news and clickbait. That The Guardian and the NY Times are on to these kinds of stories is maybe not surprising given their political leanings and target audiences. However, if we genuinely are concerned that algorithms are involved in political life by filtering and curating how we access information, then it’s perhaps concerning that these issues are much less well covered in papers from alternative political perspectives. Even what we know about filter bubbles and algorithmic curation is itself filtered and curated. The Daily Mail has covered this issue, but it didn’t show up in the Google results until I conducted a second search. Why? The mutability of these search results also indicates that any careful study along these lines would have to pay close attention to methods in order to achieve stable results.

From an educational perspective, the diverse ways in which algorithms are presented in the news is interesting because it suggests very different ways in which people might learn about algorithms in their everyday access to news. Many educators have long been committed to forms of digital and media literacy (part of our problem today is that the alt-right have colonized critical media literacy approaches to mainstream media). Developing forms of digital media literacy that attend to the role and power of algorithms in political and cultural life now appears to be a real priority that will require dedicated attention in 2017.


Fast policy networks in the construction of the computing curriculum

Ben Williamson

Image: micro:bits, by Les Pounder

Computing has replaced ICT in the English National Curriculum, bringing an emphasis on Computer Science, learning to code, and computational thinking into schools. For the last couple of months, Bethan Mitchell and I have been conducting a small-scale research project to detail how government policy on the computing curriculum was supported by a messy network of non-governmental organizations, actors and material objects, which is now also supporting its subsequent enactment. This has built on an earlier documentary study, published in Critical Policy Studies, which mapped parts of the policy network behind the curriculum and its public statements. In our new work, we’ve been conducting interviews with key people who occupied the network, seeking to get insiders’ perspectives on the development and enactment of computing curriculum policy. In this post I outline a few observations emerging from the interviews, which we will be analysing more thoroughly in the new year.

Speeding up policy
Perhaps one of the most notable things about the introduction of computing in education policy is the speed with which it became part of the official prescribed curriculum. In 2010 computing was a fairly niche concern among a diverse range of organizations, with little governmental support; by 2013 it had been translated into new programmes of study for schools that were published on the Gov.uk site. In our analysis, then, we are hoping to understand the policy process around the computing curriculum as both the product of a distributed cross-sector ‘policy network’ and an accelerated ‘fast policy’ event. Here we’re drawing on Jamie Peck and Nik Theodore’s conceptualization of both the spatial distribution and temporal speed-up of policymaking in Fast Policy:

The modern policymaking process may still be focused on centers of political authority, but … sources, channels, and sites of policy advice encompass sprawling networks of human and nonhuman actors/actants, including consultants, web sites, practitioner communities, norm-setting models, conferences, guru performances, evaluation scientists, think tanks, blogs, global policy institutes, and best-practice peddlers…

They further articulate how fast policy has been marked by ‘shortening of policy development cycles, fast-tracking decision-making and rapid programme roll-out,’ all features that can be seen in the development and diffusion of computing curriculum policy. Not only has the timescale of its development, implementation and enactment been highly compressed, but the process has also involved advice and influence from across different sectors and positions.

Network nodes
The network of organizations that has combined to influence and enact the computing curriculum consists of a diverse range of public, private and third sector actors. Some of the first advocacy for computing in the curriculum came from very different perspectives. The campaigning organization Computing at School, for example, produced a white paper back in 2010. The innovation charity Nesta produced a report in 2011 about the needs of the videogames and visual effects industry that emphasized computing in school as a solution to a skills gap. The Royal Society followed in 2012 with a report intended to protect academic Computer Science.

2012 was a key year. The Secretary of State for Education at that time, Michael Gove, gave a speech that highlighted government ambitions to replace ICT with computing. The Department for Education then formed a working group to design draft programmes of study for the new subject. The working group was led by the British Computer Society, the Royal Academy of Engineering, and Computing at School, with membership that encompassed interests from industry, education and academia.

At around the same time, organizations to support children in learning how to write computer code started emerging—such as Code Club, Raspberry Pi, and the Festival of Code event run by Young Rewired State. In the years since, a large number of coding initiatives have sprung up in support of the curriculum, including, most notably, the BBC’s flagship Make it Digital campaign and its distribution of a million ‘Micro-Bit’ programmable computers to children across the UK.

Beyond the UK, a more distributed network exists. These include the US Hour of Code initiative (a UK version now exists too) and the Computer Science for All campaign supported by President Barack Obama. Commercial support from global technology companies such as Google, Microsoft and Oracle has also helped solidify computing in schools. Oracle, for example, has spent hundreds of millions of US dollars supporting Computer Science for All, and in 2016 announced over a billion dollars of funding for computing education in European Union member states.

Policy actor positions
Within the cross-sector, interorganizational network that supports computing curriculum policy, key individuals have been able to take up different positions. From our interviews, we have begun to build up a rough typology of these positions:

  • Guru figureheads—influential individuals, often with industry background in major global tech companies, who use their position to make persuasive public statements and galvanize political and public support
  • Relationship brokers—actors who are able to build connections between seemingly diverse organizations, sectors, discourses and individual actors; who capture good ideas and propel them forwards through building and coordinating collaborations between others
  • Lobbyists—specific campaigners who advocate the interests of the groups they represent through the production of key campaigning messages, fixing meetings, organizing events, and generating public visibility
  • Practical experts—mostly former or present educators committed to the educational benefits of computing, they take a pragmatic view of the opportunities available to drive forward their agenda through work with other educators
  • Troublemakers—network insiders who feel their own interests are not being heard or acted upon, and publicly resist and critique the dominant network activities—sometimes risking being marginalized from key events and actions
  • Geek insiders—activist programmers and technology experts, usually affiliated to voluntary groups, who are seen by government officials  as trusted sources of technical know-how and inside-knowledge about technology development and its implications for education
  • Venture entrepreneurs—influential and wealthy venture capital actors from the technology and innovation sector, some enjoy a revolving-door relationship with government departments and senior politicians, and represent major global VC firms

Other positions in the network include those for volunteer programmers who teach young people how to code outside of school, and computing teachers who enact the computing curriculum itself through their pedagogies.

Many of these actors have collaborated on working groups, campaign alliances, lobbying associations, cross-sector collaborations, and the shared production of future visions and practical strategies.

Policy materiality
A policy network never merely consists of people and organizations, but a vast material tapestry of objects, technical hardware, software and texts. The computing curriculum network can operate in a distributed and accelerated way because it encompasses a sprawling network of nonhuman stuff. Websites for all the organizations involved in the network function as key sources of information, advocacy and advice. Most of these organizations are also accomplished users of social media, with Twitter accounts and Facebook pages used to attract followers and diffuse ideas, events and key messages.

Much of the original support for computing in schools became possible because of a growing mass of reports, white papers, manifestos, working papers and draft curriculum proposals. Some of the early documents produced by Computing at School and Nesta, for example, only caught public attention months or years after initial publication, as these organizations acted opportunistically to insert their expertise into emerging political openings. Newspaper and magazine coverage in both educational and mainstream media has helped propel these ideas into public visibility.

In terms of the practical enactment of computing curriculum policy, this has also been supported in very material ways, such as through the provision of printed curriculum guidance, the supply of online teacher training materials, and the easy availability of free coding software for use in schools. Physical computing devices such as the Raspberry Pi and the BBC micro:bit instantiate the computing curriculum in hardware.

Policy speak
What the network says is as important as how it works. According to our interviewees, the computing curriculum has been relatively successful because it encompasses a range of quite diverse interests and agendas. The interests of disciplinary computer science can be accommodated alongside the more practical agenda of coding and programming usually associated with the field of software engineering. Some advocates of computing talk more of computational thinking and the capacity of young people to solve real-world problems by thinking like a computer, while others talk of the urgent need to develop digital citizenship and critical digital literacy to cope in a world of massive social and technical complexity.

Indeed, some of our interviewees talked of strategic ambiguity—evident even at specific high level meetings—where computer science, programming and notions of digital literacy were treated as one and the same thing. This is despite others’ protestations that over-emphasizing computer science risks turning computing into a high stakes scientific subject for academic high achievers, or that prioritizing coding risks treating computing as a talent pipeline for the commercial software development industry.

A fast policy infrastructure
There remains much analysis to be done of the specific interviews we have conducted, and of the webs of materials and technologies that have proliferated since the computing curriculum fully came into force in English schools. We also need to make better sense of the implications for computing of the wider current context of education policymaking. Notably, for example, Computer Science is now available as a GCSE qualification in the English Baccalaureate, making it a high stakes exam subject by which school performance might be measured. At the same time, however, mass academization means secondary schools are under no obligation even to teach computing at all. Primary schools, we are told, have probably done more with computing than secondary schools, which raises real concerns about transition and progression. And funding and teacher shortages remain significant problems which, despite the extensive network activity sketched above, show no sign of being resolved.

However, what we think we can see emerging is a kind of ad hoc architecture of relationships, practices and materials, or a fast policy infrastructure, which has orchestrated the ways in which the computing curriculum has been diffused, and which continues to influence how it is enacted, extended to new sites, and expanded to encompass diverse interests and agendas. This infrastructure consists of diverse organizations from the public, private and third sectors; of individual actors and mutating relationships; of technologies, texts and web materials; and of discourses, good ideas and strategically-deployed ambiguities. As an unfolding policy event, the computing curriculum is typical of fast policy and policy network enactment: cross-sector networks of people and materials, translated into delicate but mutable affiliations and strategically combined interests that have been mobilized to achieve varied aims and goals.

Image credit: Les Pounder

Social media and public pedagogies of political mis-education

Ben Williamson


Over the past few months the close-knit relationship of education with software and data has become a defining feature of political life in democratic societies. In a year that has seen ‘post-truth’ named as word of the year by Oxford Dictionaries, social media fueled by big data has been blamed for creating deep political polarization. At the same time, the organization of formal education has itself been accused of increasing inequalities and widening a gap in worldviews between young people who leave education with high-status qualifications and those who do not. What is the link?

The education gap
Both the UK’s Brexit referendum and the US election have raised significant questions about education. One question was about why, on average, people with fewer educational qualifications had tended to vote for the UK to leave the EU, or for Trump to take the presidency despite his lack of political experience, while those with more qualifications tended to vote the other way. A new ‘education gap’ has emerged as an apparent determinant of people’s political preference. This education gap has begun to raise concerns about divisions in democracy itself, as the political scientist David Runciman has argued:

The possibility that education has become a fundamental divide in democracy—with the educated on one side and the less educated on another—is an alarming prospect. It points to a deep alienation that cuts both ways. The less educated fear they are being governed by intellectual snobs who know nothing of their lives and experiences. The educated fear their fate may be decided by know-nothings who are ignorant of how the world really works.

Of course, plenty of wealthy educated people in the UK voted to leave the EU, and voted for Trump in the US. But statistics from both votes did indicate significant population differences in terms of educational qualification, in relation to a range of other social factors, in determining voting patterns.

Significantly, the statistics from the EU referendum indicate that the vote for leaving the EU was concentrated in geographical areas already most affected by growing economic, cultural and social inequalities, as well as by physical pain and mental ill-health and rising mortality rates. The sociologists Mike Savage and Niall Cunningham have vividly articulated the consequences of growing inequalities for citizens’ political participation:

There is ample evidence that political dynamics are being increasingly driven by the dramatic spiraling of escalating inequalities. To put this another way, growing economic inequalities are spilling over into all aspects of social, cultural, and political life, and that there are powerful feedback loops between these different spheres which are generating highly worrying trends.

Education, of course, is itself highly unequally distributed in terms of how well children achieve in schools, in ways that reproduce all sorts of social, cultural and economic inequalities. The increasing separation of children from more or less affluent backgrounds, and according to geographical locales and social and cultural contexts, is part of the dramatic spiralling of inequalities observed by sociologists. The kind of political polarization that materialized during both Brexit and the US election is the result of the related dynamics of education, geography, economics, and cultural and social networks, and the feedback loops between them.

It would be naive to suggest that those people with fewer qualifications are somehow to blame for not being critically aware of how their perspectives were being sculpted by populist propaganda during these campaigns. Anxiety among highly educated elites about the consequences of a lack of political awareness is far from novel. Moreover, the challenge here is to reconcile the polarizing interests of both those who are highly educated and those who are less educated. As Savage and Cunningham concluded, ‘the way that the wealthy elite are increasingly culturally and socially cocooned, and the extent to which large numbers of disadvantaged groups are outside their purview is deeply worrying.’ In their view, a kind of educated ignorance is the problem.

In the EU referendum and the US presidential election alike, neither side appeared to have any deep awareness of the other or of the deep-seated social issues that led to such distinctive and divided patterns of voting, as David Runciman explained:

Social media now enhances these patterns. Friendship groups of like-minded individuals reinforce each other’s worldviews. Facebook’s news feed is designed to deliver information that users are more inclined to ‘like’. Much of the shock that followed the Brexit result in educated circles came from the fact that few people had been exposed to arguments that did not match their preferences. Education does not provide any protection against these social media effects. It reinforces them. … [T]he gap between the educated and the less educated is going to become more entrenched over time, because it … represents a gulf in mutual understanding.

This point raises the other question, which was couched much less explicitly in terms of education. This concerned the role of social media in filtering how people learned about the issues on which they were being invited to vote.

Personalized political learning
The issue of how social media has participated in filtering people’s exposure to diverse political perspectives has become one of the defining debates in the wake of Brexit and the US election. An article in the tech culture magazine Wired on the day of the US election even asked readers, uncharacteristically, to consider the ‘dark side of tech’:

Even as the internet has made it easier to spread information and knowledge, it’s made it just as easy to undermine the truth. On the internet, all ideas appear equal, even when they’re lies. … Social media exacerbates this problem, allowing people to fall easily into echo chambers that circulate their own versions of the truth. … Both Facebook and Twitter are now grappling with how to stem the spread of disinformation on their platforms, without becoming the sole arbiters of truth on the internet.

The involvement of social media in the spread of ‘post-truth politics’ points to how it is leading citizens into informational enclaves designed to feed them news and knowledge that has been filtered to match their interests, based on data analysis of their previous online habits, what they have ‘liked’ or watched, what news sources they prefer, who they follow and what social networks they belong to.

‘Platforms like Twitter and Facebook now provide a structure for our political lives,’ Phil Howard, a sociologist of information and international affairs, has argued. He claims that social algorithms allow ‘large volumes of fake news stories, false factoids, and absurd claims’ to be ‘passed over social media networks, often by Twitter’s highly automated accounts and Facebook’s algorithms.’

Since the US election, it has been revealed that Trump’s campaign team worked closely with Facebook data to generate audience lists and targeted social media campaigns. Added to this, other more politically-activist media sites such as Breitbart and Infowars have actively disseminated right-wing political agendas, reaching audiences that count in the tens of millions, as Alex Krasodomski-Jones has detailed. ‘Computational propaganda’ involving automated bots spreading sensationalist political memes across social media networks has further compounded the problematic polarization of news consumption. Facebook and Twitter now accelerate the spread of fake news or sensationalized political bias through mechanisms such as trending topics and moments, which are engineered to be personalized to users’ preferences.

Clearly there are important implications here for how young people access and evaluate information. Jamie Bartlett and Carl Miller of the think tank Demos wrote a report 5 years ago that highlighted a need to teach young people critical thinking and scepticism online to ‘allow them to better identify outright lies, scams, hoaxes, selective half-truths, and mistakes.’

But the debate is not just about how to protect young people from online trolling, propagandist bias and fake news. Just as with the debate about the education gap, it’s important to note that people from across the political spectrum, whether highly educated or not, are all increasingly ‘socially and culturally cocooned’ as Mike Savage and Niall Cunningham phrased it. Education and social media are both involved in producing these cocooning effects.

The sociologist of social media Tarleton Gillespie wrote a few years ago about how big data-driven social media creates not just ‘networked publics’ who cohere together online around shared tastes and preferences, but ‘calculated publics’: algorithmically produced snapshots of the ‘public’ around us and what most concerns it. He argued that search engines, recommendation systems, algorithms on social networking sites, and ‘trend’ identification algorithms not only help us find information, but provide a means to know what there is to know and to participate in social and political discourse.

Algorithmic calculations are now at the very centre of how people learn to take part in political and democratic life, filtering, curating and shaping the information and news we consume based on calculations of what most concerns and engages us — the logic of social media personalization now applies to political life. In other words, we are living in a period of personalized political learning, in which our existing political preferences are reinforced by the news and information we consume via social media and by our participation in calculated, networked publics, with the consequence that alternative perspectives are systematically curated out of our feeds and out of our minds.
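The reinforcement dynamic described here can be pictured in a few lines of code. Everything below is hypothetical (invented field names, a toy scoring rule), but it shows the basic mechanism: ranking stories purely by how well they match a user's recorded preferences pushes preference-matching content to the top, and every further 'like' steepens the skew.

```python
from collections import Counter

def rank_feed(items, liked_topics):
    """Rank candidate stories by overlap with topics the user has already liked."""
    def score(item):
        # A story earns the user's accumulated 'like' weight for each matching topic.
        return sum(liked_topics[t] for t in item["topics"] if t in liked_topics)
    return sorted(items, key=score, reverse=True)

# A user whose interaction history already leans heavily one way...
liked = Counter({"leave": 5, "immigration": 3, "remain": 1})

stories = [
    {"id": "a", "topics": ["leave", "immigration"]},
    {"id": "b", "topics": ["remain", "economy"]},
    {"id": "c", "topics": ["leave"]},
]

feed = rank_feed(stories, liked)
print([s["id"] for s in feed])  # ['a', 'c', 'b']
```

The story that least matches existing preferences sinks to the bottom of the feed; in a real system it would simply never be shown, which is the filter-bubble effect in miniature.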

So seriously is this problem being taken that, in the fallout from the US election, it has been reported that a team of ‘renegade’ Facebook employees has established itself to deal with fake news and misinformation, although Mark Zuckerberg has denied that fake news on Facebook influenced the result. The web entrepreneur Tim O’Reilly has suggested that the answer is not for Facebook to reinstate human editors — whose alleged political bias was itself the centre of a major controversy not so long ago — but to design more intelligent techniques for separating information from sensationalist misinformation:

The answer is not for Facebook to put journalists to work weeding out the good from the bad. It is to understand, in the same way that they’ve so successfully divined the features that lead to higher engagement, how to build algorithms that take into account ‘truth’ as well as popularity.

Expect the quest for truth-divining algorithms to become a dominant feature of technical development in the social media field in the coming years. Google in Europe, for example, has already announced support for a startup company that is developing automated, real-time fact-checking software (called RoboCheck) for online news. The appeal of apparently objective, impartial and unbiased truth-seeking algorithms in post-truth times is obvious, though as recent work in digital sociology and geography has repeatedly shown, algorithms are always dependent on the choices and decisions of their designers and engineers. The ‘social power of algorithms’ such as those of Facebook to intervene in political life may not easily be resolved by new algorithms.

Public pedagogies of political mis-education
The post-truth spread of misinformation twinned with the magnification of political and social polarization via social media platforms and algorithms is at the core of a new public pedagogy of political mis-education. Public pedagogy is a term used to refer to the lessons that are taught outside of formal educational institutions by popular culture, informal institutions and public spaces, dominant cultural discourses, and public intellectualism and social activism. Big data and social media are fast becoming the most successful sources of public pedagogy in the everyday lives of millions around the world. They are educating people by sealing them off into filter bubbles and echo chambers, where access to information, culture, news, and intellectual and activist discourse is being curated algorithmically.

The filter bubbles or echo chambers that calculated publics inhabit when they spend time on the web are consequential because they appear to close off access to alternative perspectives, and potentially lead people to think that everyone thinks like they do, shares their political sentiments, their aspirations, their fears. This is further related to, reproduced and exacerbated by  social inequalities in education, economics and cultural access. Doing well in formal education or not now appears to be a determinant of which kinds of social networks and calculated publics you belong to. ‘The educational divide that is opening up in our politics is not really between knowledge and ignorance,’ David Runciman argues. ‘It is a clash between one worldview and another.’

In an age where highly educated people and less educated people are being sharply divided both by social media and by their experience of education alike, serious issues are raised for the future of education as a social institution itself and the part it plays in supporting democratic processes. Existing educational inequalities and the experience of being parts of calculated publics in social media networks are now in a dynamic feedback loop. The public pedagogies of social media are becoming mis-educational in their effects, polarizing public opinion along different axes but most especially between the highly educated and the less educated.

Forms of measurement using data have long been at the core of how governments know and manage populations, as the sociologist David Beer has demonstrated in his work on ‘metric power.’ Today, the measurement of people’s interests, preferences and sentiments via social media, and the use of that information to feed back content that people will like and that matches their existing preferences, is leading to a form of calculating governance that is exacerbating divisive politics and eroding democratic cohesion. Via social media data, people are being educated and governed according to measurements that indicate their existing worldview, and then provided access to more of the same.

As Brexit and the US election indicate, increasingly people in the UK and US are being governed as two separate publics, with many of the less-educated incited to support political campaigns that the more-educated find alien and incomprehensible, and vice versa. The philosopher Bruno Latour has described them as ‘two bubbles of unrealism,’ one clinging to an imagined future of globalization and the other retreating to the imagined ‘old countries of the past,’ or ‘a utopia of the future confronting a utopia of the past’:

For now, the utopia of the past has won out. But there’s little reason to think that the situation would be much better and more sustainable had the utopia of the future triumphed instead. … If the horizon of ‘globalization’ can no longer attract the masses, it is because everyone now understands more or less clearly that there is no real, material world in the offing corresponding to that vision of a promised land. … Nor can we count any longer on returning to the old countries of the past.

Education has long reinforced these utopias of unrealism — we’ve been teaching and learning in ‘post-truth’ times for years. Contradictory policy demands over the last two decades have pointed simultaneously towards an education for the future of a high-skills, globalized knowledge economy (as reinforced by global policy actors like the OECD), and an education of the past which emphasizes traditional values, national legacy, social order and authority. Social media algorithms and architectures have further enabled these utopias of unrealism to embed themselves across the US and Europe.

The mis-education of democratic society by the public pedagogies of big data and social media is being enabled by algorithmic techniques that are designed to optimize and personalize people’s everyday experiences in digital environments. But in the name of personalization and optimization, the same techniques are leading to post-truth forms of political mis-education and democratic polarization.

Sociologists have begun asking hard questions about the capacity of their field to address the new problems surfaced by Brexit and Trump. The field of education needs to involve itself too in this new problem space, in order to probe how young people are measured and known through traces of their data from an early age; how their tastes and preferences are formed through the dynamics between imagined utopias and social media feedback loops; how these relate to entrenched patterns of educational and other social inequalities; and how their sense of their place and their futures in democratic societies is formed as they encounter the public pedagogies of big data and social media in their everyday lives. How, in short, should we approach education in post-truth times?

Image credit: Quapan

Pearson, IBM Watson and cognitive enhancement technologies in education

Ben Williamson

Image: Atomic Taco

The world’s largest edu-business, Pearson, partnered with one of the world’s largest computing companies, IBM, at the end of October 2016 to develop new approaches to education in the ‘cognitive era.’ Their partnership was anticipated earlier in the year when both organizations produced reports about the future trajectories of cognitive computing and artificial intelligence for personalizing learning. I wrote a piece highlighting the key claims of both at the time, and have previously published some articles tracing both Pearson’s interests in big data and IBM’s development of cognitive systems for learning. The announcement of their partnership is the next step in their efforts to install new machine intelligences and cognitive systems into educational institutions and processes.

At first sight, it might seem surprising that IBM and Pearson have partnered together. Their reports would suggest they were competing to produce a new educational market for artificially intelligent or cognitive systems applications. Pearson, however, has had a bad couple of years, with falling revenue and reputational decline, which appears to have resulted in the closure of its own in-house Center for Digital Data, Analytics and Adaptive Learning. IBM, meanwhile, has been marketing its cognitive computing systems furiously for use in business, government, healthcare, education, and other sectors. The key to the partnership is that, despite its business troubles, Pearson retains massive penetration into schools and colleges through its digital courseware, while IBM has spent years developing and refining its cognitive systems. A mutually beneficial strategic business plan underpins their partnership.

The Pearson-IBM partnership also taps into current enthusiasm and interest in new forms of machine-based intelligence. This is reflected, for example, in the recent establishment of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, a White House report on preparing for the future of artificial intelligence, and a Partnership on AI established by Facebook, Amazon, Google, IBM and Microsoft. The central tenet of the Partnership on AI is that ‘artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education.’

Together, these developments point to a growing contemporary concern with forms of machine intelligence that are sometimes described as ‘weak’ or ‘narrow’ forms of AI. Weak or narrow AI includes techniques such as cognitive computing, deep learning, genetic algorithms, machine learning, and other automated, algorithmic processes, rather than aspiring to ‘strong’ or ‘general’ models of AI which assume computers might become autonomous superintelligences. A recent report on future computing produced by the Human Brain Project noted that:

The power of these innovations has been increased by the development of data mining and machine learning techniques, that give computers the capacity to learn from their ‘experience’ without being specifically programmed, constructing algorithms, making predictions, and then improving those predictions by learning from their results, either in supervised or unsupervised regimes. In these and other ways, developments in ICT and robotics are reshaping human interactions, in economic activities, in consumption and in our most intimate relations.

Ultimately, such technologies can be described as cognitive or intelligent because they have been built to learn and adapt in ways that are inspired by the human brain. Neuroscientific insights into the plasticity of the brain, how it adapts to input and stimuli from the social environment, have been at the centre of the current resurgence of interest in machine intelligence.

So what is education likely to look like if the glossy imaginary projected by Pearson and IBM of learning in the cognitive era materializes in the future?

Learning machines

The key technology underpinning their ambitions is Watson, IBM’s highly-publicized cognitive supercomputing system. The IBM webpages describe Watson as ‘a cognitive technology that can think like a human,’ and which has the capacity to:

  • Understand: With Watson, you can analyze and interpret all of your data, including unstructured text, images, audio and video.
  • Reason: With Watson, you can provide personalized recommendations by understanding a user’s personality, tone, and emotion.
  • Learn: With Watson, you can utilize machine learning to grow the subject matter expertise in your apps and systems.
  • Interact: With Watson, you can create chat bots that can engage in dialog.

Key to the way IBM is marketing Watson is that it has been built with extraordinary flexibility, with Watson APIs and starter code provided to allow organizations to build their own apps and products.
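What 'building on the APIs' amounts to in practice is the familiar pattern of posting data to a remote service and acting on the structured reply. The sketch below illustrates that pattern only: the endpoint, field names and canned reply are all invented for illustration, and IBM's actual Watson APIs differ.

```python
import json

# Hypothetical endpoint: IBM's real Watson services use different URLs and schemas.
API_URL = "https://example.com/cognitive/v1/analyze"

def build_request(text, features):
    """Compose the JSON body an app would POST to a cognitive-services API."""
    return json.dumps({"text": text, "features": features})

def parse_response(raw):
    """Extract the fields an app might act on from the service's JSON reply."""
    data = json.loads(raw)
    return {"tone": data.get("tone"), "recommendation": data.get("recommendation")}

body = build_request("I still don't understand fractions", ["tone", "recommendation"])

# A canned reply standing in for the live service:
canned = '{"tone": "frustrated", "recommendation": "review-prerequisites"}'
insights = parse_response(canned)
print(insights)  # {'tone': 'frustrated', 'recommendation': 'review-prerequisites'}
```

The point of exposing the system this way is that any courseware vendor can wire student text into the service and route the returned 'insights' into its own interface, which is precisely what makes the Pearson partnership technically straightforward.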

Though IBM has been promoting cognitive computing in education for a few years—in 2013 it produced a glossy visualization of the classroom in five years’ time, a ‘classroom that will learn you’—it is now firmly seeking to establish Watson in the educational landscape. IBM Watson Education, it claims, ‘is bringing education into the cognitive era’:

We are transforming the learning experience through personalization. Cognitive solutions that understand, reason and learn help educators gain insights into learning styles, preferences, and aptitude of every student. The results are holistic learning paths, for every learner, through their lifelong learning journey.

One of the key applications IBM has developed is a data-based performance tracking tool for schools and colleges called IBM Watson Element for Educators:

Watson Element is designed to transform the classroom by providing critical insights about each student – demographics, strengths, challenges, optimal learning styles, and more – which the educator can use to create targeted instructional plans, in real-time. Gone are the days of paper-based performance tracking, which means educators have more face time with students, and immediate feedback to guide instructional decisions.

Designed for use on an iPad so it can be employed directly in the classroom, Element can capture conventional performance information, but also student interests and other contextual information, which it can feed into detailed student profiles. This is student data mining that goes beyond test performance to social context (demographics) and psychological classification (learning styles). It can also be used to track whole classes, and automatically generates alerts and notifications if any students are off-track and need further intervention.
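The alerting behaviour described here can be pictured as simple threshold rules run over each student profile. The field names and cut-offs below are invented, not IBM's; they are a minimal sketch of what 'automatically generates alerts if any students are off-track' could mean in code:

```python
# Assumed thresholds, purely illustrative:
OFF_TRACK_SCORE = 60   # mastery score below this triggers an alert
MAX_MISSED = 3         # missed classes above this triggers an alert

def flag_off_track(students):
    """Return an alert record for each student whose profile crosses a risk threshold."""
    alerts = []
    for s in students:
        reasons = []
        if s["score"] < OFF_TRACK_SCORE:
            reasons.append("low mastery score")
        if s["missed_classes"] > MAX_MISSED:
            reasons.append("poor attendance")
        if reasons:
            alerts.append({"student": s["name"], "reasons": reasons})
    return alerts

roster = [
    {"name": "A", "score": 72, "missed_classes": 1},
    {"name": "B", "score": 54, "missed_classes": 5},
]
print(flag_off_track(roster))
# [{'student': 'B', 'reasons': ['low mastery score', 'poor attendance']}]
```

Even this toy version makes the critical point visible: whoever sets the thresholds and chooses the profile fields is deciding which students get flagged for intervention.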

Another, complementary application is IBM Watson Enlight for Educators. Enlight is designed as a tool to support teachers to personalize their instructional techniques and content:

IBM Watson Enlight embodies three guiding principles:
1. Know Me: empower teachers with a comprehensive view of relevant data to help understand each student’s strengths and areas of growth.
2. Guide Me: provide teachers with guidance as to how best to support each student.
3. Help Me: support teachers with curated, personalized learning content and activities aligned with each student’s needs.

The application is marketed as a support system for understanding a class and the individual students in it, and for generating ‘actionable insights’ to ‘target learning experiences.’ ‘Teachers can optimize their time and impact throughout the year using actionable, on-demand insights about their students,’ it claims, and then ‘craft targeted learning experiences on-the-fly from content they trust.’ In many ways, these applications are extraordinarily similar to those being promoted for schools by companies like Facebook, with its Summit Personalized Learning platform, or AltSchool’s Playlist and Progression tools.

The partnership with Pearson will allow Watson to penetrate into educational institutions at a much bigger scale than it could do on its own, thanks to the massive reach of Pearson’s courseware products. Specifically, the partnership is focusing on the higher education sector (though time will tell whether it further migrates into the schools sector). The press release issued by Pearson stated that its new global education partnership would ‘make Watson’s cognitive capabilities available to millions of college students and professors’:

Pearson and IBM are innovating with Watson APIs, education-specific diagnostics and remediation capabilities. Watson will be able to search through an expanded set of education resources to retrieve relevant information to answer student questions, show how the new knowledge they gain relates to their own existing knowledge and, finally, ask them questions to check their understanding.

Strikingly, it proposes that Watson will act as a:

flexible virtual tutor that college students can access when they need it. With the combination of Watson and Pearson, students will be able to get the specific help they need in real time, ask questions and be able to recognize areas in which they still need help from an instructor.

The press release issued by IBM added that Watson would be ‘embedded in the Pearson courseware’:

Watson has already read the Pearson courseware content and is ready to spot patterns and generate insights. Serving as a digital resource, Watson will assess the student’s responses to guide them with hints, feedback, explanations and help identify common misconceptions, working with the student at their pace to help them master the topic.

What Watson will do, then, is commit the entirety of Pearson’s content to its computer memory, and then, by constantly monitoring each individual student, cognitively calculate the precise content or structure of a learning experience that would best suit or support that individual.

The partnership is ultimately the material operationalization of a shared imaginary of machine intelligences in education that both IBM and Pearson have been projecting for some time. But this imaginary is slowly moving out of the institutional enclosures of these organizations to become more widely perceived as desirable and attainable in the future, and it is beginning to animate policy ideas as well as technical projects. The White House report on AI, for example, specifically advocates the development of AI digital tutors for use in education, and has suggested the need for a new technical agency within the US Department of Education modelled on the defence research agency DARPA. The think tank the Center for Data Innovation has also produced a report on ‘the promise of AI’ that admiringly promotes Watson applications such as its automated Teacher Advisor.

Cognitive enhancement technologies

Underpinning these efforts is a shared vision of how machine intelligences might act as cognitive-enhancement technologies in educational settings, though we clearly need to be cautious about the extent to which the technology will live up to its futuristic hype. As educational technology critic Audrey Watters has recently argued, ‘the best way to predict the future is to issue a press release.’ IBM and Pearson are both busily marketing their vision of the cognitive future of education because their businesses depend on it. For them, it’s necessary to suggest that people today are at a cognitive deficit when faced with the complexities of the technologized era, so they can sell products offering cognitive enhancement.

The promise of cognitive computing for IBM, as stated in its recent white paper on ‘Computing, cognition and the future of knowing,’ is not just of more ‘natural systems’ with ‘human qualities,’ but a fundamental reimagining of the ‘next generation of human cognition, in which we think and reason in new and powerful ways’:

It’s true that cognitive systems are machines that are inspired by the human brain. But it’s also true that these machines will inspire the human brain, increase our capacity for reason and rewire the ways in which we learn.

These are extraordinary claims that put companies like IBM and Pearson in the cognitive-enhancement business. They have positioned themselves at the vanguard of the generation of hybrid ‘more-than-human’ cognition, learning and thinking.

Clearly there may be consequences of the development of cognitive enhancement technologies and machine intelligences in education. These technologies could ultimately become responsible for establishing the educational pathway and progress of millions of students. They could ‘learn’ some bad habits, like Microsoft’s infamous AI chatbot. They could be found to discriminate against certain groups of students, and reinforce and reproduce existing social inequalities. Privacy and data protection are obvious issues as supposedly clever technologies ingest all the intimate details of individual students and store them in vast databanks on the IBM cloud. If Watson scales across Pearson’s content and courseware, it is ultimately going to be able to collect and data-mine huge amounts of information about potentially millions of students worldwide.

Moreover, access to these technologies won’t be cheap for institutions. This could lead to competitive cognitive advantage for students from wealthy institutions, whose learning and development may be supported by cognitive enhancement technologies. A new form of hybrid cognitive capital may become available for students at institutions that invest in these cognitive systems. Given that Pearson’s own global databank of country performance, the Learning Curve, compares education systems according to students’ ‘cognitive skills,’ measuring national cognitive capital as a comparative advantage in the ‘global race’ could also become attractive to government agencies.

Regarding this last point, IBM and Pearson also anticipate the development of real-time adaptive forms of governance in education. Both Pearson and IBM are trying to bypass the cumbersome bureaucratic systems of testing and assessment by creating real-time analytics that perform constant diagnostics and adaptive, personalized intervention on the individual. Pearson’s previous report on AI in education spells this out clearly:

Once we put the tools of AIEd in place as described above, we will have new and powerful ways to measure system level achievement. … AIEd will be able to provide analysis about teaching and learning at every level, whether that is a particular subject, class, college, district, or country. This will mean that evidence about country performance will be available from AIEd analysis, calling into question the need for international testing.

Although the current partnership with IBM is focused on college students, then, this is just part of a serious aspiration to govern the entire infrastructure of education systems through real-time analytics and machine intelligences, rather than through the infrastructure of test-based accountability that currently dominates schools and colleges.

Educational institutions are by now well used to accountability systems that involve collecting and processing test scores to produce performance measures, comparisons and ratings. IBM and Pearson are proposing to make cognitive systems orchestrate this infrastructure of accountability. As Adrian Mackenzie has put it, ‘cognitive infrastructures’ such as Watson ‘present problems of seeing, hearing, checking and comparing as no longer the province of human operators, experts, professionals or workers … but as challenges set for an often almost Cyclopean cognition to reorganise and optimise.’ IBM and Pearson are seeking to sink a cognitive infrastructure of accountability into the background of education, one which is intended not just to measure and compare performance, but to reorganize and optimize whole systems, institutions and individuals alike.


Assembling ClassDojo

A sociotechnical survey of a public sphere platform

Ben Williamson

ClassDojo mojo

The world’s most successful educational technology is ClassDojo. Originally developed as a smartphone app for teachers to reward ‘positive behaviour’ in classrooms, it has recently extended significantly to become a communication channel between teachers and parents, a school-wide reporting and communication platform, an educational video channel, and a platform for schoolchildren to collect and present digital portfolios of their class work.

In a previous post I began sketching out a critical approach to the ClassDojo app. In this follow-up (note that it’s a long read, more a working paper than a post) I want to explore ClassDojo as a more extensive platform, and to consider it as a sociotechnical ‘assemblage’ of many moving parts. It is, I argue, simultaneously composed of technical components, people, policies, funding arrangements, expert knowledge and discourse, all of which combine and work together as a hybrid product of human and nonhuman actors to enable the functioning of the platform. Each of these components has been assembled together over time to make ClassDojo what it is today. The purpose of the post is twofold: to help generate greater public understanding and awareness of ClassDojo among teachers and parents, and also to scope out the contours of the platform for further detailed research.

Education in the ‘platform society’
When it was first launched as a beta product in 2011, ClassDojo was a simple app designed for use on mobile devices. It has subsequently become a much more extensive platform, spreading rapidly across the US and around the world. As new features have been added over its five-year lifespan to date, ClassDojo has become much more like a social media platform for schools. It allows teachers to award points for behaviour, somewhat akin to pressing the ‘like’ button on Facebook; permits text and video communication between teachers and parents, as many social media platforms do; acts as a channel for video content; and also has capacity for schoolchildren to create digital portfolios of their work. It has also extended to become a ‘schoolwide’ platform, whereby all teachers, school leaders and pupils are signed up to the platform and school leaders can take an overview of everything occurring on it.

Given its expansion beyond its original design as an app, ClassDojo needs to be understood in relation to emerging critical research on digital platforms, where ‘platform’ refers to internet-based applications such as social media sites that process information and communication. Jose van Dijck and Thomas Poell have argued that ‘over the past decade, social media platforms have penetrated deeply into the mechanics of everyday life, affecting people’s informal interactions, as well as institutional structures and professional routines.’ More recently, van Dijck has suggested that we are entering a new kind of ‘platform society’ in which ‘social, economic and interpersonal traffic is largely channeled by an (overwhelmingly corporate) global online infrastructure that is driven by algorithms and fueled by data.’ This emerging platform society is gradually interfering with more and more aspects of everyday life, including key public institutions of society such as health and education. Van Dijck has called these ‘public sphere platforms’ that promise to contribute to the public good in areas which are underfunded by governments, but are owned and structured by private actors and networks.

ClassDojo is prototypical of a public sphere platform for education, one that is designed to contribute to the public good by supporting teachers to manage children’s classroom behaviour and allow parents to communicate with schools at a time when schools are increasingly under pressure. Before detailing its various dimensions as a platform, however, it is important to note that any platform ultimately consists of multiple moving parts, both human and nonhuman, that have to be assembled together. Putting it simply, social researchers have recently begun to attend to the messy ‘assemblages’ of digital technologies such as online platforms, while education researchers have begun to acknowledge that their objects of study—classrooms, tests, policies, or educational technologies—are in fact assemblies of myriad things. In recent work on ‘critical data studies,’ Rob Kitchin and Tracey Lauriault have described a ‘data assemblage’ as:

a complex socio-technical system, composed of many apparatuses and elements that are thoroughly entwined, whose central concern is the production of data. A data assemblage consists of more than the data system/infrastructure itself, such as a big data system, an open data repository, or a data archive, to include all of the technological, political, social and economic apparatuses that frames their nature, operation and work.

An assemblage such as a digital platform, then, needs to be understood in terms of the ways that all its moving parts—whether human and social or nonhuman, material or technical—come together to form a relatively coherent and stable whole. For Kitchin and Lauriault, researching such an assemblage would therefore involve an investigation of its technical and material components; the people that inhabit it and the practices they undertake; the organizations and institutions that are part of it; the marketplaces and financial techniques that enable it; the policies and frameworks that govern it; and the knowledges and discourses that promote and support it.

Importantly, they—like others working with the assemblage concept—acknowledge that assemblages are contingent and mutable rather than fixed entities:

Data assemblages evolve and mutate as new ideas and knowledges emerge, technologies are invented, organisations change, business models are created, the political economy alters, regulations and laws introduced and repealed, skill sets develop, debates take place, and markets grow or shrink.

Utilizing the concept of a sociotechnical assemblage, in what follows I aim to detail how ClassDojo has been assembled over time as a mutating and evolving public sphere platform for education that consists of many human and nonhuman moving parts. I have arranged this as a kind of sociotechnical survey of the elements that constitute the ClassDojo assemblage.

Technicalities & materialities
As a technical platform ClassDojo consists of a mobile app and an online platform. Teachers can access and use the app on a smartphone or tablet in the classroom, and open up the online platform on any other computing device or display hardware for pupils to view. The platform allows class teachers to set their own behavioural categories, though it comes pre-loaded with a series of behaviours that teachers can use to award or deduct feedback points. Each child in the system is represented by a cute avatar, a dojo monster, which can be customized by the user. Behavioural targets can be set for both individuals and groups to achieve positive goals. Children’s points are represented as a ‘doughnut’ of green positive points and red ‘needs work’ deductions. Teachers are able, if they choose, to display each child’s aggregate points to their entire class as a kind of league table of behaviour, and school leaders can access each child’s profile to monitor their behavioural progress.
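As a rough data-structure sketch, the green/red ‘doughnut’ described above amounts to aggregating a log of signed feedback events per pupil. The class and field names below are hypothetical illustrations of that mechanic, not ClassDojo’s actual implementation.

```python
# Illustrative sketch of the points model described above: green "positive"
# awards and red "needs work" deductions per pupil. All names here are
# hypothetical, not ClassDojo's actual code.
from collections import defaultdict

class PointsLog:
    def __init__(self):
        self.events = defaultdict(list)  # pupil -> list of (behaviour, signed value)

    def award(self, pupil, behaviour, value=1):
        self.events[pupil].append((behaviour, value))

    def deduct(self, pupil, behaviour, value=1):
        self.events[pupil].append((behaviour, -value))

    def doughnut(self, pupil):
        """Totals behind the green/red 'doughnut' display."""
        vals = [v for _, v in self.events[pupil]]
        positive = sum(v for v in vals if v > 0)
        needs_work = -sum(v for v in vals if v < 0)
        return {"positive": positive, "needs_work": needs_work,
                "total": positive - needs_work}

log = PointsLog()
log.award("amy", "teamwork")
log.award("amy", "on task", 2)
log.deduct("amy", "off task")
print(log.doughnut("amy"))  # {'positive': 3, 'needs_work': 1, 'total': 2}
```

Aggregating such per-pupil totals across a class is what makes the ‘league table’ display, and aggregating across classes is what would make school-wide behavioural monitoring possible.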

Launched in 2016, its ‘school-wide’ features allow whole schools, not just individual teachers, to sign up for accounts, enabling ‘teachers and school leaders to safely share photos, videos, and messages with all parents connected to the school at once, replacing cumbersome school websites, group email threads, newsletters, and paper flyers.’ At the same time that ClassDojo is expanding in scope to encompass new technical innovations and serve other practical and social functions, it is therefore obsolescing existing school technologies and materials. The new school-wide application of ClassDojo also makes it easier for the platform to be used by administrators, and means that a child’s individual profile remains persistent over time as that child moves between classes. Teachers can also create ‘Student Stories‘ for each child in a class, where digital portfolios of their class work can be uploaded and maintained.

The public ClassDojo website acts as a glossy front door and public face to the platform and the company behind it. It presents the brand through highly attractive visual graphics, high-production promotional video content and carefully crafted text copy. The website also features an ‘Idea Board’ where ideas about the use of the platform in the classroom can be submitted by teachers to be shared publicly, plus a blogging area for teachers and an engineering blog where the technical details of the platform are discussed and shared by its engineers. For parents assigned a login, it is possible to access the ‘Class Story’ area where teachers can share messages and video with all parents of children in a specific class, and individual teachers and parents can also exchange short text messages.

Less visibly, ClassDojo consists of technical standards relating to network security, data storage, interoperability, and communication protocols. All of the technical aspects of ClassDojo also need to be written in the code and algorithms that make the platform function. The ClassDojo engineering blog details some of the complexity of the code and algorithms that have been used or designed to make all the different elements of the platform function. Much of its source code is available to view on the ClassDojo area of the GitHub code repository. GitHub is therefore part of the assemblage of ClassDojo, both a resource that contains the code and algorithms used in the platform and one that its engineers use to locate existing reusable code.

As a cloud-based service, all of ClassDojo’s data servers and analytics are hosted externally. For this it employs Amazon Web Services. The safety and security page of the ClassDojo website notes that the web servers of Amazon Web Services ‘are physically located in high-security data centers – the same data centers used to hold secure financial information. … Our database provider uses the same https security connections used by banks and government departments to store and transfer the most sensitive data.’ (Unfortunately, at the time of writing the link provided on the ClassDojo website to the ‘security measures’ provided by AWS does not work.) Any interaction with the ClassDojo platform therefore takes place via Amazon’s vast global infrastructure of cloud technologies, with the resulting data physically stored in one of Amazon’s data centres. ClassDojo is, in other words, physically, financially and technically located within one of the key global organisations that orchestrate the emerging ‘platform society.’

As well as being a technical platform, ClassDojo consists of a variety of material artefacts under the ‘Resources‘ section of the website. These include teacher resources to support the use of ClassDojo in the classroom and lesson planning, training resources such as PowerPoint presentations to enable school leaders to train staff, and a variety of glossy printable posters and other display materials that can be used to decorate the classroom. In addition to this, the website provides resources for parents such as introductory letters that can be distributed by schools to explain the platform, detailed parent guides as downloadable PDF files, and simple video content that can be used in the classroom to help young children understand it too.

ClassDojo also extends into other platforms. It has its own Facebook page and a popular @ClassDojo account on Twitter with 61,000 followers. Much of its initial word-of-mouth marketing worked through these platforms, allowing ClassDojo to rapidly extend to new users as enthusiastic early adopters recommended it to friends and colleagues via social media. Facebook and Twitter are part of the ClassDojo community, enabling its vast user base to engage with the organization and other community members. User-generated materials such as lesson plans and classroom resources to support the use of ClassDojo are made available for sharing by teacher advocates of the platform on teaching websites and other public sharing sites such as Pinterest, thus extending it beyond the enclosures of its own technical infrastructure to other platforms and material resources. Via other platforms, teachers have created and shared, for example, ‘Dojo dollars,’ ‘reward coupons’ and ‘vouchers,’ created their own incentives and rewards systems and displays, posted ‘points tracker’ posters and sets of ‘Dojo goals’ for data folders, and suggested the use of ‘prize centres’ where physical prizes are displayed for pupils that top the ClassDojo league tables.

As this survey of the technical aspects of ClassDojo demonstrates, it consists of myriad technologies, materials, standards and so on; but these technical elements all need to be orchestrated by human hands.

People & organizations
Who makes ClassDojo? Critical studies of software code and algorithms have demonstrated that their function cannot be separated from their designers. As Tarleton Gillespie has phrased it, ‘algorithms are full of people.’ Humans make decisions about what algorithms do, their goals and purposes and the tasks to which they are put. Likewise, any system of data collection or online communication platform has to be programmed to perform its tasks according to particular objectives and business plans, and within financial and regulatory constraints.

ClassDojo depends on a vast network of people and organizations. It was founded in 2011 by two young British entrepreneurs, Liam Don and Sam Chaudhary. Don was educated as a computer scientist and Chaudhary as an economist—with experience of working for the consultancy McKinsey in its education division in London—before both moved to Silicon Valley after successfully applying to the education technology ‘incubator’ program Imagine K12. Imagine K12’s founder Tim Brady was the very first investor in ClassDojo and continues to sit on its board; he has been described by ClassDojo’s founders as a key mentor and influence in the early days of its development. Brady himself was one of the very first employees at Yahoo! in the 1990s, where he acted as Chief Product Officer for 8 years. Considerable Silicon Valley experience is therefore represented on the ClassDojo board.

In addition to its founders, ClassDojo is staffed by a variety of software engineers, designers, product managers, communications and marketing officers, privacy, encryption and security experts and human-computer interaction designers. Notably, none of ClassDojo’s staff are listed as educational experts, but instead are all drawn from the culture of software development, many of them with experience in other Silicon Valley technology companies, social media organizations and consultancies. Founders Don and Chaudhary themselves have some limited educational experience of working with schools in the UK prior to moving to Silicon Valley.


Through external partnerships, ClassDojo employs three independent privacy experts to guide it in relation to data privacy regulation in North America and Europe, and works with a team of security researchers to continually test ClassDojo’s security practices for vulnerabilities. Its board consists primarily of its major investors (detailed below under funding and finance). ClassDojo also works with over 20 essential third-party service providers who primarily support the platform with specific technical services, including data storage, video encoding, photo uploading, server performance, data visualization, web analytics, performance metrics, conducting A/B testing on different versions of the website, and managing real-time communication data. These third-party service providers include Amazon Web Services, which hosts ClassDojo’s servers and data analytics, Google Analytics, which provides analytics on its website, and many others, without which the platform could not function.

Support for ClassDojo has been affirmed through a number of prizes. The business magazine FastCompany listed ClassDojo as one of the 10 most innovative education companies in 2013, and in 2015 it won the Crunchie award for best education startup from the TechCrunch awards while its founders were featured in the ’30 under 30’ list of Inc magazine. These prizes and recognitions have helped ClassDojo and its founders to consolidate their reputations and brand, both as a successful classroom tool and an entrepreneurial business.

As a sociotechnical assemblage, ClassDojo functions through the involvement of its users. Users are configured by ClassDojo—in the sense that it makes new practices possible—but can also reshape ClassDojo to their own purposes. The basic reward mechanism at the heart of the ClassDojo behaviour management application can be customized by any signed-up teacher. These reward categories then shape the ways in which points are awarded in classrooms, changing both the practices of the staff employing it and the experience of the pupils who are its subjects. With the announcement of school-wide features in 2016, entire schools can be signed up to ClassDojo, ultimately becoming institutional network nodes of the platform. By mid-2016 the ClassDojo website reported that the platform was in use in 180 countries, with over 3 million subscribing teachers serving over 35 million pupils. ClassDojo is, in other words, constituted partly through the practices of a vast global constellation of users.

Teachers using ClassDojo are repositioned by the platform by being conferred new responsibilities. Huw Davies suggests teachers are transformed into data entry clerical workers by the platform, becoming responsible for data collection in the classroom that will ultimately contribute to big datasets that could be analysed and then ‘sold’ back to school leaders as premium features. Although ClassDojo does not market itself as a big data company, its access to behavioural data on millions of children gives it tremendous capacity to report detailed and comparative analyses that could be used to measure teachers’ and schools’ records on the management of pupil behaviour.

Policy, regulation & governance
The way that the technical platform of ClassDojo operates, and the work of the people who build and use it, is all governed by particular forms of regulation and policy. Data privacy is an area that the ClassDojo organization is especially keen to promote, not least following a critical article in the New York Times in 2014, which the ClassDojo company vigorously countered in an open letter entitled ‘What the NYTimes got wrong.’ Its website features an extensive privacy policy, the product of its privacy advisers. The policy is regularly updated and organized on the website to detail exactly what information the platform collects, its student data protection policy, and available opt-outs. Notably, ClassDojo claims that it deletes all pupils’ feedback points after 12 months, unless students or parents create accounts. Where schools or individual teachers have set up accounts that parents have then subscribed to, a persistent record of the child’s personal information will be retained.

ClassDojo claims it is completely compliant with North American data privacy regulatory frameworks such as FERPA (Family Educational Rights and Privacy Act) and COPPA (Children’s Online Privacy Protection Act). FERPA is a Federal law that protects the privacy of student education records, while the primary goal of COPPA is to place parents in control over what information is collected from their young children online. ClassDojo’s ‘privacy center’ displays ‘iKeepSafe’ privacy seals for both FERPA and COPPA, alongside a badge proclaiming it as a signatory of the Student Privacy Pledge. iKeepSafe (Internet Keep Safe Coalition) is itself a nonprofit international alliance of more than 100 policy leaders, educators, law enforcement members, technology experts, public health experts and advocates, and acts to ensure that both FERPA and COPPA are enforced.

ClassDojo is additionally compliant with the US-EU Safe Harbor framework set forth by the US Department of Commerce regarding the collection, use, and retention of personal data from European Union member countries. The Court of Justice of the European Union, however, declared this agreement invalid in 2015, meaning there is a grey area in terms of data protection for children logged on ClassDojo outside the US. Its commitment to data privacy would seem to depend on specific agreements made between the EU and the cloud service provider hosting its data, in this case Amazon Web Services. This seems to put pressure on schools to make sense of complex international data protection policies. Schools making use of ClassDojo in the UK, for example, might need to ensure they are familiar with the Information Commissioner’s Office code of practice and checklists for data sharing. This code covers such activities as ‘a school providing information about pupils to a research organisation’ and would arguably extend to the provision of information about pupil behaviour to an organization like ClassDojo (and stored by Amazon Web Services), not least as the data may be used to construct behavioural profiles of individuals and classes.

ClassDojo also subscribes to the principles of ‘privacy by design,’ an approach which encourages the embedding of privacy frameworks into a company’s products or services. ClassDojo’s Sam Chaudhary has co-authored an article on privacy by design with the Future of Privacy Forum, a Washington DC-based think tank dedicated to promoting responsible data practices through lobbying government leaders, and a sounding board for companies proposing new products and services. The founders of ClassDojo have therefore situated themselves among a network of data privacy expertise and lobbying groups in order to ensure their compliance with federal law and to be seen as a leading data privacy organization in relation to education and young children.

How ClassDojo operates in relation to data protection and privacy is therefore circumscribed by federal regulatory frameworks which govern how and why ClassDojo can collect, process and store users’ data and what rights children and their parents have to withdraw their consent for its collection or request its deletion. Privacy regulation is ‘designed-in’ to its architecture, though inevitably some concerns persist, not least about ClassDojo’s admission that, if it experienced a ‘change of control,’ all users’ personal information would be transferred to its new owner, with only 30 days for parents to request deletion of their children’s data.

Besides privacy policy and regulation, ClassDojo is also shaped by education policy, although less directly. A distinctive policy discourse of ‘character’ education, ‘positive behaviour support’ and ‘social-emotional learning’ frames ClassDojo, shaping the way in which the organization presents the platform. For example, ClassDojo’s founders present the platform through the language of character development and positive behaviour management. This is entirely compatible with US Department of Education policy documents and initiatives which, in the wake of a softening of the dominant test-based policy emphasis, have begun to emphasize concepts such as ‘character,’ ‘grit,’ ‘perseverance,’ ‘personal qualities’ and other ‘non-cognitive’ dimensions of ‘social-emotional learning’—the most prominent example being the 2013 US Department of Education, Office of Educational Technology report Promoting grit, tenacity and perseverance. ClassDojo is directly promoted in the report as ‘a classroom management tool that helps teachers maintain a supportive learning environment and keep students persisting on task in the classroom,’ allowing ‘teachers to track and reinforce good behaviors for individual students, and get instant reports to share with parents or administrators.’

As a consequence of the ‘grit’ report, controversial attempts have been made to make the measurement of ‘personal qualities’ of non-cognitive and social-emotional learning into school accountability mechanisms in the US. The prominent think tank the Brookings Institution has described these new school accountability systems as compatible with the Every Student Succeeds Act, the US law governing K-12 education signed in late 2015. The act requires states to include at least one non-academic measure when monitoring school performance. It therefore permits states to focus to a greater degree than previous acts on concepts such as competency-based and personalized learning, and promotes the role of the educational technology sector in supporting such changes. ClassDojo has been described in a commentary as an ideal educational technology to support the new law.

The ClassDojo website also suggests that its behaviour points system can be aligned with PBIS. PBIS stands for Positive Behavior Interventions and Supports and is an initiative of the US Department of Education, Office of Special Education Programs. Its aim is to support the adoption of the ‘applied science’ of Positive Behavior Support in schools and emphasizes social, emotional and academic outcomes for students. Through both its connections with the non-cognitive learning policy agenda and PBIS, ClassDojo has been positioned, and located itself, in relation to major political agendas about school priorities. It is in this sense an indirect technology of government that can help schools to support students’ non-cognitive learning. In turn, those schools are increasingly being held accountable for the development and effective measurement of those qualities.

ClassDojo is, in other words, a bit-part player in emerging policy networks that are changing the priorities of education policy to focus on the management and measurement of children’s personal qualities rather than academic attainment alone. Such changes are being brought about through processes of ‘fast policy,’ as Jamie Peck and Nik Theodore describe it, where policy is a thoroughly distributed achievement of ‘sprawling networks of human and nonhuman actors’ that include websites, practitioner communities, guru performances, evaluation scientists, think tanks, consultants, blogs, and media channels and sources, as well as the more hierarchical influence of centres of political authority. As both an organization and a platform, ClassDojo acts indirectly as a voice and a technology of networked fast policy in the educational domain, particularly as a catalyst and an accelerant that translates the priorities of government around non-cognitive learning and character development into classroom practice.

Markets, finances & investment
ClassDojo is part of a significant and growing marketplace of educational technologies. The new Every Student Succeeds Act gives states in the US much more flexibility to spend on ed-tech, a sector which has been growing at extraordinary rates in recent years. Some enthusiastic assessments suggest that global education technology sector spending was $67.8bn in 2015, part of a global e-learning market worth $165bn and estimated to reach $243.8bn by 2022.

This marketplace is being supported vigorously in Silicon Valley, where most investments are made, particularly through networks of venture capital firms and entrepreneurs and business ‘incubator’ and ‘accelerator’ programs dedicated to helping startup ed-tech companies go to scale. ClassDojo was developed as a working product through the Imagine K12 accelerator program for education technology startups in Silicon Valley. When ClassDojo emerged from its beta phase in 2013, it announced that it had secured a further $1.6 million in investment from Silicon Valley venture capital sources. It raised another $21 million in venture funding in spring 2016. Its investors include over 20 venture capital companies and entrepreneurial individuals, including Tim Brady from Imagine K12 (now merged with Y Combinator, a leading Silicon Valley startup accelerator), General Catalyst Partners, GSV Capital and Learn Capital, ‘a venture capital firm focused exclusively on funding entrepreneurs with a vision for better and smarter learning.’ Learn Capital has invested in a large number of ed-tech products in recent years and is a key catalyst of the growth of the sector; its biggest limited partner is Pearson, the world’s biggest edu-business, which links ClassDojo firmly into the global ed-tech market. Many of ClassDojo’s investors also sit on the ClassDojo board.

Investment in ClassDojo has followed the standard model for startup funding in Silicon Valley. It first received seed funding from Imagine K12 and others, before securing Series A investment in 2013 and Series B in 2016. While seed funding refers to financial support for startup ideas, Series A funding is used to optimize a product and secure its user base, and Series B is about funding the business development, technology, support, and other people required for taking a business beyond its development stage. Sometime after 2016, ClassDojo will look to scale fast and wide through Series C funding—investment at this stage can reach hundreds of millions of dollars.

The ClassDojo success story for classroom practitioners and school leaders is therefore reflected and enabled by its success as a desirable product of venture capital funding, all of it framed by a buoyant marketplace of ed-tech development and finance. This marketplace is also itself framed and supported by specific kinds of discourses of technological disruption and solutionism. Many Silicon Valley companies and entrepreneurs have latched on to the education sector in recent years, seeing it in terms of problems that might be solved through technological developments and applications. Greg Ferenstein has noted that many Silicon Valley startup founders and their investors ‘believe that the solution to nearly every problem is more innovation, conversation or education,’ and therefore ‘believe in massive investments in education because they see it as a panacea for nearly all problems in society.’ The marketplace in which ClassDojo is located, therefore, is framed by a discourse that emphasizes the importance of fixing education systems and institutions in order to make them into effective mechanisms for the production of innovative problem-solving people.

Expert knowledge & discourse
As already noted above in relation to ClassDojo’s connections to education policy agendas, an emerging educational discourse is that of personal qualities and character education. ‘Education goes beyond just a test score to developing who the student is as a person—including all the character strengths like curiosity, creativity, teamwork and persistence,’ its co-founder and chief executive Sam Chaudhary has said. ‘There’s so much research showing that if you focus on building students’ character and persistence early on, that creates a 3 to 5 times multiplier on education results, graduation rates, health outcomes. It’s pretty intuitive. We shouldn’t just reduce people to how much content they know; we have to develop them as individuals.’

Underpinning the policy shift to character development, in which ClassDojo plays a small bit-part, are particular forms of expertise and disciplinary knowledge. The forms of expertise to which ClassDojo is attached are those of the psychological sciences, neuroscience and the behavioural sciences, in particular as they have been translated into the discourse of character education, grit, resilience and so on. One of the key voices in this emerging discourse is Paul Tough, author of a book about educating children with ‘grit,’ who has mapped out some of the networks of psychological, neuroscientific and economics experts contributing their knowledge and understandings to this field, including names such as Angela Duckworth and Carol Dweck.

Duckworth and Dweck are both directly cited by ClassDojo’s founders as key influences, alongside other ‘thought leaders’ such as James Heckman and Doug Lemov. Heckman is a Nobel prize-winning economist noted for his work on the development of character skills. Lemov is a free-market advocate of the charter schools movement and author of the popular Teach Like a Champion. Duckworth has her own named psychological lab where she researches ‘personal qualities’ such as ‘grit’ and ‘self-control’ as dimensions of human character. The relationship between ClassDojo and Carol Dweck’s concept of ‘growth mindsets’ is the most pronounced. In January 2016, ClassDojo announced a partnership with the Project for Education Research That Scales (PERTS), an applied research center at Stanford University closely associated with Dweck that has become the intellectual home of the theory of growth mindsets.

Dweck has argued that teachers can ‘engender a growth mind-set in children by praising them for their persistence or strategies (rather than for their intelligence), by telling success stories that emphasize hard work and love of learning, and by teaching them about the brain as a learning machine.’ Notably, Dweck’s PERTS lab itself has a close relationship with Silicon Valley, where the growth mindsets concept has been popularized as part of a recent trend in behavior-change training programs designed to enable valley workers to ‘fix personal problems.’ Dweck herself has presented the concept at Google, and other PERTS staff have advisory roles in Silicon Valley companies. The growth mindset concept is, therefore, closely aligned with the wider governmental behaviour change agenda associated with behavioural economics. Governments have long sought to use psychological and behavioural insights into citizens’ behaviours as the basis for designing policies and services that are intended to modify their future behaviours. ClassDojo seeks to accomplish this goal within schools by nudging children to change their behaviours at exactly the same time that schools are being encouraged to measure students’ non-cognitive social-emotional skills.

The partnership between ClassDojo and PERTS takes the form of a series of short animations on the ‘Big Ideas’ section of the ClassDojo website that help explain the growth mindsets idea for teachers and learners themselves. They present the brain as a malleable ‘muscle’ that can constantly grow and adapt as it is put to the task of addressing challenging problems. The presentation of the brain as a muscle in ClassDojo is part of the recent popularization of the neuroscience concept of ‘neuroplasticity,’ in which the brain is seen as constantly adapting to the social environment. Rather than being seen as a structurally static organ, the brain has been reconceived as dynamic, with new neural pathways constantly forming through adaptation to environmental stimuli. The videos are essentially high-production updates of instructional resources previously developed by Dweck and disseminated through her Mindset Works spin-out company. ClassDojo approached Dweck about adapting these materials, and the videos were produced by ClassDojo with input from PERTS. The ClassDojo website claims that ’15 million students are now building a growth mindset’–this figure is presumably based on web analytics of the numbers of schools in which the videos have been viewed–while at the time of writing in September 2016 the ClassDojo Facebook page was promoting ‘Growth Mindset Month.’

ClassDojo is increasingly aligned with psychological and behavioral norms associated with growth mindsets, both by teaching children about growth mindsets through its Big Ideas videos and, through the app, by nudging children to conduct themselves in ways appropriate to the development of such a growth-oriented character. In this sense, ClassDojo is perfectly aligned with the controversial recent federal law which allows states to measure the performance of schools on the basis of ‘non-academic’ measures, such as students’ non-cognitive social-emotional skills, personal qualities, and growth mindsets. This governmental agenda sees children themselves as a problem to be fixed through schooling. Its logic is that if children’s non-cognitive personal qualities, such as character, mindset and grit, can be nudged and configured to the new measurable norm, then many of the problems facing contemporary schools will be solved.

The close relationship between ClassDojo, psychological expertise and government policy is indicative of the extent to which the ‘psy-sciences’ are involved in establishing the norms by which children are measured and governed in schools—a relationship which is by no means new, as Nikolas Rose has shown, but is now rapidly being accelerated by psy-based educational technologies such as ClassDojo. A science of mental measurement infuses ClassDojo, as operationalized by its behavioural points system, but it is also dedicated to an applied science of mental modification, involved in the current pursuit of the development of children as characters with grit and growth mindsets. By changing the language of learning to that of growth mindsets and other personal qualities, ClassDojo and the forms of expertise with which it is associated are changing the ways in which children may be understood and acted upon in the name of personal improvement and optimization.

Conclusion
ClassDojo is prototypical of how education is being reshaped in a ‘platform society.’ This sociotechnical survey of the ClassDojo assemblage provides some sense of its messy complexity as an emerging public sphere platform that has attained substantial success and popularity in education. Approached as a sociotechnical assemblage, ClassDojo is simultaneously a technical platform that serves a variety of practical, pedagogical and social functions; an organizational mosaic of engineers, marketers, product managers and other third-party providers and partners; the subject of a wider regulatory environment and also a bit-part actor in new policy networks; a serious object of financial investment in the ed-tech marketplace; and a mediator of diverse expert psychological, neuroscientific and behavioural scientific knowledges and discourses pertaining to contemporary schooling and learning.

Like any digital assemblage, ClassDojo is mutating and evolving in response to the various elements that co-constitute it. As policy discourse shifts, ClassDojo follows suit–as its embrace of growth mindsets and its positioning in relation to policy discourses of character and positive behaviour support demonstrate. It is benefiting financially from a currently optimistic ed-tech marketplace, which is itself now being supported politically via the Every Student Succeeds Act. Its engineering blog also demonstrates how the technical platform of ClassDojo is changing as new code and algorithms become available, while its privacy policies are constantly being updated as data privacy regulation pertaining to children becomes an increasing priority and concern–as demonstrated by its response to a critical New York Times article in 2014. ClassDojo is not being ‘scaled up’ in a simple and linear manner, but messily and contingently, through a relational interweaving of human actions and nonhuman technologies, materials, policies, and technical standards.

Given its rapid proliferation globally into the practices of over 3 million teachers and the classroom experiences of over 35 million children in 180 countries, ClassDojo can accurately be described as a public sphere platform that is interfering in how teaching and learning take place. It is doing so according to psychological forms of expertise and governmental priorities, supported by financial instruments and organizations, and is being enacted through a technical infrastructure of devices and platforms and a human infrastructure of entrepreneurs, engineers, managers, and other experts, as well as the users who incorporate it into their own practices and extend it through the creation of user-generated content and materials. As it continues to scale and mutate, it deserves to be the focus of much further in-depth analysis. This work-in-progress has surveyed ClassDojo to point to possible future lines of inquiry into the reshaping of education in a platform society.

Images from ClassDojo media assets