Big Data in Education book launch

Ben Williamson

I was asked to prepare a few things to say to launch my book Big Data in Education: The digital future of learning, policy and practice in the Faculty of Social Sciences at the University of Stirling. Below are my notes.

[Image: Big Data in Education book cover]

Seven years ago, like many other well-off parents in San Francisco, Max Ventilla was scoping out local schools. What he saw appalled him. State education, he concluded, was dangerously broken; a new model of school was required.

So he got a load of ‘progressive’ education literature about the state of American education, child-centred learning, school accountability, education technology and school design. He quit his job at Google, where he ran projects using big data to profile its millions of users, to set up his own new school.

AltSchool, as he called it, would be a ‘lab school’ combining child-centred progressivism with big data methods to deliver ‘personalized education’.

Max called venture capitalists he knew in Silicon Valley. Thirty-three million dollars later, he hired teachers, managers—and a team of data analysts and software engineers to work on a ‘new operating system for education.’

Mark Zuckerberg of Facebook, among other investors, gave Max another 100 million for more AltSchools.

The tech and business press went wild. The Financial Times called AltSchool an example of ‘Silicon Valley’s classrooms of the future.’

Then Max revealed what his engineers were up to.

They’d built a software platform that could crunch data about almost everything students did. Student work could be uploaded to the system. Teachers’ responses would be logged. This all fed into a ‘Progress’ app—a ‘data dashboard’ displaying the progress students were making in academic learning and social-emotional development.

A ‘Playlist’ app was developed to recommend personalized tasks for students based on analysis of their past performance and predictions of their likely future progress.

Then AltSchool revealed it had cameras everywhere, tracking every movement and gesture of each student to assess engagement and attention.

Critics started to call it a ‘surveillance school’—using students as ‘guinea pigs’ for experimental data analytics. But Max and his investors wanted it to scale up across state education, to make more schools look like AltSchools.

Max had figured out a business model to satisfy investors. The AltSchool software platform would be offered for sale to all schools, starting in 2019. Meanwhile, last month Max shut down two of his lab schools, with three more to close in spring.

With the experimental beta-testing over, now Max and donors such as Mark Zuckerberg want to install the laboratory in every school.

AltSchool is prototypical of big data in education, and highlights a number of themes explored in the book.

So this book is about how educational data are produced and for what purposes, and about the technologies and companies that generate and process them.

And it’s about fantasy. A ‘big data imaginary’ of education is not just hype dreamt up in Silicon Valley, but a normative vision of education for the future shared by many. It has a seductive new data discourse of ‘personalization,’ ‘adaptive learning,’ ‘student playlists,’ ‘learning analytics,’ ‘computer-adaptive testing,’ ‘data-enriched assessment,’ and even ‘artificial intelligence tutors.’

It’s about ‘evidence-based’ education policy—the claim that data analytics can provide real-time diagnostics and feedback at state, school, class and student levels—and about commercial lobbying, venture capital and new forms of corporate philanthropy too, with ed-tech companies trying to capture public education for profit while attracting policymakers to their persuasive ideas.

It’s about science, with psychological, cognitive and neuro-scientists becoming expert in the experimental uses of student data.

And it’s about challenges to education research. Education research usually deals with human learning within social institutions, but now nonhuman ‘learning machines’ that can learn from and feed back to their human companions are starting to inhabit learning spaces as well. Some social science education researchers feel under threat from ‘education data science’ too.

Finally, the book is about power and the everyday ‘public pedagogies’ that teach lessons to millions globally, not just in educational institutions. Social media’s trending algorithms and filters direct attention to current events, politics, culture, and more, based on calculations of what you might like, what you’ve done, who you know. Tastes are being shaped, opinions and sentiments tweaked, and political views targeted and entrenched by political bots and computational propaganda. The power of big data in education extends beyond school to these public pedagogies of mis-education too.


Learning machines

Ben Williamson

[Image: facial recognition, by Terry Kimura]

When educators talk about theories of learning they are normally referring to psychological conceptions of human cognition and thinking. Current trends in machine learning, data analytics, deep learning, and artificial intelligence, however, complicate human-centred psychological accounts of learning. Today’s most influential theories of learning are those that apply to how computers ‘learn’ from ‘experience,’ how algorithms are ‘trained’ on selections of data, and how engineers ‘teach’ their machines to ‘behave’ through specific ‘instructions.’

It is important for education research to engage with how some of its central concerns—learning, training, experience, behaviour, curriculum selection, teaching, instruction and pedagogy—are being reworked and applied within the tech sector. In some ways, we might say that engineers, data scientists, programmers and algorithm designers are becoming today’s most powerful teachers, since they are enabling machines to learn to do things that are radically changing our everyday lives.

Can the field of social scientific educational research yet account for how its core concerns have escaped the classroom and entered the programming lab—and, recursively, how technical ‘learning machines’ are re-entering classrooms and other digitized learning environments?

Nonhuman machine learning processes, and their effects in the world, ought to be the object of scrutiny if the field of education research is to have a voice with which to intervene in the data revolution. While educational research from different disciplinary perspectives has long fought over the ways that ‘learning’ is conceptualized and understood as a human process, we also need to understand better the nonhuman learning that occurs in machines. This is especially important as machines that have been designed to learn are performing a kind of ‘public pedagogy’ role in contemporary societies, and are also being pushed in commercial and political efforts to reform education systems at large scale.

Algorithmic autodidacts
One of the big tech stories of recent months concerns DeepMind, the Google-owned AI company pioneering next-generation machine learning and deep learning techniques. Machine learning is often divided into two broad categories. ‘Supervised learning’ involves algorithms being ‘trained’ on a selected, labelled dataset in order to spot patterns in other data later encountered ‘in the wild.’ ‘Unsupervised learning,’ by contrast, refers to systems that can learn ‘from scratch’ through immersion in data, without labelled examples. (A third approach, ‘reinforcement learning,’ in which systems learn through trial, error and reward, is central to the story that follows.)
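As a minimal illustration of the distinction (a toy sketch with invented data, not DeepMind’s code), the snippet below trains a supervised classifier on human-labelled examples and then lets an unsupervised algorithm discover groupings in unlabelled data:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    # Supervised learning: the algorithm is 'trained' on labelled examples,
    # then spots patterns in new data encountered 'in the wild.'
    X_train = np.array([[1, 1], [2, 1], [8, 9], [9, 8]])
    y_train = np.array([0, 0, 1, 1])             # labels supplied by humans
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    print(clf.predict([[7, 8]]))                 # -> [1]

    # Unsupervised learning: no labels; the algorithm finds structure
    # 'from scratch' through immersion in the data alone.
    X = np.array([[1, 2], [1, 1], [9, 9], [8, 9]])
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)                            # groupings the machine discovered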

In 2016 DeepMind demonstrated AlphaGo, a Go-playing AI system that learned in a supervised way from a training dataset of thousands of games played by professionals and accomplished amateurs. Its improved 2017 version, AlphaGo Zero, however, is able to learn without any human supervision or assistance other than being taught the rules of the game. It simply plays the game millions of times over at rapid speed to work out winning strategies.

In essence, AlphaGo Zero is an autodidactic algorithmic system: it teaches itself.

‘It’s more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and it is able to create knowledge itself,’ said AlphaGo’s lead researcher in The Guardian.

At the core of AlphaGo Zero is a training technique that will sound familiar to any education researchers who have encountered the psychological learning theory of ‘behaviourism’—the theory that learning is an observable change in behaviours that can be influenced and conditioned through reinforcements and rewards.

Alongside its neural network architecture, a cutting-edge ‘self-play reinforcement learning algorithm’ is AlphaGo Zero’s primary technical innovation. It is ‘trained solely by self-play reinforcement learning, starting from random play, without any supervision or use of human data,’ as its science team described it in Nature. Its ‘reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.’ As the reinforcement algorithm processes its own experiences in the game, it is ‘rewarded’ and ‘reinforced’ by the wins it achieves, in order ‘to train to superhuman level.’
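To make the principle concrete, here is a minimal, hypothetical sketch of self-play reinforcement (nothing like DeepMind’s actual neural-network architecture): an agent plays tic-tac-toe against itself, starting from random play, and nudges a table of state values toward each game’s final reward.

    import random

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    values = {}                  # state -> estimated value for player 'X'
    ALPHA, EPSILON = 0.1, 0.1    # learning rate and exploration rate

    def choose(board, player):
        moves = [i for i, s in enumerate(board) if s == ' ']
        if random.random() < EPSILON:
            return random.choice(moves)          # explore: random play
        def score(m):            # exploit: pick the best-valued next state
            v = values.get(board[:m] + player + board[m+1:], 0.0)
            return v if player == 'X' else -v
        return max(moves, key=score)

    for episode in range(20000):
        board, player, history = ' ' * 9, 'X', []
        while True:
            m = choose(board, player)
            board = board[:m] + player + board[m+1:]
            history.append(board)
            w = winner(board)
            if w or ' ' not in board:
                reward = 1.0 if w == 'X' else (-1.0 if w == 'O' else 0.0)
                for state in history:            # reinforce every visited state
                    v = values.get(state, 0.0)
                    values[state] = v + ALPHA * (reward - v)
                break
            player = 'O' if player == 'X' else 'X'

After enough episodes the value table encodes strategies discovered purely from the rewards of self-play, with no human game records involved.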

Beyond being a superhuman learning machine in itself, however, AlphaGo may also be used ‘to help teach human AlphaGo players about additional, “alien” moves and stratagems that they can study to improve their own play,’ according to DeepMind’s CEO and co-founder Demis Hassabis. During testing, AlphaGo Zero was able not just to recover past human knowledge about Go, but also to produce new knowledge based on a constant process of self-play reinforcement.

The implication, in other words, is that powerful learning algorithms could be put to the task of training better humans, or even of outperforming humans to solve real-world problems.

The computing company IBM, which has also piled huge effort and resources into ‘cognitive computing’ in the shape of IBM Watson, has made similar claims about the optimization of human cognition. Its own cognitive systems, it claims, are based on neuroscientific insights into the structure and functioning of the human brain—as Jessica Pykett, Selena Nemorin and I have documented.

‘It’s true that cognitive systems are machines that are inspired by the human brain,’ IBM’s senior vice-president of research and solutions has argued in a recent paper. ‘But it’s also true that these machines will inspire the human brain, increase our capacity for reason and rewire the ways in which we learn.’

DeepMind and IBM Watson are both based on scientific theories of learning—psychological behaviourism and cognitive neuroscience—which are being utilized to create ‘superhuman’ algorithmic systems of learning and knowledge creation. They translate the underlying theories of behaviourist psychology and cognitive neuroscience into code and algorithms which can be trained, reinforced and rewarded, and even become autodidactic, self-reinforcing machines that can exceed human expertise.

For educators and researchers of education this should raise pressing questions. In particular, it challenges us to rethink how well we are able to comprehend processes normally considered part of our domain as they are now being refigured computationally. What does it mean to talk about theories of learning when the learning in question takes place in neural network algorithms?

‘Machine behaviourism’ of the kind developed at DeepMind may be one of today’s most significant theories of learning. But because the processes it explains occur in computers rather than in humans, education research has little to say about it or its implications.

Developments in machine learning, autodidactic algorithms and self-reinforcement processes might enlarge the scope for educational studies. Cognitive science and neuroscience already embrace computational methods to understand learning processes—in ways which sometimes appear to reduce the human mind to algorithmic processes and the brain to software. IBM’s engineers for cognitive computing in education, for example, believe their technical developments will inspire new understandings of human cognition.

A social scientific approach to these computational theories of learning will be essential, as we seek to understand better how a population of nonhuman systems is being trained to learn from experience and thereby learning to interact with human learning processes. In this sense, the models of learning that are encoded in machine learning systems may have significant social consequences. They need to be examined as closely as previous sociological studies have examined the expertise of the ‘psy-sciences’ in contemporary expressions of authority and management over human beings.

Public hypernudge pedagogy
The social implications of machine learning can be approached in two ways requiring further educational examination. The first relates to how behavioural psychology has become a source of inspiration for social media platform designers, and how social media platforms are taking on a distinctively pedagogic role.

Most modern social media platforms are based on insights from behaviour change science, or related variants of behavioural economics. They make use of extensive data about users to produce recommendations and prompts which might shape users’ subsequent experiences. Machine learning processes are utilized to mine user data for patterns of behaviours, preferences and sentiments, compare those data and results with vast databases of other users’ activities, and then filter, recommend or suggest what the user sees or experiences on the platform.
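A minimal, hypothetical sketch of that loop (invented interaction data; real platforms use far more elaborate models) compares one user’s behaviour vector against a database of other users and recommends what the most similar users engaged with:

    import numpy as np

    # rows = users, columns = items; 1 = the user engaged with the item
    interactions = np.array([
        [1, 0, 1, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 1, 0, 1],
        [1, 0, 1, 0, 0],   # the current user
    ])
    user, others = interactions[-1], interactions[:-1]

    # cosine similarity between the current user and every other user
    sims = others @ user / (
        np.linalg.norm(others, axis=1) * np.linalg.norm(user) + 1e-9)

    # score unseen items by the similarity-weighted engagement of others
    scores = sims @ others
    scores[user == 1] = -np.inf        # don't resurface what's already seen
    print('recommend item:', int(np.argmax(scores)))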

Machine learning-based data analytics processes have, of course, become controversial following news about psychological profiling and microtargeting via social media during elections—otherwise described as ‘public opinion manipulation’ and ‘computational propaganda.’ The field of education needs to be involved in this debate because the machine learning conducted on social media performs the role of a kind of ‘public pedagogy’—that is, the lessons taught outside of formal educational institutions by popular culture, informal institutions, public spaces, dominant cultural discourses, and both the traditional and social media.

The public pedagogies of social media are significant not just because they are led by machine learning, though. They are also deeply informed by psychology, and specifically by behavioural psychology. The behavioural psy-sciences are today deeply involved in defining the nature of human behaviours through their disciplinary explanations, and in informing strategic commercial and governmental aspirations.

In Neuroliberalism, Mark Whitehead and coauthors suggest that big data software is being regarded as spelling a ‘golden age’ for behavioural science, since data will be used not just to reflect the user’s behaviour but to determine it as well. At the core of the social media and behavioural science connection are the psychological ideas that people’s attention can be ‘hooked’ through simple psychological tricks, and then that their subsequent behaviours and persistent habits can be ‘triggered’ through ‘persuasive computing’ and ‘behavioural design.’

Silicon Valley’s social media designers know how to shape behaviour through technical design since, according to Jacob Weisberg, ‘the disciplines that prepare you for such a career are software architecture, applied psychology, and behavioral economics—using what we know about human vulnerabilities in order to engineer compulsion.’ Weisberg highlights how many of Silicon Valley’s engineers are graduates of the Persuasive Technology Lab at Stanford University, which uses ‘methods from experimental psychology to demonstrate that computers can change people’s thoughts and behaviors in predictable ways.’

Behaviourist rewards—or reinforcement—are important in the field of persuasive computing since they compel people to keep coming back to the platform. In so doing, users generate more data about themselves, their preferences and behaviours, which can then be processed to make the platform experience more rewarding. These techniques are, in turn, interesting to behaviour change scientists and policymakers because they offer ways of triggering certain behaviours or ‘nudging’ people to make decisions within the ‘choice architecture’ offered by the environment.

Karen Yeung describes the application of psychological data about people to predict, target and change their emotions and behaviours as ‘hypernudging.’ Hypernudging techniques make use of both persuasive computing techniques of hooking users and of behavioural change science insights into how to trigger particular actions and responses.

‘These techniques are being used to shape the informational choice context in which individual decision-making occurs,’ argues Yeung, ‘with the aim of channelling attention and decision-making in directions preferred by the “choice architect”.’

Through the design of psychological nudging strategies, digital media organizations are beginning to play a powerful role in shaping and governing behaviours and sentiments.

Some Silicon Valley engineers have begun to worry about the negative psychological and neurological consequences of social media’s ‘psychological tricks’ on people’s attention and cognition. Silicon Valley has become a ‘global behaviour-modification empire,’ claims Jaron Lanier. Likewise, AI critics are concerned that increasingly sophisticated algorithms will nudge and cajole people to act in ways which have been deemed most appropriate—or optimally rewarding—by their underlying algorithms, with significant potential social implications.

Underpinning all of this is a particular behaviourist view of learning which holds that people’s behaviours can be manipulated and conditioned through the design of digital architectures. Audrey Watters has suggested that behaviourism is already re-emerging in the field of ed-tech, through apps and platforms that emphasize ‘continuous automatic reinforcement’ of ‘correct behaviours’ as defined by software engineers. In both the public pedagogies of social media and the pedagogies of the tech-enhanced classroom, a digital re-boot of behaviourist learning theory is being put into practice.

Behavioural nudging through algorithmic machine learning is now becoming integral to the public hypernudge pedagogies of social media. It is part of the instructional architecture of the digital environment that people inhabit in their everyday lives, constantly seeking to hook, trigger and nudge people towards particular persistent routines and to condition ‘correct’ behavioural habits that have been defined by platform designers as preferable in some way. Educational research should engage closely with the public hypernudge pedagogies that occur when the behavioural sciences combine with the behaviourism of algorithmic machine learning, and look more closely at the underlying behavioural science theories of learning on which they are based and the behaviours they are designed to condition.

Big Dewey
The second major set of implications of machine learning relates to the uptake of data-driven technologies within education specifically. Although the concept of ‘personalized learning’ has many different faces, its dominant contemporary framing is through the logic of big data analytics. Personalized learning has become a powerful idea for the ed-tech sector, which is increasingly influential in envisioning large-scale educational reform through its adaptive platforms.

Personalized learning platforms usually consist of some combination of data-mining, learning analytics, and adaptive software. Student data are collected by such systems, then compared with an ideal model of student performance, in order to generate predictions of likely future progress and outcomes, or adapt responsively to meet individual students’ needs as deemed appropriate by the analysis.
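A minimal, hypothetical sketch of that analytics loop (invented feature names, data and threshold; not any vendor’s actual system) fits a model to past student records, predicts a new student’s likely progress, and ‘adapts’ accordingly:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # columns: [time_on_task_hours, quiz_average, tasks_completed]
    X = np.array([[12, 0.9, 30], [3, 0.4, 8], [9, 0.7, 22],
                  [2, 0.3, 5], [11, 0.8, 28], [4, 0.5, 10]])
    y = np.array([1, 0, 1, 0, 1, 0])   # 1 = matched the 'ideal model' of progress

    model = LogisticRegression().fit(X, y)

    new_student = np.array([[5, 0.55, 12]])
    p = model.predict_proba(new_student)[0, 1]
    print(f'predicted probability of on-track progress: {p:.2f}')
    if p < 0.5:                        # adapt responsively to the prediction
        print("adapt: add remedial tasks to the student's 'playlist'")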

In short, personalized learning depends on autodidactic machine learning algorithms being put to work to mine, extract and process student data in an automated fashion.

The discourse surrounding personalized learning frames it as a new mode of ‘progressive’ education, with conscious echoes of John Dewey’s student-centred pedagogies and associated models of project-based, experiential and inquiry-based learning. Dewey’s work has proven to be one of the most influential and durable philosophical theories in education, often used in conjunction with more overtly psychological accounts of the role that experience plays in learning.

Because it combines big data analytics and machine learning with progressivism, we could call the learning theory behind personalization ‘Big Dewey.’

Mark Zuckerberg’s philanthropic Chan-Zuckerberg Initiative is typical of the application of Big Dewey to education. CZI aims ‘to support the development and broad adoption of powerful personalized learning solutions. … Many philanthropic organizations give away money, but the Chan Zuckerberg Initiative is uniquely positioned to design, build and scale software systems … to help teachers bring personalized learning tools into hundreds of schools.’

To test out this model of learning in practice, new startup ‘lab schools’ have been established by Silicon Valley entrepreneurs. Many act as experimental beta-testing sites for personalized learning platforms (using students as guinea pigs) that might then be sold to other schools. As Benjamin Doxtdator has documented, these new lab school models of ‘hyperpersonalization’ utilize digital data technologies to ‘extract’ the ‘mental work’ of students from the learning environment in order to tune and optimize their platforms prior to marketing to other institutions.

Larry Cuban, however, has detailed the variety of ways that personalized learning has been taken up in schools in Silicon Valley, and himself sees strong traces of progressivism in their practices.

Yet Cuban also notes that many employ methods closer to the kind of ‘administrative progressivism’ associated with the psychologist EL Thorndike than to Dewey’s thought. Thorndike was interested in identifying the ‘laws of learning’ through statistical analysis, which might then be used to inform the design of interventions to improve ‘human resources.’ Measurement of learning could thereby contribute to the optimization of ‘industrial management’ techniques both within the school and the workplace. Administrative progressivism was concerned with measurement, standardization and scientific management of schools rather than the student-centred pedagogies of Dewey.

‘What exists now is a re-emergence of the efficiency-minded “administrative progressives” from a century ago,’ argues Cuban, ‘who now, as entrepreneurs and practical reformers want public schools to be more market-like where supply and demand reign, and more realistic in preparing students for a competitive job market.’

With machine learning as its basis, personalization is a twenty-first century algorithmic spin on administrative progressivism. The ‘laws of learning’ are becoming visible to those organizations with the technical capacity to mine and analyse student data, who can then use this knowledge to derive new theoretical explanations of learning processes and produce personalized learning software solutions. As an emerging form of algorithmic progressivism, personalization combines the appeal of Dewey with the scientific promise of big data and autodidactic machine learning.

Ultimately, with the Big Dewey model, the logics of machine learning are being applied to the personalization of the learning experiences to be had by human learners. Backed by the massive financial power and influence of Bill Gates, Mark Zuckerberg, and other ed-tech entrepreneurs, philanthropists and investors, Big Dewey is being put forward as the philosophy and learning theory for the technological reform of education.

Machine learning escapes the lab
The machine behaviourism of autodidactic algorithm systems, public hypernudge pedagogies and personalized learning have become three of the most significant educational developments of recent years. All are challenging to educational research in related ways.

Machine behaviourism requires educational researchers to extend their focus to the kinds of reinforcement learning that occur in automated nonhuman systems, and to how computational systems are being taught and trained by programmers, algorithm designers and engineers to learn from experience in an increasingly autodidactic way.

It’s not a sufficient response to claim that companies like DeepMind and IBM take a reductionist view of what learning is—DeepMind’s Nature paper reveals an incredibly sophisticated model of learning in neural network software, while IBM has built its cognitive systems on the basis of established neuroscience knowledge about the human brain.

These systems can learn, though not in the forms of learning known to most education researchers. As technical innovation proceeds, more and more learning is going to be happening inside computers. Just as educators hope to cultivate young minds to become lifelong independent learners, the tech sector is super-powering learning processes to create increasingly automated nonhuman machine learning agents to share the world with humans. Why should educational researchers not seek to develop their own expertise in understanding nonhuman machine learning?

Theories of nonhuman learning are also becoming increasingly influential, since machine learning processes underpin both the public hypernudge pedagogies of social media and the personalized learning platforms I’ve outlined. The new behaviourist public hypernudge pedagogies, inspired both by behavioural science and behaviour design, are occurring at great scale among different publics, often according to political and commercial objectives, yet education research is oddly silent in this area.

While much has been written about big data and personalization, we have still to explore fully how the tech sector philosophy of Big Dewey might affect and influence schools, teachers and students as adaptive learning platforms escape from the beta-testing lab and begin to colonize state education. Future studies of personalized learning could examine the forms of autodidactic machine learning occurring in the computer as well as the educational effects and outcomes produced in the classroom.

Image by Terry Kimura

Fast psycho-policy & the datafication of social-emotional learning

Ben Williamson

[Image: ClassDojo monster]

[Paper prepared for the Annual Ethnography Symposium, University of Manchester, 30 August-1 September 2017, with the full title ‘The infrastructure of fast psycho-policy: psychological governance & the datafication of social-emotional learning’]

Mojo is a small, green alien student with the appearance of an extra from the animated movie Monsters University. Many researchers of education and technology may not know Mojo, but over 3 million teachers and 35 million students do, because Mojo is the cute brand mascot of the successful educational technology application ClassDojo used in primary schools worldwide to promote students’ ‘character’ development and ‘social-emotional learning’. Mojo is also, though, the friendly, visible face of an emerging infrastructure of interlocking technologies, organizations and policy discourses focused on the application of psychological expertise and techniques to measure and manage students’ behaviours and feelings in the classroom.

Launched with Silicon Valley venture capital support in 2011, ClassDojo started life as a simple behaviour management app available for free download to teachers. Designed for use on smartphones so it can be used in real-time in the classroom, ClassDojo encourages teachers to award ‘positive points’ for specific observable behaviours, gathers these points as data about student behaviour, and then allows teachers and school leaders to identify behavioural trends using its TrendSpotter visualization tool and automated report generator. Visualizations and reports are also available to parents. Its website claims ClassDojo builds ‘happier classrooms.’
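As a rough, hypothetical illustration of what such trend-spotting involves (invented data and column names, not ClassDojo’s actual code), feedback points logged per student can be rolled up into the weekly totals a dashboard would chart:

    import pandas as pd

    points = pd.DataFrame({
        'student': ['Ana', 'Ana', 'Ben', 'Ben', 'Ana', 'Ben'],
        'date': pd.to_datetime(['2017-05-01', '2017-05-03', '2017-05-02',
                                '2017-05-09', '2017-05-10', '2017-05-11']),
        'behaviour': ['persistence', 'teamwork', 'persistence',
                      'off-task', 'teamwork', 'persistence'],
        'value': [1, 1, 1, -1, 1, 1],   # positive or 'needs work' points
    })

    weekly = (points.set_index('date')
                    .groupby('student')['value']
                    .resample('W')
                    .sum())
    print(weekly)   # per-student weekly totals for a teacher or school leader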

However, in the last two years ClassDojo has extended its functionality to become a social media platform for schools, with real-time messaging, photo and video communication between schools and home, user-generated content, online video content hosting, and ‘school-wide’ functionality. Its founders and funders have likened it to Netflix, Spotify, LinkedIn and Facebook, and have claimed it can replace cumbersome school websites, group email threads, newsletters and paper flyers. It also has an online ClassDojo store targeted at teachers where they can purchase ClassDojo posters, resources and clothing.

In this paper I examine how ClassDojo has evolved into an educational social media platform and a key sociotechnical actor in the diffusion and enactment of a policy discourse of ‘social-emotional learning’ (SEL) worldwide. ClassDojo is just one app in a fast-growing industry of software tools designed to shape students’ social-emotional learning in the classroom, an industry which enjoys significant support from political centres of authority, international policy influencers, think tanks and philanthropic foundations. The paper draws on material published in two articles (‘Decoding ClassDojo’ and ‘Learning in the platform society’) where you can find full references to the sources cited below.

Psycho-policy platforms
The evolution of ClassDojo needs to be understood as exemplifying within the educational field a trend that Jose van Dijck and Thomas Poell have described as the penetration of social media platforms into all kinds of everyday interactions, institutional practices and professional routines. Platforms are, argues Tarleton Gillespie, digital intermediaries that allow users to interact, host and share content, and buy or sell. But, he adds, platforms are also ‘curators of public discourse’—the result of choices about what can appear, how it is organized and monetized, and what its technical architecture permits or forbids. As such, technical, social and economic concerns determine platforms’ structure, function and use, note Jean-Christophe Plantin and coauthors. Moreover, they note, many social media platforms are now undergoing ‘infrastructuralization’ as ‘media environments essential to our daily lives (infrastructures) are dominated by corporate entities (platforms).’

In other words, platform operators are not mere ‘owners of information’ but ‘becoming owners of the infrastructures of society,’ as Nick Srnicek argues. In his view, platforms are characterized by acting as intermediaries to enable interaction between customers, advertisers, service providers, producers, suppliers, and even physical objects; thriving on network effects, whereby they accumulate users from whom they can gather data and generate value; offering free products and services; and deploying a strategy of constant user engagement through attractive presentations of themselves and their offerings. As emerging infrastructures, these platforms are increasingly acting as substrates to society.

As an educational intermediary, a curator of educational discourse, and a provider of free services that deploys strategies of user engagement to gain users through network effects, ClassDojo needs therefore to be studied and understood as the assembled product of a complex web of people and organizations that designed and maintain it; technical components; business plans; expert discourses; and the technical, social and economic concerns that frame them. By disassembling it into its component parts and examining it as contextually framed and produced, it becomes possible to see how it has evolved from a classroom app, to a platform for schools, to part of an infrastructure for social-emotional education across public education.

My methodological strategy is to approach ClassDojo as assembling and evolving in the context of a shift to ‘fast policy’ processes in education. By ‘fast policy’ I’m drawing on Jamie Peck and Nik Theodore’s argument that while contemporary policymaking may still be primarily government-centred, it also involves ‘sources, channels, & sites of policy advice’ that ‘encompass sprawling networks of human & nonhuman actors’. This means digital technologies, infrastructures, platforms, websites, social media activities, database devices and so on can all be considered as policy actors which may be followed as they are assembled, evolve, mutate and ‘become real’. Methodologically, an attention to fast policy processes demands ‘network ethnography’ approaches which seek to ‘follow policies’ as they are developed and realized. In my research, I am specifically following ClassDojo as a fast policy actor, both seeking to ‘disassemble’ it into the various parts it has been assembled from, and ‘reassembling’ the wider infrastructure of people, technologies and policy discourses that are seeking to ‘make real’ and enact the ‘social-emotional learning’ agenda in schools.

The term social-emotional learning (SEL) encompasses concepts such as character education, growth mindset, grit and perseverance, and other so-called ‘non-cognitive’ or ‘non-academic’ ‘personal qualities’ and competences. In the last couple of years, social-emotional learning has emerged as a key policy priority from the work of international policy influencers such as the OECD and World Economic Forum; psychological entrepreneurs such as Angela Duckworth’s ‘Character Lab’ and Carol Dweck’s ‘growth mindset’ work; venture capital-backed philanthropic advocates (e.g. Edutopia); powerful lobbying coalitions (CASEL) and institutions (the Aspen Institute); and government agencies and partners, especially in the US (for example, the US Department of Education’s ‘grit’ report of 2013) and the UK (where in 2014 an all-party parliamentary group produced a ‘Character and Resilience Manifesto’ in partnership with the Centre Forum think tank, and the Department for Education followed up with funding for schools to develop character education programmes).

In sum, social-emotional learning is the product of a fast policy network of ‘psy’ entrepreneurs, global policy advice, media advocacy, philanthropy, think tanks, tech R&D and venture capital investment. Together, this loose alliance of actors has produced shared vocabularies, aspirations, and practical techniques of measurement of the ‘behavioural indicators’ of classroom conduct that correlate to psychologically-defined categories of character, mindset, grit, and other personal qualities of social-emotional learning. As Agnieszka Bates has argued, psychological advocates of SEL have conceptualized character as malleable and measurable, and defined the character skills that are most valuable to the labour market. As such, she describes SEL as a psycho-economic fusion of economic goals and psychological discourse in a corporatized education system. Specific algorithms and metrics have already been devised by prominent psycho-economic centres of expertise to measure the economic value of social-emotional learning.

Moreover, as Emily Talmage has identified, social-emotional learning is being advocated by some of the same organizations that promote social impact bonds, or ‘pay for success’ schemes whereby investors provide capital to start a new program and receive repayment with interest if it meets agreed metrics of success. In other words, says Talmage, ‘investors are using kids’ psychological profiles to gamble on the results of social programs, while using technology to generate a compliant, productive workforce.’

In these ways, social-emotional learning exemplifies the emergence of what has been termed psycho-policy and psychological governance in relation to public policy more widely—that is, the application of psychological expertise, interventions and explanations to public policy problems, specifically the application of practical techniques and ‘know-how’ for quantifying and then ‘nudging’ individuals to perform the ‘correct’ behaviours and affects. If character is malleable, it can be moulded and made to fit political and economic models.

ClassDojo as a psycho-policy platform
So, ClassDojo can be viewed as a platform diffuser of SEL psycho-policy and practice. The rest of this paper examines how it is being assembled to perform this task.

Shaping shared vocabularies
ClassDojo’s popular and publicly charismatic founders have become spokespeople for social-emotional learning. They are regularly interviewed in the education technology and business media, and use these interviews as venues for diffusing social-emotional learning discourses. They name-check psychological entrepreneurs such as Angela Duckworth and Carol Dweck, relaying their psychological theories for classifying and measuring the correct behaviours of students into classroom practices. In a sense, they are governing educational language at a distance, making governmental and commercial aspirations around SEL into the shared concerns and aspirations of classroom practitioners and school leaders, and thereby penetrating into institutional practices and professional routines.

Rewarding character
ClassDojo’s best-known feature is its rewards app, with which teachers award ‘feedback points’ to individuals or groups in real time during classroom activities. This allows teachers to produce report cards on each student and on whole classes, and school leaders to take a view of behavioural trends across the whole school.

At the core of its rewards system is the psychological assumption that observable behavioural indicators transmitted from the embodied conduct of students in classrooms can be correlated with character skills and other aspects of SEL. By rewarding students who perform the correct behavioural indicators of SEL and character, ClassDojo is also designed to actively promote specific kinds of preferred behaviours. As one of ClassDojo’s founders has noted, it collects ‘millions of behaviour observations every day’ to enable ‘real-time information from the classroom,’ while one of its research partners says, ‘We want teachers to think about the kind of norms they want to set in the classroom, so growth mindset is integrated in it.’

Partner networks
As ClassDojo has sought income, it has successfully won venture capital funding to support new features and platform development. In 2016 it raised $21 million to develop as a platform for school communication and distribution of in-house educational video content.

In particular, it has developed a number of ‘Big Ideas’ series of animated videos as classroom resources for teachers to use to teach children the language of social-emotional learning, including video series on growth mindset, perseverance or ‘grit’, and mindfulness. These have been created through partnerships with major psychological centres of expertise. Carol Dweck’s mindset centre at Stanford, PERTS, was ClassDojo’s first academic partner on its mindset series, followed by Harvard’s Making Caring Common for the empathy series, and most recently Yale’s Center for Emotional Intelligence, which co-produced the mindfulness series.

As ClassDojo’s head product designer has claimed, through these videos the ClassDojo ‘characters model for the kids the behaviour you are trying to instil.’ What we see, then, is how classroom norms of behaviour are being defined, via ClassDojo, by suturing together venture capital aspirations and psychological entrepreneurship. And ClassDojo is doing so directly, by reaching out to teachers, currently for free, to distribute and instil in students the ‘model’ behaviours defined by the psy experts of SEL and character.

Network effects
Significantly, if we think of ClassDojo as a technology of fast psycho-policy, it has enjoyed spectacularly accelerated success in finding its way into pedagogic practice. One of its founders has claimed that ‘watching the graph of the user numbers has been incredible’ as ‘millions of teachers and students’ have begun ‘using this every day.’ Its growth has been fuelled by a highly effective word-of-mouth marketing campaign on Facebook, Twitter and Instagram, which has allowed it to grow via network effects ‘faster than any other ed tech company.’

These network effects also exert material effects. Its product designer has claimed that ‘We look for an idea that can be powerful and high-impact and is working in pockets, and work to bring it to scale more quickly … incorporated into the habits of classrooms.’ We can understand this in two ways—‘working in pockets’ refers to taking a small-scale idea up to scale through network effects. But ‘working in pockets’ also well describes ClassDojo’s strategy—its platform sits on the smart phone of millions of teachers, sitting in their pockets, and never far away from their eyes and hands where it might be incorporated into the habits of classrooms. ClassDojo is in this sense a kind of pocket instructor that enacts psychological expertise through teachers’ own fingertips.

ClassDojo is, in other words, ambitiously attempting to ‘shift what happens inside and around classrooms’ as a way of changing ‘education at huge scale’ as its chief executive has claimed. However, these network effects are also generating value for the ClassDojo company and its investors as it amasses a huge global user base.

Monetizing behaviour
ClassDojo is also seeking to monetize student behaviour data. Although it has received over $30 million in venture capital investment, it has to date generated no revenue whatsoever, and is fairly opaque about its monetization plans. One of its investors has said that ‘This company has a greater market share than Coke in the U.S. Let’s get all the stakeholders on the platform … and scale before we think about monetization.’

However, its founders have given some clues. They have spoken of ClassDojo as a ‘huge distribution platform’ to parents, who might be willing to pay for additional content—such as additional Big Ideas videos—to take the ClassDojo platform and its preferred habits of character into the family home. Its ClassDojo store is already active for the sale of merchandise.

But its founders have also noted that ‘There’s a macro-trend happening where schools want to collect more data about behaviour.’ In fact, under the new federal law that governs US schooling, the Every Student Succeeds Act, states are now required to record at least one measure of ‘non-academic learning.’ So when ClassDojo’s founders have suggested that they will ‘build new, premium features that parents or school districts may be interested in paying for,’ it seems likely they are referring to the production of detailed behavioural reports of the kind that might support schools in delivering data on their progress in supporting students’ non-academic learning targets.

Infrastructuralizing
Not only has ClassDojo extended through network effects to huge numbers of users. It has also developed ‘school-wide’ functionality that enables entire institutions to be signed in to the system in order to orchestrate institutional communication, share data between classes, and establish ‘school values’ consistent with SEL across the school. Through its ongoing function creep, it is becoming an integral and embedded sociotechnical substrate to schooling practices. A teacher in a ClassDojo press release stated, ‘We can now create a school community that includes all of us: teachers, parents, students, principals, vice principals and other school staff. None of us can imagine teaching without it!’

Furthermore, one of its founders has said: ‘Looking back in 5-10 years, I hope to see that this other half of education—going beyond test scores to focus on building students as people—has become really important and that we helped to make that happen by connecting teachers, parents and students.’ This aspiration registers the emergence of a global effort to develop student character and SEL rather than to reduce them to test scores, which ClassDojo is seeking to support through mobilizing itself as an infrastructural underlay to connect teachers, parents and students around shared psychological vocabularies, normative values and aims. In other words, ClassDojo is becoming more infrastructural for schools, but it is also nested in a global infrastructure of educational measurement.

From test-based infrastructure to infrastructuralized psycho-policy platforms
In recent years, schools have been locked-in to data infrastructures of test-based performance measurement. As Dorothea Anagnostopoulos and colleagues have argued, the existing test-based data infrastructure is an assemblage of people, technologies and policies that stretches across and beyond formal education systems. It has produced ‘objective measures’ of students’, teachers’ and schools’ performance based on test results data and thereby defined ‘who counts’ as ‘good’ teachers, students and schools.

However, with the emergence of new kinds of technical platforms, such as ClassDojo, that emphasize SEL and character education, the data infrastructure of test-based performance measurement may be evolving. Jean-Christophe Plantin and coauthors have argued that ‘The rise of ubiquitous, networked computing’ twinned with ‘changing political sentiment have created an environment in which platforms can achieve enormous scales, co-exist with infrastructures, and in some cases compete with or even supplant them. … Rapidly “infrastructuralized platforms” have arisen in the digital age.’

ClassDojo is evidence of how a platform now integrated into and integral to many schools and classrooms worldwide is co-existing alongside, and potentially even competing with or threatening to supplant, the existing data infrastructure of test-based performance metrics. The ClassDojo platform operators are mobilizing networked computing to curate and diffuse the psy vocabulary of SEL and character into public education, reflecting a changing political sentiment which has begun to focus on alleviating the student anxiety produced by high-stakes, test-based performance measurement. In so doing, they are continually assembling and engaging users through attractive presentations and new features, generating network effects of valuable users all the time. The result is that ClassDojo has become an ‘infrastructuralizing platform’ for the measurement of behavioural indicators of social-emotional skills—and for nudging and compelling students to perform the ‘correct’ behavioural indicators that correlate with the affects of ‘good students’ in ‘happier classrooms.’

Conclusion
In conclusion, we can see how ClassDojo is participating as an actor in current fast psycho-policy development and enactment. It is curating and diffusing SEL discourses into practice through teachers’ pockets and fingertips. In so doing, ClassDojo treats students as embodied behavioural indicators whose affects are rendered traceable through psychological categories of character, mindset and grit; it treats teachers as data entry clerks responsible for amassing ClassDojo’s global database, as recruiters who attract their own social networks as new users, and as consumers at the online store; it treats school leaders as data demanders, who require staff to enter the feedback points in order to generate school-wide behavioural trend insights; and it treats parents as data consumers, who receive the data visualizations and report cards. ClassDojo also treats classrooms as little data markets where psycho-economically defined ‘valuable’ character skills and the performance of ‘correct’ behavioural indicators can be incentivized, nudged and exchanged for rewards. All the while, ClassDojo is thriving on the network effects of these activities to generate value for the company and its investors—driving up its user graph, its reach, and the value of its global datasets on student behaviour.

Finally, ClassDojo is nested in an emerging global infrastructure of measurement and intervention in social-emotional education. In its report on ‘The Power of Social and Emotional Skills,’ the OECD has claimed that ‘While everyone acknowledges the importance of social and emotional skills, there is insufficient awareness of “what works” to enhance these skills and efforts to measure and foster them.’ ClassDojo is currently positioning itself as a fast psycho-policy exemplar of ‘what works’ in social-emotional learning practice. In contrast to the existing infrastructure of test-based performance measurement, ClassDojo is a platform for translating psychological theories into the habits of classrooms, teachers and students, which is also nested in an expanding global infrastructure dedicated to the measurement and management of the social and emotional lives of young people. This global policy infrastructure stretches across and beyond the borders of state education systems, and includes international policy influencers, think tanks, independent institutions, venture capital investors, software startups, and even impact investment market experts. In these ways, if policy trends shift toward the performance measurement of schools, teachers and students based on data recorded about the behavioural indicators of social-emotional learning, then ClassDojo will itself become integrated into existing metric practices of school evaluation, judgment and ranking.

Image credit: ClassDojo resources

Coding for what? Lessons from computing in the curriculum

Ben Williamson

[Image by Christiaan Colen]

[This is a talk prepared for the Pop Up Digital conference, Gothenburg, Sweden, 19 June 2017]

There was a key moment in the popular American drama Homeland this year when a group of talented young computer programmers finally launched their new software system. You could see their arms in the air, and hear their cheers, before their boss said, ‘Now it’s time to get to work.’

But what have they created in this darkened software bunker? What work are they about to put their coding skills to? This is what Homeland’s creators call a ‘propaganda boiler room.’ Driven by extreme political convictions, they’ve created thousands of fake social media accounts to spread disinformation into the news feeds of millions of users.

As all of you will have heard recently, the role of the web and social media in political life has now become a major global concern—not just the plot of TV drama. We’re hearing more and more in the news about fake news, hacking, cyberattacks, political bots and weaponized computational propaganda.

And from critical technology thinkers, too, we’re hearing that ‘software is taking command,’ that automation and ‘algorithms rule the world,’ and that you can either ‘program or be programmed.’ Digital technologies, we now know, aren’t just neutral tools—but powerful devices for shaping our actions, influencing our feelings, changing our minds, filtering the information we receive, automating our jobs, recommending products and media to consume, manipulating our political convictions—even for ‘personalizing’ what and how we learn.

But as Homeland dramatizes, if software is becoming more powerful in our everyday lives, then we also need to acknowledge there are people behind it—programmers who have learned to code to make the technologies we live with.

Within our own field, education and teaching, some have begun to suggest that we need to equip children with the tools and skills to take an active part in this increasingly software-supported and automated world. Recently, for example, the Financial Times magazine ran a piece on ‘Silicon Valley’s classrooms of the future.’

‘Having disrupted the world,’ it claimed, ‘the tech community now wants to prepare children for their new place in it. Leading venture capitalist Marc Andreessen predicts a future with two types of job: people who tell computers what to do, and people who are told what to do by computers. Silicon Valley wants to equip young people to rule the machines.’

As a result, Silicon Valley companies are now investing billions of dollars to re-engineer public education to achieve that aim.

One such effort, according to the New York Times, is the learning to code organization Code.org, ‘a major nonprofit group financed with more than $60 million from Silicon Valley luminaries and their companies, which has the stated goal of getting every public school in the United States to teach computer science. Its argument is twofold: Students would benefit from these classes, and companies need more programmers.’

But it’s not just in Silicon Valley that this enthusiasm for teaching children to ‘rule the machines’ has taken hold. Across the world, children are being told they must ‘learn to code’ to become ‘digital makers.’

In the UK, learning to code and computer science are now part of the formal curriculum for schools, in England, Wales and Scotland alike. Over the last couple of years, I’ve been studying the documents produced to promote learning to code, following how coding and computing have been embedded in the curriculum, and recently interviewing relevant policy influencers involved in the new computing curriculum in England.

Sweden is now embarking on a shift to embed coding, computing and digital competence in its schools—so what can we learn from how things have worked out in England? In our recent interviews, we’ve been trying to work out why various influencers want computer programming in schools—what are the purposes of learning to code in the curriculum? In other words, ‘coding for what?’

Now, we need to go back in time a little here, back to 2011, and to Edinburgh. Here, at the Edinburgh Television Festival, was Eric Schmidt, then chief executive of Google, giving the keynote address to an audience of media, industry and policy leaders. After talking about disrupting TV broadcasting through media streaming, Schmidt suddenly turned his attention to attacking the British education system.

‘In the 1980s the BBC not only broadcast programming for kids about coding, but (in partnership with Acorn) shipped over a million BBC Micro computers into schools and homes,’ he said. ‘That was a fabulous initiative, but it’s long gone. I was flabbergasted to learn that today computer science isn’t even taught as standard in UK schools. Your IT curriculum focuses on teaching how to use software, but gives no insight into how it’s made. That is just throwing away your great computing heritage.’

The talk tapped into a growing concern in the UK at the time that teaching children how to use Microsoft Office applications was inadequate preparation for living and working with more complex computer systems.

In fact, within six months of Schmidt’s speech, the Secretary of State for education in England at the time, Michael Gove, announced a complete reform of IT education during his own speech at a 2012 ed-tech trade show for IT teachers.

‘I am announcing today that the Department for Education is … withdrawing the existing National Curriculum Programme of Study for ICT from September this year,’ he announced. ‘The traditional approach would have been to keep the Programme of Study in place for the next 4 years, while we assembled a panel of experts, wrote a new ICT curriculum….  We will not be doing that. Technology in schools will no longer be micromanaged by Whitehall.’

So what happened?

Well, despite Gove’s claim about not micromanaging the new curriculum, by September 2013, just 20 months later, entirely new programmes of study for computing in the National Curriculum appeared, applying at all stages of compulsory schooling in England.

I’m going to fill in the gaps in this story in a minute, but if we briefly come back to the present, we find Google now much more positive about British education.

This is Google’s proposed new London headquarters, the enormous ‘landscraper’ it plans to build next to King’s Cross railway station. On its announcement last November, Google’s new chief executive Sundar Pichai said:

‘Here in the UK, it’s clear to me that computer science has a great future with the talent, educational institutions, and passion for innovation we see all around us. We are committed to the UK and excited to continue our investment in our new King’s Cross campus.’

So in 5 years, Google has reversed its opinion of computing in the UK, and even of its educational institutions.

I think you’ll have detected the theme I’m developing here. Programming and computing in education is a shared agenda of major global commercial firms and national government departments and policymakers. One of the interviewees we spoke to about the new curriculum said, ‘Would you have got the attention of Michael Gove without Google or Microsoft government relations? I don’t think you would. You wouldn’t reach that level of policymaking.’

But actually it’s not as straightforward as business driving policy. What happened in England with computing in the curriculum was the result of a much messier mix of ambitions and activities including government, businesses, professional societies, venture capitalists, think tanks, charities, non-profit organizations, the media and campaigning groups. As another of our interviewees said, from the outside the new curriculum looked ‘sudden and organized’ but was actually a more ‘anarchic’ mess of ‘passions’ and ‘reasons’.

So, for example, the year before Eric Schmidt’s Edinburgh speech, the campaigning organization Computing at School had already produced a ‘white paper’ detailing a new approach to computing teaching. Computing at School is a teacher members’ organization, originally set up by Microsoft and chaired by a senior Microsoft executive.

Its 2010 white paper focused on 'how computers work,' the knowledge and skills of programming, and 'computational thinking'—that is, it said, a 'philosophy that underpins computing' and a distinctive way to 'tackle problems, to break them down into solvable chunks and to devise algorithms to solve them' in a way that a computer can understand. The Computing at School white paper, and the outline computing curriculum it contained, were then put forward after Michael Gove's speech as a suggested blueprint for the national curriculum.
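To make that definition concrete, here is a minimal illustration of my own (not taken from the white paper) of breaking a problem into 'solvable chunks' and devising an algorithm a computer can follow, in the kind of Python now taught in schools:

```python
# Illustrative only: 'computational thinking' as decomposition.
# Problem: find the tallest pupil in a class. Broken into chunks:
# 1) remember the tallest pupil seen so far;
# 2) compare each pupil against that record;
# 3) update the record whenever someone taller appears.

heights = {"Asha": 142, "Ben": 151, "Chen": 147}  # invented data (cm)

def tallest(pupils):
    best_name, best_height = None, -1
    for name, height in pupils.items():
        if height > best_height:                   # the comparison chunk
            best_name, best_height = name, height  # the update chunk
    return best_name

print(tallest(heights))  # -> Ben
```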

Computing at School, we were told, was concerned that Gove’s decision not to ‘micro-manage’ the new subject would lead to an ‘implementation vacuum,’ and worked hard to lobby for its own vision. As we were told in an interview we conducted with Computing at School:

‘The Department for Education held consultation meetings for the ICT curriculum, I went to one. Afterwards I stayed behind to talk to the civil servant involved and told him about computer science as a school subject. I was able to put [our] curriculum on the table … complete revelation … led to relationship with the DfE, they went from thinking of us as a weird guerrilla group with special interests to a group they could consult with about computing.’

In fact, it was the Computing at School chairperson who was then appointed by the Department for Education to oversee the development of the new curriculum, and who led a three-month process of stakeholder consultation and drafting of the new curriculum in autumn 2012.

One of the other key groups influencing computing in schools was Nesta—which is a bit like a think tank for innovation in public services. In 2011 Nesta oversaw a review of the skills requirements for the videogames and visual effects industries in the UK. The review was led by the digital entrepreneur Ian Livingstone, the chair of Eidos Interactive games company, and then the government’s ‘Skills Champion.’

Livingstone actually called his Nesta report, Next Gen, a ‘complete bottom up review of the whole education system relating to games.’ Nesta also produced a report on the legacy of the BBC Micro that Eric Schmidt had credited as a ‘fabulous initiative’ to get kids coding in the 80s. Nesta has continued to produce reports along similar lines, including one on getting more ‘digital making’ into schools, and another on the role of computer science education to build skills for the data analytics industry and the data-driven economy.

Soon after the Next Gen report was released, Livingstone and Nesta formed a pressure group, the Next Gen Skills campaign, which lobbied government hard to get programming and computer science into the curriculum. The campaign was supported by Google, Facebook, Nintendo and Microsoft, and led by UKIE, the trade body for interactive games and entertainment.

Videogames, visual effects, data analytics and the creative digital economy are the real drivers for computing in the curriculum here—which Nesta claims has ‘influenced policymakers, rallied industry and galvanised educators to improve computer science teaching.’

Ian Livingstone, meanwhile, is establishing his own Academy Schools. Like the Swedish free schools approach, the Livingstone Academies will be privately run but publicly funded, and have significant discretion over curriculum.

‘It is the combination of computer programming skills and creativity by which today’s world-changing companies are built,’ Livingstone said when announcing the Livingstone Academies in a Department for Education press release. ‘I encourage other digital entrepreneurs to seize the opportunity offered by the free schools programme in helping to give children an authentic education for the jobs and opportunities of the digital world.’

The Livingstone Academies are basically government-approved models of the Next Gen vision. They’ll have industry partnerships, design studios, and even on-site startup business hubs to, it claims, ‘provide wider opportunities for future careers for a new generation of successful and confident citizens who will contribute to local, national and international economic success.’

So, Nesta and Livingstone have highlighted the powerful role of digital entrepreneurs and the language of the digital economy in securing government approval for computing in schools. As you can see, their emphasis is very firmly on programming and software engineering, rather than the more abstract study of the mathematics and algorithms that are the focus of the discipline of Computer Science.

Although programming and Computer Science are of course related, many critics have pointed out that most new computing courses and curricula are more closely connected with software development. We asked people about this in our interviews, and were told by several people, including those at Computing at School and Nesta, that it was in everyone's best interests to allow terms like Computer Science, coding, programming, computational thinking, digital skills and even digital literacy to be treated as the same thing.

Several people we interviewed were especially critical of the Shut Down or Restart report produced by the Royal Society in 2012. Its emphasis was on disciplinary computer science, and its recommendations reflected the views of major computer science academics and associations.

The Royal Society report was published just a few days after Michael Gove's speech—in fact, he said he was looking forward to reading it. And you can see the influence of the Royal Society in the strong emphasis on the idea of computing as the 'fourth science' in the English computing curriculum. This goes well with the current emphasis in English education on established subject knowledge—though the fact the Department for Education authorized the Livingstone Academies indicates how government sees computing as a hard science and an economic catalyst at the same time.

In fact, we were told by several interviewees that a major issue in the development of the computing curriculum was that the government ministers and special advisers responsible for it didn’t think it was academic enough—it needed more hard Computer Science content and theory. Even though they weren’t supposed to be micro-managing it of course.

When the computing curriculum consultation group submitted its draft in late 2012, 'the exact words were "the minister is not minded to approve the draft you sent",' one interviewee told us. The group had submitted its draft curriculum at 5 o'clock on a Friday evening and the chair was then contacted over the weekend by the special adviser to the minister.

One of our interviewees described how he called the working group chair to ask, ‘are we going to reform the drafting group…? And the answer was, “No, we’ve already done it. We were told unless we got it back to the minister by 9 o’clock on Monday morning with a greater emphasis on Computer Science, then computing would not be in the national curriculum.”’

Despite being a consultative curriculum drafting process, in the end the new programmes of study, we were told, were the product of just two senior executives responding to the demands of the minister and her special adviser to emphasize academic Computer Science.

But for many other people involved in trying to shape the new curriculum, the purpose wasn't to reproduce disciplinary computer science through the school classroom, or skills development for the digital economy. One of the people we interviewed, also part of the curriculum consultation and drafting group, told us he was even banned from attending meetings after complaining that there was too much Computer Science content. Another, while part of the group, had his expenses cancelled to stop him conducting wider consultation with teachers. The minister's special adviser was allegedly behind both decisions.

Another area of influence on the computing curriculum was the role of charitable, non-profit and voluntary groups. Code Club is an after school programming scheme that puts volunteer coders together with children to teach them to code. It has its own coding curriculum that starts with visual programming applications like Scratch and then proceeds to programming with HTML, CSS and Python.
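By way of illustration, a first text-based project in such a progression often looks something like the guessing game below. It's an invented example rather than an actual Code Club project, but it is typical of the step up from visual programming in Scratch to Python:

```python
import random

# A guess-the-number game: representative of a first text-based
# exercise for children moving from Scratch to Python.
secret = random.randint(1, 10)
guess = None
while guess != secret:
    guess = int(input("Guess a number between 1 and 10: "))
    if guess < secret:
        print("Too low!")
    elif guess > secret:
        print("Too high!")
print("You got it!")
```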

There are now over 5000 UK Code Clubs, teaching over 82,000 children programming. When it first started in 2012, the computing curriculum hadn’t even been drafted, yet Code Club is still going strong even though coding is now embedded in the curriculum.

One of the things that the continuing popularity of Code Club reveals is that computing remains very poorly resourced in schools. Code Club has an army of volunteer programmers—the computing curriculum has a teaching workforce of mostly ICT teachers who all need radical retraining. The government budget for this retraining worked out to about £100 per member of staff, which largely means external providers have stepped in.

As a result, Code Club now runs its own teacher training sessions, where volunteer programmers educate teachers in how to teach programming. Other training providers are available—Computing at School offers resources and training, but so do large commercial organizations, as we’ll see in a moment.

Code Club was also absorbed into the Raspberry Pi Foundation in 2015. The Raspberry Pi device itself is a very small, ‘hackable’ computer, and the foundation was set up as a charity to support its educational uses. But one of the other activities performed by Raspberry Pi is to catalyse the wider take-up of computing in schools. It has a couple of magazine titles, The MagPi and Hello World, to promote coding and making.

The MagPi is specifically about making with Raspberry Pi itself, while Hello World focuses on 'plugging gaps' in teachers' knowledge and skills in computer science, coding, computational thinking, constructionism and digital making. Again, this reflects the deliberate ambiguity built into the curriculum.

Probably the most high-profile intervention into coding in schools so far came with the launch of the BBC's nationwide Make it Digital campaign in 2015.

‘BBC Make it Digital will capture the spirit of the BBC Micro, which helped Britain get to grips with the first wave of personal computers in the 1980s,’ the BBC claimed. ‘It will put digital creativity in the spotlight like never before, and help build the nation’s digital skills, through an ambitious range of new programmes, partnerships and projects.’

One of the key projects was the launch of the micro:bit, a small coding device which it distributed for free to a million UK schoolchildren in 2016. The BBC has also established a non-profit foundation to roll out the micro:bit internationally.
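To give a sense of what children actually do with the device, a starter program looks something like the sketch below, assuming the standard micro:bit MicroPython API; it runs on the device itself rather than on a desktop Python installation:

```python
# Runs on the BBC micro:bit under MicroPython, not desktop Python.
# A typical first program: scroll a greeting, and show a smiley
# face on the LED display whenever button A is pressed.
from microbit import display, button_a, Image

while True:
    if button_a.is_pressed():
        display.show(Image.HAPPY)
    else:
        display.scroll("Hello")
```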

The micro:bit, Code Club’s courses, and Raspberry Pi’s magazines indicate how much the new curriculum relies on public and charitable organizations to provide the support and resources required when government departments withdraw their ‘micro-management’ of key subject areas but retain a strong steering capacity over strategy and direction. One of our interviewees, who worked at a coding charity, described how she acted as a ‘geek insider’ who could translate the language of ‘geek’ into government speak for ministers, their special advisers and civil servants.

But besides these charitable providers, the curriculum has also, as we’ve seen, become the target for promoters of academic computer science and for entrepreneurial influence based on arguments about the digital economy. I think it’s a model case of how education policy is being made and done these days—it’s steered by government but taken forward by wider networks of organizations, with the special advisers of government ministers taking a strong role in approving who’s involved and vetting the outputs produced by the participants.

Yes, it’s not micro-managed as Michael Gove promised, but it’s not unmanaged either. And that doesn’t make it easy to work out what the overall purpose of the curriculum is—because it means different things to different groups.

The missing aspect of the curriculum as it has ended up from this messy mix of organizations, interests and interventions, for me anyway, is a more critical understanding of the social power of computing. Several of our interviewees said that the more critical aspects of computing suggested during the curriculum consultation were systematically erased before the curriculum programmes of study were finally made public in 2013.

Look at the bottom left column of this table where text has been struck out—this is from the draft computing curriculum in 2012 and emphasized ‘critical evaluation of digital content,’ the ‘impacts’ of technology on individuals and society, and ‘implications’ for ‘rights, responsibilities and freedoms.’ The right hand column shows how this part of the draft curriculum was rewritten, now emphasizing the study of algorithms, Boolean logic, and data manipulation.

This is what was lost when the draft curriculum had to be rewritten between its submission on Friday night and the new deadline for 9 o’clock Monday morning specified by the minister’s special adviser.

I understand that here in Sweden there remains potential for more critical approaches to digital competence, so I want to spend the last few minutes focusing on that.

Just a week or so ago, the Austrian research group Cracked Labs produced a report on the commercial data industry. It demonstrated how we are being tracked and profiled via data collected from our use of telecoms, the media, retail, finance, social media and technology platforms, and public services.

One of the examples in the report is Oracle, one of the world’s largest business software and database vendors. Oracle’s ‘data cloud’ contains detailed information about 2 billion people, which it uses to ‘profile and sort,’ ‘find and target people,’ ‘sell data,’ ‘personalize content,’ and ‘measure how people behave.’

What does this have to do with computing in schools?

Well, last year Oracle announced it would provide $1.4 billion to advance computing and programming in schools across European Union member states through Oracle Academy, its global philanthropic arm. This is part of its ambition to spread computer science education around the world. It claims already to have reached 30 million students in 110 countries, mostly through retraining teachers, and annually invests $3.3 billion to 'accelerate digital literacy worldwide.'

Most notably, in Europe, Oracle is seeking to ‘Level Oracle Academy’s entire curriculum to the European Qualifications Framework.’ This makes Oracle potentially very influential in European computing education. A European Union spokesperson said of the deal, ‘Digitally skilled professionals are critical to Europe’s competitiveness and capacity for innovation. Over the last ten years, we’ve seen the demand for workers with computer science and coding skills grow by four percent each year. Oracle’s efforts to bring computer science into classrooms across the European Union will help strengthen our digital economy.’

So, one of the world’s most powerful data harvesting companies is also one of the world’s most powerful computer science for education philanthropies, funding one of the world’s most powerful cross-national digital economies.

The Oracle example is an important one because it captures quite a lot of what’s going on with coding and computing more broadly:

First, coding and computer science are being put forward as solutions to the demands of the digital economy, not only by businesses but also by think tanks and government officials, with students positioned as future digital workers and entrepreneurial citizens—or agents of social and economic progress through software.

Second, relationships are being built between national governments and commercial companies to deliver on major educational goals and purposes. This is changing how education systems are governed—not just by government departments but from the headquarters and philanthropic outgrowths of global technology companies. It’s an example of how tech companies, many from Silicon Valley, are becoming ‘shadow education ministries’ as Neil Selwyn has described them.

Third, and consequently, companies like Oracle, as well as Google and Microsoft and others, are directly influencing curricula across Europe and globally, changing what teachers practice and what students learn along the way. They are even actively supplying teacher training courses to equip teachers with skills and knowledge.

Fourth, these organizations are talking the language of 'computer science,' which is appealing to many educational policymakers—in the UK, as we saw, giving coding the credibility of Computer Science has been really important. Yet what they are actually promoting is closer to software engineering as practised in the technology sector. Some, like Oracle, also mention 'digital literacy,' but this is clearly a functional kind of literacy in writing code.

And in doing so, these organizations are shaping computing to be a practical, skills-based subject area with a hard scientific surface—and definitely not a more critically-focused subject which might draw attention to the data surveillance practised by the same organizations persuading national governments to focus on computing education in schools.

As the Cracked Labs report shows, Oracle knows an awful lot about people. This is the kind of digital environment that children and young people are now living and learning in. That’s why, in closing here, I want to suggest the need for a different direction in coding and computing in the curriculum—or at least a proper discussion about its purposes. It’s great to see this conference as a space to start that dialogue here.

We are now teaching kids to code—which has all sorts of advantages in terms of tech skills, creativity and understanding how computers work. But there’s a risk we could just be reproducing the practices of Silicon Valley in our own classrooms.

As the philosopher of technology Ian Bogost has commented, ‘Not all students in computer-science programs think they’ll become startup billionaires… But not all of them don’t think so, either. Would-be “engineers” are encouraged to think of every project as a potential business ready to scale and sell.’ The commercial culture of computing that is creeping into computer science courses, he has added, downplays the social consequences of software engineering decisions while emphasizing ‘speculative finance.’

It is also notable that when the co-founder of Code Club criticized the 'mass surveillance' practices of Google a few years back, she was forced to resign by the Code Club board. Google was then one of Code Club's main commercial sponsors.

‘We should not accept that privacy no longer exists, just because corporations doing mass surveillance also teach kids to code,’ she said. ‘I cannot stay silent about large corporations infringing on human rights, and I believe it is my moral obligation to speak out against it.’

We also need to think about the political uses and abuses of programming skills. Teaching children to code could actually be dangerous if it equips them with just the right skills to work in the kind of propaganda boiler room depicted in the TV drama Homeland. In many ways, young right-wing activists are today's most successful digital makers, using their programming skills to disseminate political values that many of us, I'm sure, find extreme and divisive.

Some critics are already arguing that learning to code is a distraction from learning ‘values filters so our children can interact in this environment.’

My view is that a properly purposeful and meaningful computing education would engage with the social and political power of code to engineer, in part, how we live and think. ‘To program or be programmed’ is a neat mantra, but you need a different kind of critical knowledge and skill set to understand how your information practices are being programmed by the engineers at Google, how you can be monitored and profiled through the Oracle data cloud, or how you can be manipulated via social media.

According to the Times Educational Supplement, the weekly magazine for education professionals in the UK, 'the algorithm's gonna get you' in the classroom too. That's an overly paranoid headline—but it might provoke educators to consider the social power of programming and the algorithmic techniques of data mining and surveillance it's produced.

The programmer Maciej Ceglowski has said that ‘an enthusiastic group of nerds has decided to treat the rest of the world as a science experiment’ by creating ‘the greatest surveillance apparatus the world has ever seen.’

What would it mean to receive an education in computing that helped young people navigate life in the algorithmic data cloud in an informed and safe way, rather than as passive subjects of this vast science experiment?

Technical know-how in how computers work has its uses here, of course. But also knowing about privacy and data protection, knowing how news circulates, understanding cyberattacks and hacking, knowing about bots, understanding how algorithms and automation are changing the future of work—and knowing that there are programmers and business plans and political agendas and interest groups behind all of this—well, this seems to me worth including in a meaningful computing education too.

I am encouraged to see that there is scope for some more critical study in Sweden’s incoming digital competence curriculum. That type of study of computing and its impacts and implications, in the UK, was shut down before the curriculum had even started up.

Image by Christiaan Colen

ClassDojo app takes mindfulness to scale in public education

Ben Williamson

ClassDojo Messaging

A globally popular educational app used by millions of teachers and schoolchildren worldwide has begun to deliver mindfulness meditation training into classrooms. Based on a mobile app that teachers can carry in their pockets, ClassDojo is embedding positive psychology concepts in schools worldwide. In the process, it may be prototypical of new ways of enacting education policy through pocketable devices and social media platforms, while activating in children the psychological qualities that policymakers are seeking to measure.

The Beast

ClassDojo, launched just 6 years ago, is already used by over 3 million teachers and 35 million children in 180 countries—with penetration into the US K-8 sector at a staggering 90%. Originally designed as a behaviour monitoring app to allow teachers to reward ‘positive behaviour’ using a points system, more recently ClassDojo has extended into an educational content delivery platform to promote the latest ‘big ideas’ from positive psychology in the classroom.
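The core mechanic is simple enough to sketch. The fragment below is purely hypothetical (the class, names and behaviours are my own invention, not ClassDojo's code), but it captures the basic logic of a behaviour-points app, including the data trail that every award leaves behind:

```python
from collections import defaultdict

# Hypothetical sketch of a behaviour-points app's core mechanic:
# teachers award or deduct points per pupil for named behaviours.
class PointsLedger:
    def __init__(self):
        self.points = defaultdict(int)
        self.log = []  # every award is recorded: the data trail behind the app

    def award(self, pupil, behaviour, value):
        self.points[pupil] += value
        self.log.append((pupil, behaviour, value))

ledger = PointsLedger()
ledger.award("Sam", "helping others", +1)
ledger.award("Sam", "off task", -1)
print(dict(ledger.points))  # {'Sam': 0}
print(ledger.log)           # the behaviour history accumulated per pupil
```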

Starting in early 2016 with a series of video animations on ‘growth mindsets,’ the ClassDojo company has since developed classroom content about ‘perseverance,’ ‘empathy’ and, in May 2017, ‘mindfulness.’ All its big ideas videos feature the cute Mojo character, a little green alien schoolchild, learning about these psychological ideas from his friend Katie while experiencing challenges, personal worries, setbacks and doubts about his learning abilities. In the mindfulness series, Mojo has to confront what Katie calls ‘The Beast’—‘your most powerful emotions, anger, fear and anxiety’—which, she tells Mojo, ‘can get out of control.’

The big ideas videos have been wildly popular with schools. ClassDojo has claimed that the growth mindset series alone has been viewed over 15 million times. The announcement of new big ideas series is accompanied by online content which is shared to its vast worldwide community of teachers via Facebook, Twitter and Instagram. To promote its new mindfulness series, ClassDojo has announced a ‘month of mindfulness’ across its social media accounts and communities.

ClassDojo's expansion hasn't just included video content delivery. It is also now used as a communication platform between schools and parents, to compile student portfolios, and to allow students to share their 'stories.' Its stated aim is to 'connect teachers with students and parents to build amazing classroom communities' and 'happier classrooms.' As a result, ClassDojo is now one of the hottest educational technology companies in the world. It has raked in huge venture capital investment from Silicon Valley VC firms (about $31 million in total, including $21 million in 2016 alone), and is the regular subject of coverage in the educational, technology and business media.

It would not be overstating things much to suggest that ClassDojo has in fact become the default educational social media platform for a very large number of schools, functioning ‘like a social-media community where … the app creates a shared classroom experience between parents, teachers, and students. Teachers upload photos, videos, and classwork to their private classroom groups, which parents can view and “like.” They can also privately message teachers and monitor how their children are doing in their classrooms through the behavior-tracking aspect of the app.’

Many of ClassDojo’s features would be familiar to users of commercial social media such as Facebook, Snapchat and Slack. ‘If you’re an adult in the United States, you’ve got LinkedIn for work, Facebook for friends and family. This ends up being the third set of relationships, around your kids,’ one of ClassDojo’s major investors has claimed. As well as being geographically based in Silicon Valley, ClassDojo is strongly influenced by a Silicon Valley mindset of technical optimism in social media for relationships, sharing, and community-building. Like many recent education startups in Silicon Valley, ClassDojo’s founders are seeking to do good while turning a profit—specifically in their case by building a globally successful and scalable business brand on the back of building happier classroom communities through social media apps and platforms.

While social media organizations like Facebook and Twitter are now dealing with adverse issues such as fake news, political disinformation and computational propaganda on their platforms, however, ClassDojo has defined itself as a platform for diffusing positive psychology into schools. It’s aiming to achieve its ambitions directly through the mobile apps carried by millions of teachers in their pockets.

Emotions that count

The success of ClassDojo is due at least in part to the recent growth of interest in ‘social-emotional learning.’ A term that encompasses a range of concepts and ideas about the ‘non-cognitive’ aspects of learning—such as personal qualities of character, resilience, ‘grit,’ perseverance, mindfulness, and growth mindset—social-emotional learning has lately become the focus of attention among educational policymakers, international influencers and technology companies.

The OECD and the World Economic Forum have both begun promoting social-emotional learning and are seeking ways to foster it through technology and quantify it through measurement instruments. A US Department of Education report published in 2013 promoted a strong shift in policy priorities towards such qualities, and listed a then-young ClassDojo as a key resource. New accountability mechanisms have even been devised to judge schools’ performance in developing students’ non-academic personal qualities. The US Every Student Succeeds Act (ESSA) has now made it mandatory for states to assess at least one non-cognitive aspect of learning as part of updated performance measurement and accountability programs.

Notably, too, ClassDojo’s big ideas resources have been produced through partnerships with powerful US university departments. The original growth mindset series was devised with the Project for Education Research That Scales (PERTS) at Stanford University, as was its follow-up perseverance series. The empathy series late in 2016 was co-produced with the Making Caring Common Project at Harvard University’s Graduate School of Education, while the mindfulness series released in May 2017 is the result of collaboration with the Center for Emotional Intelligence at Yale University.

A concern for social-emotional learning is not just confined to dedicated educational organizations. The ed-tech researcher Audrey Watters has described social-emotional learning as a ‘trend to watch’ in 2017, and detailed some of the technology companies and investors involved in promoting it. ‘Ed-tech entrepreneurs and investors are getting in on the action, as have researchers like Angela Duckworth who’s created software to measure and track how well students perform on these “social emotional” measurements,’ she has argued. Meanwhile, ‘startups like ClassDojo,’ Watters adds, ‘promise to help teachers monitor these sorts of behaviors.’ She concludes by asking, ‘Can social emotional learning be taught? Can it be tested? Can it be profited from?’

Pocket policy platforms

ClassDojo needs to be understood as the product of a complex network of actors and activities including business interests, policy priorities, and expert psychological knowledges concerned with social-emotional learning (as I argued in recently published research). With education policy increasingly influenced by the social-emotional learning agenda, ClassDojo and its academic partners and venture capital investors are increasingly part of distributed 'policy networks.' Although much education policy is still performed by government authorities, it is increasingly influenced by diverse sources, channels and sites of policy advice and 'best practice' models, of which ClassDojo is a good example.

In this sense, ClassDojo is acting as an indirect best practice policy model and a diffuser of the social-emotional learning agenda into the practices of schools. In reality, it may even be prefiguring official policy. With venture capital funding from its investors driving its development and growth, ClassDojo has already distributed the vocabulary of social-emotional learning worldwide, and influenced the uptake of practices related to growth mindsets, perseverance and mindfulness among millions of teachers. It has done so through producing highly attractive content and then distributing it through its vast social media networks and communities on the Facebook, Twitter and Instagram platforms too.

‘If we can shift what happens inside and around classrooms then you can change education at a huge scale,’ ClassDojo’s CEO Sam Chaudhury has publicly stated. ‘We are looking for broad concepts really applicable to every classroom,’ its product designer has added. ‘We look for an idea that can be powerful and high-impact and is working in pockets, and work to bring it to scale more quickly … incorporated into the habits of classrooms.’

Although 'working in pockets' here clearly refers to potentially high-impact but small-scale startup activities, it is notable too that as a mobile app ClassDojo is already working in the pockets and palms of teachers. ClassDojo, in other words, represents a new way of doing large-scale policy through classroom apps that are already working in teachers' pockets and hands rather than through political deliberation and direct interference. This would be an impossible task to coordinate at global scale through traditional government organs of education—although the interests of the global policy influencers OECD and WEF suggest ClassDojo could be prototypical of attempts to roll out social-emotional learning into the habits of teachers through pocket-based policy platforms. Its method of enacting policy-by-app is being achieved by mobilizing practical classroom applications that can be carried in teachers' pockets and enacted through their fingertips, generously funded by Silicon Valley venture capital, without the encumbrances of bureaucratic policymaking processes.

Psycho-policy

Beyond being a pocket-policy technology that prefigures official policy priorities, ClassDojo also represents another policy innovation—that of using an app to translate psychological expertise into practical techniques for teachers, and of acting as a technical relay between disciplinary knowledge and practitioner uptake.

The kind of policy that ClassDojo anticipates is already developing in other sectors. Lynne Friedli and Robert Stearn have identified the emergence of ‘psycho-policy’ as a new approach to policymaking in the area of ‘well-being.’ Techniques of psycho-policy, they argue, are characterized by being heavily influenced by psychological concepts and methods, and by the ‘coercive use of psychology’ to achieve desired governmental objectives. As such, psycho-policy initiatives emphasize the ‘surveillance of psychological characteristics’ and techniques of ‘psycho-compulsion,’ which Friedli and Stearn define as ‘interventions intended to modify attitudes, beliefs and personality, notably through the imposition of positive affect.’

Psycho-policy, then, is the use of psychology to impose well-being and activate positive feeling in individuals, and thereby to enrich social well-being at large. In this context, as the sociologist William Davies has argued, the use of mobile ‘real-time mood-monitoring’ apps is increasingly of interest to companies and governments as technologies for measuring human emotions, and then of intervening to make ‘that emotion preferable in some way.’ As a pocket policy diffuser of such positive psychological concepts as mindfulness and growth mindset into schools, the ClassDojo app and platform can therefore be seen as part of a loosely-coordinated, multi-sector psycho-policy network that is driven by aspirations to modify children’s emotions to become more preferable through imposing positive feelings in the classroom.

Viewing ClassDojo as a pocket precursor of potential educational psycho-policies and practices of social-emotional learning in schools raises some significant issues. Mindfulness itself, the subject of ClassDojo’s latest campaign, certainly has growing popular support in education. Its emphasis on focusing meditatively on the immediate present rather than the powerful emotional ‘Beast’ of ‘anger, fear and anxiety,’ however, does need to be approached with critical social scientific caution.

‘Much of the interest in “character,” “resilience” and mindfulness at school stems from the troubling evidence that depression and anxiety have risen rapidly amongst young people over the past decade,’ William Davies argues. ‘It seems obvious that teachers and health policy-makers would look around for therapies and training that might offset some of this damage,’ he continues. ‘In the age of social media, ubiquitous advertising and a turbulent global economy, children cannot be protected from the sources of depression and anxiety. The only solution is to help them build more durable psychological defences.’

According to this analysis, school-based mindfulness initiatives are based on the assumption that young people are stressed, fragile and vulnerable, and can benefit from meditative practices that focus their energies on present tasks rather than longer-term anxieties caused by uncontrollable external social processes. James Reveley has further argued that school-based mindfulness represents a ‘human enhancement strategy’ to insulate children from pathologies that stem from ‘digital capitalism.’ Mindfulness in schools, he adds, is ‘an exercise in pathology-proofing them in their capacity as the next generation of unpaid digital labourers.’ It trains young people to become responsible for augmenting their own emotional wellbeing and in doing so to secure the well-being of digital capitalism itself.

According to Davies, however, much of the stress experienced by children is actually caused more mundanely by the kinds of testing and performance measurement pressures forced on schools by current policy priorities. ‘The irony of turning schools into therapeutic institutions when they generate so much stress and anxiety seems lost on policy-makers who express concern about children’s mental health,’ he argues.

It is probably a step too far to suggest that ClassDojo may be the ideal educational technology for digital capitalism. However, it is clear that ClassDojo is acting as a psycho-policy platform and a channel for mindfulness and growth mindsets practices that is aimed at pathology-proofing children against anxious times through the imposition of positive feelings in the classroom. While taming ‘the Beast’ of his uncontrollable emotions of ‘anger, fear and anxiety’ through mindfulness meditation, ClassDojo’s Mojo mascot is both learning the lessons of positive psychology and acting as a relay of those lessons into the lives of millions of schoolchildren. Its model of pocket-based psycho-policy bypasses the kind of slow-paced bureaucracy so loathed in the fast-paced accelerationist culture of Silicon Valley, and imposes its preferred psychological techniques directly on the classroom at global scale.

Detoxing education policy

To its credit, the ClassDojo organization is seeking to expand the focus of schools to the non-cognitive aspects of learning rather than concentrate narrowly on teaching to the tests demanded by existing policy. Paradoxically, however, it is advancing the kinds of social and emotional qualities in children for which schools may in the near future be held accountable, and that may be measured, tested and quantified. Its accelerated Silicon Valley business model depends on increasing the scale and penetration of the app into schools, and by doing so is actively enabling schools to future-proof themselves in the event they are held responsible for children’s measurable social-emotional learning and development.

ClassDojo has also hit on the contemporary perception of child fragility and vulnerability among educational practitioners and policymakers as a market opportunity, one its investors have generously funded with millions of dollars in the hope of profitable future returns. It is designed to activate, reward and condition particular preferred emotions that have been defined by the experts of mindfulness, character and growth mindset, and that are increasingly coming to define educational policy discourse. The psycho-policy ideas ClassDojo has embedded in teachers’ pockets and habits across public education, through Silicon Valley venture capital support, are already prefiguring the imperatives of policymakers who are anxious about resolving the toxic effect of children’s negative emotions on school performance.

ClassDojo is simultaneously intoxicating teachers worldwide while seeking to detoxify the worst effects of education policy on children. In the process it—and the accelerated Silicon Valley mindset it represents—may be redefining what counts as a valuable measure of a good student or teacher in a ‘happier classroom community,’ and building a business plan to profit from their feelings.

Image credit: ClassDojo product shots

Brain data, neurotechnology and education

Ben Williamson

Image by Neil Conway

The brain sciences are playing an increasingly powerful role in the development of the digital technologies that may augment everyday life in future years. ‘Neurotechnology’ is a broad field of brain-centred technical R&D. It includes advanced imaging systems for real-time brain monitoring and mining the mind via the collection of brain data, but also new and emerging brain stimulator systems that may have the capacity to influence brain activity. Along with new developments in data-driven ‘psycho-informatics’ in the field of psychology, the possibilities associated with brain-machine interaction have begun to attract educational interest, raising significant concerns about how young people’s mental states may in the future be governed through neurotechnology.

Brain data

The human brain has become the focus of intense interest across scientific, technical R&D, governmental, and commercial domains in recent years. Neuroscientific research into the brain itself has advanced significantly with the development and refinement of brain imaging neurotechnologies. Driven by massive research grants and private partnerships, huge teams of neuroscience experts associated with international projects—such as the US-led BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative and European Human Brain Project—have begun to visualize and build ‘wiring diagrams’ and computational models of the cells and neural circuits of the brain at a highly granular, neuromolecular level of detail and fidelity, all based on the collection and analysis of massive records of brain data.

This knowledge of the brain developed by neuroscience is being applied to the design of new brain-machine interface technologies such as neuroprosthetic devices that can be implanted in the brain—with algorithms that can translate 'thought' into movement—and noninvasive neurostimulators that might modify cognition and emotions. In the last few months, technology entrepreneurs from some of Silicon Valley's most successful companies have also begun to concentrate R&D resources on Brain-Computer Interfaces (BCI) and brain-signalled remote control of devices–as well as more speculative attempts to hybridize the human brain with artificial intelligence implants. Tesla boss Elon Musk, for instance, has established Neuralink to use brain implants to directly link human minds to computers and 'augment the slow, imprecise communication of our voices with a direct brain-to-computer linkup.' Facebook, meanwhile, has announced it is pursuing the development of a new kind of noninvasive brain-machine interface—possibly a cap or headband—that lets people text and 'share' their thoughts by simply thinking rather than typing. Its intention is to use optical technologies, such as light from LEDs or lasers, to sense neural signals emanating from the cerebral cortex.

At the same time, the brain is being treated as an inspiration for the design of neurocomputing systems. These complex cognitive computing, neural network and AI systems are designed to emulate some of the brain's capacities, especially for efficient low-energy information storage, processing, retrieval and learning, in order to maximize the efficiency and speed of big data processing and machine learning algorithms. Neural-network research, for example, focuses simultaneously on improving understanding of the human brain and nervous system, and on using that knowledge to 'find inspiration' to 'construct information processing systems inspired by natural, biological functions and thus gain the advantages of these systems.' The development of 'bio-inspired' or 'bio-mimetic' systems in neural-network research, and neurocomputing more generally, is already being applied in many settings, notably through companies like IBM. IBM's recent advances in cognitive computing, such as Watson, take inspiration from neuroscience for the design of brain-like neural-network algorithms and neurocomputational devices that are now being deployed in healthcare, business and educational settings.
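At its simplest, the 'bio-inspired' unit this research builds on is the artificial neuron: weighted inputs are summed and passed through an activation function that loosely mimics a neuron's firing rate. Here is a minimal sketch, with arbitrary illustrative numbers:

```python
import numpy as np

# A single artificial neuron: weighted inputs plus a bias, passed
# through a sigmoid activation that loosely mimics a firing rate.
def neuron(inputs, weights, bias):
    activation = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-activation))

x = np.array([0.5, 0.2, 0.9])   # incoming signals (illustrative)
w = np.array([0.4, -0.6, 0.8])  # synapse-like weights (illustrative)
print(neuron(x, w, bias=0.1))   # ~0.71, the neuron's 'firing rate'
```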

A huge field has developed around Brain-Computer Interface research and development too. BCI, or sometimes Brain-Machine Interface R&D, depends on signal processing of brain data to allow brain activities to control external devices or even computers through electrodes–‘the enabling technologies that allow brain information to be encoded by different techniques and algorithms providing input to control devices.’ Although previously largely confined to clinical and laboratory research, the possibilities of brain-machine mental control have begun to attract significant research grant funding along with commercial interest in recent years. The growth in interest at least partly stems from advances in BCI R&D which have seen the invasive implantation of microelectrodes within the brain itself being displaced by increasingly noninvasive techniques. Noninvasive BCI does not involve penetration of the scalp or skull with electrode implants but still holds the potential for mental control over devices through the real-time capture of brain activity data using portable EEG neuroimaging technologies.
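The signal-processing step at the heart of such noninvasive BCIs can be sketched in outline. The example below is illustrative only, with an invented threshold and a simulated signal: it estimates power in the 8–12 Hz alpha band of an EEG trace and maps it onto a crude on/off command of the kind that might drive a device:

```python
import numpy as np

FS = 256  # sampling rate in Hz, typical of consumer EEG headsets

def band_power(eeg, low_hz, high_hz, fs=FS):
    """Estimate mean spectral power of a signal within a frequency band."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[mask].mean()

def control_signal(eeg, threshold=1e4):
    """Map alpha-band (8-12 Hz) power onto a crude on/off device command."""
    return "ON" if band_power(eeg, 8, 12) > threshold else "OFF"

# One second of simulated EEG: background noise plus a 10 Hz alpha rhythm.
t = np.arange(FS) / FS
fake_eeg = np.random.randn(FS) + 50 * np.sin(2 * np.pi * 10 * t)
print(control_signal(fake_eeg))  # -> "ON"
```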

Various portable and wearable EEG headbands that allow easy attachment of electrodes to the skull have become commercially and clinically available, with brand-names including Emotiv, Neurosky, BrainBand, Myndwave and BrainControl. Mental control videogaming is a major commercial application of BCI. Further out in R&D terms, other neuroscience inspired brain interface proposals include ‘neural dust’ consisting of microscopic free-floating sensors that could be spread around the brain.

The policy implications of neuroscientific and neurotechnological development have been articulated by, among others, the Potomac Institute for Policy Studies, a policy institute with its own Center for Neurotechnology Studies. Its report on 'enhancing the brain and reshaping society' has called for collaborative efforts between policymakers, scientists and the private sector to develop novel neurotechnologies that can improve individuals' cognitive abilities and behaviours as well as the 'social order,' and thereby 'ensure neuroenhancement of the individual will result in enrichment of our society as a whole.'

As with all technical development, neurotechnology is not merely technical. It is imprinted with powerful social visions of a future in which brain data can be used to know and monitor populations, and to enhance the mental states of individuals to meet certain objectives and aspirations for society at large.

Ed-Neurotech

Neurotechnological development and application of neuroenhancement techniques may seem far removed from education. However, neuroscience itself is currently enjoying fast growth within educational research and practice, with new research centres in educational neuroscience appearing with support from grant-awarding bodies, and research results and applications increasingly being shared by a global community using the Twitter hashtag #edneuro. The journal Learning, Media and Technology ran a special issue in 2015 on neuroscience and educational technology.

Various neurotechnologies such as brain imaging are being used by 'ed-neuro' researchers in ways which are intended to generate insights for educational policymakers and practitioners. One ed-neuro study has made use of mobile, wearable EEG headbands to study students' 'brain-to-brain synchrony' within the classroom context. EEG neuroimaging has even been used to visualize the brain 'lighting up' when students have adopted a 'growth mindset.' Attempts have also been made to use brain imaging technologies to analyse the possible biological mechanisms by which socio-economic status influences brain and cognitive development in children. Studies have used neuroimaging to examine whether socioeconomic status correlates with differences in brain structure, and measured the electrical activity in the brains of children from lower SES groups to detect deficits in their selective attention. Such studies and conclusions have begun to influence policymakers, who can interpret the results to specify remedial interventions such as early years education provision. In these ways, neurotechnologies are becoming integral parts of new policy science approaches, the instruments that enable policymakers to see policy problems visualized in the neurobiological detail provided by highly persuasive brain images.

Neurotechnology-based cognitive computing systems developed by commercial organizations have also appeared in the educational landscape. The edu-business Pearson has partnered with IBM to bring IBM’s Watson system into the learning process, as previously detailed. For at least the last decade, IBM has been engaged in an extensive program of brain-based computing R&D, involving neurocomputing, neural-network research and the development of specific neurosynaptic and neuromorphic hardware and software. For IBM, as detailed in its white paper on ‘Computing, cognition and the future of knowing,’ cognitive tools are ‘natural systems’ with ‘human qualities’ which are inspiring the ‘next generation of human cognition, in which we think and reason in new and powerful ways’:

It’s true that cognitive systems are machines that are inspired by the human brain. But it’s also true that these machines will inspire the human brain, increase our capacity for reason and rewire the ways in which we learn.

Pearson has itself articulated a vision of AI teaching assistants and cognitive tutors using technologies based on advances in educational neuroscience and psychology. For both Pearson and IBM cognitive computing does not just mean smarter computing systems, but cognitively optimized individuals whose very brain circuitry has been rewired through interfacing and interacting with machine cognition.

Political support for commercial educational neurotechnology has also emerged. The recently appointed head of the US Department of Education, the private-education advocate Betsy DeVos, is a major investor and former board member of Neurocore, a brain-training treatment company that specializes in 'neurofeedback' technology. The company uses real-time EEG with electrodes attached to the scalp to diagnose individuals' symptoms by comparing their brainwaves to a massive database of others' brainwaves. Its proprietary neurofeedback software can then be applied to run a game that rewards the desired brain activity. Over time, Neurocore claims, the brain learns to produce the activity that is rewarded by the increased stimulation. One of Neurocore's targets is children with ADHD (Attention Deficit Hyperactivity Disorder); its 'natural treatments' with drug-free neurofeedback 'work with a child's natural ability to learn, helping them reach their full potential' (though its underlying neuroscience has been contested).
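The feedback loop that such neurofeedback systems rely on can be sketched abstractly. Everything below is invented for illustration (the feature, the target value, the random stand-in for an EEG reading); it simply shows the read-compare-reward cycle the description above implies:

```python
import random

TARGET = 0.6  # hypothetical 'desired' level of a brainwave feature

def read_brain_feature():
    """Stand-in for an EEG reading; a real system would process electrode data."""
    return random.random()

def neurofeedback_session(trials=10):
    rewards = 0
    for _ in range(trials):
        if read_brain_feature() >= TARGET:
            rewards += 1  # e.g. the game speeds up, rewarding the brain state
    print(f"Rewarded {rewards} of {trials} trials")

neurofeedback_session()
```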

From a more speculative perspective the Center for Neurotechnology Studies at the Potomac Institute has issued a report on ‘neurotechnology futures’ with some key implications for education. It describes how brain interface technologies could become applications for ‘augmented cognition’, including ‘non-invasive devices that complement or supplement human capabilities, such as tools for learning and training augmentation.’ It has detailed how ‘greater understanding of the neural mechanisms of learning and memory is needed to provide the appropriate theoretical basis for neurotechnologically enhancing learning’ and enabling the educational system ‘to significantly improve teaching techniques for iteratively more complex knowledge.’ It even suggests the ‘provocative possibility of technology that could “down-load” experience and facilitate learning in a time-compressed manner.’

The Potomac Institute provides advice to the US military. And the US military's Defense Advanced Research Projects Agency (DARPA) has itself begun exploring the potential to boost the acquisition of skills and learning through its Targeted Neuroplasticity Training (TNT) program, itself part of the BRAIN Initiative. The program aims to develop safe, noninvasive neurostimulation methods for activating synaptic plasticity–the brain's ability to form connections between neurons, which is understood to be the neural requirement for learning. According to a press release from the TNT program manager,

Targeted Neuroplasticity Training (TNT) seeks to advance the pace and effectiveness of a specific kind of learning—cognitive skills training—through the precise activation of peripheral nerves that can in turn promote and strengthen neuronal connections in the brain. TNT will pursue development of a platform technology to enhance learning of a wide range of cognitive skills…. The TNT program seeks to use peripheral nerve stimulation to speed up learning processes in the brain by boosting release of brain chemicals, such as acetylcholine, dopamine, serotonin, and norepinephrine. These so-called neuromodulators play a role in regulating synaptic plasticity, the process by which connections between neurons change to improve brain function during learning. By combining peripheral neurostimulation with conventional training practices, the TNT program seeks to leverage endogenous neural circuitry to enhance learning by facilitating tuning of neural networks responsible for cognitive functions.

Although TNT is primarily aimed at military training, it clearly indicates how the scientific and technical possibilities of neurotechnology are being taken up in relation to education and learning.

At least one educational entrepreneur has leapt upon the potential of 'frictionless' brain-computer interfaces of the kind imagined by DARPA and Silicon Valley entrepreneurs like Elon Musk, and upon the vision of neurotechnologically-enhanced learning promoted by the Potomac Institute. Donald Clark, the founder of the AI-based online learning company Wildfire Learning, the 'world's first AI content creation service' for education, has imagined that invisible, frictionless and seamless interfaces between human brains and AI will have massive implications for education:

The implications for learning are obvious. When we know what you think, we know whether you are learning, optimise that learning, provide relevant feedback and also reliably assess. To read the mind is to read the learning process…. We are augmenting the brain by making it part of a larger network … ready to interface directly with knowledge and skills, at first with deviceless natural interfaces using voice, gesture and looks, then frictionless brain communications and finally seamless brain links. Clumsy interfaces inhibit learning, clean smooth, deviceless, frictionless and seamless interfaces enhance and accelerate learning. This all plays to enhancing the weaknesses of the evolved biological brain … and [to] think at levels beyond the current limitations of our flawed brains.

These aspirations for the future of education merge the scientific R&D of the emerging ‘ed-neuro’ field with the kind of techno-optimism often found in educational technology, or ‘ed-tech,’ development and marketing, to suggest the emergence of a new hybrid field of ‘ed-neurotech.’

Like the plans of Musk and Facebook, the ed-neurotech imaginary of a deviceless, frictionless and seamless neurotechnological future of education is likely to be highly controversial and contested. Part of this resistance will be on primarily technical and scientific grounds–neurotechnologies of brain imaging are one thing, and seamless neuroenhancement of the so-called flawed brain quite another. But another part of the resistance will be animated by concerns over the aspirations of either governments or commercial companies to engage in mental interference and cognitive modification of young people.

Neuroenhancement may not be quite as scientifically and technically feasible yet as its advocates hope, but the fact remains that certain powerful individuals and organizations want it to happen. They have attached their technical aspirations to particular visions of social order and progress that appear to be attainable through the application of neurotechnologies to brain analytics and even neuro-optimization. As STS researchers of neuroscience Simon Williams, Stephen Katz & Paul Martin have argued, the prospects of cognitive enhancement are part of a ‘neurofuture’ in-the-making that needs as much critical scrutiny as the alleged ‘brain facts’ produced by brain scanning technologies.

Neurotechnological governance

In a new article on neuroscience, neurotechnology and human rights, the bioethicists Marcello Ienca and Roberto Andorno have mapped out some of the challenges raised by these emerging ‘brain-society-computer entanglements.’ The ‘neurotechnology revolution’ in ‘neuroimaging’, they argue, highlights how the ‘possibility of mining the mind (or at least informationally rich structural aspects of the mind) can be potentially used not only to infer mental preferences, but also to prime, imprint or trigger those preferences.’ They note how brain imaging techniques have been taken up in ‘pervasive neurotechnology applications’ such as BCIs that ‘use EEG recordings to monitor electrical activity in the brain for a variety of purposes including neuromonitoring (real time evaluation of brain functioning), neurocognitive training (using certain frequency bands to enhance neurocognitive functions), and noninvasive brain device control.’

In addition to neuroimaging and brain-computer interface and device control, however, Ienca and Andorno also note the emergence of ‘brain stimulators’ or ‘neurostimulators.’ Unlike neuroimaging tools, these ‘are not primarily used for recording or decoding brain activity but rather for stimulating or modulating brain activity electrically.’ Available neurostimulators include portable, easy-to-use, consumer-based transcranial direct current stimulation (tDCS) devices aimed at optimizing brain performance on a variety of cognitive tasks, and applications based on transcranial magnetic stimulation (TMS), a magnetic method used to briefly stimulate small regions of the brain for both diagnostic and therapeutic purposes, which has also evolved into portable devices. ‘In sum,’ they state,

if in the past decades neurotechnology has unlocked the human brain and made it readable under scientific lenses, the upcoming decades will see neurotechnology becoming pervasive and embedded in numerous aspects of our lives and increasingly effective in modulating the neural correlates of our psychology and behaviour.

The emergence of neuroimaging, neuromodulation of behaviours, and cognition-stimulating neurotechnologies therefore raises considerable challenges, as Ienca and Andorno articulate them:

  • the use of pervasive neurotechnology for malicious ‘brain-hacking’ (or ‘brainjacking’–the unauthorized modification of emotions and cognition)
  • third party eavesdropping on the mind
  • illicit memory-engineering
  • technology-induced personality change
  • the neuromodulation of behaviours
  • illegitimate access to and use of brain data generated by consumer-grade brain-computer interface applications.

These concerns reflect the emergence of what some social scientific critics of the brain sciences have begun to term ‘neurogovernance’ or ‘neuropower.’ As Victoria Pitts-Taylor puts it in her recent book The Brain’s Body, neuroscience-based programs designed to mould and modulate behaviour through targeting the brain for modification represent strategies of ‘preemptive neurogovernance’ that are intended to promote the economic and political optimization of the population. She notes how neuroscience concepts like ‘brain plasticity’ have been taken up by developers of ‘cognitive exercises, brain-machine interfaces, drugs, supplements, electric stimulators, and brain mapping technologies,’ in order to ‘target the brain for modification and rewiring.’ These technical advances clearly amplify the possibilities of preemptive neurogovernance, and the shaping of society and the social order through the modification of the mental states, affects and thoughts of individuals. The plasticity of the brain has become the basis for technoscientific ambitions to monitor, control and transform processes of life for political and commercial purposes, Pitts-Taylor argues. And Nikolas Rose and Joelle Abi-Rached, in their book Neuro, have argued that the plastic brain is now the focus for attempts to ‘govern the future’–as is especially the case with interventions into the developing brains and hence future lives of children.

As a consequence, Ienca and Andorno suggest that neurotechnologies raise significant challenges for human rights. In particular they highlight recent debates about the right to ‘cognitive liberty,’ or the right to alter one’s mental states with the help of neurotools, and the associated right to refuse to do so. Ultimately, cognitive liberty is a conceptual update of the right to ‘freedom of thought’ that takes into account the power available to states and companies to use neurotechnology coercively to manipulate the embrained mental states of citizens. They also add the right to ‘mental privacy,’ defined as a ‘neuro-specific privacy right which protects private or sensitive information in a person’s mind from unauthorized collection, storage, use or even deletion in digital form or otherwise.’ Cognitive liberty and mental privacy, in other words, constitute new rights to take control of one’s own mental life in the face of creeping techniques of neurogovernance in spheres of life including social media, government, consumption, and education.

Educational neuropower

The application of neurotechnology to education that we are just beginning to detect needs to be undertaken in ways which are sensitive to issues of neurogovernance, cognitive liberty and mental privacy. As parts of an educational neurofuture in-the-making, optimistic aspirations towards neuroenhancement and cognitive modification of ‘flawed brains’ through neurotechnologically enhanced education need to be countered not just with technical and scientific scepticism. Greater awareness of the political, military and commercial interests involved in new and developing neurotechnology markets and interventions is required, as well as theoretically engaged studies of the sociotechnical processes involved in producing neurotechnologies and of their uptake and effects in education. Deeply social questions also need to be asked about the use of brain data to exercise neuropower over young people’s mental states, and about how to safeguard their cognitive liberty and mental privacy amid persuasive and coercive promises about neuroenhancement in the direction of personal cognitive improvement.


Imaginaries and materialities of education data science

Image: The .edu Ocunet, by Tim Beckhardt

Ben Williamson

This is a talk I presented at the Nordic Educational Research Association conference at Aalborg University, Copenhagen, on 23 March 2017.

Education is currently being reimagined for the future. In 2016, the online educational technology magazine Bright featured a series of artistic visions of the future of education. One of them, by the artist Tim Beckhardt, imagined a vast new ‘Ocunet’ system.

The Ocunet is imagined as a decentralized educational virtual-reality streaming network using state-of-the-art Panoptic headsets to deliver a universal knowledge experience. The infrastructure of public education has been repurposed as housing for the Ocunet’s vast server network. Teachers have been relieved of the stress of child-behavior management, and instead focus their skills on managing the Ocunet—editing our vast database to keep our students fully immersed in the latest curriculum—while principals process incoming student data at all times.

The Ocunet is an artistic and imaginative vision of the future of education. I use it as an example to start here because it illustrates a current fascination with reimagining education. The future it envisages is one where education has been thoroughly digitized and datafied—the educational experience has been completely embedded in digital technology systems, and every aspect of student performance is captured and processed as digital data.

This may all sound like speculative educational science fiction. But some similar imaginative visions of the future of education are now actually catalysing real-world technical innovations, which have the potential to change education in quite radical ways.

In this talk, I want to show you how education is being imagined by advocates of a field of research and development becoming known as ‘education data science.’ And I’ll explore how the social and technical future of education it imagines—one that is digitized and datafied much like the Ocunet—is also being materialized through the design of digital data-processing programs.

The social consequences for the field of education in general are significant:

  • Education data science is beginning to impact on how schools are imagined and managed.
  • It’s influencing how learning is thought about, both cognitively and emotionally, and introducing new vocabularies for talking about learning processes.
  • Its technologies and methods, many developed in the commercial sector, are being used in educational research and to produce new knowledge about education.
  • And education data science is also seeking to influence policy, by making educational big data seem an authoritative source for accelerated evidence collection.

Big data imaginaries and algorithmic governance

Just to set the scene here, education is not the only sector of society where big data and data science are being imagined as new ways of building the future. Big data are at the centre of future visions of social media, business, shopping, government, and much more. Gernot Rieder and Judith Simon have characterized a ‘big data imaginary’ as an attempt to apply ‘mechanized objectivity to the colonization of the future’:

  • Extending the reach of automation, from data collection to storage, curation, analysis, and decision-making processes
  • Capturing massive amounts of data and focusing on correlations rather than causes, thus reducing the need for theory, models, and human expertise
  • Expanding the realm of what can be measured, in order to trace and gauge movements, actions, and behaviours in ways that were previously unimaginable
  • Aspiring to calculate what is yet to come, using smart, fast, and cheap predictive techniques to support decision making and optimize resource allocation

And here the figure of the computer algorithm is especially significant. While in computer science terms algorithms are simply step-by-step processes for getting a computer program to do something, when these algorithms start to intervene in everyday life and the social world they can be understood as part of a process of governing—or ‘algorithmic governance.’

By governing here we are working with ideas broadly inspired by Michel Foucault. This is the argument that every society is organized and managed by interconnected systems of thinking, institutions, techniques and activities that are undertaken to control, shape and regulate human conduct and action—captured in phrases such as ‘conduct of conduct’ or ‘acting upon action.’

Because the focus of much big data analysis—and especially in education—is on measuring and predicting human activity (most data are, after all, data about people), we might say we are now living under conditions of algorithmic governance where algorithms play a role in directing or shaping human acts. Antoinette Rouvroy and Thomas Berns have conceptualized algorithmic governance as ‘the automated collection, aggregation and analysis of big data, using algorithms to model, anticipate and pre-emptively affect and govern possible behaviours.’ They claim it consists of three major techniques, sketched in code after the list:

  • Digital behaviourism: behavioural observation stripped of context and reduced to data
  • Automated knowledge production: data mining and algorithmic processing to identify correlations with minimal human intervention
  • Action on behaviours: application of automated knowledge to profile individuals, infer probabilistic predictions, and then anticipate or even pre-empt possible behaviours
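A minimal sketch of these three techniques chained into a single pipeline might look as follows. The event records, risk weighting and threshold are all invented for illustration; nothing here is drawn from an actual system:

```python
from collections import defaultdict

# 1. Digital behaviourism: behaviour stripped of context and reduced to data.
events = [
    {"student": "s1", "action": "video_paused"},
    {"student": "s1", "action": "quiz_failed"},
    {"student": "s2", "action": "quiz_passed"},
]

# 2. Automated knowledge production: correlations mined with minimal human input.
risk = defaultdict(float)
for e in events:
    if e["action"] in ("video_paused", "quiz_failed"):
        risk[e["student"]] += 0.5  # hypothetical weighting

# 3. Action on behaviours: profile, predict, and pre-emptively intervene.
for student, score in risk.items():
    if score >= 1.0:  # hypothetical risk threshold
        print(f"pre-empt {student}: remedial task queued before any help is requested")
```

Even in this toy form, the pre-emptive step acts on a predicted behaviour rather than an observed request, which is precisely what distinguishes algorithmic governance from mere measurement.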

For my purposes, what I’m trying to suggest here is that new ways of imagining education through big data appear to mean that such practices of algorithmic governance could emerge, with the various actions of schools, teachers and students all subjected to data-based forms of surveillance and acted upon via computer systems.

Schools, teachers and students alike would become the objects of surveillant observation and transformation into data; their behaviours would be recorded as knowledge generated automatically from analysing those data; and those known behaviours could then become the target for intervention and modification.

Importantly too, imaginaries don’t always remain imaginary. Sheila Jasanoff has described ‘sociotechnical imaginaries’ as models of the social and technical future that might be realized and materialized through technical invention. Imaginaries can originate in the visions of single individuals or small groups, she argues, but gather momentum through exercises of power to enter into the material conditions and practices of social life. So in this sense, sociotechnical imaginaries can be understood as catalysts for the material conditions in which we may live and learn.

The birth of education data science

One of the key things I want to stress here is that the field of education data science is imagining and seeking to materialize a ‘big data infrastructure’ for automated, algorithmic and anticipatory knowledge production, practical intervention and policy influence in education. By ‘infrastructure’ here I’m referring to the interlocking systems of people, skills, knowledge and expertise along with technologies, processes, methods and techniques required to perform big data analysis. It is such a big data infrastructure that education data science is seeking to build.

Education data science has, of course, come from somewhere; there is a history to its future gaze. We could go back well over a century, to the nineteenth-century Great Expositions where national education departments exhibited great displays of educational performance data. And we could certainly say that education data science has evolved from the emphasis on large-scale educational data and comparison made possible by international testing in recent years. Organizations like the OECD and Pearson have made a huge industry out of global performance data, and reframed education as a measurable matter of balancing efficient inputs with effective outputs.

But these large-scale data are different from the big data that are the focus for education data science. Educational big data can be generated continuously within the pedagogic routines of a course or the classroom, rather than through national censuses or tests, and are understood to lead to insights into learning processes that may be generated in ‘real-time.’

In terms of its current emphasis on big data, the social origins of education data science lie in academic research and development going back a decade or so, particularly at sites like Stanford University. It is from one of Stanford’s reports that I take the term ‘big data infrastructure for education.’

The technical origins of such an infrastructure lie in advances in educational data mining and learning analytics. Educational data mining can be understood as the use of algorithmic techniques to find patterns and generate insights from existing large datasets. Learning analytics, on the other hand, makes the data analysis process into a more ‘real-time’ event, where the data is automatically processed to generate insights and feedback synchronously with whatever learning task is being performed. Some learning analytics applications are even described as ‘adaptive learning platforms’ because they automatically adapt—or ‘personalize’—in accordance with calculations about students’ past and predicted future progress.
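To make the distinction concrete, here is a hedged sketch in which data mining runs over a stored dataset while analytics runs synchronously with each new learning event. The function names, scores and rolling-average ‘prediction’ are illustrative assumptions only, not any platform’s actual logic:

```python
import statistics

# Educational data mining: patterns found in an existing, static dataset.
past_scores = {"s1": [0.40, 0.50, 0.45], "s2": [0.90, 0.85, 0.95]}
cohort_mean = statistics.mean(s for scores in past_scores.values() for s in scores)

def on_learning_event(student: str, score: float) -> str:
    """Learning analytics: feedback generated in real time, per event."""
    history = past_scores.setdefault(student, [])
    history.append(score)
    predicted_next = statistics.mean(history[-3:])  # crude rolling forecast
    if predicted_next < cohort_mean:
        return "adapt: serve a scaffolded, easier task next"
    return "adapt: serve a stretch task next"

print(on_learning_event("s1", 0.35))  # feedback arrives with the event itself
```

The design point is simply that the analysis is triggered by the learning task itself, so the ‘adaptive’ response can be returned before the next task is even assigned.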

What’s really significant is how education data science has escaped the academic lab and travelled to the commercial sector. So, for example, Pearson, the world’s largest ‘edu-business,’ set up its own Center for Digital Data, Analytics and Adaptive Learning a few years ago to focus on big data analysis and product development. Other technology companies have jumped into the field. Facebook’s Mark Zuckerberg has begun dedicating huge funds to the task of ‘personalizing learning’ through technology. IBM has begun to promote its Watson supercomputing technologies to the same purposes.

And education data science approaches are also being popularized through various publications. Learning with Big Data by Viktor Mayer-Schonberger and Kenneth Cukier, for example, makes a case for ‘datafying the learning process’ in three overlapping ways (the third of which is sketched in code after the list):

  • Feedback: applications that can ‘learn’ from use and ‘talk back’ to the student and teacher
  • Personalization: adaptive-learning software where materials change and adapt as data is collected, analysed and transformed into feedback in real-time; and the generation of algorithmically personalized ‘playlists’
  • Probabilistic prediction: predictive learning analytics to improve how we teach and optimize student learning
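Probabilistic prediction is, at heart, a routine supervised-learning task. A minimal sketch, assuming invented feature weights in place of a model that would really be fitted to historical student data:

```python
import math

# Hypothetical fitted weights for features (prior_score, time_on_task, hints_used);
# a real model would be trained on historical student records.
WEIGHTS, BIAS = (2.1, 0.8, -0.6), -1.5

def p_mastery(prior_score: float, time_on_task: float, hints_used: float) -> float:
    """Logistic model: probability the student masters the next unit."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, (prior_score, time_on_task, hints_used)))
    return 1 / (1 + math.exp(-z))

print(round(p_mastery(0.7, 1.2, 3.0), 2))  # ~0.3: flagged for extra feedback
```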

The book reimagines school as a ‘data platform,’ the ‘cornerstone of a big-data ecosystem,’ in which ‘educational materials will be algorithmically customized’ and ‘constantly improved.’

This text is perhaps a paradigmatic statement of the imaginary and ambitions of education data science, with its emphasis on feedback, systems that can ‘learn,’ ‘personalization’ through ‘adaptive’ software, predictive ‘optimization,’ and the appeal to the power of algorithms to make measurable sense of the mess of education.

Smarter, semi-automated startup schools

The imaginary of education data science is now taking material form through a range of innovations in real settings. A significant materialization of education data science is in new data-driven schools, or what I call smarter, semi-automated startup schools.

Max Ventilla is perhaps the most prominent architect of data-driven startup schools. Max’s first job was at the World Bank, before he became a successful technology entrepreneur in Silicon Valley. He eventually moved to Google, where he became head of ‘personalization’ and launched the Google+ platform. But in 2013, Max left Google to set up AltSchool. Originally established as a fee-paying chain of ‘lab schools’ in San Francisco, it now has schools dotted around Silicon Valley and across to New York. Most of its startup costs were funded by venture capital firms, with Mark Zuckerberg from Facebook investing $100 million in 2015.

Notably, only about half of AltSchool’s staff are teachers. It also employs software engineers and business staff, many recruited from Google, Uber and other successful tech companies. In fact, AltSchool is not just a private school chain, but describes itself as a ‘full-stack education company’ that provides day-to-day schooling while also engaging in serious software engineering and data analytics. The ‘full-stack’ model is much the same as Uber in the data analytics taxi business, or Airbnb in hospitality.

The two major products of AltSchool are called Progression and Playlist. In combination, Max Ventilla calls these ‘a new operating system for education.’ Progression is a data analytics ‘teacher tool’ for tracking and monitoring student progress, academically, socially and emotionally. It’s basically a ‘data dashboard’ for teachers to visualize individual student performance information. The ‘student tool’ Playlist then provides a ‘customized to-do list’ for learners, and is used for managing and documenting work completed. So, while Progression acts as the ‘learning analytics’ platform to help teachers track patterns of learning, Playlist is the ‘adaptive learning platform’ that ‘personalizes’ what happens next in the classroom for each individual student.
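AltSchool has not published technical details of either tool, so the following is a purely hypothetical sketch of the behaviour described: rank what a student does next from the mastery estimates the analytics side produces. The skills, levels and target are all invented:

```python
# Hypothetical illustration only: AltSchool's real Playlist internals are
# proprietary. Rank candidate skills by how far each sits below a target
# mastery level estimated by the analytics ('Progression') side.
mastery = {"fractions": 0.45, "reading": 0.80, "writing": 0.60}
TARGET = 0.75  # invented mastery target

def build_playlist(mastery: dict, length: int = 2) -> list:
    gaps = {skill: TARGET - level for skill, level in mastery.items() if level < TARGET}
    return sorted(gaps, key=gaps.get, reverse=True)[:length]

print(build_playlist(mastery))  # ['fractions', 'writing']
```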

Recently, AltSchool began sharing its ‘operating system’ with other partner schools, and has clearly stated ambitions to move from being a startup to scaling-up across the US and beyond. It also has ambitious technical plans.

Looking forward, AltSchool’s future ambitions include fitting cameras that run constantly in the classroom, capturing each child’s every facial expression, fidget, and social interaction, as well as documenting the objects that every student touches throughout the day; microphones to record every word that each person utters; and wearable devices to track children’s movements and moods through skin sensors. This is so its in-house data scientists can then search for patterns in each student’s engagement level, moods, use of classroom resources, social habits, language and vocabulary use, attention span, academic performance, and more.

The AltSchool model is illustrative of how the imaginary of education data science is being materialized in new startup schools. Others include:

  • Summit Schools, which have received substantial Facebook backing, including the production of a personalized learning platform allegedly now being used by over 20,000 students across the US
  • The Primary School, set up by Mark Zuckerberg’s wife Priscilla Chan
  • The Khan Lab School founded by Salman Khan of the online Khan Academy.

All of these schools are basically experiments in how to imagine and manage a school by using continuous big data collection and analysis.

So, as a recent piece in the Financial Times described AltSchool: ‘parents pay fees, hoping their kids will get a better education as guinea pigs, venture capitalists fund the R&D, hoping for financial returns from the technologies it develops.’

And these smarter, semi-automated startup schools are ambitiously seeking to expand the realm of what is measurable, not just test scores but also student movements, speech, emotions, and other indicators of learning.

Optimizing emotions

As indicated by AltSchool, education data science is seeking new ways to know, understand and improve both the cognitive and the social-emotional aspects of learning processes.

Roy Pea is one of the leading academic voices in education data science. Formerly the founding director of the Learning Analytics Lab at Stanford University, Pea has described techniques for measuring the ‘emotional state’ of learners. These include collecting ‘proximal indicators’ that relate to ‘non-cognitive factors’ in learning, such as academic persistence and perseverance, self-regulation, and engagement or motivation, all of which are seen to be improvable with the help of data analytics feedback.

Now, academic education data scientists and those who work in places like AltSchool are not the only people interested in data scientific ways of knowing and improving students’ social and emotional learning. The OECD has established a ‘Skills for Social Progress’ project to focus on ‘the power of social and emotional skills.’ It assumes that social and emotional skills can be measured meaningfully, and its ambition is to generate evidence about children’s emotional lives for ‘policy-makers, school administrators, practitioners and parents to help children achieve their full potential, improve their life prospects and contribute to societal progress.’

The World Economic Forum has its own New Vision for Education report which involves ‘fostering social and emotional learning through technology.’ Its vision is that social and emotional proficiency will equip students to succeed in a swiftly evolving digital economy, and that digital technologies could be used to build ‘character qualities’ and enable students to master important social and emotional skills. These are ‘valuable’ skills in quite narrowly economic terms.

Both the OECD and World Economic Forum are also seeking to make the language of social and emotional learning into a new global policy vocabulary—and there is certainly evidence of this in the UK and US already. The US Department of Education has been endorsing the measurement of non-cognitive learning for a few years, and the UK Department for Education has funded policy research in this area.

So how might education data science make measurable sense of students’ emotions? Well, according to education data scientists, it is possible to measure the emotional state of the student using webcams, facial vision technologies, speech analysis, and even wearable biometric devices.

Image: Automated teachers & augmented reality classrooms, by Josan Gonzalez

These are the kinds of ideas that have been taken up and endorsed very enthusiastically by the World Economic Forum, which strongly promotes the use of ‘affective computing’ techniques in its imaginary vision. Affective computing is the term for systems that can interpret, emulate and perhaps even influence human emotion. The WEF idea is that affective computing innovations will allow systems to recognize, interpret and simulate human emotions, using webcams, eye-tracking, databases of expressions and algorithms to capture, identify and analyse human emotions and reactions to external stimuli. ‘This technology holds great promise for developing social and emotional intelligence,’ it claims.

And it specifically identifies Affectiva as an example. Originating from R&D at MIT Media Lab, Affectiva has built what it claims to be the world’s largest emotion database, which it’s compiled by analysing the ‘micro-expressions’ of nearly 5 million faces. Affectiva uses psychological emotion scales and physiological facial metrics to measure seven categories of emotions, then utilizes algorithms trained on massive amounts of data to accurately analyse emotion from facial expressions. ‘In education,’ claims Affectiva, ‘emotion analytics can be an early indicator of student engagement, driving better learning outcomes.’
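Affectiva’s models are proprietary, so the following is only a toy sketch of the general approach the WEF describes: invented facial ‘expression vectors’ are matched to hand-made emotion centroids, and a crude engagement score is derived from the labels. None of this reflects Affectiva’s actual pipeline:

```python
import math

# Toy stand-in for facial-expression emotion analytics. Each invented
# 'expression vector' is (brow_raise, lip_corner_pull, lid_tighten).
CENTROIDS = {
    "joy":      (0.1, 0.9, 0.1),
    "surprise": (0.9, 0.2, 0.1),
    "anger":    (0.2, 0.1, 0.9),
}

def classify(frame):
    # Nearest-centroid match by Euclidean distance.
    return min(CENTROIDS, key=lambda emotion: math.dist(frame, CENTROIDS[emotion]))

def engagement(frames):
    """Crude 'engagement' proxy: share of frames read as joy or surprise."""
    labels = [classify(f) for f in frames]
    return sum(label in ("joy", "surprise") for label in labels) / len(labels)

print(engagement([(0.2, 0.8, 0.1), (0.8, 0.3, 0.2), (0.1, 0.2, 0.9)]))  # ~0.67
```

Even this toy makes the critical point visible: ‘engagement’ is whatever the designer defines it to be, baked into a few arbitrary categories and thresholds.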

Such systems, then, would involve facial vision algorithms determining student engagement from facial expressions, and then adapting to respond to their mood. Similarly, the Silicon Valley magazine for educational technology, EdSurge, recently produced a promotional article for the role of ‘emotive computing in the classroom.’

‘Emotionally intelligent robots,’ its author claimed, ‘may actually be more useful than human [teachers] … as they are not clouded by emotion, instead using intelligent technology to detect hidden responses. … Emotionally intelligent computing systems can analyse sentiment and respond with appropriate expressions … to deliver highly-personalized content that motivates children.’

Both the World Economic Forum and EdSurge also promote a variety of wearable biometric devices to measure mood in the blood and the body of a seemingly ‘transparent child’:

  • Transdermal Optical Imaging, using cameras to measure facial blood flow information and determine student emotions where visual face cues are not obvious
  • Wearable social-emotional intelligence prosthetics, which use a small camera to analyse facial expressions and head movements and detect affect in children in real time
  • Glove-like devices full of sensors to trace students’ arousal

This imaginary of affective or emotive computing in the classroom taps into the idea that automated, algorithmic systems are able to produce objective accounts of students’ emotional state. They can then personalize education by providing mood-optimized outputs which might actually nudge students towards more positive feelings.

In this last sense, affective computing is not just about making the emotions measurable, but about using automated systems to manipulate mood in the classroom, to make it more positive and preferable. Given that powerful organizations like the World Economic Forum and OECD are now seeking to make the language of social-emotional learning into the language of education policy, this appears to make it possible that politically preferred emotions could be engineered by the use of affective computing in education.

Cognizing systems

It is not only the non-cognitive aspects of learning that are being targeted by education data science, however. One of its other targets is cognition itself. In the last couple of years, IBM has begun to promote its ‘cognitive computing’ systems for use in a variety of sectors—finance, business and healthcare, but also education. These have been described as ‘cognitive technologies that can think like a human,’ based on neuroscientific insights into the human brain, technical developments in brain-inspired computing, and artificial ‘neural network’ algorithms. So IBM is claiming that it can, to some extent, ‘emulate the human brain’s abilities for perception, action and cognition.’

To put it simply, cognitive systems are really advanced big data processing machines that employ machine learning processes modelled on those of embrained cognition, but then far exceed human capacities. Such super-advanced forms of real-time big data processing and machine learning are often called artificial intelligence these days.
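Stripped of the marketing, the loosely brain-inspired core of such systems is quite prosaic: weighted sums, nonlinearities, and error-driven adjustment. A self-contained toy example, with all numbers invented, of a single artificial ‘neuron’ learning a decision by gradient descent:

```python
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

# One 'neuron': a weight, a bias, and a learning rate.
w, b, lr = random.random(), 0.0, 0.5
data = [(0.1, 0.0), (0.9, 1.0)]  # toy (input, target) pairs

for _ in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)
        grad = p - y          # gradient of log-loss wrt the pre-activation
        w -= lr * grad * x
        b -= lr * grad

print(round(sigmoid(w * 0.9 + b), 2))  # approaches 1.0 after training
```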

The promise of IBM for education is to bring these brain-inspired technologies into the classroom, and to ‘bring education into the cognitive era.’ And it is seeking to do so through a partnership with Pearson announced late in 2016, which will embed ‘cognitive tutoring capabilities’ into Pearson’s digital courseware. Though this is only going to happen in limited college courses for now, both organizations have made it quite clear they see potential to take cognitive tutoring to scale across Pearson’s e-learning catalogue of courses.

Pearson itself has produced its own report on the possibilities of artificial intelligence in education, including the creation of ‘AI teaching assistants.’ Pearson claims to be ‘leveraging new insights in disciplines such as psychology and educational neuroscience to better understand the learning process, and build more accurate models that are better able to predict—and influence—a learner’s progress.’

Neuroscience is the important influence here. In recent years brain scientists have popularized the idea of ‘neuroplasticity,’ the idea that the brain modifies itself in response to experience and the environment. The brain, then, is in a lifelong state of transformation as synaptic pathways ‘wire together’.

But the idea of brain plasticity has taken on other meanings as it has entered into popular knowledge. According to a critical social scientific book by Victoria Pitts-Taylor, the idea of neuroplasticity resonates with ideas about flexibility, multitasking and self-alteration in late capitalism. And it also underpins interventions aimed at cognitive modification and enhancement, which target the brain for ‘re-wiring.’

Tapping into the popular understanding of plasticity as the biological result of learning and experience, both IBM and Pearson view cognitive computing and artificial intelligence technologies as being based on the plastic brain. IBM’s own engineers have done a lot of R&D with ‘neuromorphic’ computing and ‘neurosynaptic chips,’ and have hired vast collaborative teams of neuroscientists, hardware engineers and algorithm designers to do so. But, they claim, cognitive and AI systems can also be potentially brain-boosting and cognition-enhancing, because they can interact with the plastic brain and ‘re-wire’ it.

The ambitions of IBM and Pearson to make classrooms into engines of cognitive enhancement are clearly put in a recent IBM white paper titled Computing, cognition and the future of knowing. The report’s author claims that:

  • Cognitive computing consists of ‘natural systems’ with ‘human qualities’
  • They learn and reason from their interactions with us and from their experiences with their environment
  • Cognitive systems are machines inspired by the human brain that will also inspire the human brain, increase our capacity for reason and rewire the ways we learn

So, Pearson and IBM are aiming to inhabit classrooms with artificial intelligences and cognitive systems that have been built to act like the brain and then act upon the brain to extend and magnify human cognition. Brain-inspired, but also brain-inspiring.

In some ways we shouldn’t see this as controversial. As computers get smarter, of course they might help us think differently, and act as cognitive extensions or add-ons. Just to anticipate one of our other keynotes at the conference, Katherine Hayles has written about how ‘cognitive systems’ are now becoming so embedded in our environments that we can say there is ‘cognition everywhere.’ We live and learn in extended cognitive networks. So, says Hayles, cognitive computing devices can employ learning processes that are modelled on those of embodied biological organisms, using their experiences to learn, achieve skills and interact with people. Therefore, when  cognitive devices penetrate into human systems, they can potentially modify human cognitive functioning and behaviours through manipulating and changing the plastic brain.

As IBM and Pearson push such systems into colleges and schools, maybe they will make students cognitively smarter by re-wiring their brains. But the question here is: smarter how? My concern is that students may be treated as if they too can be reduced to ‘machine learning’ processes. The history of cognitive psychology for the past half-century has been dogged by criticisms that it treats cognition like the functions of a computer. The brain has been viewed as hardware; the mind as software; memory as data retrieval; cognition as information-processing.

With this new turn to brain-enhancing cognitive systems, maybe cognition is being viewed as big data processing; the brain as neuromorphic hardware; mind as neural network algorithms and so on. As IBM’s report indicates, ‘where code and data go, cognition can now follow.’

Owning the means of knowledge production

So we’ve seen how education data science is seeking to embed its systems into schools, and how its aims are to modify students’ non-cognitive learning and embrained cognition alike. I want now to raise a couple of issues that I think will be relevant and important for all researchers of education, not just the few of us looking at this big data business.

The first is the issue of knowledge production. As I showed at the start, big data systems are making knowledge production into a more automated process. That doesn’t mean there are no engineers and analysts involved—clearly education data science involves education data scientists. But what it does mean is that knowledge is now being produced about education through the kinds of advanced technical systems that only a very few specialist education data science researchers can access or use.

What’s more, as many of my examples have shown, education data science is primarily being practiced outside of academic education research. It’s being done inside of AltSchool and by Pearson and IBM. And these organizations have business plans, investors and bank accounts to look after. AltSchool’s ‘operating system for education,’ as we saw, is being turned into a commercial offering, while IBM and Pearson are seeking to make cognitive tutoring into marketable products for schools and colleges to buy.

These products are also proprietorial, wrapped up in intellectual property and patent law. So education data science is now producing knowledge about education through proprietorial systems designed, managed and marketed by commercial for-profit organizations. These companies hold the means of knowledge production in data-driven educational research. We could say they ‘own’ educational big data, since companies that own the data and the tools to mine it—the data infrastructure—possess great power to understand and predict the world.

De-politicized real-time policy analytics

And finally, there are policy implications here too, with big data being positioned as an accelerated and efficient source of evidence about education. One of these implications is spelled out clearly by Pearson, in its artificial intelligence report. It states that:

  • AIEd will be able to provide analysis about teaching & learning at every level, whether a subject, class, college, district, or country
  • Evidence about country performance will be available from AIEd analysis, calling into question the need for international testing

So in this imaginary, AI is seen as a way of measuring school system performance via automated, real-time data mining of students rather than discrete testing at long temporal intervals.

This cuts out the need for either national or international testing. And since much national education policymaking has been decided on the basis of test-based systems in recent decades, then we can see how policy processes might be short-circuited or even circumvented altogether. When you have real-time data systems tracking, predicting and pre-empting students, then you don’t need cumbersome policy processes.

These technologies also appear de-politicized, because they generate knowledge about education from seemingly objective data, without the bias of the researcher or the policymaker. The decisions these technologies make are not based on politicized debates or ideologies, it is claimed anyway, but on algorithmic calculations.

A few years ago Dorothea Anagnostopoulos and colleagues edited an excellent book about the data infrastructure of test-based performance measurement in education. They made the key claim that test-based performance data was not just the product of governments but of a complex network of technology companies, technical experts, policies, computer systems, databases and software programs. They therefore argued that education is subject to a form of ‘informatic power.’

Informatic power, they argued, depends on the knowledge, use, production of, and control over measurement and computing technologies to produce performance measures that appear as transparent and accurate representations of the complex processes of teaching, learning, and schooling. And as they define who counts as ‘good’ teachers, students, and schools, these performance metrics shape how we practice, value and think about education.

If test-based data gives way to real-time big data, then perhaps we can say that informatic power is now mutating into algorithmic power. This new form of algorithmic power in education:

  • Relies on a big data infrastructure of real-time surveillance, predictive and prescriptive technologies
  • Depends on control over knowledge, expertise and technologies to monitor, measure, know and intervene in possible behaviours
  • Changes the ways in which cognitive and non-cognitive aspects of learning may be understood and acted upon in policy and practice
  • Is concentrated in a limited number of well-resourced academic education data science labs, and in commercial settings where it is protected by IP, patents and proprietorial systems.

This form of algorithmic power, or algorithmic governance as we encountered it earlier, is not just about performance measurement, but about active performance management of possible behaviours–and opens up possibilities for more predictive and pre-emptive education policy.

Conclusion

Although many applications of big data in education may still be quite imaginary, the examples I’ve shown you today hopefully indicate something of the direction of travel. We’re not teaching and learning in the Ocunet just yet, but its imaginary of greater digitization and datafication is already being materialized.

As educators and researchers of education, we do urgently need to understand how a big data imaginary is animating new developments, and how this may be leading to new forms of algorithmic governance in education.

We need more studies of the sites where education data science is being developed and deployed, of the psychological and neuroscientific assumptions they rely on, of the power of education data science to define how education is known and understood, and of its emerging influence in educational policy.
