Assembling ClassDojo

A sociotechnical survey of a public sphere platform

Ben Williamson

ClassDojo mojo

The world’s most successful educational technology is ClassDojo. Originally developed as a smartphone app for teachers to reward ‘positive behaviour’ in classrooms, it has recently extended significantly to become a communication channel between teachers and parents, a school-wide reporting and communication platform, an educational video channel, and a platform for schoolchildren to collect and present digital portfolios of their class work.

In a previous post I began sketching out a critical approach to the ClassDojo app. In this follow-up (note that it’s a long read, more a working paper than a post) I want to explore ClassDojo as a more extensive platform, and to consider it as a sociotechnical ‘assemblage’ of many moving parts. It is, I argue, simultaneously composed of technical components, people, policies, funding arrangements, expert knowledge and discourse, all of which combine and work together as a hybrid product of human and nonhuman actors to enable the functioning of the platform. Each of these components has been assembled together over time to make ClassDojo what it is today. The purpose of the post is twofold: to help generate greater public understanding and awareness of ClassDojo among teachers and parents, and also to scope out the contours of the platform for further detailed research.

Education in the ‘platform society’
When it was first launched as a beta product in 2011, ClassDojo was a simple app designed for use on mobile devices. It has subsequently become a much more extensive platform, spreading rapidly across the US and around the world. As new features have been added over its five-year lifespan to date, ClassDojo has become much more like a social media platform for schools. It allows teachers to award points for behaviour, somewhat akin to pressing the ‘like’ button on Facebook; permits text and video communication between teachers and parents, as many social media platforms do; acts as a channel for video content; and also has capacity for schoolchildren to create digital portfolios of their work. It has also extended to become a ‘schoolwide’ platform, whereby all teachers, school leaders and pupils are signed up to the platform and school leaders can take an overview of everything occurring on it.

Given its expansion beyond its original design as an app, ClassDojo needs to be understood in relation to emerging critical research on digital platforms, where ‘platform’ refers to internet-based applications such as social media sites that process information and communication. José van Dijck and Thomas Poell have argued that ‘over the past decade, social media platforms have penetrated deeply into the mechanics of everyday life, affecting people’s informal interactions, as well as institutional structures and professional routines.’ More recently, van Dijck has suggested that we are entering a new kind of ‘platform society’ in which ‘social, economic and interpersonal traffic is largely channeled by an (overwhelmingly corporate) global online infrastructure that is driven by algorithms and fueled by data.’ This emerging platform society is gradually interfering with more and more aspects of everyday life, including key public institutions of society such as health and education. Van Dijck calls these ‘public sphere platforms’: platforms that promise to contribute to the public good in areas which are underfunded by governments, but that are owned and structured by private actors and networks.

ClassDojo is prototypical of a public sphere platform for education, one that is designed to contribute to the public good by supporting teachers to manage children’s classroom behaviour and allow parents to communicate with schools at a time when schools are increasingly under pressure. Before detailing its various dimensions as a platform, however, it is important to note that any platform ultimately consists of multiple moving parts, both human and nonhuman, that have to be assembled together. Putting it simply, social researchers have recently begun to attend to the messy ‘assemblages’ of digital technologies such as online platforms, while education researchers have begun to acknowledge that their objects of study—classrooms, tests, policies, or educational technologies—are in fact assemblies of myriad things. In recent work on ‘critical data studies,’ Rob Kitchin and Tracey Lauriault have described a ‘data assemblage’ as:

a complex socio-technical system, composed of many apparatuses and elements that are thoroughly entwined, whose central concern is the production of data. A data assemblage consists of more than the data system/infrastructure itself, such as a big data system, an open data repository, or a data archive, to include all of the technological, political, social and economic apparatuses that frames their nature, operation and work.

An assemblage such as a digital platform, then, needs to be understood in terms of the ways that all its moving parts—whether human and social or nonhuman, material or technical—come together to form a relatively coherent and stable whole. For Kitchin and Lauriault, researching such an assemblage would therefore involve an investigation of its technical and material components; the people that inhabit it and the practices they undertake; the organizations and institutions that are part of it; the marketplaces and financial techniques that enable it; the policies and frameworks that govern it; and the knowledges and discourses that promote and support it.

Importantly, they—like others working with the assemblage concept—acknowledge that assemblages are contingent and mutable rather than fixed entities:

Data assemblages evolve and mutate as new ideas and knowledges emerge, technologies are invented, organisations change, business models are created, the political economy alters, regulations and laws introduced and repealed, skill sets develop, debates take place, and markets grow or shrink.

Utilizing the concept of a sociotechnical assemblage, in what follows I aim to detail how ClassDojo has been assembled over time as a mutating and evolving public sphere platform for education that consists of many human and nonhuman moving parts. I have arranged this as a kind of sociotechnical survey of the elements that constitute the ClassDojo assemblage.

Technicalities & materialities
As a technical platform ClassDojo consists of a mobile app and an online platform. Teachers can access and use the app on a smartphone or tablet in the classroom, and open up the online platform on any other computing device or display hardware for pupils to view. The platform allows class teachers to set their own behavioural categories, though it comes pre-loaded with a series of behaviours that teachers can use to award or deduct feedback points. Each child in the system is represented by a cute avatar, a dojo monster, which can be customized by the user. Behavioural targets can be set for both individuals and whole groups to achieve positive goals. Children’s points are represented as a ‘doughnut’ of green positive points and red ‘needs work’ deductions. Teachers are able, if they choose, to display each child’s aggregate points to their entire class as a kind of league table of behaviour, and school leaders can access each child’s profile to monitor their behavioural progress.

Launched in 2016, its ‘school-wide’ features allow whole schools, not just individual teachers, to sign up for accounts, which enables ‘teachers and school leaders to safely share photos, videos, and messages with all parents connected to the school at once, replacing cumbersome school websites, group email threads, newsletters, and paper flyers.’ At the same time that ClassDojo is expanding in scope to encompass new technical innovations and serve other practical and social functions, it is therefore obsolescing existing school technologies and materials. The new school-wide application of ClassDojo also makes it easier for the platform to be used by administrators, and means that a child’s individual profile remains persistent over time as that child moves between classes. Teachers can also create ‘Student Stories’ for each child in a class, where digital portfolios of their class work can be uploaded and maintained.

The public ClassDojo website acts as a glossy front door and public face to the platform and the company behind it. It presents the brand through highly attractive visual graphics, high-production promotional video content and carefully crafted text copy. The website also features an ‘Idea Board’ where ideas about the use of the platform in the classroom can be submitted by teachers to be shared publicly, plus a blogging area for teachers and an engineering blog where the technical details of the platform are discussed and shared by its engineers. For parents assigned a login, it is possible to access the ‘Class Story’ area where teachers can share messages and video with all parents of children in a specific class, and individual teachers and parents can also exchange short text messages.

Less visibly, ClassDojo consists of technical standards relating to network security, data storage, interoperability, and communication protocols. All of these technical aspects must be written into the code and algorithms that make the platform function, and the ClassDojo engineering blog details some of the complexity of the code that has been used or designed to make its different elements work together. Much of its source code is available to view on the ClassDojo area of the GitHub code repository. GitHub is therefore part of the assemblage of ClassDojo: both a repository containing the code and algorithms used in the platform and a resource its engineers use to locate existing re-usable code.

As a cloud-based service, all of ClassDojo’s data servers and analytics are hosted externally. For this it employs Amazon Web Services. The safety and security page of the ClassDojo website notes that the web servers of Amazon Web Services ‘are physically located in high-security data centers – the same data centers used to hold secure financial information. … Our database provider uses the same https security connections used by banks and government departments to store and transfer the most sensitive data.’ (Unfortunately, at the time of writing the link provided on the ClassDojo website to the ‘security measures’ provided by AWS does not work.) Any interaction with the ClassDojo platform, therefore, takes place via Amazon’s vast global infrastructure of cloud technologies, including being physically stored in one of Amazon’s data centres. ClassDojo is, in other words, physically, financially and technically located within one of the key global organisations that orchestrate the emerging ‘platform society.’

As well as being a technical platform, ClassDojo consists of a variety of material artefacts under the ‘Resources’ section of the website. These include teacher resources to support the use of ClassDojo in the classroom and lesson planning, training resources such as powerpoint presentations to enable school leaders to train staff, and a variety of glossy printable posters and other display materials that can be used to decorate the classroom. In addition to this, the website provides resources for parents such as introductory letters that can be distributed by schools to explain the platform, detailed parent guides as downloadable PDF files, and simple video content that can be used in the classroom to help young children understand it too.

ClassDojo also extends into other platforms. It has its own Facebook page and a popular @ClassDojo account on Twitter with 61,000 followers. Much of its initial word-of-mouth marketing worked through these platforms, allowing ClassDojo to rapidly extend to new users as enthusiastic early adopters recommended it to friends and colleagues via social media. Facebook and Twitter are part of the ClassDojo community, enabling its vast user base to engage with the organization and other community members. User-generated materials such as lesson plans and classroom resources to support the use of ClassDojo are made available for sharing by teacher advocates of the platform on teaching websites and other public sharing sites such as Pinterest, thus extending it beyond the enclosures of its own technical infrastructure to other platforms and material resources. Via other platforms, teachers have created and shared, for example, ‘Dojo dollars,’ ‘reward coupons’ and ‘vouchers,’ created their own incentives and rewards systems and displays, posted ‘points tracker’ posters and sets of ‘Dojo goals’ for data folders, and suggested the use of ‘prize centres’ where physical prizes are displayed for pupils that top the ClassDojo league tables.

As this survey of the technical aspects of ClassDojo demonstrates, it consists of myriad technologies, materials, standards and so on; but these technical elements all need to be orchestrated by human hands.

People & organizations
Who makes ClassDojo? Critical studies of software code and algorithms have demonstrated that their function cannot be separated from their designers. As Tarleton Gillespie has phrased it, ‘algorithms are full of people.’ Humans make decisions about what algorithms do, their goals and purposes and the tasks to which they are put. Likewise, any system of data collection or online communication platform has to be programmed to perform its tasks according to particular objectives, business plans and within financial and regulatory constraints.

ClassDojo depends on a vast network of people and organizations. It was founded in 2011 by two young British entrepreneurs, Liam Don and Sam Chaudhary. Don was educated as a computer scientist and Chaudhary as an economist—with experience of working for the consultancy McKinsey in its education division in London—before both moved to Silicon Valley after successfully applying to the education technology ‘incubator’ program Imagine K12. Imagine K12’s founder Tim Brady was the very first investor in ClassDojo and continues to sit on its board; he has been described by ClassDojo’s founders as a key mentor and influence in the early days of its development. Brady himself was one of the very first employees at Yahoo! in the 1990s, where he acted as Chief Product Officer for eight years. Considerable Silicon Valley experience is therefore represented on the ClassDojo board.

In addition to its founders, ClassDojo is staffed by a variety of software engineers, designers, product managers, communications and marketing officers, privacy, encryption and security experts and human-computer interaction designers. Notably, none of ClassDojo’s staff are listed as educational experts, but instead are all drawn from the culture of software development, many of them with experience in other Silicon Valley technology companies, social media organizations and consultancies. Founders Don and Chaudhary themselves have some limited educational experience of working with schools in the UK prior to moving to Silicon Valley.

Through external partnerships, ClassDojo employs three independent privacy experts to guide it in relation to data privacy regulation in North America and Europe, and works with a team of security researchers to continually test ClassDojo’s security practices for vulnerabilities. Its board consists primarily of its major investors (detailed below under markets, finances and investment). ClassDojo also works with over 20 essential third-party service providers who primarily support the platform with specific technical services, including data storage, video encoding, photo uploading, server performance, data visualization, web analytics, performance metrics, conducting A/B testing on different versions of the website, and managing real-time communication data. These third-party service providers include Amazon Web Services, which hosts ClassDojo’s servers and data analytics, Google Analytics, which provides analytics on its website, and many others, without which the platform could not function.

Support for ClassDojo has been confirmed through the award of a number of prizes. The business magazine FastCompany listed ClassDojo as one of the 10 most innovative education companies in 2013, and in 2015 it won the Crunchie award for best education startup at the TechCrunch awards while its founders were featured in Inc magazine’s ’30 under 30’ list. These prizes and recognitions have helped ClassDojo and its founders to consolidate their reputations and brand, both as a successful classroom tool and an entrepreneurial business.

As a sociotechnical assemblage, ClassDojo functions through the involvement of its users. Users are configured by ClassDojo—in the sense that it makes new practices possible—but can also reshape ClassDojo to their own purposes. The basic reward mechanism at the heart of the ClassDojo behaviour management application can be customized by any signed-up teacher. These reward categories then shape the ways in which points are awarded in classrooms, changing both the practices of the staff employing it and the experience of the pupils who are its subjects. With the announcement of school-wide features in 2016, entire schools can be signed up to ClassDojo, ultimately becoming institutional network nodes of the platform. By mid-2016 the ClassDojo website reported that the platform was in use in 180 countries, with over 3 million subscribing teachers serving over 35 million pupils. ClassDojo is, in other words, constituted partly through the practices of a vast global constellation of users.

Teachers using ClassDojo are repositioned by the platform, which confers new responsibilities on them. Huw Davies suggests teachers are transformed into data entry clerical workers by the platform, becoming responsible for data collection in the classroom that will ultimately contribute to big datasets that could be analysed and then ‘sold’ back to school leaders as premium features. Although ClassDojo does not market itself as a big data company, its access to behavioural data on millions of children confers on it a tremendous capacity to report detailed and comparative analyses that could be used to measure teachers’ and schools’ records on the management of pupil behaviour.

Policy, regulation & governance
The way that the technical platform of ClassDojo operates, and the work of the people who build and use it, is governed by particular forms of regulation and policy. Data privacy is an area that the ClassDojo organization is especially keen to promote, not least following a critical article in the New York Times in 2014, which the company vigorously countered in an open letter entitled ‘What the NYTimes got wrong.’ Its website features an extensive privacy policy, the product of its privacy advisers. The policy is regularly updated and organized on the website to detail exactly what information the platform collects, its student data protection policy, and available opt-outs. Notably, ClassDojo claims that it deletes all pupils’ feedback points after 12 months, unless students or parents create accounts. Where schools or individual teachers have set up accounts that parents have then subscribed to, a persistent record of the child’s personal information will be retained.

ClassDojo claims it is fully compliant with North American data privacy regulatory frameworks such as FERPA (Family Educational Rights and Privacy Act) and COPPA (Children’s Online Privacy Protection Act). FERPA is a Federal law that protects the privacy of student education records, while the primary goal of COPPA is to place parents in control over what information is collected from their young children online. ClassDojo’s ‘privacy center’ displays ‘iKeepSafe’ privacy seals for both FERPA and COPPA compliance, alongside a badge proclaiming it a signatory of the Student Privacy Pledge. iKeepSafe (Internet Keep Safe Coalition) is itself a nonprofit international alliance of more than 100 policy leaders, educators, law enforcement members, technology experts, public health experts and advocates, which certifies products’ compliance with frameworks such as FERPA and COPPA.

ClassDojo is additionally compliant with the US-EU Safe Harbor framework set forth by the US Department of Commerce regarding the collection, use, and retention of personal data from European Union member countries. The European Court of Justice, however, declared this agreement invalid in 2015, meaning there is a grey area in terms of data protection for children logged on ClassDojo outside the US. Its commitment to data privacy would seem to depend on specific agreements made between the EU and the cloud service provider hosting its data, in this case Amazon Web Services. This seems to put pressure on schools to make sense of complex international data protection policies. Schools making use of ClassDojo in the UK, for example, might need to ensure they are familiar with the Information Commissioner’s Office code of practice and checklists for data sharing. This code covers such activities as ‘a school providing information about pupils to a research organisation’ and would arguably extend to the provision of information about pupil behaviour to an organization like ClassDojo (and stored by Amazon Web Services), not least as the data may be used to construct behavioural profiles of individuals and classes.

ClassDojo also subscribes to the principles of ‘privacy by design,’ an approach which encourages the embedding of privacy frameworks into a company’s products or services. ClassDojo’s Sam Chaudhary has co-authored an article on privacy by design with the Future of Privacy Forum, a Washington DC-based think tank dedicated to promoting responsible data practices by lobbying government leaders and acting as a sounding board for companies proposing new products and services. The founders of ClassDojo have therefore situated themselves among a network of data privacy expertise and lobbying groups in order to ensure their compliance with federal law and to be seen as a leading data privacy organization in relation to education and young children.

How ClassDojo operates in relation to data protection and privacy is therefore circumscribed by federal regulatory frameworks which govern how and why ClassDojo can collect, process and store users’ data and what rights children and their parents have to withdraw their consent for its collection or request its deletion. Privacy regulation is ‘designed-in’ to its architecture, though inevitably some concerns persist, not least about ClassDojo’s admission that, if it experienced a ‘change of control,’ all users’ personal information would be transferred to its new owner, with only 30 days for parents to request deletion of their children’s data.

Besides privacy policy and regulation, ClassDojo is also shaped by education policy, although less directly. A distinctive policy discourse of ‘character’ education, ‘positive behaviour support’ and ‘social-emotional learning’ frames ClassDojo, shaping the way in which the organization presents the platform. For example, ClassDojo’s founders present the platform through the language of character development and positive behaviour management. This is entirely compatible with US Department of Education policy documents and initiatives which, in the wake of a softening of the dominant test-based policy emphasis, have begun to emphasize concepts such as ‘character,’ ‘grit,’ ‘perseverance,’ ‘personal qualities’ and other ‘non-cognitive’ dimensions of ‘social-emotional learning’—the most prominent example being the 2013 US Department of Education, Office of Educational Technology report Promoting grit, tenacity and perseverance. ClassDojo is directly promoted in the report as ‘a classroom management tool that helps teachers maintain a supportive learning environment and keep students persisting on task in the classroom,’ allowing ‘teachers to track and reinforce good behaviors for individual students, and get instant reports to share with parents or administrators.’

As a consequence of the ‘grit’ report, controversial attempts have been made to turn the measurement of the ‘personal qualities’ of non-cognitive and social-emotional learning into school accountability mechanisms in the US. The prominent think tank the Brookings Institution has described these new school accountability systems as compatible with the Every Student Succeeds Act, the US law governing K-12 education signed in late 2015. The act requires states to include at least one non-academic measure when monitoring school performance. It therefore permits states to focus to a greater degree than previous acts on concepts such as competency-based and personalized learning, and promotes the role of the educational technology sector in supporting such changes. ClassDojo has been described in a commentary as an ideal educational technology to support the new law.

The ClassDojo website also suggests that its behaviour points system can be aligned with PBIS. PBIS stands for Positive Behavior Interventions and Supports and is an initiative of the US Department of Education, Office of Special Education Programs. Its aim is to support the adoption of the ‘applied science’ of Positive Behavior Support in schools and emphasizes social, emotional and academic outcomes for students. Through both its connections with the non-cognitive learning policy agenda and PBIS, ClassDojo has been positioned, and located itself, in relation to major political agendas about school priorities. It is in this sense an indirect technology of government that can help schools to support students’ non-cognitive learning. In turn, those schools are increasingly being held accountable for the development and effective measurement of those qualities.

ClassDojo is, in other words, a bit-part player in emerging policy networks that are changing the priorities of education policy to focus on the management and measurement of children’s personal qualities rather than academic attainment alone. Such changes are being brought about through processes of ‘fast policy’ as Jamie Peck and Nik Theodore describe it, where policy is a thoroughly distributed achievement of ‘sprawling networks of human and nonhuman actors’ that include web sites, practitioner communities, guru performances, evaluation scientists, think tanks, consultants, blogs, and media channels and sources, as well as the more hierarchical influence of centres of political authority. As both an organization and a platform, ClassDojo acts indirectly as a voice and a technology of networked fast policy in the educational domain, particularly as a catalyst and an accelerant that translates the priorities of government around non-cognitive learning and character development into classroom practice.

Markets, finances & investment
ClassDojo is part of a significant growing marketplace of educational technologies. The new Every Student Succeeds Act gives states in the US much more flexibility to spend on ed-tech, which has been growing as a sector at extraordinary rates in recent years. Some enthusiastic assessments suggest that global education technology sector spending was $67.8bn in 2015, part of a global e-learning market worth $165bn and estimated to reach $243.8bn by 2022.

This marketplace is being supported vigorously in Silicon Valley, where most investments are made, particularly through networks of venture capital firms and entrepreneurs and business ‘incubator’ and ‘accelerator’ programs dedicated to supporting startup ed-tech companies to go to scale. ClassDojo was developed as a working product through the Imagine K12 accelerator program for education technology startups in Silicon Valley. When ClassDojo emerged from its beta phase in 2013, it announced that it had secured a further $1.6 million in investment from Silicon Valley venture capital sources. It raised another $21 million in venture funding in spring 2016. Its investors include over 20 venture capital companies and entrepreneurial individuals, including Tim Brady from Imagine K12 (now merged with Y Combinator, a leading Silicon Valley startup accelerator), General Catalyst Partners, GSV Capital and Learn Capital, ‘a venture capital firm focused exclusively on funding entrepreneurs with a vision for better and smarter learning.’ Learn Capital has invested in a large number of ed-tech products in recent years and is a key catalyst of the growth of the sector; its biggest limited partner is Pearson, the world’s biggest edu-business, which links ClassDojo firmly into the global ed-tech market. Many of ClassDojo’s investors also sit on the ClassDojo board.

Investment in ClassDojo has followed the standard model for startup funding in Silicon Valley. It first received seed funding from Imagine K12 and others, before securing Series A investment in 2013 and Series B in 2016. While seed funding refers to financial support for startup ideas, Series A funding is used to optimize a product and secure its user base, and Series B is about funding the business development, technology, support, and other people required for taking a business beyond its development stage. Sometime after 2016, ClassDojo will look to scale fast and wide through Series C funding—investment at this stage can reach hundreds of millions of dollars.

The ClassDojo success story for classroom practitioners and school leaders is therefore reflected and enabled by its success as a desirable product of venture capital funding, all of it framed by a buoyant marketplace of ed-tech development and finance. This marketplace is also itself framed and supported by specific kinds of discourses of technological disruption and solutionism. Many Silicon Valley companies and entrepreneurs have latched on to the education sector in recent years, seeing it in terms of problems that might be solved through technological developments and applications. Greg Ferenstein has noted that many Silicon Valley startup founders and their investors ‘believe that the solution to nearly every problem is more innovation, conversation or education,’ and therefore ‘believe in massive investments in education because they see it as a panacea for nearly all problems in society.’ The marketplace in which ClassDojo is located, therefore, is framed by a discourse that emphasizes the importance of fixing education systems and institutions in order to make them into effective mechanisms for the production of innovative problem-solving people.

Expert knowledge & discourse
As already noted above in relation to ClassDojo’s connections to education policy agendas, an emerging educational discourse is that of personal qualities and character education. ‘Education goes beyond just a test score to developing who the student is as a person—including all the character strengths like curiosity, creativity, teamwork and persistence,’ its co-founder and chief executive Sam Chaudhary has said. ‘There’s so much research showing that if you focus on building students’ character and persistence early on, that creates a 3 to 5 times multiplier on education results, graduation rates, health outcomes. It’s pretty intuitive. We shouldn’t just reduce people to how much content they know; we have to develop them as individuals.’

Underpinning the policy shift to character development, in which ClassDojo plays a small bit-part, are particular forms of expertise and disciplinary knowledge. The particular forms of expertise to which ClassDojo is attached are those of the psychological sciences, neuroscience and the behavioural sciences, particularly as they have been translated into the discourse of character education, grit, resilience and so on. One of the key voices in this emerging discourse is Paul Tough, author of a book about educating children with ‘grit,’ who has mapped out some of the networks of psychological, neuroscientific and economics experts contributing their knowledge and understandings to this field, including names such as Angela Duckworth and Carol Dweck.

Duckworth and Dweck are both directly cited by ClassDojo’s founders as key influences, alongside other ‘thought leaders’ such as James Heckman and Doug Lemov. Heckman is a Nobel prize-winning economist noted for his work on building character. Lemov is a free-market advocate of the charter schools movement and author of the popular Teach Like a Champion. Duckworth has her own named psychological lab where she researches ‘personal qualities’ such as ‘grit’ and ‘self-control’ as dimensions of human character. The relationship between ClassDojo and Carol Dweck’s concept of ‘growth mindsets’ is the most pronounced. In January 2016, ClassDojo announced a partnership with the Project for Education Research That Scales (PERTS), an applied research center at Stanford University led by Dweck that has become the intellectual home of the theory of growth mindsets.

Dweck has argued that teachers can ‘engender a growth mind-set in children by praising them for their persistence or strategies (rather than for their intelligence), by telling success stories that emphasize hard work and love of learning, and by teaching them about the brain as a learning machine.’ Notably, Dweck’s PERTS lab itself has a close relationship with Silicon Valley, where the growth mindsets concept has been popularized as part of a recent trend in behavior-change training programs designed to enable valley workers to ‘fix personal problems.’ Dweck herself has presented the concept at Google and other PERTS staff have advisory roles in Silicon Valley companies. The growth mindset concept is, therefore, closely aligned with the wider governmental behaviour change agenda associated with behavioural economics. Governments have long sought to use psychological and behavioural insights into citizens’ behaviours as the basis for designing policies and services that are intended to modify their future behaviours. ClassDojo seeks to accomplish this goal within schools by nudging children to change their behaviours at exactly the same time that schools are being encouraged to measure students’ non-cognitive social-emotional skills.

The partnership between ClassDojo and PERTS takes the form of a series of short animations on the ‘Big Ideas’ section of the ClassDojo website that help explain the growth mindsets idea for teachers and learners themselves. They present the brain as a malleable ‘muscle’ that can constantly grow and adapt as it is put to the task of addressing challenging problems. The presentation of the brain as a muscle in ClassDojo is part of the recent popularization of neuroscience concepts of ‘neuroplasticity,’ where the brain is seen as constantly adapting to the social environment. Rather than being seen as a structurally static organ, the brain has been reconceived as dynamic, with new neural pathways constantly forming through adaptation to environmental stimuli. The videos are basically high-production updates of instructional resources previously developed by Dweck and disseminated through her Mindset Works spin-out company. ClassDojo approached Dweck about adapting these materials, and the videos were produced by ClassDojo with input from PERTS. The ClassDojo website claims that ‘15 million students are now building a growth mindset’–this figure is presumably based on web analytics of the numbers of schools in which the videos have been viewed–while at the time of writing in September 2016 the ClassDojo Facebook page was promoting ‘Growth Mindset Month.’

ClassDojo is increasingly aligned with psychological and behavioral norms associated with growth mindsets, both by teaching children about growth mindsets through its Big Ideas videos and, through the app, by nudging children to conduct themselves in ways appropriate to the development of such a growth-oriented character. In this sense, ClassDojo is perfectly aligned with the controversial recent federal law which allows states to measure the performance of schools on the basis of ‘non-academic’ measures, such as students’ non-cognitive social-emotional skills, personal qualities, and growth mindsets. This governmental agenda sees children themselves as a problem to be fixed through schooling. Its logic is that if children’s non-cognitive personal qualities, such as character, mindset and grit, can be nudged and configured to the new measurable norm, then many of the problems facing contemporary schools will be solved.

The close relationship between ClassDojo, psychological expertise and government policy is indicative of the extent to which the ‘psy-sciences’ are involved in establishing the norms by which children are measured and governed in schools—a relationship which is by no means new, as Nikolas Rose has shown, but is now rapidly being accelerated by psy-based educational technologies such as ClassDojo. A science of mental measurement infuses ClassDojo, as operationalized by its behavioural points system, but it is also dedicated to an applied science of mental modification, involved in the current pursuit of the development of children as characters with grit and growth mindsets. By changing the language of learning to that of growth mindsets and other personal qualities, ClassDojo and the forms of expertise with which it is associated are changing the ways in which children may be understood and acted upon in the name of personal improvement and optimization.

ClassDojo is prototypical of how education is being reshaped in a ‘platform society.’ This sociotechnical survey of the ClassDojo assemblage provides some sense of its messy complexity as an emerging public sphere platform that has attained substantial success and popularity in education. Approached as a sociotechnical assemblage, ClassDojo is simultaneously a technical platform that serves a variety of practical, pedagogical and social functions; an organizational mosaic of engineers, marketers, product managers and other third party providers and partners; the subject of a wider regulatory environment and also a bit-part actor in new policy networks; the serious object for financial investment in the ed-tech marketplace; and a mediator of diverse expert psychological, neuroscientific and behavioural scientific knowledges and discourses pertaining to contemporary schooling and learning.

Like any digital assemblage, ClassDojo is mutating and evolving in response to the various elements that co-constitute it. As policy discourse shifts, ClassDojo follows suit–as its shift to embrace growth mindsets and its positioning in relation to policy discourses of character and positive behaviour support demonstrate. It is benefiting financially from a currently optimistic ed-tech marketplace, which is itself now being supported politically via the Every Student Succeeds Act. Its engineering blog also demonstrates how the technical platform of ClassDojo is changing as new code and algorithms become available, while its privacy policies are constantly being updated as data privacy regulation pertaining to children becomes an increasing priority and concern–as demonstrated by its response to a critical New York Times article in 2014. ClassDojo is not being ‘scaled up’ in a simple and linear manner, but messily and contingently, through a relational interweaving of human actions and nonhuman technologies, materials, policies, and technical standards.

Given its rapid proliferation globally into the practices of over 3 million teachers and the classroom experiences of over 35 million children in 180 countries, ClassDojo can accurately be described as a public sphere platform that is interfering in how teaching and learning take place. It is doing so according to psychological forms of expertise and governmental priorities, supported by financial instruments and organizations, and is being enacted through a technical infrastructure of devices and platforms and a human infrastructure of entrepreneurs, engineers, managers, and other experts, as well as the users who incorporate it into their own practices and extend it through the creation of user-generated content and materials. As it continues to scale and mutate, it deserves to be the focus of much further in-depth analysis. This work-in-progress has surveyed ClassDojo to point to possible future lines of inquiry into the reshaping of education in a platform society.

Images from ClassDojo media assets

Super-fast education policy fantasies

Ben Williamson

data server
Image by CSCW

In recent years the pace of education policy has begun to pick up speed. As new kinds of policy influencers such as international organizations, businesses, consultancies and think tanks have entered into educational debates and decision-making processes, the production of evidence and data to support policy development has become more spatially distributed across sectors and institutions and invested with more temporal urgency too. The increasing availability of digital data that can be generated in real time is now catalysing dreams of an even greater acceleration in policy analysis, decision-making and action. A fantasy of real-time policy action is being ushered into material existence, particularly through the advocacy of the global edu-business Pearson and the international organizations OECD (Organisation for Economic Co-operation and Development) and WEF (World Economic Forum). At the same time, the variety of digital data available about aspects of education means that these policy influencers are focusing attention on the possible measurement of previously unrecorded activities and processes.

Fast policy
Education policy processes are undergoing a transformation. A spatial redistribution of policy processes is underway whereby government departments are becoming parts of ‘policy networks’ that also include consultants, think tanks, policy labs, businesses, and international non-governmental organizations.

In their recent book Fast Policy, policy geographers Jamie Peck and Nik Theodore argue that:

The modern policymaking process may still be focused on centers of political authority, but networks of policy advocacy and activism now exhibit a precociously transnational reach; policy decisions made in one jurisdiction increasingly echo and influence those made elsewhere; and global policy ‘models’ often exert normative power across significant distances. Today, sources, channels, and sites of policy advice encompass sprawling networks of human and nonhuman actors/actants, including consultants, web sites, practitioner communities, norm-setting models, conferences, guru performances, evaluation scientists, think tanks, blogs, global policy institutes, and best-practice peddlers, not to mention the more ‘hierarchical’ influence of multilateral agencies, international development funds, powerful trading partners, and occupying powers.

These policy networks sometimes do the job of the state through outsourced contracts, commissioned evidence-collection and analysis, and the production of policy consultancy for government. They often also act as channels for the production of policy influence, bringing new agendas, new possibilities, and new solutions to perceived problems into the view of national government departments and policymakers. Policy is, therefore, becoming more socially and spatially distributed across varied sites, across public, private and third sectors, and increasingly involves the hybridization of methods drawn from all the actors involved in it, particularly in relation to the production and circulation of evidence that might support a change in policy.

The socially and spatially networked nature of the contemporary education policy environment is leading to a temporal quickening in the production and communication of evidence. In the term ‘fast policy’, Peck and Theodore describe a new condition of accelerated policy production, circulation and translation that is characterized not just by its velocity but also ‘by the intensified and instantaneous connectivity of sites, channels, arenas, and nodes of policy development, evolution, and reproduction.’ Fast policy refers to the increasing porosity between policymaking locales; the transnationalization of policy discourses and communities; global deference to models of ‘what works’ and ‘best practices’; compressed R&D time in policy design and roll-out; new shared policy experimentality and evaluation practices; and the expansion of a ‘soft infrastructure’ of expert conferences, resource banks, learning networks, case-study manuals, and web-based materials, populated by intermediaries, advocates, and experts.

Fast policy is becoming a feature of education policy production and circulation. As Steven Lewis and Anna Hogan have argued,

actors work within complex policy networks to produce and promote evidence tailored to policymakers, meaning they orchestrate rather than produce research knowledge in order to influence policy production. These actors tend to construct simplified and definitive solutions of best practice, and their reports are generally short, easy-to-read and glossy productions.

As a consequence, they claim, the desire for policy solutions and new forms of evidence and expertise is ultimately leading to the ‘speeding up’ of policy:

This ‘speeding up’ of policy, or ‘fast policy’ … is characterized not only by the codification of best practice and ‘ideas that work’ but also, significantly, by the increasing rate and reach of such policy diffusion, from sites of policy development and innovation to local sites of policy uptake and, if not adoption, translation.

In other words, policies are becoming more fast-moving, both in their production and in their translation into action, as well as more transnational in uptake and implementation, more focused on quick-fix ‘best practice’ or ‘what works’ solutions, and more pacey and attractive to read thanks to being packaged up as short glossy handbooks and reports, websites and interactive data visualizations.

For Lewis and Hogan, the development of fast policy in education is exemplified by the work of the education business Pearson and the international organization OECD. In their specific example of fast policy in action, they observe how ‘so-called best practices travel from their point of origin (to the extent that this can ever be definitively fixed) at the OECD to their uptake and development by an international edu-business (Pearson),’ and how they are from there translated into more ‘localized’ concerns with improving state-level schooling performance within national systems. In particular they show how OECD data collected as part of the global PISA testing program have been translated into Pearson’s Learning Curve Databank, itself a public data resource intended to inform ‘evidence-based’ educational policymaking around the world, and from there mobilized in the specification of local policy problems and solutions. The concern with evidence-based policymaking, they show, involves the use of best practice models and learning from ‘examples’:

We see the dominance of fast policy approaches, and hence their broad appeal across policy domains such as schooling, as directly emanating from the promotion of decontextualised best practices that can, so it is alleged, transcend the specific requirements of local contexts. This is despite ‘evidence-based’ policymaking being an inherently political and contingent process, insofar as it is always mediated by judgements, priorities and professional values specific to the people, moments and places in which such policies are to be enacted.

Additionally, in the fast policy approaches that are developing in education through the work of OECD and Pearson, quantitative data have become especially significant for evidence-based practices, as measurement, metrics, ranking and comparison all help to create new continuities and flows that can overcome physical distance in an increasingly interconnected and accelerating digital world. Numbers and examples form the evidential flow of fast policy, enabling complex social, political and economic problems to be rendered in easy-to-understand tables, diagrams and graphs, and their solutions to be narrated and marketed through exemplar best practice case studies.

Real-time policy action
Pearson and OECD are additionally seeking to develop new computer-based data analytics techniques that can be used to generate evidence to inform education policy. Pearson, for example, has proposed a ‘renaissance in assessment’ that will involve a shift to new computer-based assessment systems for the continuous tracking and monitoring of ‘streaming data’ through real-time analytics, rather than the collection of data through discrete temporal assessment events. Its report promotes using ‘intelligent software and a range of devices that facilitate unobtrusive classroom data collection in real time’ to ‘track learning and teaching at the individual student and lesson level every day in order to personalise and thus optimise learning.’ Much of the data analytic and adaptive technology required by this vision is in development at Pearson’s own Center for Data Analytics and Adaptive Learning, its in-house centre for educational big data research and development.

Moreover, the authors of the renaissance in assessment report argue for a revolution in education policy, shifting the focus from the governance of education through the institution of the school to ‘the student as the focus of educational policy and concerted attention to personalising learning.’ The report clearly represents an emerging educational imaginary where policy is to concentrate on the real-time tracking of the individual rather than the planned and sequenced longitudinal measurement of the institution or system. Along these lines, its authors note that the OECD itself is moving towards new forms of machine learning in its international assessments technologies, with a proposal to assess collaborative problem solving through ‘a fully computer-based assessment in which a student interacts with a simulated collaborator or “avatar” in order to solve a complex problem.’ Such systems, for Pearson and OECD, can speed up the process of providing feedback to students, but are, importantly, also adaptive, meaning that the content adapts to the progress of the student in real time.

The potential promise of such computer-based adaptive systems, for the experts of Pearson and OECD, is a further acceleration in policy development to real-time speed. Instead of policy based on the long time-scales of temporally discrete assessment events, data analytics platforms appear to make it possible to perform a constant automated analysis of the digital timestream of student activities and tasks. Such systems can then adapt to the student in ways that are synchronized with their learning processes. This process appears to make it feasible to squeeze out conventional standardized assessments and tests, with their association with bureaucratic processes of data collection by governmental centres of political authority, and replace them with computer-adaptive systems.
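The closed loop such proposals describe can be made concrete with a minimal sketch. The code below is my own illustration, not Pearson’s or the OECD’s actual technology: a simple Rasch-style model updates an ability estimate after every response and selects the next item’s difficulty to match it, producing precisely the kind of continuously updated ‘timestream’ that a real-time analytics platform would monitor.

```python
import math

def update_ability(ability, difficulty, correct, lr=0.5):
    """One Rasch-style step: compare the predicted probability of a
    correct answer with the observed outcome, and nudge the ability
    estimate towards the evidence."""
    p_correct = 1.0 / (1.0 + math.exp(-(ability - difficulty)))
    return ability + lr * ((1.0 if correct else 0.0) - p_correct)

def adaptive_session(answer, n_items=5, start_ability=0.0):
    """Run a short adaptive loop. `answer` simulates the student: it
    takes an item difficulty and returns True/False. Returns the
    timestream of ability estimates, one per response."""
    ability = start_ability
    stream = []
    for _ in range(n_items):
        difficulty = ability  # target ~50% success: most informative item
        ability = update_ability(ability, difficulty, answer(difficulty))
        stream.append(round(ability, 3))
    return stream
```

A student who keeps answering correctly sees the estimate, and hence the item difficulty, ratchet steadily upwards: no discrete assessment event is needed, because every interaction doubles as a measurement.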

These proposals imagine a super-fast policy process that is at least partly automated, and certainly accelerated beyond the temporal threshold of human capacities of data analysis and expert professional judgment. Heather Roberts-Mahoney and colleagues have analysed US documents advocating the use of real-time data analytics for personalized learning, and conclude that they transform teachers into ‘data collectors’ who ‘no longer have to make pedagogical decisions, but rather manage the technology that will make instructional decisions for them,’ since ‘curriculum decisions, as well as instructional practices, are reduced to algorithms and determined by adaptive computer-based systems that create “personalized learning,” thereby allowing decision-making to take place externally to the classroom.’ The role of policymakers is changed by such systems too, turning them into awarders of contracts to data processing companies and technological vendors of adaptive personalized learning products. It is through such technical platforms and the instructions coded into them that decisions about intervention will be made at the individual level, rather than through bureaucratic decision-making at national or state system scale.

The use of real-time systems in education is therefore part of ‘a reconfiguring of intensities, or “speeds”, of institutional life’ as it is ‘now “plugged into” information networks,’ as Greg Thompson has argued. It makes the collection, analysis and feedback from student data into a synchronous loop that functions at extreme velocity through systems that are hosted by organizations external to the school but are also networked into the pedagogic routines of the adaptive, personalized classroom. In short, real-time data-driven systems are ideal fast policy technologies.

Affective policy
Importantly, these fast policy influencers are also pursuing the possibility of measuring non-academic aspects of learning such as social and emotional skills. The OECD has launched its Education and Social Progress project to develop specific measurement instruments for ‘social and emotional skills such as perseverance, resilience and agreeableness,’ ‘using the evidence collected, for policy-makers, school administrators, practitioners and parents to help children achieve their full potential, improve their life prospects and contribute to societal progress.’

The World Economic Forum, another major international organization that works in policy networks to influence education policy, has similarly produced a report on fostering social and emotional learning through technology. It promotes the development of biosensor technologies, wearable devices and other applications that can be used to ‘provide a minute-by-minute record of someone’s emotional state’ and ‘to help students manage their emotions.’ It even advocates educational applications of ‘affective computing’:

Affective computing comprises an emerging set of innovations that allow systems to recognize, interpret and simulate human emotions. While current applications mainly focus on capturing and analysing emotional reactions to improve the efficiency and effectiveness of product or media testing, this technology holds great promise for developing social and emotional skills such as greater empathy, improved self-awareness and stronger relationships.

The affective analytics of education being proposed by both the OECD and WEF make the emotional life of the school child into the subject of fast policy experimentation. They are seeking to synchronize children’s emotional state, measured as a ‘minute-by-minute record,’ with societal progress, rendering students’ emotions as real-time digital timestreams of data that can be monitored and then used as evidence in the evaluation of various practices and policies. Timestreams of data about how students feel are being positioned by the policy influencers OECD and WEF as a new form of evidence at a time of accelerating policy experimentation. These proposals are making sentiment analysis into a key fast policy technology, enabling policy interventions and associated practices to be evaluated in terms of the feelings they generate–a way of measuring not just the effects of policy action but its production of affect too.

Following super-fast policy prototypes
Writing about fast policy in an earlier paper prefacing their recent book, Jamie Peck and Nik Theodore have described ‘policy prototypes that are moving through mutating policy networks’ and which connect ‘distant policy-making sites in complex webs of experimentation-emulation-evolution.’ They describe the methodological challenges of ‘following the policy’ in the context of spatially distributed policy networks and temporally accelerated modes of policy development where specific policies are in a constant state of movement, translation and transformation. For them:

Policy designs, technologies, and frames are … complex and evolving social constructions rather than as concretely fixed objects. In fact, these are very often the means and the media through which relations between distant policy-making sites are actively made and remade.

A research focus on the kind of super-fast policy prototypes being developed by Pearson, the WEF and the OECD would likewise need to focus, methodologically, on the technologies and the designs of computer-based approaches as socially created devices. It would need to follow these policy prototypes through processes of experimentation, emulation and mutation, as they are diversely developed, taken up or resisted, and modified and amended through interaction with other organizations, actors, discourses and agendas. As with Peck and Theodore’s focus on fast policy, researching the super-fast policy prototypes proposed for education by the OECD, WEF and Pearson would investigate the ‘social life’ of the production of new technologies of computer-adaptive assessment, personalized learning, affective computing and so on, but also attend to their social productivity as they change the ways in which education systems, institutions, and the individuals within them perform.


Performing data

‘Performance information’ in the Scottish Government national improvement plan for education

Ben Williamson

ScotGov plan

At the end of June 2016 the Scottish Government published a major national delivery plan for improving Scottish education over the next few years. Drafted in response to a recent independent review of Scottish education carried out by the OECD, the delivery plan is part of a National Improvement Framework with ambitious plans to raise attainment and achieve equity.

It is the relentless focus of the delivery plan on the use of performance measurement, metrics and evidence gathering to drive forwards these improvements that is especially arresting. In a striking line from the introduction it is stated that:

As the OECD review highlighted, current … arrangements do not provide sufficiently robust information across the system to support policy and improvement. We must move from a culture of judgement to a system of judgement.

A ‘system of judgment’: right from the start, it is clear that the delivery plan is based on the understanding—imported from the OECD via its recommendation that new ‘metrics’ be devised to measure Scottish education—that data can be used to drive forward performance improvement and for the purposes of disciplining under-performance.

Productive measurement
In a series of articles, the sociologist David Beer has been writing about the socially productive power of metrics in a variety of sectors and institutions of society:

We often think of measurement as in some way capturing the properties of the world we live in. This might be the case, but we can also suggest that the way that we are measured produces certain outcomes. We adapt to the systems of measurement that we are living within.

Metrics and measurements are not simply descriptive of the world, then, but play a part in reshaping it in particular ways, affecting how people behave and understand things and act to do things differently. As Beer elaborates:

The measurements themselves matter, but it is knowing or expecting how we will be measured that is really powerful. Systems of measurement then have productive powers in our lives, both in terms of how we respond to them and how they inform the judgments and decisions that impact upon us.

Performance measurement techniques, of the kind to be implemented through the Scottish Government’s proposed ‘system of judgement’, can similarly be understood as productive measures that will be used to attach evaluative numbers to practices and institutions in ways that are intended to change how the system performs overall. This is likely to affect how school teachers, leaders, and maybe even pupils themselves and their parents act and perform their roles, as they expect to be measured, judged, and acted upon as a result.

‘Performance information’ is one of the key ‘drivers of improvement’ listed in the plan, and clearly shows how a range of ‘measures’ are to be collected:

We will pull together all the information and data we need to support improvement. Evidence suggests … we must ensure we build a sound understanding of the range of factors that contribute to a successful education system. This is supported by international evidence which confirms that there is no specific measure that will provide a picture of performance. We want to use a balanced range of measures to evaluate Scottish education and take action to improve further.

Scanning through the plan and the improvement framework, it becomes clear just how extensive this new focus on performance measurement will become. The plan emphasizes:

  • the use of standardized assessment to gather attainment data
  • the gathering of diverse data about the academic progress and well-being of pupils at all stages
  • pre-inspection questionnaires, school inspection and local authority self-evaluation reports
  • the production of key performance indicators on employability skills
  • greater performance measurement and monitoring of schools
  • new standards and evaluation frameworks for schools
  • information on teacher induction, teacher views, and opportunities for professional learning
  • evidence on the impact of parents in helping schools to improve
  • regular publication of individual school data
  • the use of visual data dashboards to make school data transparent
  • training for ‘data literacy’ among teachers
  • comparison with international evidence

All of this is in addition to system-wide national benchmarking, international comparisons, the definition and monitoring of standards, and quality assurance, all of which is to be overseen by an international council of expert business and reform advisors who will guide and evaluate its implementation.

Performative numbers
The delivery plan makes for quite a cascade of new and productive measures–an ‘avalanche of numbers’–though Scottish schools are unlikely to be terribly surprised by the emphasis in the delivery plan on performance information, targets, performance indicators and timelines. (In England the emphasis on performance data has been even more pronounced, with Paul Morris claiming ‘the purposes of schooling and what it means to be educated are effectively being redefined by the metrics by which we evaluate schools and pupils.’)

Since 2014, all Scottish schools have been encouraged by the Scottish Government to make use of Insight, an online benchmarking tool ‘designed for use by secondary schools and local authorities to identify success and areas where improvements can be made, with the ultimate aim of making a positive difference for pupils’. It provides data on ‘four national measures, including post-school destinations and attainment in literacy and numeracy as well as information on a number of local measures designed to help users take a closer look at their curriculum, subjects and courses’. It features data dashboards that allow schools to view an overall picture of the data from their school and compare it with the national measures presented on the national dashboard.

A notable feature of Insight is the ‘Virtual Comparator’ which allows users to see how the performance of their pupils compares to a similar group of pupils from across Scotland. The Virtual Comparator feature takes the characteristics of pupils in a school and matches them to similar pupils from across Scotland to create a ‘virtual school’ against which a ‘real’ school may benchmark its progress.
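As a rough sketch of how such a benchmark might be constructed (this is my own illustration; Insight’s actual matching methodology is more sophisticated and is not public in code form), one can match each pupil by a tuple of characteristics against a national pool, sample matched pupils, and average their outcomes to produce the ‘virtual school’:

```python
# Illustrative virtual-comparator benchmarking: for each pupil in a
# school, sample pupils with matching characteristics from a national
# pool, then compare the school's mean attainment against the mean of
# this matched 'virtual school'.
import random
from statistics import mean

def virtual_comparator(school_pupils, national_pool, matches_per_pupil=10, seed=0):
    """Each pupil is a dict with 'profile' (a hashable tuple of
    characteristics, e.g. (sex, deprivation_band)) and 'score'.
    Returns (real school mean, virtual comparator mean)."""
    rng = random.Random(seed)
    # Index the national pool by characteristic profile.
    by_profile = {}
    for pupil in national_pool:
        by_profile.setdefault(pupil["profile"], []).append(pupil["score"])
    # Build the virtual school from matched samples.
    virtual_scores = []
    for pupil in school_pupils:
        candidates = by_profile.get(pupil["profile"], [])
        if candidates:
            k = min(matches_per_pupil, len(candidates))
            virtual_scores.extend(rng.sample(candidates, k))
    real = mean(p["score"] for p in school_pupils)
    virtual = mean(virtual_scores) if virtual_scores else float("nan")
    return real, virtual
```

The productive power of such a measure lies in the comparison itself: a school’s ‘real’ mean only acquires meaning, and exerts pressure, once it is set against the matched virtual benchmark.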

The relentless focus by the Scottish Government on performance information, inspection, comparison, measurement and evidence demonstrates the extent to which education systems, organizations and individuals are now subject to increasing demands to produce data.

As the concept of 'productive measures' reminds us, though, performance measurement is not simply descriptive: it also brings the matter it describes into being. Captured in the term 'performativity,' this dynamic means that education systems and institutions, and even individuals themselves, change their practices to ensure the best possible measures of performance. Closely linked to this is the notion of accountability, that is, the production of evidence that proves the effectiveness—in terms of measurable results—of whatever has been performed in the name of improvement and enhancement. As Stephen Ball phrases it:

Performativity is … a regime of accountability that employs judgements, comparisons and displays as a means of control, attrition and change. The performance of individuals and organizations serve as measures of productivity or output … [and] stand for, encapsulate or represent the worth, quality or value of an individual or organization within a field of judgement.

In other words, performativity makes the question of what counts as worthwhile activity in education into the question of what can be counted and of what account can be given for it. It reorients institutions and individuals to focus on those things that can be counted and accounted for with evidence of their delivery, results and positive outcomes, and de-emphasises any activities that cannot be easily and effectively measured.

In practical terms, performativity depends on databases, audits, inspections, reviews, reports, and the regular publication of results, and tends to prioritize the practices and judgements of accountants, lawyers and managers who subject practitioners to constant processes of target-setting, measurement, comparison and evaluation. The appointment of an international council of experts to oversee the collection and analysis of all the performance information required by the improvement and delivery plans is ample illustration of how Scottish education will be subject to a system of expert techniques and judgement.

Political analytics
It is hard, then, to see the Scottish Government delivery plan as anything other than a series of policy instruments that, via specific data-driven techniques and particular technical tools, will reinforce performativity and accountability, all under the aspiration of closing attainment gaps and achieving equity.

Although no explicit mention is made of the technologies required to enact this system of judgement, it is clear that a complex data infrastructure of technologies and technical experts will also be needed to collect, store, clean, filter, analyse, visualize and communicate the vast masses of performance information. Insight and other dashboards already employed in Scottish education are existing products that doubtless anticipate a much more system-wide digital datafication of the sector. Data processing technologies are making the performance of education systems and institutions into enumerated timestreams of data by which they might be measured, evaluated and assessed, held up to both political and public scrutiny, and then made to account for their actions and decisions, and either rewarded or disciplined accordingly. A new kind of political analytics that prioritizes digitized forms of data collection and analysis is likely to play a powerful role in the governance of Scottish education in coming years.

Data technologies of various kinds are the enablers of performativity and accountability, and translate the numerical logics of the technologies into the material and practical realities of professional life. As a data-driven ‘system of judgement’, Scotland’s delivery plan for education will, in other words, usher in more and more ‘productive measures’ into Scottish education, reconfiguring it and those who work and learn in it in ways that will need to be studied closely for many years to come.



Critical questions for big data in education

Ben Williamson


Big data has arrived in education. Educational data science, learning analytics, computer adaptive testing, assessment analytics, educational data mining, adaptive learning platforms, new cognitive systems for learning and even educational applications based on artificial intelligence are fast becoming parts of the educational landscape, in schools, colleges and universities, as well as in the networked spaces of online courses.

As part of a recent conversation about the Shadow of the Smart Machine work on machine learning algorithms being undertaken by Nesta, I was asked what I thought were some of the most critical questions about big data and machine learning in education. This reminded me of the highly influential paper 'Critical questions for big data' by danah boyd and Kate Crawford, in which they 'ask critical questions about what all this data means, who gets access to what data, how data analysis is deployed, and to what ends.'

With that in mind, here are some preliminary (work-in-progress) critical questions to ask about big data in education.

How is ‘big data’ being conceptualized in relation to education?
Large-scale data collection has been at the centre of the statistical measurement, comparison and evaluation of the performance of education systems, policies, institutions, staff and students since the mid-1800s. Does big data constitute a novel way of enumerating education? The sociologist David Beer has suggested we need to think about the ways in which big data as both a concept and a material phenomenon has appeared as part of a history of statistical thinking, and in relation to the rise of the data analytics industry—he suggests social science still needs to understand ‘the concept itself, where it came from, how it is used, what it is used for, how it lends authority, validates, justifies, and makes promises.’ Within education specifically, how is big data being conceptualized, thought about, and used to animate specific kinds of projects and technical developments? Where did it come from–data science, computer science–and who are its promoters and sponsors in education? What promises are attached to the concept of big data as it is discussed within the domain of education? We might wish to think about a ‘big data imaginary’ in education—a certain way of thinking about, envisaging and visioning the future of education through the conceptual lens of big data—that is now animating specific technical projects, becoming embedded in the material reality of educational spaces and enacted in practice.

What theories of learning underpin big data-driven educational technologies?
Big data-driven platforms such as learning analytics aim to 'optimize learning', but is it always clear what the organizations and actors that build, promote and evaluate them mean by 'learning'? Much of the emerging field of 'educational data science'—which encompasses much educational data mining, learning analytics and adaptive learning software R&D—is informed by conceptualizations of learning that are rooted in cognitive science and cognitive neuroscience. These disciplines tend to focus on learning as an 'information-processing' event—to treat learning as something that can be monitored and optimized like a computer program—and pay less attention to the social, cultural, political and economic factors that structure education and individuals' experiences of learning.

Given the statistical basis of big data, it’s perhaps also not surprising that many actors involved in educational big data analyses are deeply informed by the disciplinary practices and assumptions of psychometrics and its techniques of psychological measurement of knowledge, skills, personality and so on. Aspects of behaviourist theories of learning even persist in behaviour management technologies that are used to collect data on students’ observed behaviours and distribute rewards to reinforce desirable conduct. There is an emerging tension between the strongly psychological, neuroscientific and computational ways of conceptualizing and theorizing learning that dominate big data development in education, and more social scientific critiques of the limitations of such theories.

How are machine learning systems used in education being ‘trained’ and ‘taught’?
The machine learning algorithms that underpin much educational data mining, learning analytics and adaptive learning platforms need to be trained, and constantly tweaked, adjusted and optimized to ensure accuracy of results–such as predictions about future events. This requires 'training data,' a corpus of historical data that the algorithms can be 'taught' with and then use to find patterns in data 'in the wild.' Who selects the training data? How do we know if it is appropriate, reliable and accurate? What if the historical data is in some ways biased, incomplete or inaccurate? Does this risk generating 'statistical discrimination' of the sort produced by 'predictive policing,' which has in some cases been found to disproportionately predict that black men will commit crime? Educational research has long asked questions about the selection of the knowledge for inclusion in school curricula that are to be taught to students—we may now need to ask about the selection of the data for inclusion in the training corpus of machine learning platforms, as these data could be consequential for learners' subsequent educational experience.
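The stakes of training-data selection can be shown with a deliberately trivial sketch. Nothing here reflects any real system: the 'model' is a one-nearest-neighbour rule over invented records, but it illustrates how a predictor trained on biased historical labels reproduces that bias for otherwise identical pupils.

```python
# Illustrative sketch with invented data: a trivial 1-nearest-neighbour
# 'at-risk' predictor trained on historical records. The historical labels
# encode a bias: group B pupils were flagged at-risk regardless of
# attendance, so the trained model reproduces that bias.

def nearest_neighbour_predict(training, pupil):
    """Return the at-risk label of the most similar historical pupil."""
    def distance(rec):
        # Similarity on attendance, with a large penalty for a group mismatch.
        return (abs(rec["attendance"] - pupil["attendance"])
                + (0 if rec["group"] == pupil["group"] else 100))
    return min(training, key=distance)["at_risk"]

# Historical training corpus: group B was systematically over-flagged.
training = [
    {"group": "A", "attendance": 95, "at_risk": False},
    {"group": "A", "attendance": 60, "at_risk": True},
    {"group": "B", "attendance": 95, "at_risk": True},   # biased label
    {"group": "B", "attendance": 60, "at_risk": True},
]

# Two pupils with identical attendance receive different predictions.
pred_a = nearest_neighbour_predict(training, {"group": "A", "attendance": 94})
pred_b = nearest_neighbour_predict(training, {"group": "B", "attendance": 94})
print(pred_a)  # False
print(pred_b)  # True
```

The algorithm itself is neutral; the discrimination enters entirely through the selection and labelling of the training corpus, which is precisely why the 'who selects the data?' question matters.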

Moreover, we might need to ask questions about the nature of the ‘learning’ being experienced by machine learning algorithms, particularly as enthusiastic advocates in places like IBM are beginning to propose that advanced machine learning is more ‘natural,’ with ‘human qualities,’ based on computational models of aspects of human brain functioning and cognition. To what extent do such claims appear to conflate understandings of the biological neural networks of the human brain that are mapped by neuroscientists with the artificial neural networks designed by computer scientists? Does this reinforce computational information-processing conceptualizations of learning, and risk addressing young human minds and the ‘learning brain’ as computable devices that can be debugged and rewired?

Who ‘owns’ educational big data?
The sociologist Evelyn Ruppert has asked 'who owns big data?', noting that numerous people, technologies, practices and actions are involved in how data is shaped, made and captured. The technical systems for conducting educational big data collection, analysis and knowledge production are expensive to build. Specialist technical staff are required to program and maintain them, to design their algorithms, to produce their interfaces. Commercial organizations see educational data as a potentially lucrative market, and 'own' the systems that are now being used to see, know and make sense of education and learning processes. Many of their systems are proprietary, wrapped in IP and patents that make it impossible for other parties to understand how they collect data, what analyses they conduct, or how robust their big data samples are. Specific commercial and political ambitions may also be animating the development of educational data analytics platforms, particularly those associated with Silicon Valley, where ed-tech funding for data-driven applications is soaring and tech entrepreneurs are rapidly developing data-driven educational software and even new institutions.

In this sense, we need to ask critical questions about how educational big data are made, analysed and circulated within specific social, disciplinary and institutional contexts that often involve powerful actors that possess significant economic capital in the shape of funding and resourcing, cultural capital in terms of the production of new specialist knowledge, and social capital through wider networks of affiliations, partnerships and connections. The question of the ownership of educational big data needs to be located in relation to these forms of capital and the networks where they circulate.

Who can ‘afford’ educational big data?
Not all schools, colleges or universities can necessarily afford to purchase a learning analytics or adaptive software platform—or to partner with platform providers. This risks certain wealthy institutions being able to benefit from real-time insights into learning practices and processes that such analytics afford, while other institutions will remain restricted to the more bureaucratic analysis of temporally discrete assessment events.

Can educational big data provide a real-time alternative to temporally discrete assessment techniques and bureaucratic policymaking?
Policy makers in recent years have depended on large-scale assessment data to help inform decision-making and drive reform—particularly the use of large-scale international comparative data such as the datasets collected by OECD testing instruments. Educational data mining and analytics can provide a real-time stream of data about learners’ progress, as well as automated real-time personalization of learning content appropriate to each individual learner. To some extent this changes the speed and scale of educational change—removing the need for cumbersome assessment and country comparison and distancing the requirement for policy intervention. But it potentially places commercial organizations (such as the global education business Pearson) in a powerful new role in education, with the capacity to predict outcomes and shape educational practices at timescales that government intervention cannot emulate.

Is there algorithmic accountability in educational analytics?
Learning analytics is focused on the optimization of learning, and one of its main claims is the early identification of students at risk of failure. What happens if, despite being enrolled on a learning analytics system that has personalized the learning experience for the individual, that individual still fails? Will the teacher and institution be accountable, or can the machine learning algorithms (and the platform organizations that designed them) be held accountable for their failure? Simon Buckingham Shum has written about the need to address algorithmic accountability in the learning analytics field, and noted that 'making the algorithms underpinning analytics intelligible' is one way of at least making them more transparent and less opaque.

Is student data replacing student voice?
Data are sometimes said to ‘speak for themselves,’ but education has a long history of encouraging learners to speak for themselves too. Is the history of pupil voice initiatives being overwritten by the potential of pupil data, which proposes a more reliable, accurate, objective and impartial view of the individual’s learning process unencumbered by personal bias? Or can student data become the basis for a data-dialogic form of student voice, one in which teachers and their students are able to develop meaningful and caring relationships through mutual understanding and discussion of student data?

Do teachers need ‘data literacy’?
Many teachers and school leaders possess little detailed understanding of the data systems they are using, or are required to use. As glossy educational technologies like ClassDojo are taken up enthusiastically by millions of teachers worldwide, might it be useful to ensure that teachers can ask important questions about data ethics, data privacy and data protection, and are able to engage with educational data in an informed way? Despite calls in the US to make data literacy a focus of teachers' pre-service training, there is little sign that the provision of data literacy education for educational practitioners is being developed in the UK.

What ethical frameworks are required for educational big data analysis and data science studies?
The UK government recently published an ethical framework for policymakers for use when planning data science projects. Similar ethical frameworks to guide the design of educational big data platforms and education data science projects are necessary.

Some of these questions clearly need more work, but they make clear, I think, the need to critically interrogate big data in education.


Artificial intelligence, cognitive systems and biosocial spaces of education

By Ben Williamson

Image: telephone cable model of corpus callosum by Brewbooks

Recently, new ideas about ‘artificial intelligence’ and ‘cognitive computing systems’ in education have been advanced by major computing and educational businesses. How might these ideas and the technical developments and business ambitions behind them impact on educational institutions such as schools, and on the role of human actors such as teachers and learners, in the near future? More particularly, what understandings of the human teacher and the learner are assumed in the development of such systems, and with what potential effects?

The focus here is on the education business Pearson, which published a report entitled Intelligence Unleashed: An argument for AI in education in February 2016, and the computing company IBM, which launched Personalized Education: from curriculum to career with cognitive systems in May 2016. Pearson's interest in AI reflects its growing profile as an organization using advanced forms of data analytics to measure educational institutions and practices, while IBM's report on cognitive systems makes a case for extending its existing R&D around cognitive computing into the education sector.

AI has been the subject of serious concern recently, with warnings from high-profile figures including Stephen Hawking, Bill Gates and Elon Musk, while awareness about cognitive computing has been fuelled by widespread media coverage of Google's AlphaGo system, which beat one of the world's leading Go players back in March. Commenting on these recent events, the philosopher Luciano Floridi has noted that contemporary AI and cognitive computing cannot, however, be characterized in monolithic terms as some kind of 'ultraintelligence'; instead, it is manifesting itself in far more mundane ways through an 'infosphere' of 'ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster':

The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge. Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analysing their own output as input for the next operations.

Contemporary algorithmic forms of AI that learn from the vast memory-banks of big data do not constitute either an apocalyptic or benevolent future of AI or cognitive systems, but, for Floridi, reflect human ambitions and problems.

So why are companies like Pearson and IBM advancing claims for their benefits in education, and to address which ambitions and problems? Extending from my recent work on both Pearson’s digital methods and IBM’s cognitive systems R&D programs (all part of an effort to map out the emerging field of ‘educational data science’), I suggest these developments can be understood in terms of growing recognition of the connections between computer technologies, social environments, and embodied human experience.

Pearson intelligence
Pearson has been promoting itself as a new source of expertise in educational big data analysis since establishing its Center for Digital Data, Analytics and Adaptive Learning in 2012. Its ambitions in the direction of educational data analytics are to make sense of the masses of data becoming available as educational activities increasingly occur via digital media, and to use these data and patterns extracted from them to derive new theories of learning processes, cognitive development, and non-academic social and emotional learning. It has also begun publishing reports under its 'Open Ideas' theme, which aim to make its research available publicly. It is under the Open Ideas banner that Pearson has published Intelligence Unleashed (authored by Rose Luckin and Wayne Holmes of the London Knowledge Lab at University College London).

Pearson’s report proposes that artificial intelligence can transform teaching and learning. Its authors state that:

Although some might find the concept of AIEd alienating, the algorithms and models that comprise AIEd form the basis of an essentially human endeavour. AIEd offers the possibility of learning that is more personalised, flexible, inclusive, and engaging. It can provide teachers and learners with the tools that allow us to respond not only to what is being learnt, but also to how it is being learnt, and how the student feels.

Rather than seeking to construct a monolithic AI system, Pearson is proposing that a ‘marketplace’ of thousands of AI components will eventually combine to ‘enable system-level data collation and analysis that help us learn much more about learning itself and how to improve it.’

Underpinning its vision of AIEd is a particular concern with 'the most significant social challenge that AI has already brought – the steady replacement of jobs and occupations with clever algorithms and robots':

It is our view that this phenomena provides a new innovation imperative in education, which can be expressed simply: as humans live and work alongside increasingly smart machines, our education systems will need to achieve at levels that none have managed to date.

In other words, in the Pearson view, a marketplace of AI applications will both be able to provide detailed real-time data analytics on education and learning, and also lead to far greater levels of achievement by both individuals and whole education systems. Its vision is of augmented educational systems, spaces and practices where humans and machines work symbiotically.

In technical terms, what Pearson terms AIEd relies on a particular form of AI. This is not the AI with sentience of sci-fi imaginings, but AI reimagined through the lens of big data and data analytics techniques–the ‘ordinary artefacts’ of machine learning systems. Notably, the report refers to advances in machine learning algorithms, computer modelling, statistics, artificial neural networks and neuroscience, since ‘AI involves computer software that has been programmed to interact with the world in ways normally requiring human intelligence. This means that AI depends both on knowledge about the world, and algorithms to intelligently process that knowledge.’

In order to do so, and importantly, Pearson’s brand of AIEd requires the development of sophisticated computational models. These include models of the learner, models of effective pedagogy, and models of the knowledge domain to be learned, as well as models that represent the social, emotional, and meta-cognitive aspects of learning:

Learner models are ways of representing the interactions that happen between the computer and the learner. The interactions represented in the model (such as the student’s current activities, previous achievements, emotional state, and whether or not they followed feedback) can then be used by the domain and pedagogy components of an AIEd programme to infer the success of the learner (and teacher). The domain and pedagogy models also use this information to determine the next most appropriate interaction (learning materials or learning activities). Importantly, the learner’s activities are continually fed back into the learner model, making the model richer and more complete, and the system ‘smarter’.
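The feedback loop the report describes, interactions enriching a learner model which the pedagogy component then consults to choose the next activity, can be sketched minimally. Everything below is invented for illustration: real AIEd systems would use far richer representations than a simple success-rate threshold, and the class and rule names are my own.

```python
# Minimal, hypothetical sketch of the learner-model loop: interactions are
# continually fed back into the model, and a stand-in 'pedagogy model'
# uses the model's state to select the next learning activity.

class LearnerModel:
    def __init__(self):
        self.interactions = []  # growing record of learner activity

    def record(self, activity, success, followed_feedback):
        # Each interaction is fed back in, making the model 'richer'.
        self.interactions.append({"activity": activity,
                                  "success": success,
                                  "followed_feedback": followed_feedback})

    def success_rate(self):
        if not self.interactions:
            return 0.0
        return sum(i["success"] for i in self.interactions) / len(self.interactions)

def next_activity(model):
    """Toy pedagogy model: remediate below 50% success, else advance."""
    return "remedial_practice" if model.success_rate() < 0.5 else "new_material"

model = LearnerModel()
model.record("fractions_quiz", success=False, followed_feedback=True)
nxt1 = next_activity(model)   # one failure so far -> remediate
model.record("fractions_retry", success=True, followed_feedback=True)
model.record("fractions_test", success=True, followed_feedback=False)
nxt2 = next_activity(model)   # 2/3 success -> advance
print(nxt1, nxt2)
```

Even at this scale the critical point is visible: the learner is 'known' to the system only through whatever the model chooses to record, and the pedagogy rule silently encodes a theory of what learning progress looks like.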

Based on the combination of these models with data analytics and machine learning processes, Pearson’s proposed vision of AIEd includes the development of Intelligent Tutoring Systems (ITS) which ‘use AI techniques to simulate one-to-one human tutoring, delivering learning activities best matched to a learner’s cognitive needs and providing targeted and timely feedback, all without an individual teacher having to be present.’ It also promises intelligent support for collaborative working—such as AI agents that can integrate into teamwork—and intelligent virtual reality environments that simulate authentic contexts for learning tasks. Its vision is of teachers supported by their own AIEd teaching assistants and AIEd-led professional development.

These techniques and applications are seen as contributors to a whole-scale reform of education systems:

Once we put the tools of AIEd in place as described above, we will have new and powerful ways to measure system level achievement. … AIEd will be able to provide analysis about teaching and learning at every level, whether that is a particular subject, class, college, district, or country. This will mean that evidence about country performance will be available from AIEd analysis, calling into question the need for international testing.

In other words, Pearson is proposing to bypass the cumbersome bureaucracy of mass standardized testing and assessment, and instead focus on real-time intelligent analytics conducted up-close within the pedagogic routines of the AI-enhanced classroom. This will rely on a detailed and intimate analytics of individual performance, which will be gained from detailed modelling of learners through their data.

Pearson’s vision of intelligent, personalized learning environments is therefore based on its new understandings of ‘how to blend human and machine intelligence effectively.’ Specific kinds of understandings of human intelligence and cognition are assumed here. As Pearson’s AIEd report acknowledges,

AIEd will continue to leverage new insights in disciplines such as psychology and educational neuroscience to better understand the learning process, and so build more accurate models that are better able to predict – and influence – a learner’s progress, motivation, and perseverance. … Increased collaboration between education neuroscience and AIEd developers will provide technologies that can offer better information, and support specific learning difficulties that might be standing in the way of a child’s progress.

These points highlight how the design of AIEd systems will embody neuroscientific insights into learning processes–insights that will then be translated into models that can be used to predict and intervene in individuals’ learning processes. This reflects the recent and growing interest in neuroscience in education, and the adoption of neuroscientific insights for ‘brain-targeted‘ teaching and learning. Such practices target the brain for educational intervention based on neuroscientific knowledge. IBM has taken inspiration from neuroscience even further in its cognitive computing systems for education.

IBM cognition
One of the world’s most successful computing companies, IBM has recently turned its attention to educational data analytics. According to its paper on ‘the future of learning’:

Analytics translates volumes of data into insights for policy makers, administrators and educators alike so they can identify which academic practices and programs work best and where investments should be directed. By turning masses of data into useful intelligence, educational institutions can create smarter schools for now and for the future.

An emerging development in IBM’s data analytic approach to education is ‘cognitive learning systems’ based on neuroscientific methodological innovations, technical developments in brain-inspired computing, and artificial neural networks algorithms. Over the last decade, IBM has positioned itself as a dominant research centre in cognitive computing, with huge teams of engineers and computer scientists working on both basic and applied research in this area. Its own ‘Brain Lab’ has provided the neuroscientific insight for these developments, leading to R&D in a variety of areas. Its work has proceeded through neuroscience and neuroanatomy to supercomputing, to a new computer architecture, to a new programming language, to artificial neural network algorithms, and finally cognitive system applications, all underpinned by its understanding of the human brain’s synaptic structures and functions.

IBM itself is not seeking to build an artificial brain but a computer inspired by the brain and certain neural structures and functions. It claims that cognitive computing aims to 'emulate the human brain's abilities for perception, action and cognition,' and has dedicated extensive R&D to the production of 'neurosynaptic brain chips' and scalable 'neuromorphic systems,' as well as its cognitive supercomputing system Watson. Based on this program of work, IBM defines cognitive systems as 'a category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally to extend and magnify human expertise and cognition.'

To apply its cognitive computing applications in education, IBM has developed a specific Cognitive Computing for Education program. Its program director has presented its intelligent, interactive systems that combine neuroscientific insights into cognitive learning processes with neurotechnologies that can:

learn and interact with humans in more natural ways. At the same time, advances in neuroscience, driven in part by progress in using supercomputers to model aspects of the brain … promise to bring us closer to a deeper understanding of some cognitive processes such as learning. At the intersection of cognitive neuroscience and cognitive computing lies an extraordinary opportunity … to refine cognitive theories of learning as well as derive new principles that should guide how learning content should be structured when using cognitive computing based technologies.

The prototype innovations developed by the program include automated 'cognitive learning content', 'cognitive tutors' and 'cognitive assistants for learning' that can understand the learner's needs and 'provide constant, patient, endless support and tuition personalized for the user.' IBM has also developed an application called Codename: Watson Teacher Advisor, which is designed to observe, interpret and evaluate information in order to make informed decisions, and so provide guidance and mentorship to help teachers improve their teaching.

IBM’s latest report on cognitive systems in education proposes that ‘deeply immersive interactive experiences with intelligent tutoring systems can transform how we learn,’ ultimately leading to the ‘utopia of personalized learning’:

Until recently, computing was programmable – based around human defined inputs, instructions (code) and outputs. Cognitive systems are in a wholly different paradigm of systems that understand, reason and learn. In short, systems that think. What could this mean for the educators? We see cognitive systems as being able to extend the capabilities of educators by providing deep domain insights and expert assistance through the provision of information in a timely, natural and usable way. These systems will play the role of an assistant, which is complementary to and not a substitute for the art and craft of teaching. At the heart of cognitive systems are advanced analytic capabilities. In particular, cognitive systems aim to answer the questions: ‘What will happen?’ and ‘What should I do?’

Rather than being hard-programmed, cognitive computing systems are designed like the brain to learn from experience and adapt to environmental stimuli. Thus, instead of seeking to displace the teacher, IBM sees cognitive systems as optimizing and enhancing the role of the teacher, as a kind of cognitive prosthetic or machinic extension of human qualities. This is part of a historical narrative about human-computer hybridity that IBM has wrapped around its cognitive computing R&D:

Across industries and professions we believe there will be an increasing marriage of man and machine that will be complementary in nature. This man-plus-machine process started with the first industrial revolution, and today we’re merely at a different point on that continuum. At IBM, we subscribe to the view that man plus machine is greater than either on their own.

As such, for IBM,

We believe technology will help educators to improve student outcomes, but must be applied in context and under the auspices of a ‘caring human’. The teacher-to-system relationship does not, in our view, lead to a dystopian future in which the teacher plays second fiddle to an algorithm.

The promise of cognitive computing for IBM is not just of more ‘natural systems’ with ‘human qualities,’ but a fundamental reimagining of the ‘next generation of human cognition, in which we think and reason in new and powerful ways,’ as claimed in its white paper ‘Computing, cognition and the future of knowing’:

It’s true that cognitive systems are machines that are inspired by the human brain. But it’s also true that these machines will inspire the human brain, increase our capacity for reason and rewire the ways in which we learn.

A recursive relationship between machine cognition and human cognition is assumed in this statement. It sees cognitive systems as both brain-inspired and brain-inspiring, both modelled on the brain and remoulding the brain through interacting with users. The ‘caring human’ teacher mentioned in its report above is one whose capacities are not displaced by algorithms, but are algorithmically augmented and extended. Similarly, the student enrolled into a cognitive learning system is also part of a hybrid system. Perhaps the clearest illustration from IBM of how cognitive systems will penetrate into education systems is its vision of a ‘cognitive classroom.’ This is a ‘classroom that will learn you’ through constant and symbiotic interaction between cognizing human subjects and nonhuman cognitive systems designed according to a model of the human brain.

Biosocial spaces
Some of the claims in these reports from Pearson and IBM may sound far-fetched and hyperbolic. It’s worth noting, however, that most of the technical developments underpinning them are already part of cutting-edge R&D in both the computing and neuroscience sectors. Two recent ‘foresight’ reports produced by the Human Brain Project document many of these developments and their implications. One, Future Neuroscience, details attempts to map the human brain, and ultimately understand it, through ‘big science’ techniques of data analysis and brain simulation. The other, Future Computing and Robotics, focuses on the implications of ‘machine intelligence,’ ‘human-machine integration,’ and other neurocomputational technologies that use the brain as inspiration; it states:

The power of these innovations has been increased by the development of data mining and machine learning techniques, that give computers the capacity to learn from their ‘experience’ without being specifically programmed, constructing algorithms, making predictions, and then improving those predictions by learning from their results, either in supervised or unsupervised regimes. In these and other ways, developments in ICT and robotics are reshaping human interactions, in economic activities, in consumption and in our most intimate relations.
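
The learning-from-‘experience’ described in the report can be illustrated with a minimal supervised example: a perceptron adjusts its weights from labelled examples rather than being explicitly programmed with rules. This is an illustrative sketch only, not code from the Human Brain Project report; the data and function names are invented.

```python
# Minimal illustration of supervised machine learning: a perceptron
# learns weights from labelled examples ('experience') instead of
# being explicitly programmed with rules.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights and a bias for two binary inputs from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - prediction  # improve predictions by learning from results
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# 'Experience': labelled examples of the logical OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, the model reproduces the OR function without ever having been given its rule, which is the narrow technical sense in which such systems ‘learn without being specifically programmed.’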

These reports are the product of interdisciplinary research between sociologists and neuroscientists, and are part of a growing social scientific interest in ‘biosocial’ dynamics between biology and social environments.

Biosocial studies emphasize how social environments are now understood to ‘get under the skin’ and to actually influence the biological functions of the body. In a recent introduction to a special issue on ‘biosocial matters,’ it was claimed that a key insight coming out of social scientific attention to biology is ‘the increasing understanding that the brain is a multiply connected device profoundly shaped by social influences,’ and that ‘the body bears the inscriptions of its socially and materially situated milieu.’ Concepts such as ‘neuroplasticity’ and ‘epigenetics’ are key here. Simply put, neuroplasticity recognizes that the brain is constantly adapting to external stimuli and social environments, while epigenetics acknowledges that social experience modulates the body at the genetic level. According to such work, the body and the brain are influenced by the structures and environments that constitute society, but are also the source for the creation of new kinds of structures and environments which will in turn (and recursively) shape life in the future.

As environments become increasingly inhabited by machine intelligence–albeit the machine intelligence of ordinary artefacts rather than superintelligences–then computer technologies need to be considered as part of the biosocial mix. Indeed, IBM’s R&D in cognitive computing fundamentally depends on its own neuroscientific findings about neuroplasticity, and the translation of biological neural networks used in computational neuroscience into the artificial neural networks used in cognitive computing and AI research.

Media theorist N Katherine Hayles has mobilized a form of biosocial inquiry in her recent work on ‘nonconscious cognitive systems’ which increasingly permeate information and communication networks and devices. For her, cognition in some instances may be located in technical systems rather than in the mental world of an individual participant, ‘an important change from a model of cognition centered in the self.’ Her non-anthropocentric view of ‘cognition everywhere’ suggests that cognitive computing devices can employ learning processes that are modelled like those of embodied biological organisms, using their experiences to learn, achieve skills and interact with people. Therefore, when nonconscious cognitive devices penetrate into human systems, they can then potentially modify the dynamics of human behaviours through changing brain morphology and functioning. The potential of nonhuman neurocomputational techniques based on the brain, then, is to become legible as traces in the neurological circuitry of the human brain itself, and to impress themselves on the cerebral lives of both individuals and wider populations.

Biosocial explanations are beginning to be applied to education and learning. Jessica Pykett and Tom Disney have shown, for example, that:

an emphasis on the biosocial determinants of children’s learning, educational outcomes and life chances resonates with broader calls to develop hybrid accounts of social life which give adequate attention to the biological, the nonhuman, the technological, the material, … the neural and the epigenetic aspects of ‘life itself.’

In addition, Deborah Youdell’s new work on biosocial education proposes that such conceptualizations might change our existing understandings of processes such as learning:

Learning is an interaction between a person and a thing; it is embedded in ways of being and understanding that are shared across communities; it is influenced by the social and cultural and economic conditions of lives; it involves changes to how genes are expressed in brain cells because it changes the histones that store DNA; it means that certain parts of the brain are provoked into electrochemical activity; and it relies on a person being recognised by others, and recognising themselves, as someone who learns. … These might be interacting with each other – shared meanings, gene expression, electrochemical signals, the everyday of the classroom, and a sense of self are actually all part of one phenomenon that is learning.

We can begin to understand what Pearson and IBM are proposing in the light of these emerging biosocial explanations and their application to emerging forms of neurocomputation. To some extent, Pearson and IBM are mobilizing biosocial explanations in the development of their own techniques and applications. Models of neural plasticity and epigenetics emerging from neuroscience have inspired the development of cognitive computing systems, which are then used to activate environments such as Pearson’s AIEd intelligent learning environments or IBM’s cognitive classroom. These are reconfigured as neurocomputationally ‘brainy spaces’ in which learners are targeted for cognitive enhancement and neuro-optimization through interacting with other nonconscious cognitive agents and intelligent environments.

In brief, the biosocial process assumed by Pearson and IBM proceeds something like this:

> Neurotechnologies of brain imaging and simulation lead to new models and understandings of brain functioning and learning processes
> Models of brain functions are encoded in neural network algorithms and other cognitive and neurocomputational techniques
> Neurocomputational techniques are built-in to AIEd and cognitive systems applications for education
> AIEd and cognitive systems are embedded into the social environment of education institutions as ‘brain-targeted’ learning applications
> Educational environments are transformed into neuro-inspired, computer-augmented ‘brainy spaces’
> The brainy space of the educational environment interacts with human actors, getting ‘under the skin’ by becoming encoded in the embodied human learning brain
> Human brain functions are augmented, extended and optimized by machine intelligences

In this way, brain-based machine intelligences are proposed to meet the human brain, and, based on principles of neuroplasticity and epigenetics, to influence brain morphology and cognitive functioning. The artificially intelligent, cognitive educational environment is, in other words, translated into a hybrid, algorithmically-activated biosocial space in the visions of Pearson and IBM. Elsewhere, I’ve articulated the idea of brain/code/space–based on geographical work on technologically-mediated environments–to describe environments that possess brain-like functions of learning and cognition performed by algorithmic processes. Pearson and IBM are proposing to turn educational environments into brain/code/spaces that are both brain-based and brain-targeted.

While we need to be cautious of the extent to which these developments might (or might not) actually occur (or be desirable), it is important to analyse them as part of a growing interest in how technologically-enhanced social environments based on the brain might interweave with the neurobiological mechanisms that underlie processes of learning and development. In other words, Pearson’s interest in AIEd and IBM’s application of cognitive systems to education need to be interpreted as biosocial matters of significant contemporary concern.

Of course, as Neil Selwyn cautions, technological changes in education cannot be assumed to be inevitable or wholly beneficial. There are commercial and economic drivers behind them that do not necessarily translate smoothly into education, and most ‘technical fixes’ fail to have the impact intended by their designers and sponsors. A fuller analysis of Pearson’s aims for AIEd or IBM’s ambitions for cognitive systems in education would therefore need to acknowledge the business plans that animate them, and critically consider the visions of the future of education they are seeking to catalyse.

More pressingly, it would need to develop detailed insights into the ways that the brain is being mapped, known, understood, modelled and simulated in institutional contexts such as IBM, or how neuroscientific insights and models are being embodied in the kinds of AI applications that Pearson is promoting. How IBM and Pearson conceive of the brain is deeply consequential to the AI and cognitive systems they are developing, and to how those systems then might interact with human actors and possibly influence the cognition of those people by shaping the neural architectures of their brains. Are these models adequate approximations of human mental and cognitive functioning? Or do they treat the brain and cognition in reductive terms as a kind of computational system that can be debugged, rewired and algorithmically optimized, in ways which reproduce the long-standing tendency by technologists and scientists to represent mental life as an information-processing computer?

Just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. … Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books … speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure. The information processing metaphor of human intelligence now dominates human thinking, both on the street and in the sciences.

To what extent, for example, are biological neural networks conflated with (or reduced to) artificial neural networks as findings and insights from computational neuroscience are translated into applied AI and cognitive systems R&D programs? A kind of biosocial enthusiasm about the plasticity of the brain and epigenetic modulation is animating the technological ambitions of Pearson and IBM, one that may be led more by computational understandings of the brain as an adaptive information-processing device than by understandings of it as a culturally and socially situated organ. Future research in this direction would need to interrogate the specific forms of neuro knowledge production they draw upon, as well as engage with social scientific insights into how environments really work to shape human embodied experience (and vice versa).

The translation of educational environments into biosocial spaces that are technologically enhanced by new forms of AI, cognitive systems and other neurocomputational applications could have significant effects on teachers and learners right down to biological and neurological levels of life itself. As Luciano Floridi has noted, these are not forms of ‘ultraintelligence’ but ‘ordinary artefacts’ that can outperform us, and that are designed for specific purposes–but could always be made otherwise, for better purposes:

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means…. We should make AI’s stupidity work for human intelligence. … And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity.

The glossy imaginaries of AIEd and cognitive systems in education projected by Pearson and IBM reveal a complex intersection of technological and scientific developments–combined with business ambitions and future visions–that require detailed examination as biosocial matters of concern for the future of education.



Ben Williamson

In a new article published in Information, Communication & Society I aim to make some sense of how machine learning algorithms and new forms of ‘brain-inspired’ computing are being imagined for use in education. In particular, the article examines IBM’s ‘Smarter Education’ programme, part of its wider ‘Smarter Cities’ agenda, focusing on its learning analytics applications (based on machine learning algorithms) and cognitive computing developments for education (which take inspiration from neuroscience for the design of brain-like neural network algorithms and neurocomputational devices). Together, these developments constitute the emergence of ‘learning algorithms’ that are responsive, adaptive and appear to possess some degree of sentience and cognition.

The article is part of a forthcoming special issue of the journal on ‘the social power of algorithms’ edited by David Beer, and it’s really great to be in the company of other papers by Daniel Neyland & Norma Mollers, Taina Bucher, Bernard Rieder, and Rob Kitchin. It was Rob Kitchin’s work (which he presented at the first Code Acts in Education seminar in 2014) that originally got me interested in ideas about smart ‘programmable cities’–which I’ve taken up to explore ideas about education in smart cities–and in my article I’ve drawn on the concept of ‘code/space’ he developed with Martin Dodge. My starting place is that recently urban environments have been reimagined as ‘smart cities of the future’ with the computational capacity to monitor, learn about, and adapt to the people that inhabit them. In other words, smart cities are themselves ‘learning environments.’ What does it mean for urban space to learn? For IBM, the answer lies in neuroscience, and particularly in a synthesis of brain science and computer science innovations–both areas in which it has been significantly active, particularly in relation to the field of cognitive computing. IBM’s imaginary of the future smart city is one in which the environment itself is envisaged as being a ‘cognitive environment’–with schools as one such kind of space, as illustrated by its ideas for a ‘classroom that will learn you.’

In the article I explore the relationship between learning algorithms, neuroscience and the new learning spaces of the city by combining the notion of programmable code/space with ideas about the ‘learning brain’ to suggest that new kinds of ‘brain/code/spaces’ are being developed where the environment itself is imagined to possess brain-like functions of learning and cognition performed by algorithmic processes. I take IBM’s Smarter Education vision as an exemplar of its wider ambitions to make smart cities into highly-coded brainy spaces that are intended to supplement, augment and even optimize human cognition too.

In other words, IBM’s vision for Smarter Education is diagrammatic of its plans for ‘cognitive cities’ that are configured for advanced mental processing–and that rely on neuro-technological renderings of human brain functioning. The learning algorithms of learning analytics and cognitive computing applications imagined by IBM contain particular neuroscientific models of learning processes. Its glossy imaginary of Smarter Education acts as a seemingly desirable model not just for the future of schools in software-enabled urban environments, but as a diagram for future cities that are to be treated as learning environments and enacted by increasingly cognitive forms of computing technology.

The term brain/code/space registers how the learning algorithms of data analytics and cognitive computing are weaving constitutively into the functioning and experience of smart cities, including but not limited to the cognitive classrooms of IBM’s imagined smarter education environments. The brain/code/spaces of IBM’s smart cognitive classrooms are built around models of the brain that are encoded in the functioning of learning algorithms and inserted into the pedagogic space of the classroom. IBM’s imaginary of the brain/code/spaces of such cognitive learning environments is one instantiation of a new kind of urban space in which neuroscientific claims about brain plasticity are built in to the learning algorithms that constitute the functioning and experience of the environment itself. The notion of brain/code/space articulates a novel neurocomputational biopolitics in which brain functions are transcoded into data, and then codified into nonconscious cognitive learning algorithms and applications that are designed to augment human cognition. I suggest that IBM’s imaginary of Smarter Education is a kind of computational neurofuture-in-the-making, one that illustrates how the neuro-technological diagrammatization of the human ‘learning brain’ is being written in to the functioning of smart urban space through the design of learning algorithms.

The full paper, ‘Computing brains: learning algorithms and neurocomputation in the smart city,’ is available open access.


Intimate analytics

Ben Williamson

In April 2016 the Education Endowment Foundation launched the Families of Schools Database, a searchable database that allows any school in England to be compared with statistically-similar institutions. At about the same time, the Learning Analytics and Knowledge 2016 conference was taking place in Edinburgh, focused on the latest technical developments and philosophical and ethical implications of data mining learners and ‘algorithmic accountability.’ The current development of school comparison websites like the Families of Schools Database and the rapid growth of the learning analytics field point to the increasingly fine-grained, detailed and close-up nature of educational data collection, calculation and circulation–that is, they offer a kind of numerical and ‘intimate’ analytics of education.

Intimate datascapes
One way of approaching these school comparison databases and learning analytics platforms is through Kristin Asdal’s notion of ‘accounting intimacy.’ According to Asdal, practices of calculation are increasingly moving away from bureaucratic practices enacted in distant ‘centres of calculation’ to much more ‘intimate’ calculative practices that are enacted in situ, close to the action they measure. Intimacy also implies close relationship, and practices of calculative or accounting intimacy can also be understood in terms of how numbers and numerical presentations of data can be used to build intimate relationships between different actors.

Radhika Gorur has adapted Asdal’s ideas to suggest that more ‘intimate accounting’ is increasingly occurring in education. Drawing on the example of the school comparison site MySchool in Australia, she argues that:

The public, especially parents, was exhorted to make itself familiar–intimate–with the school by studying the wealth of detail about each school that was on My School. The idea was that, armed with intimate knowledge of their child’s school, parents could exert pressure on schools to perform well and get the best outcomes for their children. Not only did My School become a technology through which the government entered intimate spaces of schools, schools themselves entered intimate spaces of living rooms and kitchens through discussions between parents.

Through these techniques, schools could become available for the intimate scrutiny of the government as well as by parents.

The Families of Schools Database, like Australia’s MySchool, involves schools in providing highly intimate details—in the form of numbers—that can then be presented to the general public. These public databases allow the school to be known and discussed, as Gorur argues, in the intimate spaces of the home—as well as involving school leaders in the intimate accounting and disclosure of their institution’s performance according to various criteria. One aim of the Families of Schools Database is to enable statistically-similar schools to identify each other and then collaborate to overcome shared problems. An intimate knowledge of other institutions is required to facilitate such collaboration (though it might also motivate competition). While school data certainly are collected together and transported to distant centres of calculation to allow the compilation of such databases, a certain demand is placed on institutions to present themselves in terms of an intimate account, and ultimately to share that account as a means towards possible collaboration with their numerical neighbours.

Following Ingmar Lippert we might say that such practices of intimate accounting configure the school environment as a ‘datascape,’ one whose existence in organizational reality is achieved through the calculative practices that make it ‘accountable.’ By configuring the school environment as a ‘datascape,’ as Lippert argues, ‘reality is enacted’ as its intimate details are projected as a stabilized numerical account. Databases such as the Families of Schools Database might therefore be understood as intimate datascapes, where schools’ data are disclosed with the aim of building close relationships with parents and other institutions, whilst also becoming more visible to government.

Algorithmic intimacy
When it comes to learning analytics, the level of intimate accounting is increased even further. With such systems comes the technological ambition to know the microscopically intimate details of the individual learner. Major learning analytics platform providers such as Knewton claim to collect literally millions of data points about millions of users to amass huge datasets that can be used for the automatic analysis of learning progress and performance.

For Knewton, the value of big data in education specifically is that it consists of ‘data that reflects cognition’—that is, vast quantities of ‘meaningful data’ recorded during student activity ‘that can be harnessed continuously to power personalized learning for each individual.’ The collection and analysis of this ‘data that reflects cognition’ is a sophisticated technical and methodological accomplishment. As stated in documentation on the Knewton website:

The Knewton platform consolidates data science, statistics, psychometrics, content graphing, machine learning, tagging, and infrastructure in one place in order to enable personalization at massive scale. … Using advanced data science and machine learning, Knewton’s sophisticated technology identifies, on a real-time basis, each student’s strengths, weaknesses, and learning style. In this way, the Knewton platform is able to take the combined data of millions of other students to help each student learn every single concept he or she ever encounters.

The analytics methods behind Knewton include Item-Response Theory, Probabilistic Graphical Models, and Hierarchical Agglomerative Clustering, as well as ‘sophisticated algorithms to recommend the perfect activity for each student, constantly.’
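
To give a concrete sense of the first of these methods: in its simplest (Rasch, or one-parameter) form, Item-Response Theory models the probability of a student answering an item correctly as a logistic function of the gap between the student’s ability and the item’s difficulty. The sketch below is illustrative only, not Knewton’s actual implementation; the function name and example values are invented.

```python
import math

def p_correct(ability, difficulty):
    """Rasch (1PL) item response model: probability of a correct answer
    as a logistic function of ability minus item difficulty."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A student whose ability exactly matches an item's difficulty
# has a 50% chance of answering correctly
p_correct(0.0, 0.0)   # 0.5
# An easier item (lower difficulty) raises that probability
p_correct(0.0, -2.0)  # ~0.88
```

Fitting such a model to response data is what allows a platform to place students and items on a common numerical scale–the psychometric move that makes ‘data that reflects cognition’ calculable in the first place.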

What a learning analytics platform like Knewton appears to promise is a highly intimate and real-time analytics of the very cognition of the individual, mediated through particular technical methods for making the individual known and measurable. Again, as with the Families of Schools Database, it is clear that the data are being collected and transported to distant centres of calculation—namely Knewton’s vast servers—but the speed of this transportation has been accelerated massively as well as being automated. A vast new datascape of cognition–amassed methodologically according to the psychometric assumptions underlying Item-Response Theory et al–is emerging from such calculative practices.

Moreover, because Knewton’s platform is adaptive, it not only collects and analyses student data, but actively adapts to their performance so that each individual experiences a different ‘personalized’ pathway through learning content, as determined by machine learning algorithms. Such algorithms have the capacity to predict students’ probable future progress through predictive analytics processes, and then, in the form of prescriptive analytics, to personalize their access to knowledge through modularized connections that have been deemed appropriate by the algorithm. To give a sense of this, in Knewton’s documentation, it is stated that all content in the platform is:

linked by the Knewton knowledge graph, a cross-disciplinary graph of academic concepts. The knowledge graph takes into account these concepts, defined by sets of content and the relationships between those concepts. Knewton recommendations steer students on personalized and even cross-disciplinary paths on the knowledge graph towards ultimate learning objectives based on both what they know and how they learn.

The Knewton platform’s ‘knowledge graph’ treats knowledge in terms of discrete modules of content that can be linked together to produce differently connected personalized pathways.

In this sense, knowledge is treated in terms of a network of individual nodes with myriad possible lines of connection, and the Knewton platform ‘refines recommendations through network effects that harness the power of all the data collected for all students to optimize learning for each individual student.’ For Knewton, knowledge is nodal like a complex digital network, and constantly being refined as machine learning algorithms learn from observing large numbers of students engaging with it: ‘The more students who use the Knewton platform, the more refined the relationships between content and concepts and the more precise the recommendations delivered through the knowledge graph.’ In other words, Knewton is developing new kinds of intimacies between units of content and concepts, as well as identifying recommendations for students that are based on an assessment of the optimal relationship between the individual learner and the individual content item. The Knewton knowledge graph ultimately consists of networked data that reflects content, and data that reflects cognition, and it is constantly analyzing these data to find best fits, clusters, connections and relationships–or numerical intimacies in the datascape of content and cognition.
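
The nodal, prerequisite-linked treatment of knowledge described above can be sketched in a few lines: concepts are nodes, edges record prerequisite relations, and a recommendation is an unmastered concept whose prerequisites the student has already met. The graph, concept names and mastery sets below are invented for illustration, not drawn from Knewton’s platform.

```python
# Illustrative sketch of a prerequisite 'knowledge graph': each concept
# maps to the concepts that must be mastered before it.
prerequisites = {
    "counting": [],
    "addition": ["counting"],
    "multiplication": ["addition"],
    "fractions": ["multiplication"],
}

def recommend(mastered):
    """Return the concepts a student is ready to learn next: not yet
    mastered, but with every prerequisite already mastered."""
    return [
        concept
        for concept, prereqs in prerequisites.items()
        if concept not in mastered
        and all(p in mastered for p in prereqs)
    ]

recommend({"counting"})              # ['addition']
recommend({"counting", "addition"})  # ['multiplication']
```

A production system would weight these edges with response data from millions of students–the ‘network effects’ in Knewton’s description–but the underlying logic of steering personalized paths through a concept graph is of this kind.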

Real-time intimate action
What I am briefly trying to suggest here is that a kind of automated real-time intimate accounting at the level of the individual is occurring with these learning analytics platforms. Such platforms both govern learners at a distance—through transporting their data for collection and processing via data servers and storage facilities—but also up-close, intimately and immediately, through real-time adaptivity and personalized prescriptive analytics.

Whereas the Families of Schools Database and MySchool involve more intimate accounting among human actors mediated through public databases, however, the intimate action of learning analytics is algorithmic and subject to machine learning processes. The ambition of Knewton, and other learning analytics platform providers, is nothing less than an intimate account of the individual, which can then be analyzed as points in a vast networked datascape of content and cognition of others to ‘optimize learning’–and in this sense it instantiates a distinctive form of real-time intimate action that is targeted at individual improvement at the level of cognition itself.

Kristin Asdal suggests that intimate accounting involves the ways that calculative practices associated with ‘the office’ become implanted in ‘the factory’–that is, bureaucratic practices of distant data collection and calculation are displaced by practices of enumeration that are enacted much more closely to the measurable action. Schools are now increasingly involved in their own practices of institutional intimate accounting and the production of the school environment as a datascape. The proliferation of learning analytics platforms brings intimate accounting into the everyday life and learning of the individual, with algorithms (and the methodologies that underpin them) designed to provide both an intimate account of the individual–as data that reflects cognition–and to undertake intimate action in the shape of prescriptive analytics and automatically personalized learning pathways that might shape the individual as a cognizing subject.
