By Ben Williamson
One of the main ideas behind the Code Acts in Education project is that software code can ‘do’ stuff, as the first seminar on 28 January 2014 began to explore. This idea is worth some further reflection. As the hybrid progeny of a variety of social, human and technical elements, code is increasingly invested with a kind of performative social power that gives software the capacity to enact tasks, make decisions, and, in part, mediate how people see, know and do things. The apparent capacity of code and its algorithmic procedures to interweave with society, act upon people, augment knowledge, and mediate and govern people’s lives in myriad ways, especially in relation to education, is the basis for our inquiry. Before it can do such things, though, how does it even ‘know’ enough about the people it is to act upon? How does code know you? How does software see us?
A decade ago, the designers Dan O’Sullivan and Tom Igoe asked the question ‘how does the computer see us?’ The image of a hand with one finger, one eye, and two ears that they produced—a simple yet weirdly obscene finger-eye-being—is a striking reminder that technologies carry programmed assumptions and knowledge about the human beings who will use them. Computers and software interact with us as their producers have instructed and programmed them to see and identify us.
These pre-programmed forms of identification do not just emerge out of the air. Nor are they just the wild imaginings of software programmers. They are brought into being through various paradigms, theories and knowledge about human being that have been developed by various experts from across the human and social sciences, and from there propelled into public thought. As the sociologist Nikolas Rose and the philosopher Ian Hacking have argued, how we conceive, characterize and classify ourselves as ‘kinds of people’ is ultimately mediated by the authority of particular kinds of expert claims and arguments. How does this happen?
Consider, for example, the assumptions and beliefs that many educators carry with them about the students or learners in their classes. Where do these ways of thinking about learners come from? Social psychology has provided many of the conceptual models for understanding learning and learners in recent decades. Instead of the individual child of developmental psychology, the accounts provided by social psychology have reconceived the learner as an active and culturally cognitive participant in everyday experience, learning and developing through social proximity with more knowledgeable others. The result has been that many educators enter the classroom with ways of seeing, understanding and working with learners that are based on the expertise of social psychology. Recent claims from branches of neuroscience about how the brain is activated are also now being mobilized as the basis for new kinds of brain-based learning programmes.
Elsewhere, theories of human behaviour emerging from the field of behavioural economics are being mobilized as the basis for a variety of techniques, such as social marketing and ‘nudging,’ aimed at changing behaviours. These theories assume people are behaviourally malleable, defined by their capacity to be influenced and prompted, liable to becoming attached to group norms, and susceptible to learning behaviours from others through processes of social observation, modelling and imitation. The UK Government Behavioural Insights Team, often called the ‘Nudge Unit,’ applies insights from behavioural economics and psychology to public policy and services—putting a particular image of human conduct right at the centre of the state’s governing strategies.
What these examples demonstrate is that knowledge about humans is always produced via particular conceptual models from specific sites of expertise. The way in which we, as humans, come to understand, represent and address ourselves is an expert accomplishment, mediated through a whole battery of disciplinary theories and techniques that change, evolve and get overturned over time. Social psychology, neuroscience, and behavioural economics are just three prominent examples of disciplines that claim to be able to ‘see’ and ‘know’ people with particular authority (or that are at least deployed by certain actors in such a way), and on whose expert authority it is then possible to do such things as educate, govern, or act upon people.
So how does this relate to software? Does software now come with claims to be able to ‘know’ and ‘see’ us? What we need to explore here are the kinds of models or ways of thinking about people that are mobilized in the production of particular software products. I am not suggesting that software has some kind of autonomous sentience (though it can give the impression of being semi-alive), but that in its production, the originators of any software product codify particular models of human activity and interaction in the software itself.
To give some sense of this, we might want to consider a popular social networking site like Facebook. Facebook is built upon particular understandings of people as socially networked beings. The kinds of networking possibilities built into Facebook and the like tend to emphasize the idea of networks of horizontally connected friends. Facebook’s ‘people you may know’ algorithm seeks to optimize users’ network sociality by establishing a kind of algorithmic normality for social relations. These kinds of network architectures are supported by a whole panoply of concepts and theories (or in some cases fantasies and imaginings) which attempt to articulate their own ‘authoritative’ accounts of humans as socially networking creatures.
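To make this concrete, here is a deliberately simplified sketch of how a ‘people you may know’ recommendation might work. Everything in it (the toy social graph, the names, the mutual-friends scoring) is invented for illustration; Facebook’s actual system is proprietary and far more elaborate. But even this caricature shows how a model of sociality, friendship as counted horizontal connections, gets codified:

```python
# A toy social graph: each person mapped to the set of their friends.
# All names and connections here are invented for illustration.
graph = {
    "ada": {"ben", "cal"},
    "ben": {"ada", "cal", "dee"},
    "cal": {"ada", "ben", "eve"},
    "dee": {"ben"},
    "eve": {"cal"},
}

def people_you_may_know(user, graph):
    """Rank non-friends by the number of mutual friends shared with `user`."""
    scores = {}
    for friend in graph[user]:
        for candidate in graph[friend]:
            if candidate != user and candidate not in graph[user]:
                scores[candidate] = scores.get(candidate, 0) + 1
    # Most mutual friends first: an algorithmic 'normality' for social relations.
    return sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))

print(people_you_may_know("ada", graph))  # → [('dee', 1), ('eve', 1)]
```

The point is not the code itself but what it assumes: that a plausible new friend is simply whoever shares the most existing friends, which is already a codified theory of networked sociality.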
The symmetry between the rise of social networking in the domain of software and the rise of new social concepts in the domain of sociality can be detected in the recent spread of terms like ‘smart mobs,’ ‘participatory cultures,’ ‘networked learning,’ ‘crowdsourcing,’ ‘wikinomics,’ ‘educational ecosystems,’ ‘prosumption,’ the ‘social brain,’ and, of course, the ‘social network’ itself. All of these terms appear to express something natural and given about human sociality—as if the evolution of our psyches and our brains has actually demanded the social network media that now flood telecommunications infrastructures. (I am drawing here on ideas about social media and new ways of conceiving of the ‘social’ itself explored by Will Davies and in the conference ‘The New Social-ism?’ in December 2013 by Nikolas Rose, Liz Moor, Evelyn Ruppert & Noortje Marres.)
Yet it is important to remember that all of these terms are the products of particular theories about people as both individuals and social creatures. Many of these accounts draw on sociological concepts, on social psychology, and on emerging theories about the social brain from neuroscience, as well as on concepts from economics. The idea that human beings are a fundamentally social species, with a social brain embedded in complex social networks and cultural norms, is becoming the default theory of human nature, behaviour and sociality among organizations and research centres involved in influencing and governing public policy and education. The idea of the ‘social brain’ of the individual connected to others via ‘social networks’ is a particular image of the human, and of human sociality, that has been produced by various discursive and technical means by actors working with these various disciplinary ideas and styles of thinking—it is not natural and pre-given. The symmetries between social scientific and computer science models and understandings of individual and social behaviour are disciplinary, discursive and technical accomplishments.
Paradigms and ideas from these various disciplines—all of which seek to describe and explain human individuality and sociality—are now being modelled in code in ways which have the potential to ‘configure the user,’ in Steve Woolgar’s memorable phrase. The existence of in-house social and behavioural experts at places like Facebook, Google and Microsoft demonstrates the close connection between the expertise of the human and computer sciences in the coding of much contemporary software. We might say, then, that sites like Facebook, and the social and computer science expertise behind them, conceive of people in terms of social behaviours that have been classified and characterized by particular kinds of expertise and ways of thinking. If its software sees us as individual nodes in horizontal connections of social attachments, this is at least in part the result of disciplinary insights from the social and human sciences.
We might also want to consider the kinds of data mining technologies and other database-driven systems that ‘know’ us through the collection of our digital traces and byproduct data. As David Beer reports, the kind of software that can crawl, capture and scrape the web for data is becoming a powerful and largely autonomous actor in contemporary societies. Social media aggregators, algorithmic database analytics and other forms of ‘sociological software’ have the capacity to see social patterns in huge quantities of data and improve how we ‘know’ ourselves. As Tarleton Gillespie has argued in recent research on algorithms in public life, software ‘anticipates’ its users through the constant collection of their digital traces: ‘digital providers are not just providing information to users, they are also providing users to their algorithms.’
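Gillespie’s point, that providers supply users to their algorithms, can be caricatured in a few lines of code. The trace log, users and topics below are wholly invented; the sketch simply shows how raw byproduct data gets aggregated into the ‘user’ that an algorithm actually sees:

```python
from collections import Counter

# A hypothetical trace log of (user, action, topic) events a platform might record.
trace_log = [
    ("u1", "view", "sports"),
    ("u1", "click", "sports"),
    ("u1", "view", "politics"),
    ("u2", "view", "music"),
    ("u2", "click", "music"),
]

def build_profiles(log):
    """Aggregate raw digital traces into per-user topic counts.

    The resulting profile, not the embodied person, is what downstream
    recommendation algorithms are 'provided' with.
    """
    profiles = {}
    for user, _action, topic in log:
        profiles.setdefault(user, Counter())[topic] += 1
    return profiles

profiles = build_profiles(trace_log)
print(profiles["u1"].most_common(1))  # → [('sports', 2)]
```

Everything an algorithm subsequently ‘knows’ about u1 is this Counter of traces; whatever the profile does not record, the software cannot see.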
The generation of big data sets from all these digital traces is now being used as a way of knowing individuals and the wider population in both the domains of social media and government. The think tank Demos has established a new research centre dedicated to ‘social media science’ which aims to explore the ‘datafication’ of social life and ‘see society-in-motion’ through big data:
To cope with the new kinds of data that exist, we need to use new big data techniques that can cope with them: computer systems to marshal the deluges of data, algorithms to shape and mould the data as we want and ways of visualising the data to turn complexity into sense.
These techniques make databases and their algorithms into powerful mediators of how people are seen and known. Evelyn Ruppert has argued that the database makes people visible, knowable and therefore amenable to classification and intervention. These data can then be used as the basis for ‘doing’ things to people—whether by making probabilistic recommendations for consumer items, or by ‘personalizing’ state services. Increasingly, it seems as though people are to be known and remembered through their data traces rather than through their embodied lives. The ‘knowers’ in what Nigel Thrift has termed contemporary ‘knowing capitalism’ in this sense are social media and database software. To put it bluntly, as Geoffrey Bowker argues, ‘if you are not data, you do not exist’—you are invisible and unknown to the organizations and agencies that, through software-mediated means, classify and govern so much of contemporary existence.
Software now sees and knows us as particular ‘kinds of people.’ These are not universal persons or ‘natural kinds’ but hybrid products of digital data and theories of human individuality and sociality, all enacted by software code: a ‘new algorithmic identity,’ as John Cheney-Lippold terms it, inferred from calculations on our digital traces. Software sees and knows us as hybrids made up of the data traces of our activities and interactions, as well as of particular conceptions, theories and imaginings of what it is to be a person.
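Cheney-Lippold describes how categories such as gender or consumer type are assigned from browsing data. A minimal sketch, with entirely invented categories, topics and weights, might look like this:

```python
# An invented category model: topic weights toward hypothetical marketing categories.
CATEGORY_WEIGHTS = {
    "early_adopter": {"gadgets": 2.0, "coding": 1.5},
    "homemaker": {"recipes": 2.0, "diy": 1.0},
}

def algorithmic_identity(trace_topics):
    """Assign the best-scoring category to a list of browsed topics.

    A caricature of how a 'new algorithmic identity' is inferred from
    digital traces rather than from an embodied life.
    """
    scores = {
        category: sum(weights.get(topic, 0.0) for topic in trace_topics)
        for category, weights in CATEGORY_WEIGHTS.items()
    }
    return max(scores, key=scores.get)

print(algorithmic_identity(["gadgets", "recipes", "coding"]))  # → early_adopter
```

Whoever chooses the categories and the weights chooses who the software can see: the person is known only through what the model makes measurable.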
If we recognize that all software, and its underlying code, carries assumptions about actual or potential users, and that these models of human being can then shape how people go about seeing, knowing and doing things, then it becomes essential to do close studies unpacking how those assumptions came to be codified in the software. If as researchers we want to unpack how code acts in education—or in other social contexts—we need to work out what assumptions, models or concepts of human action, interaction and behaviour, and what ways of seeing, knowing and doing things, have been programmed into the specific software we are interrogating. We need to investigate what social codes of conduct are written into the code, and to conduct genealogical explorations of the claims that underpin the codifying of conduct in software. We need to understand how software programmers engage with the social and human sciences, and to trace the kinds of theories of human nature and sociality that they build into their products. The issue is how being known and seen, characterized and classified by code might become the basis for being governed and educated. We’ll be exploring these issues further in the 2nd Code Acts in Education seminar in Edinburgh on Friday 9 May 2014.