Ben Williamson, Jeremy Knox & Sarah Doyle
What does ‘data’ do? The second Code Acts in Education seminar in Edinburgh on 9 May 2014 focused on the ways that software, code and algorithms act on educational data. Presentations examined the interweaving of computer code with the data mobilized in educational governance and school inspection, the algorithmic classifications and assumptions behind the data collected and visualized by learning analytics platforms, and the dense relations between human actors and the codes and algorithms of online learning environments.
The powerful role of algorithms in the world is becoming an increasingly important focus for research, and one that the seminar demanded we take more seriously in educational research. Tarleton Gillespie’s article ‘The Relevance of Algorithms’ is particularly useful here. Gillespie reminds us that algorithms (and the code they’re written in) do not act alone as clean, mechanical or objective technologies, but need to be understood as both socially produced and as socially productive. Algorithms, from search engines to social networking sites, organize information according to human but increasingly automated evaluative criteria, and tangle with users’ information practices and patterns of political and cultural engagement. It is perhaps more accurate to write of ‘socioalgorithmic’ processes and practices to reflect the relational interweaving of algorithms and social worlds.
Gillespie suggests that algorithms are an important object of study for several reasons, among them that algorithms are increasingly being designed to anticipate users and make predictions about their future behaviours. As a result users are now reshaping their practices to suit the algorithms they depend on. This constructs ‘calculated publics,’ the algorithmic presentation of publics back to themselves that shapes a public’s sense of itself. Gillespie raises a series of questions about the production of calculated publics that have immediate resonance in education:
Algorithms … engage in a calculated approximation of a public through their traceable activity, then report back to them …. But behind this, we can ask, What is the gain for providers in making such characterizations, and how does that shape what they’re looking for? Who is being chosen to be measured in order to produce this representation, and who is left out of the calculation? And perhaps most importantly, how do these technologies, now not just technologies of evaluation but of representation, help to constitute and codify the publics they claim to measure, publics that would not otherwise exist except that the algorithm called them into existence?
Taking up some of Gillespie’s ideas, we can see the field of education as one in which algorithms are being mobilized to anticipate and predict the behaviours of learners, and in which educational policymakers, school leaders, teachers and pupils alike may be changing their practices to suit the algorithms. Education is becoming a massive calculated public—presented algorithmically through the collection, codification, classification and visualization of data. In this sense, as the introduction to the seminar suggested, the various practices and people that constitute education are becoming ‘machine-readable’ as the kind of data that can be processed and organized by algorithms in order to make particular kinds of choices and decisions possible. The ways in which education is now represented and understood rely on the algorithms that have called it into existence and made it intelligible.
The Code Acts in Education programme seeks to explore how software, code and algorithms are increasingly constituting educational practices, spaces and subjects. This post provides a summary of two of the presentations from the second seminar (more to follow).
Learning algorithms and analytics
Simon Buckingham Shum’s presentation was concerned with ‘how learning analytics act in education.’ He began with an anecdotal example of using Apple’s Siri, asking it to ‘find code acts in education’. Siri interpreted this as ‘code accident education’, aptly signalling the problems code might engender for this field. The ‘pervasive algorithms we carry in our pockets,’ he suggested, demonstrate how ‘code can make bad things happen in education.’
The challenges of learning analytics, he suggested, centre on the necessary embodiment of a ‘world view’. The ontological implication for this field, as for code and practices of classification in general, is that as much of the world is forgotten as is evoked. In other words, algorithms are pre-programmed to find a particular world, and it is only that world which is brought into view. When things are deconstructed and anatomised as numbers, they look very different.
Data does not ‘speak for itself’ as has sometimes been claimed: a whole host of human decisions about the collection and cleaning of data, and the choice of analysis software, go into the production of a learning analytics programme. The algorithms that enable learning analytics appear to be ‘theory-free’ but are loaded with political and epistemological assumptions, and these assumptions undergird the models of pedagogy and assessment built into different learning analytics programmes. The data visualizations produced by learning analytics—data dashboards as they’re frequently described—also act semiotically to create meanings. In other words, we need to ask: ‘What theories of learning are embedded in learning analytics?’
Nevertheless, Simon argued, learning analytics may provide productive and tangible tools for educators and learners alike ‘to see the systems they are in’. Learning analytics generate a ‘personal data cloud’ that makes the learner visible and knowable to the teacher. But as the assumptions behind the algorithms and classifications that generate such a cloud are usually hidden and invisible, Simon suggested that ‘open source’ approaches would be important in future learning analytics work. Open source learning analytics, he argued, make it clearer how learners have been algorithmically classified. He suggested this would have implications for how pupils understand their learning, arguing that we can construct ourselves differently when we encounter our digital mirror images.
Jeremy Knox presented material from his doctoral research on MOOCs, ‘multimodal profusion’ and digital literacy. The research takes a relational posthumanist and sociomaterial approach, and Jeremy argued that ‘the digital classroom’ is constantly being produced by teachers and algorithmic processes in combination.
Starting with the question ‘what really counts as digital literacy or literacies?’, Jeremy proposed that digital literacy comprises information literacy, digital citizenship, and safety and security. Digital literacy is characterised as essential for (future) survival. The importance and significance of digital literacy are held to be unquestionable because of the extent and reach of digital networks, and because of the proliferation of related developments. The ability to read and write computer code is increasingly seen as a key aspect of digital literacies. But for Jeremy it is important to acknowledge that digital literacies are social practices mediated by and mixed with processes of digital codification.
Jeremy focused on the production of digital artefacts to show the relational nature of digital literacy. The specific example was the end-of-course assignment given to students undertaking the EDCMOOC (University of Edinburgh). In this assignment, students were asked to produce a digital artefact. This digital artefact had to be a multimodal piece of work, incorporating two or more dimensions such as text, sound and image. The digital artefacts had to be designed with the specific purpose of being experienced online. In total, thousands of artefacts were submitted.
These artefacts, Jeremy argued, demonstrate how digital literacy practices are produced by humans and algorithms acting in concert. Jeremy used the example of a word cloud to show that the artefact produced is not simply a movement of text from one format to another. In fact, the algorithms work on the source text to respond to criteria such as frequency, highlighting the words most often used by using larger font or more intense colour. The end product emerges through combining elements including person and code. Material processes and symbolic representational processes work together.
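The frequency criterion described above can be sketched in code. The following is a minimal, hypothetical illustration (not any word-cloud tool’s actual implementation): it counts words, discards a few common stopwords, and scales each word’s ‘font size’ by its relative frequency, showing how the algorithm rather than the author decides which words become prominent. All function and parameter names here are invented for illustration.

```python
from collections import Counter
import re

def word_cloud_weights(text, min_font=12, max_font=48, stopwords=None):
    """Map each word in `text` to a font size proportional to its frequency.

    A sketch of the frequency criterion: the most common words receive
    the largest size, so the algorithm (not the writer) determines
    what the final artefact makes visually prominent.
    """
    stopwords = stopwords or {"the", "a", "an", "of", "and", "to", "in", "on"}
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]
    counts = Counter(words)
    if not counts:
        return {}
    top = counts.most_common(1)[0][1]  # frequency of the most common word
    # Scale linearly between min_font and max_font by relative frequency.
    return {w: min_font + (max_font - min_font) * n // top
            for w, n in counts.items()}

weights = word_cloud_weights(
    "code acts in education: code, algorithms and data act on education"
)
# 'code' and 'education' occur most often, so they receive the largest size
```

Even this toy version shows the point Jeremy makes: the output is not a neutral reformatting of the source text but an artefact co-produced by the text and the counting-and-scaling rules coded into the algorithm.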
Jeremy proposed that a relational approach might be especially fruitful in researching this field. Rather than emphasising control and mastery, a sociomaterial approach would recognise the entangled nature of the elements that work together to produce digital literacy. These kinds of ideas raise contentious questions about the nature of knowledge, the role of representation and the implications for our understanding of human learning.
The presentations demonstrated how the practices that we routinely classify as ‘learning’ or as elements of ‘education’ are, at least in part, being reshaped through socioalgorithmic processes. Algorithms and education are being fitted to each other, and education is coming to present and identify itself as a calculated public.