Education systems today produce vast quantities of information. Numerical data, including administrative records and student assessment data such as the government's National Pupil Database and Individualised Learner Record, have assumed significant power in educational policy debates nationally and globally. Vast global testing instruments and databases, such as the Programme for International Student Assessment (PISA) administered by the Organisation for Economic Co-operation and Development (OECD) and the Learning Curve Data Bank managed by the commercial education publisher Pearson, have made comparable student data into a major policy commodity. Nationally, Ofsted (the Office for Standards in Education) maintains the RAISEonline (Reporting and Analysis for Improvement through school Self-Evaluation) databank and tools that enable schools to analyse performance data in depth as part of the self-evaluation process. In the commercial domain of educational technology, ‘learning analytics’ applications that monitor student performance through constant data collection and automated analysis have become a major business, as global products such as Knewton demonstrate (although the recent closure of inBloom due to student data legislation, despite its Gates Foundation backing, is an interesting counter-development). Educational institutions are increasingly plastered with data, and governed and managed through that data.
However, while discussion about data itself has been lively, detailed research on the technical collection and use of data in education remains scarce. We know little about the data collection techniques and the database software used by different state agencies and organizations in the management, analysis and dissemination of all this data. A sense of the history of data systems in education demonstrates why it is so important to understand data as a sociotechnical accomplishment, produced through particular practices of collection, analysis and visualization. In a recent book entitled The Rise of Data in Education Systems, Martin Lawn brings together a series of studies of how educational data has evolved historically. From the spectacle and display of the great exhibitions, world’s fairs and scientific congresses of the nineteenth century, to the emergence in the late twentieth century of an international infrastructure of specialist research associations, national research centres and international organizations, the visualization and presentation of large quantities of data, the numerical information they represent, and the statistical techniques used to generate them have become powerful explanatory and persuasive devices in education systems.
Today, however, the educational system is accompanied by a ‘virtual world of data’ consisting of numerical data and an array of visualizations, diagrams, charts, tables, infographics and other forms of representation aimed at explaining educational data to an ever-widening audience of educational policymakers, professionals, and the wider public. This virtual world of educational data is a sophisticated technical and methodological accomplishment facilitated by software development. New experts, such as the software and data companies that can organize and analyze the data, have had an enormous effect on how education is seen, understood and governed. In order for policy decisions to be made and enacted, data have to be inputted, ordered, filtered and classified, tasks requiring skilled technical intermediaries, statistical experts and data brokers with the relevant know-how and the software to reconfigure the data as useful knowledge. Yet these new managers of the virtual landscape of education remain largely hidden, as Lawn rightly argues, and little attention has been given to how the governing of education systems is increasingly connected to the technical capacity of data servers, data-mining tools, and the algorithmic processes of machine learning and data analytics software.
In the second Code Acts in Education seminar, to be held at the University of Edinburgh on Friday 9 May 2014, we aim to attend to this relationship between educational data and software. What software facilitates the collection, analysis, visualization and communication of educational data? What is the ‘social life’ of educational data, and what sociotechnical practices are enacted as it moves between different statistical packages, analytics software, and modes of graphical display? How does the translation of data shape perceptions about educational institutions and practices, and with what effects? The programme of papers and discussions in the seminar aims to begin exploring such questions.
In the opening paper, ‘Governing Inspection,’ Jenny Ozga of the University of Oxford will examine the extent to which data systems frame knowledge production, distribution and use in governing schooling. The paper looks at two main areas of interaction between data and knowledge in the school inspection process: firstly, the processes surrounding the encoding of evidence in the interaction between performance data and other (embodied, enacted) sources of inspection judgement; and secondly, the role and influence of commercial companies in data provision and in training inspection teams in data analysis and interpretation.
Continuing the debate about contemporary educational data systems, Matt Finn, University of Durham, will then consider how data-based living is transforming institutions and reconfiguring people’s lived experience of the present and of possible futures. One site of these changes is the school. ‘Forging Futures in a Data-Based School’ will explore how the proliferation of data to enable judgement about learning, which promises the ability to enact ever earlier interventions, is mediating the everyday interactions and relationships of care, judgement and inspiration in education.
The second half of the seminar will focus on the emerging technologies of learning analytics and MOOCs. Simon Buckingham-Shum, Open University, will examine learning analytics, an emerging field investigating the implications for learning of faster feedback loops based on the computational analysis of digital data. He will explore how the design and deployment of such systems perpetuate different kinds of assumptions and values, and consider whether learning analytics is based on particular theories of learning or on the technocratic belief that data comes ‘theory-free.’
Sian Bayne & Jeremy Knox, University of Edinburgh, will provide the final paper, examining a range of digital artefacts produced in response to ‘E-learning and Digital Cultures’ (known as EDCMOOC), a Massive Open Online Course offered by the University of Edinburgh in partnership with Coursera. They will consider how the profusion of multimodal artefacts produced in response to the EDCMOOC constitutes a set of sociomaterial entanglements, in which human beings and technologies—including software and its algorithms—each play a part.
These papers will provide a series of perspectives on the entanglements of data and software code in educational institutions today, taking in sociotechnical practices around data use in governing schools, learning analytics, and online learning. They will also point towards important next steps for researching the sociotechnical entanglement of software code and educational institutions. If the continuing rise of data in education is to be researched and understood adequately, researchers may need to get to grips with the software used to collect and calculate ‘big data’ in education, track down the database companies doing the calculating, and seek to understand the database algorithms and visualization techniques that are becoming ever more significant and influential in how education systems are quantified, monitored, and made amenable to governmental intervention.