By Ben Williamson
If we accept that computer code, software and algorithms have now sunk right down into the “technological unconscious” of our contemporary “lifeworld,” as Nigel Thrift argues, then how might this be affecting the ways in which academic research is carried out and how “academics” identify themselves? These are important questions for Higher Education that the Code Acts in Education project seeks to explore.
As various researchers have pointed out in recent publications, the work of academics across the natural, human and social sciences is now increasingly interwoven with computer coded technologies. This is perhaps most obvious in the natural sciences, in developments such as the human genome project and its vast database. As Geoffrey Bowker has argued in Memory Practices in the Sciences, such databases are increasingly viewed as a challenge to the idea of the scientific paper, with its theoretical framework, hypothesis and long-form argumentation, as the “end result” of science.
The ideal database should according to most practitioners be theory-neutral, but should serve as a common basis for a number of scientific disciplines to progress. … In this new and expanded process of scientific archiving, data must be reusable by scientists. It is not possible simply to enshrine one’s results in a paper; the scientist must lodge her data in a database that can be easily manipulated by other scientists.
The algorithmic techniques of sorting, ordering, classification and calculation associated with computer databases have become a key part of the infrastructure underpinning contemporary big science.
However, the coding and databasing of the world does not end with big science. “Big data” are now being generated and mobilized by a variety of humanly operated as well as automated systems, as Rob Kitchin has explained. Social scientific research, the humanities, and the production of knowledge and theory across disciplines are all now affected by software code, algorithms and the data they mediate. In an article on “algorithms in the academy,” David Beer has shown that software algorithms are increasingly integrated into ordinary everyday processes of research, teaching and administration, and interwoven in the performance of academic life.
For some enthusiastic commentators, such as Chris Anderson in Wired, this is tantamount to “the end of theory”—the triumph of computers, data and quantification over disciplinary expertise:
This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.
Such claims reflect the techno-utopian fantasy that computers are able to collect neutral, unmediated, “raw data,” when the reality of course is that at every stage of their collection, storage, analysis, visualization and dissemination, data are always being mediated, framed, and re-made. Notwithstanding the “cyberbole” of such announcements, it is clear that these technologies and techniques are now increasingly governing what academic researchers do.
In research on “the redistribution of methods” in the social media environment, Noortje Marres has shown how the digitization of social research, for example, has involved a proliferation of new devices and formats for the documentation of social life. These include Twitter streams and blogs which document everyday activities, search engine analytics which can reveal massive population trends and social behaviours over time, Instagram images, social networks on Facebook, and so on. These platforms enable the routine generation of data about social life and make available new forms of social data—data that are not merely accessible to professional social scientists but that may be analyzed and interpreted by interested amateurs too.
These developments have led to both optimistic and pessimistic visions of the future of social research. On the one hand, these technologies grant us much greater empirical and analytical capacity; on the other, they seriously threaten established social research and concentrate data analysis and knowledge production in a few highly resourced research centres, including the R&D laboratories of corporate technology companies.
Instead of social scientists, the new social experts of the social media environment, it seems to some, are the “algorithmists” and data analysts of Google, Facebook and Amazon. Writing on the rise of the professional “algorithmist” who can analyze big data, Viktor Mayer-Schönberger and Kenneth Cukier write:
These new professionals would be experts in the areas of computer science, mathematics, and statistics; they would act as reviewers of big-data analyses and predictions. Algorithmists would take a vow of impartiality and confidentiality, much as accountants and certain other professionals do now. They would evaluate the selection of data sources, the choice of analytical and predictive tools, including algorithms and models, and the interpretation of results. In the event of a dispute, they would have access to the algorithms, statistical approaches, and datasets that produced a given decision.
It is notable that Facebook, for example, has a Data Science Team that according to Tom Simonite, writing on “what Facebook knows” in Technology Review, can “apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large.” The team is run by Facebook’s “in-house sociologist” who is “confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do.”
In the face of such developments utilizing complex programming and data to advance scientific understanding, then, what is the role of Higher Education professionals? Gary Hall, writing in an article titled “#MySubjectivation,” argues that corporate social media platforms such as Twitter and Facebook are now part of the change in how academics in HE create, perform and circulate research and knowledge. Extending the argument that computer coded environments and devices now play an active constitutive part in the present, Hall articulates how today’s new media are constitutive of a particular emergent “epistemic environment.” Building on the work of French philosophers Bernard Stiegler and Michel Foucault, he suggests that the epistemic environment of “traditional” academic knowledge production was based on the Romantic view of single authorship and creative genius materialized in writing, the development of long-form argumentation, and the publication of books, or what Hall terms “ready-made methods of composition, accreditation, publication and dissemination” and the “original creation of a stable, centred, indivisible and individualized, humanist, proprietary subject.”
The new social media infrastructures underpinning contemporary scholarly knowledge production, however, are reshaping the epistemic environment. This is subsequently affecting the ways in which higher education professionals think and act, and thus how they research, how they generate knowledge, and how they theorize and explain the world. Roger Burrows has argued that as academics we are now “living with the h-index” and a range of assorted metrics including citations, workload models, transparent costing data, research and teaching quality assessments, and commercial university league tables, all of them “increasingly enacted via code, software and algorithmic forms of power.” These calculative devices all add up to an increase in “quantified control” and “metricisation” in the governing of HE, and play a large part in how academics are constituted. Deborah Lupton suggests that an academic version of the “quantified self” is emerging: a professional self based on quantified measures of output and impact. As Hall states it, the emerging epistemic environment in HE – materialized in and mediated through such devices and infrastructures as laptops, tablets, apps, email, professional social networks, Twitter connections, electronic diaries, swipe cards, online databases, bibliometrics and so on – “invents us and our own knowledge work, philosophy and minds, as much as we invent it, by virtue of the way it modifies and homogenizes our thought and our behaviour through its media technologies.” Academics are becoming their data, as mediated through complex coded infrastructures and devices.
Whether we are witnessing the “end of theory” as computer coded software devices and sophisticated algorithms increasingly pervade, augment, and even automate HE practice remains an open question. Is academic work really being homogenized and manipulated by the media machines of Google and Facebook, and are disciplinary expertise and knowledge production being displaced to the “algorithmists” of private R&D labs and commercial technology firms? What kinds of changes are we experiencing, as academics and professionals, as computer code acts on Higher Education?