By Jeremy Knox
In his presentation at the second Code Acts seminar, Simon Buckingham-Shum raised important critical questions about Learning Analytics. Significantly, he suggested that Learning Analytics necessarily embodies a ‘world view’, one already approximated by the algorithms employed to ‘discover’ it. However, despite this call to acknowledge the inevitable limitations of Learning Analytics, Buckingham-Shum’s rationale was ultimately one of positive potential, in which the ‘invisible’ of education could be ‘made visible’.
It was these ocular metaphors that seemed important somehow, and stayed with me, unresolved, long after the seminar was over. Until, that is, I watched a hirsute Matthew Collings host ‘The Rules of Abstraction’ on BBC Four, a documentary about the history and motivations of abstract art. But what can art – and an obscure, often confusing form of art at that – possibly tell us about Learning Analytics? Moreover, how can something so ‘abstract’ help us to understand the ‘realities’ of the operations of code in education?
Before I indulge in a crude interpretation of art, let’s briefly return to those allusions to the visual. To claim that Learning Analytics provides a ‘view’ of the world, albeit a partial one, is to frame it as a process which reveals something else; in this case educational activities, or ‘learning’ itself, depending on how one might wish to interpret the results. This is encapsulated rather well in the notion that Learning Analytics ‘makes visible the invisible’. In other words, there is stuff going on in education that is not immediately perceptible to us, largely due to scale, distribution and duration, and Learning Analytics provides the means to ‘see’ this world. The idea of the ‘visual’ we are dealing with here is therefore a kind of access; we are able to see the unseen, and therefore gain admittance to something ‘elsewhere’. Put rather more simply, we might say that Learning Analytics is ‘visual’ in the way that a window provides a view of something else – not the entire world of course, but a particular framing of it.
So, faced with such a seemingly straightforward idea of the visual capacity of Learning Analytics, what could the ambiguous field of abstract art possibly offer? One might easily dismiss the prospect, especially after watching Collings deliver typically mystifying statements such as, ‘abstract art is not abstract’. However, I want to dwell precisely on this statement, because it has something genuinely useful to contribute to our thinking about Learning Analytics, and in helping us to understand what we mean by making something ‘visible’.
Crucial here is the difference between ‘abstract’ art and ‘realist’ or ‘naturalist’ art, the latter being interested in capturing ‘real life’, or avoiding the stylisation of the artist. Crudely put, we can say that realist or naturalist art attempts to depict things as they are. For example, such a painting might attempt to represent a panoramic view or still life scene as accurately as possible on the canvas, relative to what can be perceived by the viewer. The fidelity of the image to its object is of prime importance, and this is perhaps what remains as the most commonplace understanding of what ‘art’ is. This seems to reflect the way I described ‘viewing’ previously in relation to Learning Analytics. That is to say, Learning Analytics is considered valuable precisely because it is able to provide an accurate depiction of a ‘real world’ of education; albeit a real world that is invisible to our normal perception.
In contrast, abstract art is often considered non-representational. In other words, those famous images by artists such as Jackson Pollock or Bridget Riley are not necessarily meant to depict landscapes or domestic scenes; they are not intended to represent or correspond to something ‘out there’. This notion of non-representation must therefore be saying something different about what an image is, and what it is to ‘view’ something. It is concerned with the practices and materials of painting itself. Such a reading would say that, rather than representing a landscape or an object elsewhere, it depicts the very thing that it is: a painting. What we see on the canvas is not a truthful portrayal of something external (a hillside, a seascape), but rather an account of the internal act of producing the painting (the movement of a human being, the effect of gravity on the substance of paint, the yield of a canvas).
Before I get carried away, let’s return to that idea of abstract art not really being abstract, and to flip the distinction between ‘abstract art’ and ‘realist art’ on its head. What if we understood the beautifully detailed and accurate oil painting of some famous landscape not as ‘realist’, but as an abstraction from the real? That is to say, not as the real scenery ‘over there’, but rather as the very precise extraction of its particular hues and tones and their replication on a canvas ‘over here’. Could we then understand the ‘realist’ painting as actually an ‘abstraction’? By the very same measure, the so-called ‘abstract art’ of someone like Pollock can be understood differently. Rather than extracting and replicating the qualities of some external scene, a Pollock represents the very real processes of marking a canvas with paint. As such, it is not ‘abstract’ at all, but rather about the actual, tangible events that constitute the production of a painting: the relationships between the materials and how they interact; the influence of the artist’s movement on the paint; the role of the environment in shaping how the paint reaches the canvas. Following Collings, we might say it is ‘painting about painting’.
So what does all that have to do with Learning Analytics? I think it is this: to critique Learning Analytics simply on the grounds that it makes certain worlds visible while hiding others remains within a representational logic that diverts attention from the contingent relations involved in the process of analysis itself. Let’s unpick that a bit. In my very simplistic, and certainly inadequate, interpretation of abstract art, I have suggested that a concern for ‘realism’ actually involves processes of abstraction that attempt to recreate an absent scene, object or person on a present canvas. I think it is fairly uncontentious to say that Learning Analytics is fundamentally concerned with a similar idea of abstraction, particularly as the ‘worlds’ Learning Analytics is attempting to depict are not discernible through our individual senses alone. For example, the broad range of ‘grade point averages’ depicted in Charlie Tyson’s recent Inside Higher Ed article on ‘The Murky Middle’ would not be apparent to us as human beings without data collection and presentation methods.
However, the clever critique that I think the ‘abstract artists’ were making was that realism invalidated the actual painting itself; what was really important was the scene being depicted, and the painting was judged exclusively on its ability to represent this reality. Conversely, ‘abstract’ art might not be involved in abstraction at all, but rather the foregrounding of the very real, immanent practices of producing an image. But why should that concern Learning Analytics, which is fundamentally interested in providing a view, not an image; with ‘making visible’ the realities of educational activity so that positive intervention can take place? Well, this post is certainly not advocating that designers of Learning Analytics should suddenly surrender notions of representation and evoke their inner Jackson Pollock. Rather the point is to dwell on what the valuable lessons from abstract art might be: if we strive for Learning Analytics to be transparent, to depict with precise fidelity the real behaviours of our students, then we are working to hide the processes inherent to analysis itself.
In describing the demise of the Russian constructivist movement in abstract art, Collings alludes to the communist propaganda machine, and we are shown very realistic paintings of a smiling Stalin holding aloft a beaming blue-eyed child, similar to that shown in figure 1. This was the only ‘reality’ that Russian art was allowed to portray.
Figure 1: Russian propaganda poster showing Stalin holding a child https://farm6.staticflickr.com/5228/5637093714_933742e72a_z.jpg Creative Commons BY SA
My point in raising this example here is not just to suggest that what is produced by Learning Analytics may be controlled by wider political, economic, societal, or algorithmic influences, but also to divert attention away from the supposed reality behind the image. What I think the ‘abstract artists’ might have taught us is that the question is not whether Stalin really lifted the child (the reality ‘behind’ the image), but how and why the image itself was produced. That, I suggest, might tell us more about the state of Russia at that particular time. Indeed, we might even go as far as to say that to analyse the image in this way could tell us more than attempting to come up with a new, more accurate painting that shows us what Stalin was really doing at the time.
To drag the conversation back to Learning Analytics, my argument is that if we focus exclusively on whether the educational reality depicted by analysis is truthful or not, we remain locked in to the idea that a ‘good’ Learning Analytics is a transparent one. I think we should focus less on the results of Learning Analytics, and whether they measure up to reality, and more on the processes that have gone into the analysis itself. Understanding these processes, I contend, is just as crucial to understanding the ‘realities’ of education in our current times.
If that has all been rather abstract, then let’s get real. The traffic light system is perhaps the simplest, and possibly the most widely used example of Learning Analytics currently employed by educational institutions. A notable example is ‘Course Signals’ created by Purdue University. Crude associations with abstract shapes and colours aside, such systems essentially produce an image, derived from data collected around student behaviour. By my own tentative definition, it is ‘abstract’, or rather ‘abstracted’, in the sense that it is a representation of something else. A green light signifies the student is doing fine, a red light that they are not. There is, of course, code acting to produce such images. For example, if the algorithm detects the presence of a variable that indicates ‘semester 1 assignment has been handed in’, it will produce the output required for a green light to appear in the student’s VLE profile.
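To make the point concrete, the kind of rule described above can be sketched in a few lines of code. This is a purely hypothetical illustration, not the actual Course Signals algorithm (which draws on a weighted mix of grades, effort, and demographic data); the variable names and thresholds here are invented for the sake of the argument:

```python
# A hypothetical sketch of a traffic-light rule, NOT Purdue's actual
# Course Signals code; field names and thresholds are invented.

def traffic_light(student_record):
    """Map a student's recorded activity to a red/amber/green signal."""
    # The 'reality' the image claims to depict reduces to a boolean flag.
    if not student_record.get("semester1_assignment_submitted", False):
        return "red"    # flagged as 'at risk'
    # An invented second measure, to show how whatever data happens to be
    # collected comes to define what 'doing fine' means.
    if student_record.get("vle_logins_last_week", 0) < 3:
        return "amber"
    return "green"

# Everything not captured in these fields -- enjoyment, understanding,
# personal circumstances -- is simply invisible to the image produced.
print(traffic_light({"semester1_assignment_submitted": True,
                     "vle_logins_last_week": 5}))  # green
```

Even in this toy version, the questions that interest me are already visible in the code itself: someone chose which fields exist, where the thresholds sit, and which absences count as a ‘problem’.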
OK, now let’s consider the two different ways of thinking about Learning Analytics here. If we remain within a representational framework, we are primarily concerned with the ‘reality’ being depicted by the traffic light: either the ‘problem’ of a student not handing in their assignment, or the absence of a problem resulting from a successful submission. What such a focus overlooks is the conditions through which this measurement of educational attainment actually comes about. This is not to argue that indicators of whether a student has completed an assignment or passed an assessment aren’t useful, rather it is to suggest that an equally significant question might be how a traffic light has come to be the gauge by which a pedagogical intervention might take place. What are the broader societal and economic factors that produce an educational concern for retention over that of enjoyment, for example, and how is the image of the traffic light amplifying this concern? Why are the things that are programmed to produce a green light the exclusive measures of student success, and how does the prevalence of certain student data influence what kind of things are measured? Why has an algorithm been given the responsibility of saying ‘you’re doing OK’ or ‘you’re not doing OK’ over that of the teacher, or indeed the student? Why is education in need of such ‘solutions’, premised on what kind of ‘problem’? These are the kinds of questions we can begin to ask when we see the traffic light as a traffic light, not as a transparent window to something else.
This is not an argument against traffic light systems, or Learning Analytics in general, but rather a call to expose and interrogate the assumptions already embedded in the code that produces them. While the drive to ‘make visible the invisible’ through Learning Analytics may indeed be useful, and techniques will undoubtedly aim for increased accuracy, such approaches work towards the transparency of their own processes. What may be a far more significant analysis of education in our times is not whether our measurements are accurate, but why we are fixated on the kinds of measurements we are making, and how this computational thinking is being shaped by the operations of the code that make Learning Analytics possible.