PISA for personality testing – the OECD and the psychometric science of social-emotional skills

Ben Williamson

[Image: OECD SEL survey]

The Organisation for Economic Co-operation and Development (OECD) has published details of its new Study on Social and Emotional Skills (SSES). While the OECD has been administering international large-scale assessments of ‘cognitive skills’ and ‘competencies’ with both children and adults for many years, the new SSES survey represents a significant shift in focus to ‘non-cognitive’ aspects of learning and skills. While details of the science behind its cognitive skills and competencies tests are relatively well known, it is now becoming clear that the OECD’s social-emotional skills programme will emphasize the psychometric science of ‘personality’ measurement.

As part of ongoing research on social-emotional learning and skills (SELS) policies, practices and technologies, this (lengthy) post summarizes some of the key aspects of SSES, detailing its policy context, the ways it will generate and use student data, its conceptual basis in psychometrics, and the ways the OECD frames it as an objective ‘policy-relevant’ science programme with positive social and economic outcomes for participating countries.

From PISA to SSES
In recent years, the OECD’s PISA (Programme for International Student Assessment) and PIAAC (Programme for the International Assessment of Adult Competencies) tests have been the subject of extensive debate and research. New products, such as the PISA-based Test for Schools, which helps schools compare themselves to international standards, and the expansion of its tests to include factors like problem-solving and well-being, show how the OECD has gradually extended its logic of measurement and comparison into policymaking systems globally.

The OECD first began signalling its interest in measuring and assessing social and emotional skills in 2014. That year, it published Fostering and Measuring Skills: Improving cognitive and non-cognitive skills to promote lifetime success, followed in 2015 by its report Skills for Social Progress: The power of social and emotional skills. In 2017 the OECD published Personality Matters: Relevance and assessment of personality characteristics, an extensive review of the scientific literature on personality theory and the measurement of personality factors. Although Personality Matters was developed as part of the PIAAC survey of adult skills, it has been deployed as the scientific rationale for the Study on Social and Emotional Skills announced in the OECD’s 2017 ‘brochure’ Social and Emotional Skills: Well-being, connectedness and success and the accompanying SSES website.

Before going into some of the detail of SSES, the OECD’s focus in this area needs to be seen in a larger policy context. Over the last five years, as I’ve documented elsewhere, social-emotional learning and skills (SELS) have become a significant education policy priority and a key focus for education technology development and investment. Organisations including the global education business Pearson and the Nudge Unit have produced research summaries and guidance on developing SELS. The core idea behind many social-emotional learning and skills approaches is that the ‘non-cognitive’ aspects of learning are fundamentally linked to academic progress and to a range of social and economic outcomes, such as productivity, labour market behaviours and overall well-being.

Moreover, many advocates maintain, SELS are malleable and can be improved through direct teaching intervention. Improving SE skills is therefore seen as an important prerequisite for raising attainment, achieving social and economic progress, and improving individuals’ success, which makes it an attractive prospect for policymakers seeking new ways to boost student achievement and employability.

Major lobbying groups based in the US have produced scientific justifications for focusing on SE learning and skills. The Collaborative for Academic, Social, and Emotional Learning (CASEL) has produced its own meta-analyses on social-emotional learning research and evidence. Similarly, the National Commission on Social, Emotional, and Academic Development (NCSEAD) at the Aspen Institute has published a research consensus drawing from evidence in brain science, medicine, economics, psychology, and education research. It claims to demonstrate that ‘the success of young people in school and beyond is inextricably linked to healthy social and emotional development, such as the ability to pay attention, understand and manage emotions, and work effectively in a team.’

Although CASEL and NCSEAD appear to have identified a consensus about what constitutes social-emotional learning and skills, the terminology remains confusing. Terms used for SELS, including ‘character,’ ‘growth mindset,’ ‘grit,’ ‘resilience,’ and other ‘non-cognitive’ or ‘non-academic’ ‘personal qualities,’ are often used interchangeably and gain traction with different academic, practitioner and policymaking communities. ‘Character’ has become the policy focus for the Department for Education in the UK following the 2014 publication of a cross-party Character and Resilience Manifesto, while ‘grit’ has been favoured by the US Department of Education, as in its 2013 report Promoting Grit, Tenacity, and Perseverance: Critical Factors for Success in the 21st Century. Emerging education policies in the European Union appear to emphasize ‘soft skills’ as a category that encompasses SELS.

The OECD itself has adopted ‘social and emotional skills,’ or ‘socio-emotional skills,’ in its own publications and projects. This choice is not just a minor issue of nomenclature. It also references how the OECD has established itself as an authoritative global organization focused specifically on cross-cutting, learnable skills and competencies with international, cross-cultural applicability and measurability rather than on country-specific subject achievement or locally-grounded policy agendas.

Social-emotional datafication
The SSES programme was launched in 2017 with a timetable toward delivery of the first results in 2020. According to a published project schedule, during 2018 the OECD is developing the instruments and test items, before field testing in participating countries later in the year. The first formal round of the survey will take place in 2019, with final results released to the public late in 2020. The OECD also plans to administer SSES repeatedly in order to generate longitudinal data.

The expected outputs from the project include:

  • a set of validated international instruments to measure social and emotional skills of school-aged children
  • a dataset with information on the level and spread of social and emotional skills of children at ages 10 and 15, obtained from multiple sources, and accompanied with a wide scope of background and contextual variables
  • an improved understanding amongst policy-makers, education leaders, teachers, parents and other stakeholders on the critical role of social and emotional skills and the types of policies and practices that support the development of these skills
  • an improved understanding of whole child development, specifically as it relates to the development of social and emotional skills of children and youth

Although it is anticipated that only 10-12 countries and cities will take part in the first study, a huge quantity of data will be collected from the participants to deliver these outputs. The student survey of 10- and 15-year-olds itself consists of two assessments.

The first is a direct assessment. This is to be administered as a self-report questionnaire, completed online as a computer-based survey. Students will respond to questions that are designed to assess behaviours considered indicative of selected SE skills. An indirect assessment will add to the dataset, with parents and teachers answering similar questions about the typical behaviours of individual participating students.

In addition to the core assessment instruments, there will be contextual questionnaires for completion by children, their parents, teachers and principals. The contextual questionnaire for students will gather data on demographics, family culture, subjective health and well-being, academic expectations, and perception of their own SE skills.

Parents will provide information about their children’s SE skills, family background, the child’s performance, the home learning environment, the parent-child relationship, parenting styles, learning activities, and parents’ own attitudes and opinions. As this list indicates, SSES is not just focused on school factors involved in developing SELS, but on distinctive family factors too, including learning activities undertaken out of school.

Schools themselves will provide contextual information from teachers in the form of reports on students’ SE skills, teachers’ own backgrounds, school characteristics, teaching practices, and teachers’ values and expectations about SE skills. Principals will add data on school background, school management, principles and rules, school climate, and the role of SE skills in curriculum and school agenda, as well as further administrative data for calculating other behavioural correlates and outcomes.

The data production expectations on schools, students and their families are, as the list demonstrates, extensive and extend well beyond the normal jurisdiction of the education sector into the extraction of information about homes, family relationships and parenting practices.

A further OECD document suggested it was also considering ‘exploring ways to link social and emotional skill measurement of the proposed study with other OECD measurement instruments such as those used in PISA and PIAAC, as well as with local measurement instruments such as standardised achievement tests.’

The direct assessment will be delivered online using a centralized software platform for assessment of children’s SE skills. Notably, the OECD claims it will use log file data obtained during the test as additional indicators of SE skills.

Log file information collected during computer-based international assessments has been described by Bryan Maddox as ‘process data’ collected about such things as response times and keystrokes, which can be studied with ‘micro-analytic precision’ in the analysis of larger-scale assessment data. These log file data are increasingly used in assessment software platforms as an extension of the test and can be conceptualized as ‘the mechanisms that underlie what people do, think, or feel, when interacting with, and responding to, the item or task.’

It’s not entirely clear how SSES will use log file data, but other projects have sought to correlate process metadata such as keystrokes and response times with SELS. For example, the winning entry in CASEL’s 2017 design challenge on technologies to assess social-emotional learning was designed to capture the metadata generated as students took a computer-based test. Its developers claimed the ‘measure quantifies how often students respond extremely quickly over the course of a test, which is strongly correlated with scores from measures of social-emotional learning constructs like self-regulation and self-management.’
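
To make the mechanism concrete, here is a minimal sketch in Python of how such process metadata might be analysed, assuming hypothetical log records and a separately obtained self-regulation score. The field names, the one-second ‘rapid response’ threshold and all of the data are illustrative assumptions, not details of the CASEL winner or of the SSES platform.

```python
# Illustrative sketch: correlating rapid-response rates in assessment log files
# with a social-emotional construct score. Field names, the one-second threshold
# and all data are hypothetical assumptions, not SSES or CASEL specifications.
from statistics import correlation  # requires Python 3.10+

# Hypothetical log records: one row per item response, with time taken in seconds
log_rows = [
    {"student": "s1", "item": "q1", "response_time": 0.6},
    {"student": "s1", "item": "q2", "response_time": 4.2},
    {"student": "s2", "item": "q1", "response_time": 5.1},
    {"student": "s2", "item": "q2", "response_time": 6.3},
    {"student": "s3", "item": "q1", "response_time": 0.4},
    {"student": "s3", "item": "q2", "response_time": 0.7},
]

# Separately obtained self-regulation scores (e.g. from a self-report scale)
self_regulation = {"s1": 3.1, "s2": 4.5, "s3": 2.2}

RAPID_THRESHOLD = 1.0  # seconds; below this a response counts as 'extremely quick'

def rapid_response_rate(rows, student):
    """Proportion of a student's responses faster than the rapid threshold."""
    times = [r["response_time"] for r in rows if r["student"] == student]
    return sum(t < RAPID_THRESHOLD for t in times) / len(times)

students = sorted(self_regulation)
rates = [rapid_response_rate(log_rows, s) for s in students]
scores = [self_regulation[s] for s in students]

# A negative correlation would suggest that frequent rapid responding goes
# together with lower self-regulation scores, as the quoted claim implies
print({s: round(r, 2) for s, r in zip(students, rates)})
print("correlation:", round(correlation(rates, scores), 2))
```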

The CASEL design challenge project exemplifies a form of stealth assessment whereby students are assessed on criteria they know nothing about, and which relies on micro-analytics of their gestures across interfaces and keyboards. It appears likely that SSES, too, will involve correlating such process metadata with the OECD’s own SELS constructs to produce stealth assessments for quantifying student skills.

As the range of its data collection activities demonstrates, the OECD has designed SSES to include not only direct survey assessments, but extensive contextual information, school-level data on students’ behaviours and outcomes, and log file information that can be analysed as digital signals of SE skills. Importantly, though, these data all rely on specific conceptualizations of socio-emotional skills that the OECD has invested significant institutional effort in researching and defining.

Personality measurement
Behind the OECD SSES survey lies a set of psychological knowledge about the measurable qualities and characteristics of socio-emotional skills. The SSES brochure gives an overview of how the OECD defines SE skills. It claims socio-emotional skills constructs can be classified into five broad domains, which correspond to the well-known ‘Big Five’ model of personality: emotional regulation (emotional stability); engaging with others (extroversion); collaboration (agreeableness); task performance (conscientiousness); open-mindedness (openness). The SSES survey itself will be administered to assess 19 skills that fit into these five categories.

The brochure notes the ‘five-factor structure of personality characteristics’ has been extensively researched and empirically validated in multiple studies, leading to ‘widespread acceptance of the model.’ It further adds that there is ‘extensive evidence that the Big Five domains and sub-domains can be generalised across cultures and nations,’ and that the model is suitable for describing socio-emotional skills in both children and adults.

[Image: OECD SELS categories]

A much fuller account of the Big Five model is provided in Personality Matters, which the SSES brochure references directly. Written by an OECD policy analyst with academic experience in educational psychology, international social policy and cross-cultural survey methodology, Personality Matters is an extensive review of psychological and psychometric research on the conceptualization and measurement of human personality.

The document reviews research on a variety of potential measures of social-emotional learning and skills, including ‘grit,’ ‘character skills’ and other socio-emotional competencies (though it makes no reference to ‘growth mindset,’ a currently popular psycho-policy concept). The review favours the five factor model of personality consisting of openness, conscientiousness, extroversion, agreeableness and neuroticism (OCEAN). It acknowledges that psychologists have developed tests and assessments such as the Big Five Inventory (BFI), the Neuroticism-Extraversion-Openness Personality Inventory (NEO-PI), the International Personality Item Pool (IPIP) and the Trait Descriptive Adjectives (TDA) to measure these personality factors.
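
As a rough illustration of how inventories of this kind are typically scored, the sketch below averages Likert-type item responses into five domain scores, reverse-keying negatively worded items. The items, keys and example responses are invented and do not reproduce the BFI, NEO-PI or any OECD instrument.

```python
# Illustrative Big Five-style scoring: Likert responses (1-5) are reverse-keyed
# where needed and averaged into five domain scores. Items, keys and the example
# responses are invented; this is not the BFI, NEO-PI or the SSES instrument.

SCALE_MAX = 5  # 1 = 'strongly disagree' ... 5 = 'strongly agree'

# Each domain lists (item_id, reverse_keyed) pairs
DOMAINS = {
    "openness":            [("o1", False), ("o2", True)],
    "conscientiousness":   [("c1", False), ("c2", True)],
    "extroversion":        [("e1", False), ("e2", True)],
    "agreeableness":       [("a1", False), ("a2", True)],
    "emotional_stability": [("n1", True), ("n2", False)],
}

def score_respondent(responses):
    """Return mean domain scores for one respondent's item responses."""
    scores = {}
    for domain, items in DOMAINS.items():
        values = []
        for item_id, reverse in items:
            raw = responses[item_id]
            values.append(SCALE_MAX + 1 - raw if reverse else raw)
        scores[domain] = sum(values) / len(values)
    return scores

example = {"o1": 4, "o2": 2, "c1": 5, "c2": 1, "e1": 3, "e2": 3,
           "a1": 4, "a2": 2, "n1": 2, "n2": 4}
print(score_respondent(example))
```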

It is of course clear that the OECD’s SSES categories map exactly on to the five factor personality categories, with ‘emotional stability’ standing in for ‘neuroticism.’ The model was approved at an OECD meeting in 2015, following a presentation by Oliver John, a psychologist at the Berkeley Personality Lab at the University of California, Berkeley, and one of the original authors of the Big Five Inventory personality test. Likewise, the SSES survey has been designed to assess 19 skills associated with its Big Five model, in ways which emulate the structure of many of the personality tests cited in its review. As this indicates, the way the OECD has formulated social and emotional skills is a direct translation of the OCEAN categories used by psychologists for personality testing. In this sense, SSES appears to represent a therapeutic shift in OECD focus, with its target being the development of emotionally stable individuals who can cope with intellectual challenge and real-world problems.

Crucially, the review also notes strong correlations between Big Five scores and outcomes such as academic achievement, job performance, and standardized test scores. Notably, it emphasizes the ‘policy relevance’ of the insight that many personality characteristics—or socio-emotional skills as the SSES describes them—are malleable and can therefore become a ‘potential target for policy intervention.’ Arguably, policy interventions that would be relevant to remedying identified personality problems would be forms of therapeutic provision, such as remedial classes in socio-emotional skills, which schools would then be responsible for delivering. ‘Therapeutic education,’ as Kathryn Ecclestone and Dennis Hayes characterized it a decade ago, includes activities and underlying assumptions ‘paving the way for coaching “appropriate” emotions,’ which ‘replace education with the social engineering of emotionally literate citizens who are also coached to experience emotional wellbeing.’

The combination of the human sciences with public policymaking has a long history, enabling governments to act upon the capacities of those subjects they govern. The promise of scientific objectivity regarding human behaviour and emotion is very attractive to policymakers wishing to manage or social-engineer the ‘appropriate’ behaviours of populations more effectively. However, as Sheila Jasanoff has argued, the objectivity of ‘policy-relevant knowledge’ is always achieved through hard work, argument, and the strategic deployment of persuasive, authoritative claims. The ‘objectivity-making’ practices of psychologists of personality underpin the recent policy shift to SELS embodied by the OECD, and the forms of therapeutic education that are likely to proliferate as schools realize they are to be measured and assessed according to SELS categories.

The OECD has itself made a science of crafting objective policy-relevant knowledge from larger, contested bodies of scientific evidence about human competencies and personalities. With SSES, it either dismisses or omits the ‘grit,’ ‘growth mindset’ and ‘character skills’ literature—which in the Personality Matters report it suggests implies ‘moral connotations that many researchers and policy advisers would like to avoid’—and instead translates the concepts and practices of psychometric personality testing into policy-relevant approaches to measuring and assessing the social and emotional worlds of children.

Socio-emotional indicators and socio-economic outcomes
Beyond the presumed scientific objectivity of personality testing, interest in SELS among government departments and policymakers is also due at least in part to the economic arguments made by its advocates.

In the US, SELS are a lucrative investment opportunity under the banner of ‘impact investing.’ These ‘pay for success’ schemes allow investment banks and wealthy philanthropies to invest in educational services and programs and then collect public money, with additional interest as profit, if they meet agreed outcome metrics. The metrics for calculating the social benefit and monetary value of SELS schemes have been published as a cost-benefit analysis with the title The economic value of social and emotional learning.

Beyond direct profitability of SELS programs for investors, however, the OECD makes a strong argument to governments that its assessment of socio-emotional skills can produce indicators of socio-economic outcomes. As such, it makes the case that government investment in SELS through departments of education will generate a substantial return in the shape of productive human capital. This is an argument the OECD has refined through years of PISA and PIAAC testing and analysis.

The Nobel laureate in economics James Heckman has advised the OECD on its social-emotional learning programme, co-authoring its 2014 report on measuring non-cognitive skills. The report claimed that some programmes to support non-cognitive skills development ‘have annual rates of return that are comparable to those from investments in the stock market.’ Based on extensive economic analysis twinned with developmental psychology and the neuroscience of ‘human capability formation,’ Heckman has influentially argued for over a decade that non-cognitive social-emotional skills and ‘personality factors are also powerfully predictive of socioeconomic success and are as powerful as cognitive abilities in producing many adult outcomes.’ Making ‘personality investments’ in young people, he claims, leads to high returns in labour market outcomes.

It’s notable that the organization contracted to lead SSES is the Center for Human Resource Research (CHRR) at The Ohio State University. The CHRR describes its mission as ranging from ‘substantive analyses of economic, social, and psychological aspects of individual labor market behavior’ to ‘examining the impact of government programs and policies.’ According to the CHRR, the SSES project will identify ‘those social and emotional skills that are cross-cultural, malleable, measurable, and that contribute to the success and well-being of both the youth and their society.’ The assessment of SELS is therefore to be undertaken through the logic of human resource management and the analysis of labour market behaviours.

As the OECD itself phrases it, the purpose of SSES is to ‘provide participating cities and countries with robust and reliable information on the social and emotional skills of their students,’ and also to ‘have policy relevance’ by identifying ‘the policies, practices and other conditions that help or hinder the development of these critical skills.’ As a ‘policy-relevant project,’ the OECD claims, ‘study findings can also be used by policy makers to devise better policy instruments geared towards promoting these types of skills in students.’ Its brochure gives examples of ‘critical life outcomes’ that correlate with socio-emotional skills, including school achievement, college graduation, job performance, health and well-being, behavioural problems, and citizenship participation. These, it claims, can be improved because social and emotional skills are learnable and personality is malleable.

Psycho-economic policymaking & personality modification
This is necessarily a very partial overview of some of the key features of SSES. However, it does raise a few headline points:

  • SSES extends international large-scale assessment beyond cognitive skills to the measurement of personality and social-emotional skills as a way of predicting future economic and labour market outcomes
  • SSES will deliver a direct assessment instrument modelled on psychological personality tests
  • SSES enacts a psychological five-factor model of personality traits for the assessment of students, adopting a psychometric realist assumption that personality test data capture the whole range of cross-cultural human behaviour and emotions in discrete quantifiable categories
  • SSES extends the reach of datafication of education beyond school walls into the surveillance of home contexts and family life, treating them as a ‘home learning environment’ to be assessed on how it enables or impedes students’ development of valuable socio-emotional skills
  • SSES normalizes computer-based assessment in schools, with students required to produce direct survey data while also being measured through indirect assessments provided by teachers, parents and leaders
  • SSES produces increasingly fine-grained, detailed data on students’ behaviours and activities at school and at home that can be used for targeted intervention based on analyses performed at a distance by an international contractor
  • SSES involves linking data across different datasets, with direct assessment data, indirect assessments, school administrative data, and process metadata generated during assessment as multiple sources for both large-scale macro-analysis and fine-grained micro-analytics–with potential for linking data from other OECD assessments such as PISA
  • SSES uses digital signals such as response times and keystrokes, captured as process metadata in software log files, as sources for stealth assessment based on assumptions about their correlation with specific social-emotional skills
  • SSES promotes a therapeutic role for education systems and schools, by identifying ‘success’ factors in SELS provision and encouraging policymakers to develop targeted intervention where such success factors are not evident
  • SSES treats students’ personalities as malleable, and social-emotional skills as learnable, seeking to produce policy-relevant psychometric knowledge for policymakers to design interventions to target student personalities
  • SSES exemplifies how policy-relevant knowledge is produced by networks of influential international organizations, connected discursively and organizationally to think tanks, government departments and outsourced contractors
  • SSES represents a psycho-economic hybridization of psychological and psychometric concepts and personality measurement practices with economic logics relating to the management of labour market behaviours and human resources

There is likely to be additional concern that the OECD will use SSES to conduct large-scale international comparison of children’s social-emotional learning and skills. At present the first stage study appears too limited for that, with only an estimated 10-12 participating cities and countries.

However, over time SSES could experience function creep. PISA testing has itself evolved considerably and gradually been taken up in more and more countries over different iterations of the test. The new PISA-based Test for Schools was produced in response to demand from schools. Organizations like CASEL are already lobbying hard for social-emotional learning to be used as an accountability measure in US education—CASEL has produced a State-Scan Scorecard to assess each of the 50 states on SEL goals and standards. Even if the OECD resists ranking and comparing countries by SELS, national governments and the media are likely to interpret the data comparatively anyway.

If these developments are taken as indicators, it is possible that over time the OECD may generate international comparisons, accountability metrics and league tables of education systems based on intimate assessments of students’ personalities.

Image credits: OECD

Mapping the data infrastructure of market reform in higher education

Ben Williamson

[Image: cables, by Thomas Williams]

A new regulator for Higher Education in England came into legal existence on 1st January 2018. Announced as part of the 2017 Higher Education and Research Act (HERA), the Office for Students is already controversial before it formally begins operations in April. The appointment to its board of Toby Young, the free schools champion and journalist, appalled critics who vocally called for his sacking over previous misogynistic comments in the press and on social media. Despite Conservative Party ministers including Jo Johnson, Boris Johnson and Michael Gove defending his selection, Young resigned within 10 days.

The Toby Young storm, however, has distracted attention from one of the most significant aspects of the HE reforms the Office for Students will preside over. That is the escalation and acceleration in the collection, analysis and use of student data, and the building of a new HE data infrastructure to enact that task. Under the Office for Students, student data is to become a significant source for regulating the HE sector, as universities are put under increasing pressures of market reform, metrics and competition by HERA.

As with all data infrastructure, mapping the HE data infrastructure is a complex task. In this initial attempt to document it (part of a forthcoming paper), I am following Rob Kitchin’s call for case studies that trace out the ‘sociotechnical arrangements’ of people, organizations, policies, discourses and technologies involved in the development, evolution, influence, dead-ends and failures of data infrastructures. It is necessarily a very partial account of a much larger project to follow the development, rollout and upkeep of a new data infrastructure in UK HE, and to chart how big data, learning analytics and adaptive learning technologies are being positioned as part of this program to deliver a reformed ‘smart’ sector for the future.

Metrics, markets and HE reform
As will be familiar to many working within UK HE, the Higher Education and Research Act (HERA) came into effect in 2017. It is the result of governmental reforms to the sector that have been underway since the beginning of the decade, as detailed in government papers produced by the Department for Business, Innovation and Skills (BIS)—notably 2011’s Students at the Heart of the System and 2016’s Success as a Knowledge Economy. These reforms, as others have documented and debated, have unleashed a ‘metric tide’ of performance measurement across the sector in an effort to create a marketized system of mass higher education.

The creation of the Office for Students (OfS) as a new regulator for HE in England has now consolidated the metricization and marketization of the sector as demanded by HERA.

Described by BIS as a ‘non-departmental public body’ operating ‘at arm’s length from Government,’ the OfS is intended as an ‘explicitly pro-competition and pro-student choice’ organization, as well as a ‘consumer focused market regulator’ much like Ofcom in the media sector and Ofwat in water services. Its chair is Sir Michael Barber, formerly the ‘deliverology’ champion under Tony Blair’s government, a partner at management consultancy McKinsey, and more recently the chief education adviser at the global education business Pearson. At Pearson Barber oversaw its attempted transformation into a ‘digital-first’ company focused on digital data analytics and adaptive learning technologies for both the schools and universities sectors.

Barber was also the co-author of a 2013 report on the future of HE in the UK with the IPPR think tank, which argued:

With a massive diversification in the range of providers, methods and technologies delivering tertiary education worldwide, the assumptions underlying the traditional relationship between universities, students and local and national economies are increasingly under great pressure – a revolution is coming.

WonkHE named Barber the most powerful person in HE in 2017, noting his ‘legendary fondness for metrics,’ and the OfS began recruiting high-profile positions in data analytics ‘for those who understand the latest thinking in data science practice and how data can support policy development.’

The combination of BIS, HERA and Barber’s OfS represents the most visible aspects of current HE reform. As Neil Selwyn argues, ‘neoliberal logics’ of competition, performance measurement, quality management, marketization, commercialization and privatization have been growing in HE around the globe for many years, and are part of an increasingly powerful ideal of ‘digital universities’ which use data and metrics for monitoring performance and planning. Underlying the reforms, however, is a less acknowledged project to upgrade and rebuild the data infrastructure that will allow the necessary information to be collected, analysed and put to use. The business of marketization in HE, as Janja Komljenovic and Susan Robertson have argued, involves ‘not just people, but technologies such as software, algorithms, computers, procedures and so on, in a rich collage of people, technology and programmes … that align the work of the university with the logics of capitalist markets.’

Data Futures
The enactment of HERA and the OfS will depend on massive quantities of data from universities, and also the utilization of new digital technologies to collect and process those data. In the last couple of years, as a recent Westminster event illustrates, government has begun to take more and more interest in ‘the role of big data and learning analytics for universities, including targeted marketing of prospective students, improving retention and personalising learning experience for individuals.’

Similarly, a collaboration between the Higher Education Commission and the think tank Policy Connect has produced From Bricks to Clicks: The potential of data and analytics in Higher Education to focus on the use of ‘fluid data’ that is ‘generated through the increasingly digital way a student interacts with their university.’ It highlights the potential of learning analytics to ‘improve the student experience at university, by allowing the institution to provide targeted and personalised support and assistance to each student.’ The government’s major 2017 Industrial Strategy also committed investment in big data, AI and machine learning within digital courses.

These future aspirations are beginning to be realized through the ‘Data Futures’ program being undertaken by the Higher Education Statistics Agency (HESA). Designated the official statistics and data body for HE since 1993, HESA compiles huge quantities of data about students, staff and institutions, departments, courses and finances, as well as performance indicators used to evaluate and compare providers. It maintains the data infrastructure for HE recording and reporting first established in its current form in 1994.

Data Futures is HESA’s flagship data infrastructure upgrade program, which it initiated as part of its corporate strategy in 2016, in response to government demands, and plans to operationalize fully by 2020. Funded with £7.5 million from the HE funding councils, Data Futures is intended to enhance HE data quality, reduce duplication, and make data more useful and useable by members of the public, policymakers, providers, and the media. Its first priority is upgrading the systems for student data collection and analysis, in order to satisfy government demands that prospective students, as potential consumers of HE, can receive the best possible information about courses and providers, while current students might be able to monitor their own progress and rate the value-for-money provision they receive from their chosen courses.

The data infrastructure model being developed by HESA was first proposed through a series of reports by the global consulting firms Deloitte and KPMG, as part of the Higher Education Data and Information Improvement Programme (HEDIIP) hosted by HESA. In 2013 Deloitte produced ‘a proposal for a coherent set of arrangements for the collection, sharing and dissemination of data for the higher education data and information landscape.’ KPMG followed it with a ‘blueprint’ for a ‘New Data Landscape’, envisioned as ‘a data and information landscape for Higher Education in the UK that has effective governance and leadership, promotes data standards, rationalises data flows and maximises the value of technology and enables improved data capability.’

In the KPMG blueprint, used to establish Data Futures, HESA is positioned as a central ‘data warehouse’ for all HE data collection and access. Rather than once-a-year reporting, under Data Futures all HE providers will be required to conduct ‘in-year’ reporting. This will speed up the flow of data between institutions and HESA, and enable HESA to produce analyses and make them available to the public, media and policymakers more swiftly.

Writing in the Times Higher Education, HESA’s chief executive Paul Clark has described how the environment in which HE institutions operate is becoming increasingly data-intensive, with policymakers, students, funders and regulators all seeking information for their own purposes and needs:

Good data allow students to make informed choices, allow policymakers and regulators to make better decisions, promote public trust and confidence in the system, enable institutions to be competitive and provide a lever to incentivise or penalise behaviour in the absence of public funding.

At the same time as these ‘trends are being driven by developments in higher education policy,’ added Clark in another piece, ‘changes in the worlds of data, digital service delivery, and technology’ are taking place as big data technologies and practices are embedded across sectors and industries.

In this context, Data Futures instantiates, perhaps, a tentative first move toward the use of ‘live data’ and ‘real-time metrics’ which could be used for continuous performance measurement and comparison of the competitive HE market of providers. Although a real-time data model was suggested in the KPMG blueprint, Data Futures is not going quite that far, just yet. As Clark noted in his THE article,

Further developments can build out from this – providing enhanced analytical tools for users and providers, opening up larger stores of data for analysis and innovation, linking datasets across government departments and policy areas to improve decision-making and reducing the transactional costs associated with data flows around the sector.

While its first goal is to implement in-year reporting, HESA is clearly positioning itself to introduce advanced digital data methods and large linked datasets drawn from across government departments into some aspects of HE reporting and decision-making.

Data platforms and dashboards
Although Data Futures appears rather mundane as an effort to streamline and improve student data collection and analysis, it is an immense organizational and technical undertaking. At its core is the construction of a new ‘data platform’ for data collection, and new ‘data dashboards’ and visualization technologies to analyse the data and communicate results.

With regard to the data platform, HESA released a specification document in 2016 for potential suppliers. The specification reveals the platform would include a vast number of interconnected technical components, including three ‘user interfaces’: a data collection portal, an analytics portal and a governance portal. Behind these interface portals would be a range of ‘services,’ all underpinned by ‘human and machine readable specifications,’ a ‘logical model’ and ‘physical data model,’ a ‘unique student identifier lookup service,’ and a ‘reporting engine.’ The data platform would also include cloud storage, encryption, secure file transfer services, metadata, code, rules, data files, metrics, and specifications covering quality, reporting and data delivery, among other components.
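
Purely as an illustration of one of these components, the sketch below models how a ‘unique student identifier lookup service’ might assign a single stable identifier to a student reported by more than one provider, so that in-year submissions accumulate against one identity in a central store. The matching rule, names and structures are hypothetical simplifications, not HESA’s actual design.

```python
# Hypothetical sketch of a 'unique student identifier lookup service' feeding a
# central store during in-year submission. The matching rule (name plus date of
# birth) and every name here are illustrative simplifications, not HESA's design.
import uuid
from dataclasses import dataclass, field

@dataclass
class IdentifierLookup:
    """Maps a crude matching key to one stable identifier across providers."""
    _index: dict = field(default_factory=dict)

    def resolve(self, name: str, date_of_birth: str) -> str:
        key = (name.strip().lower(), date_of_birth)
        if key not in self._index:
            self._index[key] = str(uuid.uuid4())
        return self._index[key]

@dataclass
class Warehouse:
    """Central store accumulating in-year student records keyed by identifier."""
    lookup: IdentifierLookup
    records: list = field(default_factory=list)

    def submit(self, provider: str, name: str, dob: str, course: str) -> str:
        student_id = self.lookup.resolve(name, dob)
        self.records.append(
            {"student_id": student_id, "provider": provider, "course": course}
        )
        return student_id

warehouse = Warehouse(lookup=IdentifierLookup())
# The same student reported by two providers resolves to a single identifier
id_a = warehouse.submit("Provider A", "Ada Example", "2000-01-01", "BSc Physics")
id_b = warehouse.submit("Provider B", "Ada Example", "2000-01-01", "MSc Data Science")
print(id_a == id_b)  # True: the two records link to one student identity
```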

The appointed supplier to deliver the specification, announced in 2017, is Civica, which ‘provides a wide range of software, digital solutions and technology-based outsourcing’ for ‘organisations to improve and automate the provision of efficient, high quality services, and to transform the way they work in response to a rapidly changing and increasingly digitalised environment.’ In its announcement of the contract, HESA said Civica would deliver an ‘improved data model and extended capabilities [which] will offer users of HESA data a regular flow of accessible information through an enhanced user interface and visualisation tools.’

Data dashboard development to communicate findings from the data platform is being undertaken through a collaboration between HESA itself and Jisc (the Joint Information Systems Committee) as part of their ‘business intelligence shared service.’ The Analytics Labs collaboration provides an agile data processing environment using ‘advanced education data analytics’ in order ‘to rapidly produce analyses, visualisations and dashboards for a wide variety of stakeholders to aid with decision making.’ It emphasizes ‘cutting-edge data manipulation and analysis,’ access to current and historic data for time series analysis of the sector, and competitor benchmarking, using the Heidi Plus platform, which is built on the commercial Tableau Server software.

Notably, early in 2018 HESA signed an agreement with both The Guardian and The Times newspapers to use Heidi Plus to produce interactive HE dashboards of rankings and measures based on their league tables. This, claimed HESA, would ‘enable universities to accurately and rapidly compare and analyse competitor information at provider and subject level, changes in rank year on year,’ and ‘the highest climbers and the biggest “fallers.”’ It also noted that the dissemination and presentation of league table data help shape public opinion about different providers.

The data platform being built by Civica and the dashboards produced using Heidi Plus are therefore interfaces to the data infrastructure being built by HESA. Together, these software platforms enable student data to be collected, analysed, visualized and circulated to the public, the press, providers themselves, and policymakers and politicians—and in so doing, to shape opinion and influence decision-making. As such, these systems of measurement and visualization are intimately tied to political ambitions to subject HE to increased marketization and competition.

Measurement and visualization are never simply neutral representations. As David Beer has argued, ‘systems of measurement become powerful’ as ‘mechanisms of competition,’ especially when the results of those measurements are disseminated and circulated to circumscribe future possibilities. Moreover, Jamie Bartlett and Nathaniel Tkacz have described how data dashboards ‘bring about a new “ambience of performance”, whereby members of staff or the public become more attuned to how whatever is measured is performing.’ Metrics and their graphical representation shape how people and institutions think, behave, compare themselves, and act to change themselves based on those representations.

The metrics operating inside the Data Futures platform and dashboards, in this sense, are mechanisms of competition and performance measurement within the HE sector, acting to make marketization part of the ambient conditions of Higher Education. They extend governmental ambitions around increased metricization and marketization through the computer interface and into the eyes, hands and decisions of university administrators and leaders. As HESA’s Andy Youell has written, the work to build a new HE information landscape is not so much about systems but about ‘changing behaviours’ and ‘changing attitudes to the value of data within institutions.’

A market of smart, connected universities
The utopian dream behind current efforts to reform HE, of which Data Futures provides the material infrastructure, is that marketization and competition will drive up innovation within universities. An innovative HE sector is at the forefront of the 2017 Industrial Strategy. The ‘winners’ will be those institutions able to provide measurable evidence of innovation and quality in research, teaching, and impact on society. Student data, in addition to ratings of research excellence, teaching excellence and knowledge exchange, are to become the measure of market competitiveness.

But Data Futures also signals a more long-term utopian vision of the future of the sector. In a 2016 article, HESA’s chief executive Paul Clark wrote in visionary terms of a future HE in which learning analytics are part of teaching intervention, research performance metrics are pervasive, policymakers routinely base decisions on sector data, and ‘universities are gathering and benchmarking more and more data to ensure they are operating efficiently, and competing effectively with their peers’:

In ten years’ time, it’s possible to envisage a digital HE sector, with data-driven universities operating within a smart, connected environment. In this vision, universities would routinely use data drawn from many sources and devices to design and deliver their services, allocate resources, and monitor their performance. Policy-makers would similarly pool data from across government and the public sector to design interventions, monitor progress, and gain a far better and more granular understanding of how policy should be designed and delivered in order to achieve their aims. And users of the system would be able to access critical real-time data and information on their own progress, the resources available to them, and what they can do to maximise their chances of success.

Although Data Futures alone won’t deliver this vision when it rolls out nationwide in 2020, it is building the infrastructure necessary for new data-driven universities of the imagined future.

Late in 2017, in fact, HESA ran the Data Matters conference for HE data practitioners and leaders, where it not only showcased Data Futures developments but also hosted speakers and panels on the latest developments in big data, learning analytics, data dashboards and visualization. Data Futures is establishing the necessary infrastructure to support the application of these technologies as part of a utopian vision of smart, connected universities driven by market demands and real-time metrics. It is also worth noting that there are similar ambitions to create a national student data network in the US, through the campaigning of the Postsecondary Data Collaborative and Gates Foundation funding.

Much more remains to be done to unpack and understand these new infrastructure projects and the implications of accelerating and expanding student data collection and analysis. This post simply represents a first attempt to map some aspects of the new data infrastructure that will orchestrate aspects of UK HE in years to come. It’s an infrastructure made up of policies, people, organizations and future imaginings as much as technologies, and an ideal illustration of a dynamic sociotechnical network in which politics, practices and technologies interpenetrate one another. It also highlights how governance of HE is extended across software companies, think tanks and consultancies, as well as government departments, public bodies and arms-length agencies, and of how software can act as a relay of government strategy into university planning.

As part of the regulatory apparatus for a reformed HE, controversial appointees to the board of the OfS have understandably generated concern. But they should not distract attention from current ongoing infrastructural work to establish a new digital architecture for Higher Education that universities, staff and students will inhabit for decades.

Image by Thomas Williams

The Nudge Unit, data science and experimental education

Ben Williamson

[Image: light graph, by Victor]

The UK government’s Behavioural Insights Team has announced it has been experimenting with data science methods in school inspections. In partnership with the Office for Standards in Education (Ofsted), it has designed machine learning algorithms to predict and rate school performance.

Originally established as part of the Cabinet Office in 2010, the Behavioural Insights Team—or ‘Nudge Unit’ as it is informally known—became a ‘social purpose company’ in 2014, jointly owned by the UK Government, the innovation charity Nesta, and its employees. Its staff have academic backgrounds in economics and psychology or experience in government policymaking, and it has expanded its offices from London to Manchester, New York, Singapore and Sydney. It has always been closely associated with the ‘what works’ model of policy, employing randomized control trials (RCTs) to test out policy interventions. The BIT has also established a revenue-generating arm that uses ‘behavioural insights to design and build scalable products and services that have social impact.’ One member of its advisory panel is Richard Thaler, the behavioural economist recently awarded the Nobel prize in economics for his work on the application of behavioural science and ‘nudge’ techniques in public policy.

The BIT’s Data Science Team published details of its experiments in a new report in December 2017. The team’s defined aims are to ‘make use of publicly available data, web scraped data, and textual data, to produce better predictive models to help government’; ‘to test the implications of these models using RCTs’; and ‘to begin developing tools that would allow us to put the implications of our data into the hands of policymakers and practitioners.’

The report, Using Data Science in Policy, detailed a number of projects the Data Science Team had undertaken to apply behavioural insights to diverse areas of public policy:

over the past year we have been working to conduct rapid exemplar projects in the use of data science, in a way that produces actionable intelligence or insight that can be used not simply as a tool for understanding the world, or for monitoring performance, but also to suggest practical interventions that can be put into place by governments.

The experiments were in policy areas including health, social care and education.

School-evaluating algorithms
In its education project with Ofsted, the BIT described how it used ‘publically available datasets to predict which institutions are most likely to fail and thereby target their inspections accordingly. We showed that this data, married to machine learning techniques such as gradient boosted decision trees, can significantly outperform both random and systematic inspection targeting. … We are excited to be working with Ofsted to put the insights from this work into action.’

In order to apply data science and machine learning to school inspection, the BIT compiled publicly available data from the year before an inspection happened. These data, its report said, included workforce data, UK census and deprivation data from the local area, school type, financial data (sources of finance and spending), performance data (Key Stage 2 for primary schools and Key Stages 4 and 5 for secondary schools) and Ofsted Parent View answers to survey questions. Parent View is Ofsted’s online tool to allow parents to record their own views on their child’s school. These data are then considered in Ofsted inspections.

According to a report of the Ofsted experiment in Wired magazine, its ‘school-evaluating algorithm pulls together data from a large number of sources to decide whether a school is potentially performing inadequately.’ By matching statistical data to the Parent View data, which includes textual information that can be analysed for sentiment, BIT claims it can predict which schools are not performing well and are likely to fail an inspection. The system ‘can help to identify more schools that are inadequate, when compared to random inspections’ and may even be used to automate decisions made by Ofsted in the future.
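
The general approach can be sketched as follows, assuming scikit-learn and entirely synthetic data: a gradient boosted decision tree classifier is trained on a handful of school-level features and then used to rank unseen schools by predicted risk of an ‘inadequate’ outcome. The features, data and model settings are invented for illustration; the BIT/Ofsted model itself has not been published.

```python
# Illustrative sketch only: a gradient boosted classifier trained on synthetic
# school-level features to predict an 'inadequate' inspection outcome.
# Feature names, data and settings are invented; this is not the BIT/Ofsted model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_schools = 500

# Synthetic features loosely echoing the kinds of sources the report lists
X = np.column_stack([
    rng.normal(0, 1, n_schools),    # standardised attainment score
    rng.normal(0, 1, n_schools),    # local deprivation index
    rng.normal(0, 1, n_schools),    # spending per pupil (standardised)
    rng.uniform(-1, 1, n_schools),  # mean Parent View sentiment score
])
# Toy outcome: low attainment and negative parent sentiment raise the chance
# of an 'inadequate' label in this synthetic data
risk = -1.2 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 1, n_schools)
y = (risk > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Rank unseen schools by predicted risk so inspections could, in principle,
# be targeted at the highest-risk schools rather than chosen at random
risk_scores = model.predict_proba(X_test)[:, 1]
top_targets = np.argsort(risk_scores)[::-1][:10]
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
print("ten highest-risk schools in the test set:", top_targets)
```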

So far, the Nudge Unit’s trial with Ofsted has not been used to inform any real-world decisions, although the two organizations plan to extend their partnership in 2018, and are considering the use of further datasets, including data that are not open to the public.

An important aspect of the experiment with Ofsted is that the BIT doesn’t want schools to know how the algorithm works, as the project’s director told Wired. ‘The process is a little bit of a black box—that’s sort of the point of it,’ he said. In other words, schools are to be kept in the dark about the school-evaluating algorithm so that they don’t have the opportunity to ‘game’ their data in advance, which would result in skewing the predictive model.

It’s not the first time the Nudge Unit has been involved in education in the UK. Earlier in 2017 it was reported that the Department for Education was recruiting a permanent behavioural insights manager and an adviser. The aim was to change the culture of the department with psychology specialists applying behavioural science in strategic policymaking processes, and to commission research, trials and interventions drawing on behavioural insights to ‘improve our education and children’s services’.

The Nudge Unit’s experiment with Ofsted, and the DfE’s recruitment of behavioural scientists, exemplify the increasing role of behavioural science agencies in producing policy-relevant science for public education in recent years, as Alice Bradbury, Ian McGimpsey and Diego Santori have previously documented. This raises a number of issues.

Data labs
The first issue is how public education is increasingly being influenced by arms-length agencies. As a co-owned entity of the Cabinet Office and Nesta, the Nudge Unit is strictly independent of government but now acts as an outsourced contractor within public policy. It was reported in Wired that if Ofsted rolls out the school evaluation algorithm developed by BIT, local authorities would be required to pay between £10,000 and £100,000 to implement it.

Nesta itself, the co-owner of BIT, is an often-overlooked organization in UK education. As a ‘policy innovation lab’ it has successfully campaigned for ‘coding’ to be included in the English National Curriculum and for data science to be applied in the analysis of public services. A core hub in a global network of policy labs, Nesta and similar organizations worldwide are seeking to innovate in public policy, often using technological innovations as models for government reforms.

As the US GovLab has reported, the application of data science in public policy by ‘data labs’ can help create a ‘smarter state.’ Indeed, Nesta and the Cabinet Office have previously collaborated to develop ideas about a ‘new operating system for government,’ using data science, predictive analytics, artificial intelligence, sensors, autonomous machines, and platforms to redefine the role of government.

As such, organizations such as Nesta and the Nudge Unit, which perceive data science as a new model for enacting government, are now seeking to locate data science methods within the institutions and processes of educational policymaking and school evaluation. The Ofsted project is part of their wider ambitions around digital governance using data science to drive policymaking. They are seeking to attach arms-length ‘data labs’ to centres of public policy, bringing new forms of technical and statistical expertise—as well as economic, behavioural and psychological science—into policy processes, including education. This exemplifies what I have elsewhere described as ‘digital education governance’—the use of digital data to make education visible for inspection and intervention.

Inspecting algorithms
Second, as part of this shift, the Nudge Unit is seeking to transform the way school inspections are performed. Rather than inspection through embodied expertise, school evaluation is now to be enacted predictively, before the inspector arrives. Jenny Ozga has previously written of how digitally recorded data increasingly surrounds the inspection process. The Nudge Unit is seeking to pre-empt the inspection process through the application of machine learning algorithms which have been trained to spot patterns and make predictions from pulling together a wide range of multimodal data sources about schools and their contexts.

These deliberately ‘black-boxed’ and opaque systems, which schools would be unable to understand, could be significant actors in practices of school accountability. If, as anticipated, some of Ofsted’s tasks are automated by the Nudge Unit’s intervention, then it may be unclear how certain decisions have been made in relation to a school’s overall evaluation. Although the BIT claims it doesn’t wish to replace the professional inspector, it is clear that school inspection will become a more distributed task involving both human and nonhuman decisionmaking and judgment, with data science methods perceived as more objective and impartial means for producing evidence than professional observation. In this sense, it is entirely consistent with behavioural science claims that human decision-making is less rational and evidence-based–and more emotionally-charged, cognitively-biased and subjective–than is commonly assumed.

At a time when there is increasing political, public and legal concern about machine learning opacity and its lack of ‘explainability’ or transparency, it seems ethically questionable to create systems that are deliberately black boxed, not least as their algorithms may well contain biases and potential for statistical discrimination. The cognitive bias of the school inspector is to be combated with systems that may have their own encoded biases. If a school is predicted to be inadequate by the algorithm, its stakeholders will expect and need to know what factors and calculations produced that evaluation.

It is notable too that BIT claims ‘missing data’ is predictive of a failed inspection, presumably the consequence of human error in the data-inputting process, and that it is seeking other non-public data sources to improve its predictive models. It remains unclear how deeply BIT intends to scrape schools for data, or which additional data would be included in their calculations, raising methodological questions about reliability and commensurability of their analyses.

Behavioural government
The third issue relates to the application of behavioural science within education. Mark Whitehead and coauthors describe how ‘behavioural government’ has proliferated across public policy in many countries in recent years—especially the UK and US—through the application of ‘nudge’ strategies. Nudging involves the design of ‘choice architectures’ that can shape and condition choices, decisions and behaviours, and is deeply informed by behavioural and psychological sciences. The Nudge Unit exemplifies behavioural government.

In its project with Ofsted, the BIT is seeking to use data science as a way of constructing choice architectures for inspectors. The results of the data analyses can identify particular areas for concern, as predicted by the algorithm, that may then be targeted by the inspectors, thus creating a more efficient and cost-effective machinery of inspection. The BIT is, in effect, nudging Ofsted to make strategically informed choices about how to conduct inspection. This, claims BIT, would reduce the number of inspections required and free up Ofsted staff to work on improvement interventions with schools. (Though it might also lead to Ofsted staff reductions and cost-savings.) In these ways, the Ofsted inspector is being reimagined as a nudge operative, intervening in schools by offering them targeted improvement frameworks. At the same time, the BIT is seeking to supplement subjective human judgment, with all the flaws that behavioural science claims come with it, with algorithmic objectivity.

The Nudge Unit also makes extensive use of psychological insight. Perhaps the most obvious use of psychological data in the Nudge Unit’s project with Ofsted is the sentiment analysis it is performing on Parent View data, with aggregation of parents’ subjective feelings into patterns that can be used as objective indicators to supplement the statistical inspection.
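
As a rough illustration of what such aggregation might involve, the sketch below scores invented free-text parent comments against a toy word list and rolls them up into a school-level indicator; it is not the sentiment model BIT actually uses, which has not been described in detail.

```python
# Toy lexicon-based sentiment scoring and aggregation. The comments,
# word lists and threshold are invented for illustration only.
POSITIVE = {"happy", "supportive", "excellent", "thriving", "safe"}
NEGATIVE = {"worried", "bullying", "chaotic", "unhappy", "ignored"}

def score(comment: str) -> int:
    """Return a crude sentiment score: positive words minus negative words."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

parent_comments = {
    "school_a": ["My child is happy and thriving", "Staff are supportive"],
    "school_b": ["We feel ignored", "Worried about bullying in the playground"],
}

# Aggregate individual subjective comments into a school-level indicator.
for school, comments in parent_comments.items():
    avg = sum(score(c) for c in comments) / len(comments)
    flag = "review" if avg < 0 else "ok"
    print(f"{school}: mean sentiment {avg:+.1f} -> {flag}")
```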

More innovative sources of psychological data, however, could be used by the Nudge Unit to undertake algorithmic school inspections in the future.

Behavioural sciences have already amalgamated with data science in relation to the policy area of ‘social-emotional learning.’ The logic of the social-emotional learning movement is that ‘non-academic’ qualities are strongly linked to academic outcomes; students need to be trained to be socially and emotionally resilient if they are to succeed in school. There are close ties between many of the major voices in the social-emotional learning movement and the behavioural sciences, and educational technologies have become central to efforts to monitor and nudge students’ non-academic learning.

As I’ve documented elsewhere, a range of technical innovations to support social-emotional learning has been proposed and developed, such as behaviour monitoring apps and wearable biometric monitors, that might be able to detect indicators of student emotions such as engagement, attention, frustration and anxiety. Data from these devices could then be fed back to the teacher, who would be able to prompt the student in ways that might generate a more positive response.

Real-time student data, it is possible to speculate, could well become part of the school inspection process under a Nudge Unit style of data scientific experimentation. Student sentiment data, tracked against progress and attainment, would then become a measure, defined by behavioural economists, to be used for purposes of school accountability.

Real-time psychological data, as well as more mundane user data scraped from the web or captured by mobile smartphones, argue Mark Whitehead and colleagues, now appear to present rich opportunities for behavioural scientists to both record and nudge behaviours and emotions.  They cite an article from Forbes claiming ‘the proliferation of connected devices—smartphones, wearables, thermostats, autos—combined with powerful and integrated software spells a golden age of behavioral science. Data will no longer reflect who we are—it will help determine it.’

Behavioural government is, then, informed by the testing culture of the tech sector, which constantly experiments on its users to see how they respond to small changes in design. As such, ‘behavioural design’ is the application of behavioural science to technological environments to influence and determine user behaviours. With increased behavioural science influence in education, twinned with massive escalation in data-processing ed-tech applications, the culture of testing and behavioural design could significantly impact on policy, schools and professional practitioners in years to come.

The education experiment
As with its work in other sectors, the Nudge Unit’s involvement with Ofsted and the Department for Education is bringing the methodological logic of data-driven experimentation and behavioural design into education. Increasingly, automated algorithms are being trusted to perform tasks previously undertaken by embodied professionals. Their opacity makes the decision-making these systems perform difficult to scrutinize. Ofsted has long been a source of concern for schools, of course. It is hard to see how transforming the inspector into an algorithm that is better at identifying inadequate schools will reduce teachers’ worries about performance measurement. Data and the culture of performativity in education have a long history.

More generally, the application of data science to education policy is indicative of how the education sector itself is becoming the subject of increasing levels of experimentation with data science methods. The Department for Education is currently seeking to reintroduce baseline testing into pre-school settings. A previous trial of early years baseline testing in 2015 collapsed amid concerns over the methodology of the original contractor. In the tender for the second version of baseline assessment, however, the DfE has more carefully specified the testing methods it expects the contracted assessment company to use. In an experimental education sector, schools, professionals and students look increasingly like laboratory specimens, repeatedly subjected to tests and trials, inspections and interventions, as part of the pursuit of identifying ‘what works’ in education policy.

Image by Victor

Big Data in Education book launch

Ben Williamson

I was asked to prepare a few things to say to launch my book Big Data in Education: The digital future of learning, policy and practice in the Faculty of Social Sciences at the University of Stirling. Below are my notes.

Big data book cover

Seven years ago, like many other well-off parents in San Francisco, Max Ventilla was scoping out local schools. What he saw appalled him. State education, he concluded, was dangerously broken; a new model of school was required.

So he got a load of ‘progressive’ education literature about the state of American education, child-centred learning, school accountability, education technology and school design. He quit his job at Google, where he ran projects using big data to profile its millions of users, to set up his own new school.

AltSchool, as he called it, would be a ‘lab school’ combining child-centred progressivism with big data methods to deliver ‘personalized education’.

Max called venture capitalists he knew in Silicon Valley. 33 million dollars later, he hired teachers, managers—and a team of data analysts and software engineers to work on a ‘new operating system for education.’

Mark Zuckerberg of Facebook, among other investors, gave Max another 100 million for more AltSchools.

The tech and business press went wild. The Financial Times called AltSchool an example of ‘Silicon Valley’s classrooms of the future.’

Then Max revealed what his engineers were up to.

They’d built a software platform that could crunch data about almost everything students did. Student work could be uploaded to the system. Teachers’ responses would be logged. This all fed into a ‘Progress’ app—a ‘data dashboard’ displaying the progress students were making in academic learning and social-emotional development.

A ‘Playlist’ app was developed to recommend personalized tasks for students based on analysis of their past performance and predictions of their likely future progress.

Then AltSchool revealed it had cameras everywhere, tracking every movement and gesture of each student to assess engagement and attention.

Critics started to call it a ‘surveillance school’—using students as ‘guinea pigs’ for experimental data analytics. But Max and his investors wanted it to scale up across state education, to make more schools look like AltSchools.

Max had figured out a business model to satisfy investors. The AltSchool software platform would be offered for sale to all schools, starting in 2019. Meanwhile, last month Max shut down two of his lab schools, with three more to close in spring.

With the experimental beta-testing over, now Max and donors such as Mark Zuckerberg want to install the laboratory in every school.

AltSchool is prototypical of big data in education, and highlights a number of themes explored in the book.

So this book is about how educational data are produced and for what purposes, and about the technologies and companies that generate and process them.

And it’s about fantasy. A ‘big data imaginary’ of education is not just hype dreamt up in Silicon Valley, but a normative vision of education for the future shared by many. It has a seductive new data discourse of ‘personalization,’ ‘adaptive learning,’ ‘student playlists,’ ‘learning analytics,’ ‘computer-adaptive testing,’ ‘data-enriched assessment,’ and even ‘artificial intelligence tutors.’

It’s about ‘evidence-based’ education policy—that data analytics can provide real-time diagnostics and feedback at state, school, class and student levels—and commercial lobbying, venture capital and new forms of corporate philanthropy too, with ed-tech trying to capture public education for profit while attracting policymakers to their persuasive ideas.

It’s about science, with psychological, cognitive and neuro-scientists becoming expert in the experimental uses of student data.

And it’s about challenges to education research. Education research usually deals with human learning within social institutions, but now nonhuman ‘learning machines’ that can learn from and feed back to their human companions are starting to inhabit learning spaces as well. Some social science education researchers feel under threat from ‘education data science’ too.

Finally, the book is about power and the everyday ‘public pedagogies’ that teach lessons to millions globally, not just in educational institutions. Social media’s trending algorithms and filters direct attention to current events, politics, culture, and more, based on calculations of what you might like, what you’ve done, who you know. Tastes are being shaped, opinions and sentiments tweaked, and political views targeted and entrenched by political bots and computational propaganda. The power of big data in education extends beyond school to these public pedagogies of mis-education too.


Learning machines

Ben Williamson


When educators talk about theories of learning they are normally referring to psychological conceptions of human cognition and thinking. Current trends in machine learning, data analytics, deep learning, and artificial intelligence, however, complicate human-centred psychological accounts about learning. Today’s most influential theories of learning are those that apply to how computers ‘learn’ from ‘experience,’ how algorithms are ‘trained’ on selections of data, and how engineers ‘teach’ their machines to ‘behave’ through specific ‘instructions.’

It is important for education research to engage with how some of its central concerns—learning, training, experience, behaviour, curriculum selection, teaching, instruction and pedagogy—are being reworked and applied within the tech sector. In some ways, we might say that engineers, data scientists, programmers and algorithm designers are becoming today’s most powerful teachers, since they are enabling machines to learn to do things that are radically changing our everyday lives.

Can the field of social scientific educational research yet account for how its core concerns have escaped the classroom and entered the programming lab—and, recursively, how technical ‘learning machines’ are re-entering classrooms and other digitized learning environments?

Non-human machine learning processes, and their effects in the world, ought to be the object of scrutiny if the field of education research is to have a voice with which to intervene in the data revolution. While educational research from different disciplinary perspectives has long fought over the ways that ‘learning’ is conceptualized and understood as a human process, we also need to understand better the nonhuman learning that occurs in machines. This is especially important as machines that have been designed to learn are performing a kind of ‘public pedagogy’ role in contemporary societies, and are also being pushed in commercial and political efforts to reform education systems at large scale.

Algorithmic autodidacts
One of the big tech stories of recent months concerns DeepMind, the Google-owned AI company pioneering next-generation machine learning and deep learning techniques. Machine learning is often divided into two broad categories. ‘Supervised learning’ involves algorithms being ‘trained’ on a selected dataset in order to spot patterns in other data later encountered ‘in the wild.’ ‘Unsupervised learning,’ by contrast, refers to systems that can learn ‘from scratch’ through immersion in data, without labelled examples. A third approach, ‘reinforcement learning,’ in which a system learns by trial and error from rewards, is central to the DeepMind example that follows.
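
A toy sketch, on synthetic data, of the difference between the first two approaches: a supervised classifier is fitted to labelled examples, while an unsupervised clustering algorithm is left to find groupings on its own.

```python
# Toy contrast between supervised and unsupervised learning on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two blobs of points in two dimensions.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: the model is 'trained' on examples with known labels.
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict([[4.2, 3.8]]))

# Unsupervised: no labels are given; the algorithm finds groupings itself.
km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X)
print("unsupervised cluster assignment:", km.predict([[4.2, 3.8]]))
```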

In 2016 DeepMind demonstrated AlphaGo, a Go-playing AI system that learned in a supervised way from a training dataset of thousands of games played by professionals and accomplished amateurs. Its improved 2017 version, AlphaGo Zero, however, is able to learn without any human supervision or assistance other than being taught the rules of the game. It simply plays the game millions of times over at rapid speed to work out winning strategies.

In essence, AlphaGo Zero is a self-teaching autodidactic algorithmic system.

‘It’s more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and it is able to create knowledge itself,’ said AlphaGo’s lead researcher in The Guardian.

At the core of AlphaGo Zero is a training technique that will sound familiar to any education researchers who have encountered the psychological learning theory of ‘behaviourism’—the theory that learning is an observable change in behaviours that can be influenced and conditioned through reinforcements and rewards.

Alongside its neural network architecture, a cutting-edge ‘self-play reinforcement learning algorithm’ is AlphaGo Zero’s primary technical innovation. It is ‘trained solely by self-play reinforcement learning, starting from random play, without any supervision or use of human data,’ as its science team described it in Nature. Its ‘reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.’ As the reinforcement algorithm processes its own experiences in the game, it is ‘rewarded’ and ‘reinforced’ by the wins it achieves, in order ‘to train to superhuman level.’
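
The reward-and-reinforce logic can be illustrated on a far simpler game than Go. The sketch below applies basic tabular learning through self-play to an invented ‘take 1 or 2 stones, last stone wins’ game, with wins reinforced and losses penalized. It is only a loose analogue of the idea, not DeepMind’s algorithm, which combines deep neural networks with Monte Carlo tree search.

```python
# Toy self-play reinforcement learning: a one-pile 'take 1 or 2 stones,
# last stone wins' game learned from self-play. Illustrative analogue of
# the reward/reinforcement idea only, not AlphaGo Zero's actual method.
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(stones_left, action)] -> estimated value
ALPHA, EPSILON, N_STONES = 0.1, 0.2, 10

def choose(stones):
    """Epsilon-greedy choice: mostly exploit learned values, sometimes explore."""
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])

for episode in range(20000):
    stones, player = N_STONES, 0
    moves = {0: [], 1: []}        # record each player's (state, action) pairs
    while stones > 0:
        action = choose(stones)
        moves[player].append((stones, action))
        stones -= action
        if stones == 0:
            winner = player       # whoever takes the last stone wins
        player = 1 - player
    # Reinforce: the winner's moves are rewarded, the loser's penalized.
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state_action in moves[p]:
            Q[state_action] += ALPHA * (reward - Q[state_action])

# With 10 stones, the winning strategy is to leave a multiple of 3, so after
# training the greedy policy should prefer taking 1 stone on the first move.
print({a: round(Q[(N_STONES, a)], 2) for a in (1, 2)})
```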

Beyond being a superhuman learning machine in itself, however, AlphaGo may also be used ‘to help teach human AlphaGo players about additional, “alien” moves and stratagems that they can study to improve their own play,’ according to DeepMind’s CEO and co-founder Demis Hassabis. During testing, AlphaGo Zero was able not just to recover past human knowledge about Go, but also to produce new knowledge based on a constant process of self-play reinforcement.

The implication, in other words, is that powerful learning algorithms could be put to the task of training better humans, or even of outperforming humans to solve real-world problems.

The computing company IBM, which has also piled huge effort and resources into ‘cognitive computing’ in the shape of IBM Watson, has made similar claims in relation to the optimization of human cognition. Its own cognitive systems, it claims, are based on neuroscientific insights into the structure and functioning of the human brain—as Jessica Pykett, Selena Nemorin and I have documented.

‘It’s true that cognitive systems are machines that are inspired by the human brain,’ IBM’s senior vice-president of research and solutions has argued in a recent paper. ‘But it’s also true that these machines will inspire the human brain, increase our capacity for reason and rewire the ways in which we learn.’

DeepMind and IBM Watson are both based on scientific theories of learning—psychological behaviourism and cognitive neuroscience—which are being utilized to create ‘superhuman’ algorithmic systems of learning and knowledge creation. They translate the underlying theories of behaviourist psychology and cognitive neuroscience into code and algorithms which can be trained, reinforced and rewarded, and even become autodidactic self-reinforcing machines that can exceed human expertise.

For educators and researchers of education this should raise pressing questions. In particular, it challenges us to rethink how well we are able to comprehend processes normally considered part of our domain as they are now being refigured computationally. What does it mean to talk about theories of learning when the learning in question takes place in neural network algorithms?

‘Machine behaviourism’ of the kind developed at DeepMind may be one of today’s most significant theories of learning. But because the processes it explains occur in computers rather than in humans, education research has little to say about it or its implications.

Developments in machine learning, autodidactic algorithms and self-reinforcement processes might enlarge the scope for educational studies. Cognitive science and neuroscience already embrace computational methods to understand learning processes—in ways which sometimes appear to reduce the human mind to algorithmic processes and the brain to software. IBM’s engineers for cognitive computing in education, for example, believe their technical developments will inspire new understandings of human cognition.

A social scientific approach to these computational theories of learning will be essential, as we seek to understand better how a population of nonhuman systems is being trained to learn from experience and thereby learning to interact with human learning processes. In this sense, the models of learning that are encoded in machine learning systems may have significant social consequences. They need to be examined as closely as previous sociological studies have examined the expertise of the ‘psy-sciences’ in contemporary expressions of authority and management over human beings.

Public hypernudge pedagogy
The social implications of machine learning can be approached in two ways requiring further educational examination. The first relates to how behavioural psychology has become a source of inspiration for social media platform designers, and how social media platforms are taking on a distinctively pedagogic role.

Most modern social media platforms are based on insights from behaviour change science, or related variants of behavioural economics. They make use of extensive data about users to produce recommendations and prompts which might shape users’ subsequent experiences. Machine learning processes are utilized to mine user data for patterns of behaviours, preferences and sentiments, compare those data and results with vast databases of other users’ activities, and then filter, recommend or suggest what the user sees or experiences on the platform.
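
In skeletal form, the filtering step might look something like the following user-based collaborative filtering sketch, in which one user’s engagement history is compared with others’ and the items favoured by the most similar users are recommended. The users, items and scores are invented, and commercial platforms use far richer signals and models.

```python
# Toy user-based collaborative filtering. Users, items and scores are
# invented; real platforms use far more elaborate behavioural signals.
import numpy as np

items = ["post_a", "post_b", "post_c", "post_d"]
# Rows: users; columns: engagement scores (clicks, likes, dwell time, etc.)
engagement = np.array([
    [5, 0, 3, 0],   # user 0
    [4, 0, 4, 1],   # user 1
    [0, 5, 0, 4],   # user 2
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two engagement vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0  # recommend something for user 0
sims = np.array([cosine(engagement[target], engagement[i]) if i != target else 0.0
                 for i in range(len(engagement))])

# Weight other users' engagement by their similarity to the target user,
# then suggest the unseen item with the highest weighted score.
scores = sims @ engagement
unseen = engagement[target] == 0
best = np.argmax(np.where(unseen, scores, -np.inf))
print("recommend:", items[best])
```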

Machine learning-based data analytics processes have, of course, become controversial following  news about psychological profiling and microtargeting via social media during elections—otherwise described as ‘public opinion manipulation’ and ‘computational propaganda.’ The field of education needs to be involved in this debate because the machine learning conducted on social media performs the role of a kind of ‘public pedagogy’—that is, the lessons taught outside of formal educational institutions by popular culture, informal institutions, public spaces, dominant cultural discourses, and both the traditional and social media.

The public pedagogies of social media are significant not just because they are led by machine learning, though. They are also deeply informed by psychology, and specifically by behavioural psychology. The behavioural psy-sciences are today deeply involved in defining the nature of human behaviours through their disciplinary explanations, and in informing strategic commercial and governmental aspirations.

In Neuroliberalism, Mark Whitehead and coauthors suggest that big data software is being regarded as spelling a ‘golden age’ for behavioural science, since data will be used not just to reflect the user’s behaviour but to determine it as well. At the core of the social media and behavioural science connection are the psychological ideas that people’s attention can be ‘hooked’ through simple psychological tricks, and then that their subsequent behaviours and persistent habits can be ‘triggered’ through ‘persuasive computing’ and ‘behavioural design.’

Silicon Valley’s social media designers know how to shape behaviour through technical design since, according to Jacob Weisberg, ‘the disciplines that prepare you for such a career are software architecture, applied psychology, and behavioral economics—using what we know about human vulnerabilities in order to engineer compulsion.’ Weisberg highlights how many of Silicon Valley’s engineers are graduates of the Persuasive Technology Lab at Stanford University, which uses ‘methods from experimental psychology to demonstrate that computers can change people’s thoughts and behaviors in predictable ways.’

Behaviourist rewards—or reinforcement learning—are important in the field of persuasive computing since they compel people to keep coming back to the platform. In so doing, users generate more data about themselves, their preferences and behaviours, which can then be processed to make the platform experience more rewarding. These techniques are, in turn, interesting to behaviour change scientists and policymakers because they offer ways of triggering certain behaviours or ‘nudging’ people to make decisions within the ‘choice architecture’ offered by the environment.

Karen Yeung describes the application of psychological data about people to predict, target and change their emotions and behaviours as ‘hypernudging.’ Hypernudging techniques make use of both persuasive computing techniques of hooking users and of behavioural change science insights into how to trigger particular actions and responses.

‘These techniques are being used to shape the informational choice context in which individual decision-making occurs,’ argues Yeung, ‘with the aim of channelling attention and decision-making in directions preferred by the “choice architect”.’

Through the design of psychological nudging strategies, digital media organizations are beginning to play a powerful role in shaping and governing behaviours and sentiments.

Some Silicon Valley engineers have begun to worry about the negative psychological and neurological consequences of social media’s ‘psychological tricks’ on people’s attention and cognition. Silicon Valley has become a ‘global behaviour-modification empire,’ claims Jaron Lanier. Likewise, AI critics are concerned that increasingly sophisticated algorithms will nudge and cajole people to act in ways which have been deemed most appropriate—or optimally rewarding—by their underlying algorithms, with significant potential social implications.

Underpinning all of this is a particular behaviourist view of learning which holds that people’s behaviours can be manipulated and conditioned through the design of digital architectures. Audrey Watters has suggested that behaviourism is already re-emerging in the field of ed-tech, through apps and platforms that emphasize ‘continuous automatic reinforcement’ of ‘correct behaviours’ as defined by software engineers. In both the public pedagogies of social media and the pedagogies of the tech-enhanced classroom, a digital re-boot of behaviourist learning theory is being put into practice.

Behavioural nudging through algorithmic machine learning is now becoming integral to the public hypernudge pedagogies of social media. It is part of the instructional architecture of the digital environment that people inhabit in their everyday lives, constantly seeking to hook, trigger and nudge people towards particular persistent routines and to condition ‘correct’ behavioural habits that have been defined by platform designers as preferable in some way. Educational research should engage closely with the public hypernudge pedagogies that occur when the behavioural sciences combine with the behaviourism of algorithmic machine learning, and look more closely at the underlying behavioural science theories of learning on which they are based and the behaviours they are designed to condition.

Big Dewey
The second major set of implications of machine learning relates to the uptake of data-driven technologies within education specifically. Although the concept of ‘personalized learning’ has many different faces, its dominant contemporary framing is through the logic of big data analytics. Personalized learning has become a powerful idea for the ed-tech sector, which is increasingly influential in envisioning large-scale educational reform through its adaptive platforms.

Personalized learning platforms usually consist of some combination of data-mining, learning analytics, and adaptive software. Student data are collected by such systems, then compared with an ideal model of student performance, in order to generate predictions of likely future progress and outcomes, or adapt responsively to meet individual students’ needs as deemed appropriate by the analysis.
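
A minimal sketch of that loop, with invented task names, thresholds and scores: predict a student’s mastery from recent performance and adapt the next recommended task accordingly. This is only an illustration of the general pattern, not any particular vendor’s system.

```python
# Skeletal 'personalized learning' loop: compare recent performance to a
# mastery threshold and adapt the next task. Thresholds, task names and
# student data are invented for illustration.
MASTERY_THRESHOLD = 0.8
TASKS = {"remediate": "revisit fractions basics",
         "practise": "mixed fractions practice set",
         "extend": "fractions challenge problems"}

def predict_mastery(recent_scores):
    """Naive prediction: weight more recent attempts more heavily."""
    weights = range(1, len(recent_scores) + 1)
    return sum(w * s for w, s in zip(weights, recent_scores)) / sum(weights)

def next_task(recent_scores):
    """Adapt the next task to the predicted level of mastery."""
    p = predict_mastery(recent_scores)
    if p < 0.5:
        return TASKS["remediate"], p
    if p < MASTERY_THRESHOLD:
        return TASKS["practise"], p
    return TASKS["extend"], p

students = {"student_1": [0.4, 0.5, 0.45], "student_2": [0.7, 0.85, 0.9]}
for name, scores in students.items():
    task, p = next_task(scores)
    print(f"{name}: predicted mastery {p:.2f} -> {task}")
```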

In short, personalized learning depends on autodidactic machine learning algorithms being put to work to mine, extract and process student data in an automated fashion.

The discourse surrounding personalized learning frames it as a new mode of ‘progressive’ education, with conscious echoes of John Dewey’s student-centred pedagogies and associated models of project-based, experiential and inquiry-based learning. Dewey’s work has proven to be one of the most influential and durable philosophical theories in education, often used in conjunction with more overtly psychological accounts of the role that experience plays in learning.

Because it combines big data analytics and machine learning with pedagogic progressivism, we could call the learning theory behind personalization ‘Big Dewey.’

Mark Zuckerberg’s philanthropic Chan-Zuckerberg Initiative is typical of the application of Big Dewey to education. CZI aims ‘to support the development and broad adoption of powerful personalized learning solutions. … Many philanthropic organizations give away money, but the Chan Zuckerberg Initiative is uniquely positioned to design, build and scale software systems … to help teachers bring personalized learning tools into hundreds of schools.’

To test out this model of learning in practice, new startup ‘lab schools’ have been established by Silicon Valley entrepreneurs. Many act as experimental beta-testing sites for personalized learning platforms–using students as guinea pigs–that might then be sold to other schools. As Benjamin Doxtdator has documented, these new lab school models of ‘hyperpersonalization’ utilize digital data technologies to ‘extract’ the ‘mental work’ of students from the learning environment in order to tune and optimize their platforms prior to marketing to other institutions.

Larry Cuban, however, has detailed the variety of ways that personalized learning has been taken up in schools in Silicon Valley, and himself sees strong traces of progressivism in their practices.

However, Cuban also notes that many employ methods more similar to the kind of ‘administrative progressivism’ associated with the psychologist EL Thorndike than Dewey. Thorndike was interested in identifying the ‘laws of learning’ through statistical analysis, which might then be used to inform the design of interventions to improve ‘human resources.’ Measurement of learning could thereby contribute to the optimization of ‘industrial management’ techniques both within the school and the workplace. Administrative progressivism was concerned with measurement, standardization and scientific management of schools rather than the student-centred pedagogies of Dewey.

‘What exists now is a re-emergence of the efficiency-minded “administrative progressives” from a century ago,’ argues Cuban, ‘who now, as entrepreneurs and practical reformers want public schools to be more market-like where supply and demand reign, and more realistic in preparing students for a competitive job market.’

With machine learning as its basis, personalization is a twenty-first century algorithmic spin on administrative progressivism. The ‘laws of learning’ are becoming visible to those organizations with the technical capacity to mine and analyse student data, who can then use this knowledge to derive new theoretical explanations of learning processes and produce personalized learning software solutions. As an emerging form of algorithmic progressivism, personalization combines the appeal of Dewey with the scientific promise of big data and autodidactic machine learning.

Ultimately, with the Big Dewey model, the logics of machine learning are being applied to the personalization of the learning experiences to be had by human learners. With this new model of education being supported with massive financial power and influence by Bill Gates, Mark Zuckerberg, and other edtech entrepreneurs, philanthropists and investors, Big Dewey is being forwarded as the philosophy and learning theory for the technological reform of education.

Machine learning escapes the lab
The machine behaviourism of autodidactic algorithm systems, public hypernudge pedagogies and personalized learning have become three of the most significant educational developments of recent years. All are challenging to educational research in related ways.

Machine behaviourism requires educational researchers to move their focus on to the kinds of reinforcement learning that occur in automated nonhuman systems, and on to how computational systems are being taught and trained by programmers, algorithm designers and engineers to learn from experience in an increasingly autodidactic way.

It’s not a sufficient response to claim that companies like DeepMind and IBM take a reductionist view of what learning is—DeepMind’s Nature paper reveals an incredibly sophisticated learning model as it pertains to neural network software, while IBM has built its cognitive systems on the basis of established neuroscience knowledge about the human brain.

These systems can learn, but theirs are not the forms of learning familiar to most education researchers. As technical innovation proceeds, more and more learning is going to be happening inside computers. Just as educators hope to cultivate young minds to become lifelong independent learners, the tech sector is super-powering learning processes to create increasingly automated nonhuman machine learning agents to share the world with humans. Why should educational researchers not seek to develop their expertise in understanding nonhuman machine learning?

Theories of nonhuman learning are also becoming increasingly influential since machine learning processes underpin both the public hypernudge pedagogies of social media and personalized learning platforms I’ve outlined. The new behaviourist public hypernudge pedagogies, inspired both by behavioural science and behaviour design, are occurring at great scale among different publics, often according to political and commercial objectives, yet education research is oddly silent in this area.

While much has been written about big data and personalization, we’ve also still to fully explore how the tech sector philosophy of Big Dewey might affect and influence schools, teachers and students as adaptive learning platforms escape from the beta-testing lab and begin to colonize state education. Future studies of personalized learning could examine the forms of autodidactic machine learning occurring in the computer as well as the educational effects and outcomes produced in the classroom.

Image by Terry Kimura

Fast psycho-policy & the datafication of social-emotional learning

Ben Williamson


[Paper prepared for the Annual Ethnography Symposium, University of Manchester, 30 August-1 September 2017, with the full title ‘The infrastructure of fast psycho-policy: psychological governance & the datafication of social-emotional learning’]

Mojo is a small, green alien student with the appearance of an extra from the animated movie Monsters University. Many researchers of education and technology may not know Mojo, but over 3 million teachers and 35 million students do, because Mojo is the cute brand mascot of the successful educational technology application ClassDojo used in primary schools worldwide to promote students’ ‘character’ development and ‘social-emotional learning’. Mojo is also, though, the friendly, visible face of an emerging infrastructure of interlocking technologies, organizations and policy discourses focused on the application of psychological expertise and techniques to measure and manage students’ behaviours and feelings in the classroom.

Launched with Silicon Valley venture capital support in 2011, ClassDojo started life as a simple behaviour management app available for free download to teachers. Designed for use on smartphones so it can be used in real-time in the classroom, ClassDojo encourages teachers to award ‘positive points’ for specific observable behaviours, gathers these points as data about student behaviour, and then allows teachers and school leaders to identify behavioural trends using its TrendSpotter visualization tool and automated report generator. Visualizations and reports are also available to parents. Its website claims ClassDojo builds ‘happier classrooms.’
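
In data terms the core mechanic is simple, as the sketch below suggests: log timestamped behaviour points per student, then aggregate them into student report cards and class-level trends. The categories and records are invented, and this is not ClassDojo’s own code.

```python
# Toy behaviour-points log with report-card and weekly-trend aggregation.
# Categories and records are invented; not ClassDojo's implementation.
from collections import defaultdict
from datetime import date

# Each record: (date, student, behaviour category, points awarded)
log = [
    (date(2017, 9, 4), "amy", "on task", 1),
    (date(2017, 9, 4), "ben", "disruptive", -1),
    (date(2017, 9, 11), "amy", "persistence", 1),
    (date(2017, 9, 11), "ben", "on task", 1),
]

def weekly_trend(records):
    """Aggregate all points by ISO week, for a class- or school-level trend."""
    totals = defaultdict(int)
    for day, _student, _category, points in records:
        week = day.isocalendar()[1]
        totals[week] += points
    return dict(sorted(totals.items()))

def student_report(records, student):
    """Sum one student's points per behaviour category, like a report card."""
    per_category = defaultdict(int)
    for _day, who, category, points in records:
        if who == student:
            per_category[category] += points
    return dict(per_category)

print("class trend by week:", weekly_trend(log))
print("report card for amy:", student_report(log, "amy"))
```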

However, in the last two years ClassDojo has extended its functionality to become a social media platform for schools, with real-time messaging, photo and video communication between schools and home, user-generated content, online video content hosting, and ‘school-wide’ functionality. Its founders and funders have likened it to Netflix, Spotify, LinkedIn and Facebook, and have claimed it can replace cumbersome school websites, group email threads, newsletters and paper flyers. It also has an online ClassDojo store targeted at teachers where they can purchase ClassDojo posters, resources and clothing.

In this paper I examine how ClassDojo has evolved into an educational social media platform and a key sociotechnical actor in the diffusion and enactment of a policy discourse of ‘social-emotional learning’ (SEL) worldwide. ClassDojo is just one app in a fast-growing industry of software tools designed to shape students’ social-emotional learning in the classroom–an industry which enjoys significant support from political centres of authority, international policy influencers, think tanks and philanthropic foundations. The paper draws on material published in two articles (‘Decoding ClassDojo’ and ‘Learning in the platform society’) where you can find full references to sources cited below.

Psycho-policy platforms
The evolution of ClassDojo needs to be understood as exemplifying within the educational field a trend that Jose van Dijck and Thomas Poell have described as the penetration of social media platforms into all kinds of everyday interactions, institutional practices and professional routines. Platforms are, argues Tarleton Gillespie, digital intermediaries that allow users to interact, host and share content, and buy or sell. But, he adds, platforms are also ‘curators of public discourse’—the result of choices about what can appear, how it is organized and monetized, and what its technical architecture permits or forbids. As such, technical, social and economic concerns determine platforms’ structure, function and use, note Jean-Christophe Plantin and coauthors. Moreover, they note, many social media platforms are now undergoing ‘infrastructuralization’ as ‘media environments essential to our daily lives (infrastructures) are dominated by corporate entities (platforms).’

In other words, platform operators are not mere ‘owners of information’ but ‘becoming owners of the infrastructures of society,’ as Nick Srnicek argues. In his view, platforms are characterized by acting as intermediaries to enable interaction between customers, advertisers, service providers, producers, suppliers, and even physical objects; thriving on network effects, whereby they accumulate users from whom they can gather data and generate value; offering free products and services; and deploying a strategy of constant user engagement through attractive presentations of themselves and their offerings. As emerging infrastructures, these platforms are increasingly acting as substrates to society.

As an educational intermediary, a curator of educational discourse, and a provider of free services that deploys strategies of user engagement to gain users through network effects, ClassDojo needs therefore to be studied and understood as the assembled product of a complex web of people and organizations that designed and maintain it; technical components; business plans; expert discourses; and the technical, social and economic concerns that frame them. By disassembling it into its component parts and examining it as contextually framed and produced, it becomes possible to see how it has evolved from a classroom app to a platform for schools to part of an infrastructure for social-emotional education across public education.

My methodological strategy is to approach ClassDojo as assembling and evolving in the context of a shift to ‘fast policy’ processes in education. By ‘fast policy’ I’m drawing on Jamie Peck and Nik Theodore’s argument that while contemporary policymaking may still be primarily government-centred, it also involves ‘sources, channels, & sites of policy advice’ that ‘encompass sprawling networks of human & nonhuman actors’. This means digital technologies, infrastructures, platforms, websites, social media activities, database devices and so on can all be considered as policy actors which may be followed as they are assembled, evolve, mutate and ‘become real’. Methodologically, an attention to fast policy processes demands ‘network ethnography’ approaches which seek to ‘follow policies’ as they are developed and realized. In my research, I am specifically following ClassDojo as a fast policy actor, both seeking to ‘disassemble’ it into the various parts it has been assembled from, and ‘reassembling’ the wider infrastructure of people, technologies and policy discourses that are seeking to ‘make real’ and enact the ‘social-emotional learning’ agenda in schools.

The term social-emotional learning (SEL) encompasses concepts such as character education, growth mindset, grit and perseverance, and other so-called ‘non-cognitive’ or ‘non-academic’ ‘personal qualities’ and competences. In the last couple of years, social-emotional learning has emerged as a key policy priority from the work of international policy influencers such as the OECD and World Economic Forum; psychological entrepreneurs such as Angela Duckworth’s ‘Character Lab’ and Carol Dweck’s ‘growth mindset’ work; venture capital-backed philanthropic advocates (e.g. Edutopia); powerful lobbying coalitions (CASEL) and institutions (Aspen Institute); and government agencies and partners, especially in the US (for example, the US Department of Education ‘grit’ report of 2013) and UK (in 2014 an all-party parliamentary committee produced a ‘Character and Resilience Manifesto’ in partnership with the Centre Forum think tank, with the Department for Education following up with funding for schools to develop character education programs).

In sum, social-emotional learning is the product of a fast policy network of ‘psy’ entrepreneurs, global policy advice, media advocacy, philanthropy, think tanks, tech R&D and venture capital investment. Together, this loose alliance of actors has produced shared vocabularies, aspirations, and practical techniques of measurement of the ‘behavioural indicators’ of classroom conduct that correlate to psychologically-defined categories of character, mindset, grit, and other personal qualities of social-emotional learning. As Agnieszka Bates has argued, psychological advocates of SEL have conceptualized character as malleable and measurable, and defined the character skills that are most valuable to the labour market. As such, she describes SEL as a psycho-economic fusion of economic goals and psychological discourse in a corporatized education system. Specific algorithms and metrics have already been devised by prominent psycho-economic centres of expertise to measure the economic value of social-emotional learning.

Moreover, as Emily Talmage has identified, social-emotional learning is being advocated by some of the same organizations that promote social impact bonds, or ‘pay for success’ schemes whereby investors provide capital to start a new program and receive repayment with interest if it meets agreed metrics of success. In other words, says Talmage, ‘investors are using kids’ psychological profiles to gamble on the results of social programs, while using technology to generate a compliant, productive workforce.’

In these ways, social-emotional learning exemplifies the emergence of what has been termed psycho-policy and psychological governance in relation to public policy more widely—that is, the application of psychological expertise, interventions and explanations to public policy problems, specifically the application of practical techniques and ‘know-how’ for quantifying and then ‘nudging’ individuals to perform the ‘correct’ behaviours and affects. If character is malleable, it can be moulded and made to fit political and economic models.

ClassDojo as a psycho-policy platform
So, ClassDojo can be viewed as a platform diffuser of SEL psycho-policy and practice. The rest of this paper examines how it is being assembled to perform this task.

Shaping shared vocabularies
ClassDojo’s popular and publicly charismatic founders have become spokespeople for social-emotional learning. They are regularly interviewed in the education technology and business media, and use these interviews as venues for diffusing social-emotional learning discourses. They name-check psychological entrepreneurs such as Angela Duckworth and Carol Dweck to relay into classroom practices their psychological theories for classifying and measuring the correct behaviours of students. In a sense, they are governing educational language at a distance, working by making governmental and commercial aspirations around SEL into the shared concerns and aspirations of classroom practitioners and school leaders, and thereby penetrating into institutional practices and professional routines.

Rewarding character
ClassDojo’s most well-known feature is its rewards app, which allows teachers to award ‘feedback points’ to individuals or groups in real-time during classroom activities. This allows teachers to produce report cards on each student and whole classes, and also allows school leaders to take a view of behavioural trends across the whole school.

At the core of its rewards system is the psychological assumption that observable behavioural indicators transmitted from the embodied conduct of students in classrooms can be correlated with character skills and other aspects of SEL. By rewarding students who perform the correct behavioural indicators of SEL and character, ClassDojo is also designed to actively promote specific kinds of preferred behaviours. As one of ClassDojo’s founders has noted, it collects ‘millions of behaviour observations every day’ to enable ‘real-time information from the classroom,’ while one of its research partners says, ‘We want teachers to think about the kind of norms they want to set in the classroom, so growth mindset is integrated in it.’

Partner networks
As ClassDojo has sought income, it has successfully won venture capital funding to support new features and platform development. In 2016 it raised $21 million to develop as a platform for school communication and the distribution of in-house educational video content.

In particular, it has developed a number of ‘Big Ideas’ serials of animated videos as classroom resources for teachers to use to teach children the language of social-emotional learning, including video series on growth mindset, perseverance or ‘grit’ and mindfulness. These have been created through partnerships with major psychological centres of expertise. Carol Dweck’s mindset centre, PERTS at Stanford, was ClassDojo’s first academic partner on its mindset series, followed by Harvard’s Making Caring Common for the empathy series, and most recently Yale’s Center for Emotional Intelligence co-produced the mindfulness series.

As ClassDojo’s head product designer has claimed, through these videos, the ClassDojo ‘characters model for the kids the behaviour you are trying to instil.’ So what we see is how classroom norms of behaviour are being defined, via ClassDojo, through suturing together venture capital aspirations and psychological entrepreneurship. And it is doing so directly through reaching out to teachers, currently for free, to distribute and instil in students the ‘model’ behaviours defined by psy experts of SEL and character.

Network effects
Significantly, if we think of ClassDojo as a technology of fast psycho-policy, it has enjoyed spectacularly accelerated success in finding its way into pedagogic practice. One of its founders has claimed that ‘watching the graph of the user numbers has been incredible’ as ‘millions of teachers and students’ have begun ‘using this every day.’ Its growth has been fuelled by a highly effective word-of-mouth marketing campaign on Facebook, Twitter and Instagram, which has allowed it to grow via network effects ‘faster than any other ed tech company.’

These network effects also exert material effects. Its product designer has claimed that ‘We look for an idea that can be powerful and high-impact and is working in pockets, and work to bring it to scale more quickly … incorporated into the habits of classrooms.’ We can understand this in two ways—‘working in pockets’ refers to taking a small-scale idea up to scale through network effects. But ‘working in pockets’ also well describes ClassDojo’s strategy—its platform sits on the smartphones of millions of teachers, in their pockets and never far from their eyes and hands, where it might be incorporated into the habits of classrooms. ClassDojo is in this sense a kind of pocket instructor that enacts psychological expertise through teachers’ own fingertips.

ClassDojo is, in other words, ambitiously attempting to ‘shift what happens inside and around classrooms’ as a way of changing ‘education at huge scale’ as its chief executive has claimed. However, these network effects are also generating value for the ClassDojo company and its investors as it amasses a huge global user base.

Monetizing behaviour
ClassDojo is also seeking to monetize student behaviour data. Although it has received over $30 million in venture capital investment, it has to date generated no revenue whatsoever, and is fairly opaque about its monetization plans. One of its investors has said that ‘This company has a greater market share than Coke in the U.S. Let’s get all the stakeholders on the platform … and scale before we think about monetization.’

However, its founders have given some clues. They have spoken of ClassDojo as a ‘huge distribution platform’ to parents, who might be willing to pay for additional content—such as additional Big Ideas videos—to take the ClassDojo platform and its preferred habits of character into the family home. Its ClassDojo store is already active for the sale of merchandise.

But also, its founders have noted ‘There’s a macro-trend happening where schools want to collect more data about behaviour.’ In fact, under the Every Student Succeeds Act, the new federal law governing US schooling, states are now required to record at least one measure of ‘non-academic learning.’ So when ClassDojo’s founders have suggested that they will ‘build new, premium features that parents or school districts may be interested in paying for,’ it seems likely they are referring to the production of detailed behavioural reports of the kind that might support schools in their delivery of data recording their progress in supporting students’ non-academic learning targets.

Infrastructuralizing
Not only has ClassDojo extended through network effects to huge numbers of users. It has also developed ‘school-wide’ functionality to enable entire institutions to be signed in to the system in order to orchestrate institutional communication, data-sharing between classes, and establish ‘school values’ consistent with SEL across the school. Through its ongoing function creep, it is becoming an integral and embedded sociotechnical substrate to schooling practices. A teacher in a ClassDojo press release stated, ‘We can now create a school community that includes all of us: teachers, parents, students, principals, vice principals and other school staff. None of us can imagine teaching without it!’

Furthermore, one of its founders has said: ‘Looking back in 5-10 years, I hope to see that this other half of education—going beyond test scores to focus on building students as people—has become really important and that we helped to make that happen by connecting teachers, parents and students.’ This aspiration registers the emergence of a global effort to develop student character and SEL rather than to reduce them to test scores, which ClassDojo is seeking to support through mobilizing itself as an infrastructural underlay to connect teachers, parents and students around shared psychological vocabularies, normative values and aims. In other words, ClassDojo is becoming more infrastructural for schools, but it is also nested in a global infrastructure of educational measurement.

From test-based infrastructure to infrastructuralized psycho-policy platforms
In recent years, schools have been locked-in to data infrastructures of test-based performance measurement. As Dorothea Anagnostopoulos and colleagues have argued, the existing test-based data infrastructure is an assemblage of people, technologies and policies that stretches across and beyond formal education systems. It has produced ‘objective measures’ of students’, teachers’ and schools’ performance based on test results data and thereby defined ‘who counts’ as ‘good’ teachers, students and schools.

However, with the emergence of new kinds of technical platforms, such as ClassDojo, that emphasize SEL and character education, the data infrastructure of test-based performance measurement may be evolving. Jean-Christophe Plantin and coauthors have argued that ‘The rise of ubiquitous, networked computing’ twinned with ‘changing political sentiment have created an environment in which platforms can achieve enormous scales, co-exist with infrastructures, and in some cases compete with or even supplant them. … Rapidly “infrastructuralized platforms” have arisen in the digital age.’

ClassDojo is evidence of how a platform that is now integrated into and integral to many schools and classrooms worldwide is co-existing alongside, and potentially even competing with or threatening to supplant, the existing data infrastructure of test-based performance metrics. The ClassDojo platform operators are mobilizing networked computing to curate and diffuse the psy vocabulary of SEL and character into public education, reflecting changing political sentiment which has begun to focus on alleviating the student anxiety and high-stakes testing that result from test-based performance measurement. In so doing, they are continually assembling and engaging users through attractive presentations and new features, generating network effects of valuable users all the time. The result is that ClassDojo has become an ‘infrastructuralizing platform’ for the measurement of behavioural indicators of social-emotional skills—and for nudging and compelling students to perform the ‘correct’ behavioural indicators that correlate with the affects of ‘good students’ in ‘happier classrooms.’

Conclusion
In conclusion, we can see how ClassDojo is participating as an actor in current fast psycho-policy development and enactment. It is curating and diffusing SEL discourses into practice through teachers’ pockets and fingertips. In so doing, ClassDojo treats students as embodied behavioural indicators whose affects are rendered traceable through psychological categories of character, mindset and grit; it treats teachers as data entry clerks responsible for amassing ClassDojo’s global database, attracting their own social networks as new users, and as consumers at the online store; treats school leaders as data demanders, who require staff to enter the feedback points in order to generate school-wide behavioural trend insights; and treats parents as data consumers, who receive the data visualizations and report cards. ClassDojo also treats classrooms as little data markets where psycho-economically defined ‘valuable’ character skills and the performance of ‘correct’ behavioural indicators can be incentivized, nudged and exchanged for rewards. All the while, ClassDojo is thriving on the network effects of these activities to generate value for the company and its investors—driving up its user graph, its reach, and the value of its global datasets on student behaviour.

Finally, ClassDojo is nested in an emerging global infrastructure of measurement and intervention in social-emotional education. In its report on ‘The Power of Social and Emotional Skills,’ the OECD has claimed that ‘While everyone acknowledges the importance of social and emotional skills, there is insufficient awareness of “what works” to enhance these skills and efforts to measure and foster them.’ ClassDojo is currently positioning itself as a fast psycho-policy exemplar of ‘what works’ in social-emotional learning practice. In contrast to the existing infrastructure of test-based performance measurement, ClassDojo is a platform for translating psychological theories into the habits of classrooms, teachers and students, which is also nested in an expanding global infrastructure dedicated to the measurement and management of  the social and emotional lives of young people. This global policy infrastructure stretches across and beyond the borders of state education systems, and includes international policy influencers, think tanks, independent institutions, venture capital investors, software startups, and even impact investment market experts. In these ways, if policy trends shift toward the performance measurement of schools, teachers and students based on data recorded about the behavioural indicators of social-emotional learning, then ClassDojo will itself become integrated into existing metric practices of school evaluation, judgment and ranking.

Image credit: ClassDojo resources

Coding for what? Lessons from computing in the curriculum

Ben Williamson

Image by Christiaan Colen

[This is a talk prepared for the Pop Up Digital conference, Gothenburg, Sweden, 19 June 2017]

There was a key moment in the popular American drama Homeland this year when a group of talented young computer programmers finally launched their new software system. You could see their arms in the air, and hear their cheers, before their boss said, ‘Now it’s time to get to work.’

But what have they created in this darkened software bunker? What work are they about to put their coding skills to? This is what Homeland’s creators call a ‘propaganda boiler room.’ Driven by extreme political convictions, they’ve created thousands of fake social media accounts to spread disinformation into the news feeds of millions of users.

As all of you will have heard recently, the role of the web and social media in political life has now become a major global concern—not just the plot of TV drama. We’re hearing more and more in the news about fake news, hacking, cyberattacks, political bots and weaponized computational propaganda.

And from critical technology thinkers, too, we’re hearing that ‘software is taking command,’ that automation and ‘algorithms rule the world,’ and that you can either ‘program or be programmed.’ Digital technologies, we now know, aren’t just neutral tools—but powerful devices for shaping our actions, influencing our feelings, changing our minds, filtering the information we receive, automating our jobs, recommending products and media to consume, manipulating our political convictions—even for ‘personalizing’ what and how we learn.

But as Homeland dramatizes, if software is becoming more powerful in our everyday lives, then we also need to acknowledge there are people behind it—programmers who have learned to code to make the technologies we live with.

Within our own field, education and teaching, some have begun to suggest that we need to equip children with the tools and skills to take an active part in this increasingly software-supported and automated world. Recently, for example, the Financial Times magazine ran a piece on ‘Silicon Valley’s classrooms of the future.’

‘Having disrupted the world,’ it claimed, ‘the tech community now wants to prepare children for their new place in it. Leading venture capitalist Marc Andreessen predicts a future with two types of job: people who tell computers what to do, and people who are told what to do by computers. Silicon Valley wants to equip young people to rule the machines.’

As a result, Silicon Valley companies are now investing billions of dollars to re-engineer public education to achieve that aim.

One such effort, according to the New York Times, is the learning to code organization Code.org, ‘a major nonprofit group financed with more than $60 million from Silicon Valley luminaries and their companies, which has the stated goal of getting every public school in the United States to teach computer science. Its argument is twofold: Students would benefit from these classes, and companies need more programmers.’

But it’s not just in Silicon Valley that this enthusiasm for teaching children to ‘rule the machines’ has taken hold. Across the world, children are being told they must ‘learn to code’ to become ‘digital makers.’

In the UK, learning to code and computer science are now part of the formal curriculum for schools in England, Wales and Scotland alike. Over the last couple of years, I’ve been studying the documents produced to promote learning to code, following how coding and computing have been embedded in the curriculum, and recently interviewing relevant policy influencers involved in the new computing curriculum in England.

Sweden is now embarking on a shift to embed coding, computing and digital competence in its schools—so what can we learn from how things have worked out in England? In our recent interviews, we’ve been trying to work out why various influencers want computer programming in schools—what are the purposes of learning to code in the curriculum? In other words, ‘coding for what?’

Now, we need to go back in time a little here, back to 2011, and to Edinburgh. Here, at the Edinburgh Television Festival, was Eric Schmidt, then chief executive of Google, giving the keynote address to an audience of media, industry and policy leaders. After talking about disrupting TV broadcasting through media streaming, Schmidt suddenly turned his attention to attacking the British education system.

‘In the 1980s the BBC not only broadcast programming for kids about coding, but (in partnership with Acorn) shipped over a million BBC Micro computers into schools and homes,’ he said. ‘That was a fabulous initiative, but it’s long gone. I was flabbergasted to learn that today computer science isn’t even taught as standard in UK schools. Your IT curriculum focuses on teaching how to use software, but gives no insight into how it’s made. That is just throwing away your great computing heritage.’

The talk tapped into a growing concern in the UK at the time that teaching children how to use Microsoft Office applications was inadequate preparation for living and working with more complex computer systems.

In fact, within six months of Schmidt’s speech, the Secretary of State for education in England at the time, Michael Gove, announced a complete reform of IT education during his own speech at a 2012 ed-tech trade show for IT teachers.

‘I am announcing today that the Department for Education is … withdrawing the existing National Curriculum Programme of Study for ICT from September this year,’ he announced. ‘The traditional approach would have been to keep the Programme of Study in place for the next 4 years, while we assembled a panel of experts, wrote a new ICT curriculum….  We will not be doing that. Technology in schools will no longer be micromanaged by Whitehall.’

So what happened?

Well, despite Gove’s argument about not micromanaging the new curriculum, by September 2013, just 20 months later, entirely new programmes of study for computing in the National Curriculum appeared, applying at all stages of compulsory schooling in England.

I’m going to fill in the gaps in this story in a minute, but if we briefly come back to the present, we find Google now much more positive about British education.

This is Google’s proposed new London headquarters, the enormous ‘landscraper’ building it plans to build next to King’s Cross railway station. On its announcement last November, new Google chief executive Sundar Pichai said:

‘Here in the UK, it’s clear to me that computer science has a great future with the talent, educational institutions, and passion for innovation we see all around us. We are committed to the UK and excited to continue our investment in our new King’s Cross campus.’

So in 5 years, Google has reversed its opinion of computing in the UK, and even of its educational institutions.

I think you’ll have detected the theme I’m developing here. Programming and computing in education is a shared agenda of major global commercial firms and national government departments and policymakers. One of the interviewees we spoke to about the new curriculum said, ‘Would you have got the attention of Michael Gove without Google or Microsoft government relations? I don’t think you would. You wouldn’t reach that level of policymaking.’

But actually it’s not as straightforward as business driving policy. What happened in England with computing in the curriculum was the result of a much messier mix of ambitions and activities including government, businesses, professional societies, venture capitalists, think tanks, charities, non-profit organizations, the media and campaigning groups. As another of our interviewees said, from the outside the new curriculum looked ‘sudden and organized’ but was actually a more ‘anarchic’ mess of ‘passions’ and ‘reasons’.

So, for example, the year before Eric Schmidt’s Edinburgh speech, the campaigning organization Computing at School had already produced a ‘white paper’ detailing a new approach to computing teaching. Computing at School is a teacher members’ organization, originally set up by Microsoft and chaired by a senior Microsoft executive.

Its 2010 white paper focused on ‘how computers work,’ the knowledge and skills of programming, and ‘computational thinking’—that is, it said, a ‘philosophy that underpins computing’ and a distinctive way to ‘tackle problems, to break them down into solvable chunks and to devise algorithms to solve them’ in a way that a computer can understand. The Computing at School white paper, and the outline computing curriculum it contained, was then put forward after Michael Gove’s speech as a suggested blueprint for the national curriculum.
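To make that idea concrete, here is a minimal sketch of what ‘breaking a problem into solvable chunks and devising algorithms to solve them’ can look like in practice. The task (finding the most frequent words in a piece of text) and the code are my own illustration, not an example drawn from the Computing at School white paper or the national curriculum.

```python
# A hypothetical illustration of 'computational thinking': break a problem
# (finding the most common words in a text) into small, solvable chunks,
# then compose them into an algorithm a computer can follow.

def clean(text):
    """Chunk 1: reduce the text to lowercase words with punctuation stripped."""
    return [word.strip(".,!?;:").lower() for word in text.split()]

def count(words):
    """Chunk 2: tally how often each word appears."""
    tally = {}
    for word in words:
        tally[word] = tally.get(word, 0) + 1
    return tally

def most_common(tally, n=3):
    """Chunk 3: sort the tally and keep the n most frequent words."""
    return sorted(tally.items(), key=lambda item: item[1], reverse=True)[:n]

if __name__ == "__main__":
    sample = "The cat sat on the mat. The cat slept."
    print(most_common(count(clean(sample))))
    # prints [('the', 3), ('cat', 2), ('sat', 1)]
```

Each function handles one chunk of the problem, and the final line composes them into a step-by-step procedure: decomposition and algorithm design of the kind the white paper describes.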

Computing at School, we were told, was concerned that Gove’s decision not to ‘micro-manage’ the new subject would lead to an ‘implementation vacuum,’ and worked hard to lobby for its own vision. As we were told in an interview we conducted with Computing at School:

‘The Department for Education held consultation meetings for the ICT curriculum, I went to one. Afterwards I stayed behind to talk to the civil servant involved and told him about computer science as a school subject. I was able to put [our] curriculum on the table … complete revelation … led to relationship with the DfE, they went from thinking of us as a weird guerrilla group with special interests to a group they could consult with about computing.’

In fact, it was the Computing at School chairperson who was then appointed by the Department for Education to oversee the development of the new curriculum, and who led a three-month process of stakeholder consultation and drafting of the new curriculum in autumn 2012.

One of the other key groups influencing computing in schools was Nesta—which is a bit like a think tank for innovation in public services. In 2011 Nesta oversaw a review of the skills requirements for the videogames and visual effects industries in the UK. The review was led by the digital entrepreneur Ian Livingstone, the chair of Eidos Interactive games company, and then the government’s ‘Skills Champion.’

Livingstone actually called his Nesta report, Next Gen, a ‘complete bottom up review of the whole education system relating to games.’ Nesta also produced a report on the legacy of the BBC Micro that Eric Schmidt had credited as a ‘fabulous initiative’ to get kids coding in the 80s. Nesta has continued to produce reports along similar lines, including one on getting more ‘digital making’ into schools, and another on the role of computer science education to build skills for the data analytics industry and the data-driven economy.

Soon after the Next Gen report was released, Livingstone and Nesta formed a pressure group, the Next Gen Skills campaign, which lobbied government hard to get programming and computer science in the curriculum. The campaign was supported by Google, Facebook, Nintendo, Microsoft, and was led by the interactive games and entertainment trade body UKIE.

Videogames, visual effects, data analytics and the creative digital economy are the real drivers for computing in the curriculum here—which Nesta claims has ‘influenced policymakers, rallied industry and galvanised educators to improve computer science teaching.’

Ian Livingstone, meanwhile, is establishing his own Academy Schools. Like the Swedish free schools approach, the Livingstone Academies will be privately run but publicly funded, and have significant discretion over curriculum.

‘It is the combination of computer programming skills and creativity by which today’s world-changing companies are built,’ Livingstone said when announcing the Livingstone Academies in a Department for Education press release. ‘I encourage other digital entrepreneurs to seize the opportunity offered by the free schools programme in helping to give children an authentic education for the jobs and opportunities of the digital world.’

The Livingstone Academies are basically government-approved models of the Next Gen vision. They’ll have industry partnerships, design studios, and even on-site startup business hubs which, it is claimed, will ‘provide wider opportunities for future careers for a new generation of successful and confident citizens who will contribute to local, national and international economic success.’

So, Nesta and Livingstone have highlighted the powerful role of digital entrepreneurs and the language of the digital economy in securing government approval for computing in schools. As you can see, their emphasis is very firmly on programming and software engineering, rather than the more abstract study of the mathematics and algorithms that are the focus of the discipline of Computer Science.

Although programming and Computer Science are of course related, many critics have pointed out that most new computing courses and curricula are more closely connected with software development. We asked people about this in our interviews, and were told by several people, including those at Computing at School and Nesta, that it was in everyone’s best interests to allow terms like Computer Science, coding, programming, computational thinking, digital skills and even digital literacy to be treated as the same thing.

Several people we interviewed were especially critical of the Shut Down or Restart report produced by the Royal Society in 2012. Its emphasis was on disciplinary computer science, and its recommendations reflected the views of major computer science academics and associations.

The Royal Society report was published just a few days after Michael Gove’s speech—in fact, he said he was looking forward to reading it. And you can see the influence of the Royal Society in the strong emphasis on the idea of computing as the ‘fourth science’ in the English computing curriculum. This goes well with the current emphasis in English education on established subject knowledge—though the fact the Department for Education authorized the Livingstone Academies indicates how government sees computing as a hard science and an economic catalyst at the same time.

In fact, we were told by several interviewees that a major issue in the development of the computing curriculum was that the government ministers and special advisers responsible for it didn’t think it was academic enough—it needed more hard Computer Science content and theory. Even though they weren’t supposed to be micro-managing it of course.

When the computing curriculum consultation group submitted its draft in late 2012, ‘The exact words were “the minister is not minded to approve the draft you sent,”’ one interviewee told us. The group had submitted its draft curriculum at 5 o’clock on a Friday evening and the chair was then contacted over the weekend by the special adviser to the minister.

One of our interviewees described how he called the working group chair to ask, ‘are we going to reform the drafting group…? And the answer was, “No, we’ve already done it. We were told unless we got it back to the minister by 9 o’clock on Monday morning with a greater emphasis on Computer Science, then computing would not be in the national curriculum.”’

Despite being a consultative curriculum drafting process, in the end the new programmes of study, we were told, were the product of just two senior executives responding to the demands of the minister and her special adviser to emphasize academic Computer Science.

But for many other people involved in trying to shape the new curriculum, the purpose wasn’t to reproduce disciplinary computer science through the school classroom, or skills development for the digital economy. One of the people we interviewed, also part of the curriculum consultation and drafting group, told us he was even banned from attending meetings after complaining about there being too much Computer Science content. Another had his expenses cancelled as part of the group to stop him doing wider consultation with teachers. The minister’s special adviser was allegedly behind both decisions.

Another area of influence on the computing curriculum was the role of charitable, non-profit and voluntary groups. Code Club is an after school programming scheme that puts volunteer coders together with children to teach them to code. It has its own coding curriculum that starts with visual programming applications like Scratch and then proceeds to programming with HTML, CSS and Python.
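For a sense of what that progression looks like, the short program below is a hypothetical example of the kind of text-based exercise a club might move on to after visual tools like Scratch; it is not taken from Code Club’s published projects.

```python
# A hypothetical Code Club-style exercise: a number guessing game written
# in plain Python, the sort of step up from drag-and-drop blocks to text code.
import random

secret = random.randint(1, 10)   # the computer picks a number
guess = None

while guess != secret:
    guess = int(input("Guess a number between 1 and 10: "))
    if guess < secret:
        print("Too low!")
    elif guess > secret:
        print("Too high!")

print("You got it!")
```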

There are now over 5,000 UK Code Clubs, teaching over 82,000 children programming. When Code Club first started in 2012, the computing curriculum hadn’t even been drafted; now that coding is embedded in the curriculum, the clubs are still going strong.

One of the things that the continuing popularity of Code Club reveals is that computing remains very poorly resourced in schools. Code Club has an army of volunteer programmers—the computing curriculum has a teaching workforce of mostly ICT teachers who all need radical retraining. The government budget for this retraining worked out to about £100 per member of staff, which has largely meant that external providers have stepped in.

As a result, Code Club now runs its own teacher training sessions, where volunteer programmers educate teachers in how to teach programming. Other training providers are available—Computing at School offers resources and training, but so do large commercial organizations, as we’ll see in a moment.

Code Club was also absorbed into the Raspberry Pi Foundation in 2015. The Raspberry Pi device itself is a very small, ‘hackable’ computer, and the foundation was set up as a charity to support its educational uses. But one of the other activities performed by Raspberry Pi is to catalyse the wider take-up of computing in schools. It has a couple of magazine titles, The MagPi and Hello World, to promote coding and making.

The MagPi is specifically about making with the Raspberry Pi itself, while Hello World focuses on ‘plugging gaps’ in teachers’ knowledge and skills in computer science, coding, computational thinking, constructionism and digital making. Again, this reflects the deliberate ambiguity built into the curriculum.

Probably the most high profile intervention into coding in schools so far came with the launch of the BBC nationwide campaign called Make It Digital in 2015.

‘BBC Make it Digital will capture the spirit of the BBC Micro, which helped Britain get to grips with the first wave of personal computers in the 1980s,’ the BBC claimed. ‘It will put digital creativity in the spotlight like never before, and help build the nation’s digital skills, through an ambitious range of new programmes, partnerships and projects.’

One of the key projects was the launch of the micro:bit, a small coding device which it distributed for free to a million UK schoolchildren in 2016. The BBC has also established a non-profit foundation to roll out the micro:bit internationally.

The micro:bit, Code Club’s courses, and Raspberry Pi’s magazines indicate how much the new curriculum relies on public and charitable organizations to provide the support and resources required when government departments withdraw their ‘micro-management’ of key subject areas but retain a strong steering capacity over strategy and direction. One of our interviewees, who worked at a coding charity, described how she acted as a ‘geek insider’ who could translate the language of ‘geek’ into government speak for ministers, their special advisers and civil servants.

But besides these charitable providers, the curriculum has also, as we’ve seen, become the target for promoters of academic computer science and for entrepreneurial influence based on arguments about the digital economy. I think it’s a model case of how education policy is being made and done these days—it’s steered by government but taken forward by wider networks of organizations, with the special advisers of government ministers taking a strong role in approving who’s involved and vetting the outputs produced by the participants.

Yes, it’s not micro-managed as Michael Gove promised, but it’s not unmanaged either. And that doesn’t make it easy to work out what the overall purpose of the curriculum is—because it means different things to different groups.

The missing aspect of the curriculum as it has ended up from this messy mix of organizations, interests and interventions, for me anyway, is a more critical understanding of the social power of computing. Several of our interviewees said that the more critical aspects of computing suggested during the curriculum consultation were systematically erased before the curriculum programmes of study were finally made public in 2013.

Look at the bottom left column of this table where text has been struck out—this is from the draft computing curriculum in 2012 and emphasized ‘critical evaluation of digital content,’ the ‘impacts’ of technology on individuals and society, and ‘implications’ for ‘rights, responsibilities and freedoms.’ The right hand column shows how this part of the draft curriculum was rewritten, now emphasizing the study of algorithms, Boolean logic, and data manipulation.

This is what was lost when the draft curriculum had to be rewritten between its submission on Friday night and the new deadline for 9 o’clock Monday morning specified by the minister’s special adviser.

I understand that here in Sweden there remains potential for more critical approaches to digital competence, so I want to spend the last few minutes focusing on that.

Just a week or so ago, the Austrian research group Cracked Labs produced a report on the commercial data industry. It demonstrated how we are being tracked and profiled via data collected from our use of telecoms, the media, retail, finance, social media and technology platforms, and public services.

One of the examples in the report is Oracle, one of the world’s largest business software and database vendors. Oracle’s ‘data cloud’ contains detailed information about 2 billion people, which it uses to ‘profile and sort,’ ‘find and target people,’ ‘sell data,’ ‘personalize content,’ and ‘measure how people behave.’

What does this have to do with computing in schools?

Well, last year Oracle announced it would provide $1.4 billion to European Union member states to advance computing and programming in schools through Oracle Academy, its global philanthropic arm. This is part of its ambition to spread computer science education around the world. It claims to have impacted on 30 million students in 110 countries already, mostly through retraining teachers, and annually invests $3.3 billion to ‘accelerate digital literacy worldwide.’

Most notably, in Europe, Oracle is seeking to ‘Level Oracle Academy’s entire curriculum to the European Qualifications Framework.’ This makes Oracle potentially very influential in European computing education. A European Union spokesperson said of the deal, ‘Digitally skilled professionals are critical to Europe’s competitiveness and capacity for innovation. Over the last ten years, we’ve seen the demand for workers with computer science and coding skills grow by four percent each year. Oracle’s efforts to bring computer science into classrooms across the European Union will help strengthen our digital economy.’

So, one of the world’s most powerful data harvesting companies is also one of the world’s most powerful computer science for education philanthropies, funding one of the world’s most powerful cross-national digital economies.

The Oracle example is an important one because it captures quite a lot of what’s going on with coding and computing more broadly:

First, coding and computer science are being put forward as solutions to the demands of the digital economy not just by businesses but also by think tanks and government officials, with students positioned as future digital workers and entrepreneurial citizens—or as agents of social and economic progress through software.

Second, relationships are being built between national governments and commercial companies to deliver on major educational goals and purposes. This is changing how education systems are governed—not just by government departments but from the headquarters and philanthropic outgrowths of global technology companies. It’s an example of how tech companies, many from Silicon Valley, are becoming ‘shadow education ministries’ as Neil Selwyn has described them.

Third, and consequently, companies like Oracle, as well as Google and Microsoft and others, are directly influencing curricula across Europe and globally, changing what teachers practise and what students learn along the way. They are even actively supplying teacher training courses to equip teachers with skills and knowledge.

Fourth, these organizations are talking the language of ‘computer science’, which is appealing to many educational policymakers—in the UK, as we saw, giving coding the credibility of Computer Science has been really important. Yet what they are actually promoting is closer to software engineering as practised in the technology sector. Some, like Oracle, also mention ‘digital literacy’, but this is clearly a functional kind of literacy in writing code.

And in doing so, these organizations are shaping computing to be a practical, skills-based subject area with a hard scientific surface—and definitely not a more critically-focused subject which might draw attention to the data surveillance practised by the same organizations persuading national governments to focus on computing education in schools.

As the Cracked Labs report shows, Oracle knows an awful lot about people. This is the kind of digital environment that children and young people are now living and learning in. That’s why, in closing, I want to suggest the need for a different direction in coding and computing in the curriculum—or at least a proper discussion about its purposes. It’s great to see this conference providing a space to start that dialogue.

We are now teaching kids to code—which has all sorts of advantages in terms of tech skills, creativity and understanding how computers work. But there’s a risk we could just be reproducing the practices of Silicon Valley in our own classrooms.

As the philosopher of technology Ian Bogost has commented, ‘Not all students in computer-science programs think they’ll become startup billionaires… But not all of them don’t think so, either. Would-be “engineers” are encouraged to think of every project as a potential business ready to scale and sell.’ The commercial culture of computing that is creeping into computer science courses, he has added, downplays the social consequences of software engineering decisions while emphasizing ‘speculative finance.’

It is also notable that when the co-founder of Code Club criticized the ‘mass surveillance’ practices of Google a few years back, she was forced to resign by the Code Club board. Google was then one of Code Club’s main commercial sponsors.

‘We should not accept that privacy no longer exists, just because corporations doing mass surveillance also teach kids to code,’ she said. ‘I cannot stay silent about large corporations infringing on human rights, and I believe it is my moral obligation to speak out against it.’

We also need to think about the political uses and abuses of programming skills. Teaching children to code could actually be dangerous if it trains them with the right skills to work in Homeland’s propaganda boiler room. In many ways, young right wing activists are today’s most successful digital makers, using their programming skills to disseminate political values that many of us, I’m sure, find extreme and divisive.

Some critics are already arguing that learning to code is a distraction from learning ‘values filters so our children can interact in this environment.’

My view is that a properly purposeful and meaningful computing education would engage with the social and political power of code to engineer, in part, how we live and think. ‘To program or be programmed’ is a neat mantra, but you need a different kind of critical knowledge and skill set to understand how your information practices are being programmed by the engineers at Google, how you can be monitored and profiled through the Oracle data cloud, or how you can be manipulated via social media.

According to the Times Educational Supplement, the weekly magazine for education professionals in the UK, ‘the algorithm’s gonna get you’ in the classroom too. That’s an overly paranoid headline—but it might provoke educators to consider the social power of programming and the algorithmic techniques of data mining and surveillance it has produced.

The programmer Maciej Ceglowski has said that ‘an enthusiastic group of nerds has decided to treat the rest of the world as a science experiment’ by creating ‘the greatest surveillance apparatus the world has ever seen.’

What would it mean to receive an education in computing that helped young people navigate life in the algorithmic data cloud in an informed and safe way, rather than as passive subjects of this vast science experiment?

Technical know-how in how computers work has its uses here, of course. But also knowing about privacy and data protection, knowing how news circulates, understanding cyberattacks and hacking, knowing about bots, understanding how algorithms and automation are changing the future of work—and knowing that there are programmers and business plans and political agendas and interest groups behind all of this—well, this seems to me worth including in a meaningful computing education too.

I am encouraged to see that there is scope for some more critical study in Sweden’s incoming digital competence curriculum. That type of study of computing and its impacts and implications, in the UK, was shut down before the curriculum had even started up.

Image by Christiaan Colen