Imaginaries and materialities of education data science

Image: The .edu Ocunet, by Tim Beckhardt

Ben Williamson

This is a talk I presented at the Nordic Educational Research Association conference at Aalborg University, Copenhagen, on 23 March 2017.

Education is currently being reimagined for the future. In 2016, the online educational technology magazine Bright featured a series of artistic visions of the future of education. One of them, by the artist Tim Beckhardt, imagined a vast new ‘Ocunet’ system.

The Ocunet is imagined as a decentralized educational virtual-reality streaming network using state-of-the-art Panoptic headsets to deliver a universal knowledge experience. The infrastructure of public education has been repurposed as housing for the Ocunet’s vast server network. Teachers have been relieved of the stress of child-behavior management, and instead focus their skills on managing the Ocunet—editing our vast database to keep our students fully immersed in the latest curriculum—while principals process incoming student data at all times.

The Ocunet is an artistic and imaginative vision of the future of education. I use it as an example to start here because it illustrates a current fascination with reimagining education. The future it envisages is one where education has been thoroughly digitized and datafied—the educational experience has been completely embedded in digital technology systems, and every aspect of student performance is captured and processed as digital data.

This may all sound like speculative educational science fiction. But some similar imaginative visions of the future of education are now actually catalysing real-world technical innovations, which have the potential to change education in quite radical ways.

In this talk, I want to show you how education is being imagined by advocates of a field of research and development becoming known as ‘education data science.’ And I’ll explore how the social and technical future of education it imagines—one that is digitized and datafied much like the Ocunet—is also being materialized through the design of digital data-processing programs.

The social consequences for the field of education in general are significant:

  • Education data science is beginning to impact on how schools are imagined and managed.
  • It’s influencing how learning is thought about, both cognitively and emotionally, and introducing new vocabularies for talking about learning processes.
  • Its technologies and methods, many developed in the commercial sector, are being used in educational research and to produce new knowledge about education.
  • And education data science is also seeking to influence policy, by making educational big data seem an authoritative source for accelerated evidence collection.

Big data imaginaries and algorithmic governance

Just to set the scene here, education is not the only sector of society where big data and data science are being imagined as new ways of building the future. Big data are at the centre of future visions of social media, business, shopping, government, and much more. Gernot Rieder and Judith Simon have characterized a ‘big data imaginary’ as an attempt to apply ‘mechanized objectivity to the colonization of the future’:

  • Extending the reach of automation, from data collection to storage, curation, analysis, and decision-making processes
  • Capturing massive amounts of data and focusing on correlations rather than causes, thus reducing the need for theory, models, and human expertise
  • Expanding the realm of what can be measured, in order to trace and gauge movements, actions, and behaviours in ways that were previously unimaginable
  • Aspiring to calculate what is yet to come, using smart, fast, and cheap predictive techniques to support decision making and optimize resource allocation

And here the figure of the computer algorithm is especially significant. While in computer science terms algorithms are simply step-by-step processes for getting a computer program to do something, when these algorithms start to intervene in everyday life and the social world they can be understood as part of a process of governing—or ‘algorithmic governance.’

By ‘governing’ here I am working with ideas broadly inspired by Michel Foucault: the argument that every society is organized and managed by interconnected systems of thinking, institutions, techniques and activities that are undertaken to control, shape and regulate human conduct and action—captured in phrases such as ‘conduct of conduct’ or ‘acting upon action.’

Because the focus of much big data analysis—and especially in education—is on measuring and predicting human activity (most data, after all, are data about people), we might say we are now living under conditions of algorithmic governance, where algorithms play a role in directing or shaping human acts. Antoinette Rouvroy and Thomas Berns have conceptualized algorithmic governance as ‘the automated collection, aggregation and analysis of big data, using algorithms to model, anticipate and pre-emptively affect and govern possible behaviours.’ They claim it consists of three major techniques, sketched schematically below:

  • Digital behaviourism: behavioural observation stripped of context and reduced to data
  • Automated knowledge production: data mining and algorithmic processing to identify correlations with minimal human intervention
  • Action on behaviours: application of automated knowledge to profile individuals, infer probabilistic predictions, and then anticipate or even pre-empt possible behaviours
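
To make the three techniques concrete, here is a deliberately minimal sketch of how they might chain together as a single data pipeline. Everything in it (the record fields, the scoring rule, the threshold) is a hypothetical assumption invented for illustration; it is not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class BehaviourRecord:
    # 1. Digital behaviourism: conduct stripped of context, reduced to data
    student_id: str
    clicks_per_minute: float
    idle_seconds: float

def infer_disengagement(record: BehaviourRecord) -> float:
    # 2. Automated knowledge production: a correlation-derived score, standing
    # in for whatever a mined statistical model would actually output
    return min(1.0, record.idle_seconds / 120.0)

def pre_empt(record: BehaviourRecord) -> None:
    # 3. Action on behaviours: intervene before the predicted conduct occurs
    if infer_disengagement(record) > 0.7:
        print(f"Nudge {record.student_id}: switch to an easier task")

pre_empt(BehaviourRecord("s-042", clicks_per_minute=0.5, idle_seconds=110))
```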

For my purposes, what I’m trying to suggest here is that new ways of imagining education through big data appear to open the way for such practices of algorithmic governance to emerge, with the various actions of schools, teachers and students all subjected to data-based forms of surveillance and acted upon via computer systems.

Schools, teachers and students alike would become the objects of surveillant observation and transformation into data; their behaviours would be recorded as knowledge generated automatically from analysing those data; and those known behaviours could then become the target for intervention and modification.

Importantly too, imaginaries don’t always remain imaginary. Sheila Jasanoff has described ‘sociotechnical imaginaries’ as models of the social and technical future that might be realized and materialized through technical invention. Imaginaries can originate in the visions of single individuals or small groups, she argues, but gather momentum through exercises of power to enter into the material conditions and practices of social life. So in this sense, sociotechnical imaginaries can be understood as catalysts for the material conditions in which we may live and learn.

The birth of education data science

One of the key things I want to stress here is that the field of education data science is imagining and seeking to materialize a ‘big data infrastructure’ for automated, algorithmic and anticipatory knowledge production, practical intervention and policy influence in education. By ‘infrastructure’ here I’m referring to the interlocking systems of people, skills, knowledge and expertise along with technologies, processes, methods and techniques required to perform big data analysis. It is such a big data infrastructure that education data science is seeking to build.

Now, education data science has, of course, come from somewhere. There is a history to its future gaze. We could go back well over a century, to the nineteenth-century Great Expositions where national education departments exhibited great displays of educational performance data. And we could certainly say that education data science has evolved from the emphasis on large-scale educational data and comparison made possible by international testing in recent years. Organizations like the OECD and Pearson have made a huge industry out of global performance data, and reframed education as a measurable matter of balancing efficient inputs with effective outputs.

But these large-scale data are different from the big data that are the focus for education data science. Educational big data can be generated continuously within the pedagogic routines of a course or the classroom, rather than through national censuses or tests, and are understood to lead to insights into learning processes that may be generated in ‘real-time.’

In terms of its current emphasis on big data, the social origins of education data science actually lie in academic research and development going back a decade or so, particularly at sites like Stanford University. It’s actually from one of Stanford’s reports that I take the term ‘big data infrastructure for education.’

The technical origins of such an infrastructure lie in advances in educational data mining and learning analytics. Educational data mining can be understood as the use of algorithmic techniques to find patterns and generate insights from existing large datasets. Learning analytics, on the other hand, makes the data analysis process into a more ‘real-time’ event, where the data is automatically processed to generate insights and feedback synchronously with whatever learning task is being performed. Some learning analytics applications are even described as ‘adaptive learning platforms’ because they automatically adapt—or ‘personalize’—in accordance with calculations about students’ past and predicted future progress.
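To give a flavour of what sits inside such platforms, here is a minimal sketch of Bayesian Knowledge Tracing, a standard educational data mining technique for estimating skill mastery from a stream of answers. The parameter values and the crude ‘adaptive’ rule are illustrative assumptions, not those of any actual product.

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """Revise the estimated probability that a student has mastered a skill."""
    if correct:
        posterior = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * learn  # chance of learning on this step

def next_step(p_know):
    """A crude 'adaptive' rule: advance only once mastery looks near-certain."""
    return "advance to new skill" if p_know > 0.95 else "assign more practice"

p = 0.3                                            # prior mastery estimate
for answer_correct in [True, False, True, True]:   # answers arriving in real time
    p = bkt_update(p, answer_correct)
    print(round(p, 2), next_step(p))
```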

What’s really significant is how education data science has escaped the academic lab and travelled to the commercial sector. So, for example, Pearson, the world’s largest ‘edu-business,’ set up its own Center for Digital Data, Analytics and Adaptive Learning a few years ago to focus on big data analysis and product development. Other technology companies have jumped into the field. Facebook’s Mark Zuckerberg has begun dedicating huge funds to the task of ‘personalizing learning’ through technology. IBM has begun to promote its Watson supercomputing technologies for the same purposes.

And education data science approaches are also being popularized through various publications. Learning with Big Data by Viktor Mayer-Schonberger and Kenneth Cukier, for example, makes a case for ‘datafying the learning process’ in three overlapping ways:

  • Feedback: applications that can ‘learn’ from use and ‘talk back’ to the student and teacher
  • Personalization: adaptive-learning software where materials change and adapt as data is collected, analysed and transformed into feedback in real-time; and the generation of algorithmically personalized ‘playlists’
  • Probabilistic prediction: predictive learning analytics to improve how we teach and optimize student learning

The book reimagines school as a ‘data platform,’ the ‘cornerstone of a big-data ecosystem,’ in which ‘educational materials will be algorithmically customized’ and ‘constantly improved.’

This text is perhaps a paradigmatic statement of the imaginary and ambitions of education data science, with its emphasis on feedback, systems that can ‘learn,’ ‘personalization’ through ‘adaptive’ software, predictive ‘optimization,’ and the appeal to the power of algorithms to make measurable sense of the mess of education.

Smarter, semi-automated startup schools

The imaginary of education data science is now taking material form through a range of innovations in real settings. A significant materialization of education data science is in new data-driven schools, or what I call smarter, semi-automated startup schools.

Max Ventilla is perhaps the most prominent architect of data-driven startup schools. Max’s first job was at the World Bank, before he became a successful technology entrepreneur in Silicon Valley. He eventually moved to Google, where he became head of ‘personalization’ and launched the Google+ platform. But in 2013, Max left Google to set up AltSchool. Originally established as a fee-paying chain of ‘lab schools’ in San Francisco, it now has schools dotted around Silicon Valley and across to New York. Most of its startup costs were funded by venture capital firms, with Mark Zuckerberg from Facebook investing $100 million in 2015.

Notably, only about half of AltSchool’s staff are teachers. It also employs software engineers and business staff, many recruited from Google, Uber and other successful tech companies. In fact, AltSchool is not just a private school chain, but describes itself as a ‘full-stack education company’ that provides day-to-day schooling while also engaging in serious software engineering and data analytics. The ‘full-stack’ model is much the same as Uber in the data analytics taxi business, or Airbnb in hospitality.

The two major products of AltSchool are called Progression and Playlist. In combination, Max Ventilla calls these ‘a new operating system for education.’ Progression is a data analytics ‘teacher tool’ for tracking and monitoring student progress, academically, socially and emotionally. It’s basically a ‘data dashboard’ for teachers to visualize individual student performance information. The ‘student tool’ Playlist then provides a ‘customized to-do list’ for learners, and is used for managing and documenting work completed. So, while Progression acts as the ‘learning analytics’ platform to help teachers track patterns of learning, Playlist is the ‘adaptive learning platform’ that ‘personalizes’ what happens next in the classroom for each individual student.
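As a purely hypothetical illustration of this dashboard-and-playlist pattern, consider the sketch below: a tracker logs per-skill mastery estimates, and a playlist builder ranks what each student should do next. The skills, field names and weights are invented for exposition; none of this is AltSchool’s actual code.

```python
# What a 'Progression'-style tracker might log for one student (invented data)
progress = {
    "fractions": {"mastery": 0.45, "last_seen_days": 2},
    "reading":   {"mastery": 0.90, "last_seen_days": 9},
    "writing":   {"mastery": 0.70, "last_seen_days": 1},
}

def build_playlist(progress, length=2):
    """Rank skills by need: low mastery and long neglect come first."""
    def priority(item):
        skill, stats = item
        return (1 - stats["mastery"]) + 0.05 * stats["last_seen_days"]
    ranked = sorted(progress.items(), key=priority, reverse=True)
    return [skill for skill, _ in ranked[:length]]

print(build_playlist(progress))  # e.g. ['fractions', 'reading']
```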

Recently, AltSchool began sharing its ‘operating system’ with other partner schools, and has clearly stated ambitions to move from being a startup to scaling-up across the US and beyond. It also has ambitious technical plans.

Looking forward, AltSchool’s future ambitions include fitting cameras that run constantly in the classroom, capturing each child’s every facial expression, fidget, and social interaction, as well as documenting the objects that every student touches throughout the day; microphones to record every word that each person utters; and wearable devices to track children’s movements and moods through skin sensors. This is so its in-house data scientists can then search for patterns in each student’s engagement level, moods, use of classroom resources, social habits, language and vocabulary use, attention span, academic performance, and more.

The AltSchool model is illustrative of how the imaginary of education data science is being materialized in new startup schools. Others include:

  • Summit Schools, which have received substantial Facebook backing, including the production of a personalized learning platform allegedly now being used by over 20,000 students across the US
  • The Primary School, set up by Mark Zuckerberg’s wife Priscilla Chan
  • The Khan Lab School founded by Salman Khan of the online Khan Academy.

All of these schools are basically experiments in how to imagine and manage a school by using continuous big data collection and analysis.

So, as AltSchool was described in a recent piece in the Financial Times, ‘parents pay fees, hoping their kids will get a better education as guinea pigs,’ while ‘venture capitalists fund the R&D, hoping for financial returns from the technologies it develops.’

And these smarter, semi-automated startup schools are ambitiously seeking to expand the realm of what is measurable, not just test scores but also student movements, speech, emotions, and other indicators of learning.

Optimizing emotions

As indicated by AltSchool, education data science is seeking new ways to know, understand and improve both the cognitive and the social-emotional aspects of learning processes.

Roy Pea is one of the leading academic voices in education data science. Formerly the founding director of the Learning Analytics Lab at Stanford University, Pea has described techniques for measuring the ‘emotional state’ of learners. These include collecting ‘proximal indicators’ that relate to ‘non-cognitive factors’ in learning, such as academic persistence and perseverance, self-regulation, and engagement or motivation, all of which are seen to be improvable with the help of data analytics feedback.

Now, academic education data scientists and those who work in places like AltSchool are not the only people interested in data scientific ways of knowing and improving students’ social and emotional learning. The OECD has established a ‘Skills for Social Progress’ project to focus on ‘the power of social and emotional skills.’ It assumes that social and emotional skills can be measured meaningfully, and its ambition is to generate evidence about children’s emotional lives for ‘policy-makers, school administrators, practitioners and parents to help children achieve their full potential, improve their life prospects and contribute to societal progress.’

The World Economic Forum has its own New Vision for Education report which involves ‘fostering social and emotional learning through technology.’ Its vision is that social and emotional proficiency will equip students to succeed in a swiftly evolving digital economy, and that digital technologies could be used to build ‘character qualities’ and enable students to master important social and emotional skills. These are ‘valuable’ skills in quite narrowly economic terms.

Both the OECD and World Economic Forum are also seeking to make the language of social and emotional learning into a new global policy vocabulary—and there is certainly evidence of this in the UK and US already. The US Department of Education has been endorsing the measurement of non-cognitive learning for a few years, and the UK Department for Education has funded policy research in this area.

So how might education data science make measurable sense of students’ emotions? Well, according to education data scientists, it is possible to measure the emotional state of the student using webcams, facial vision technologies, speech analysis, and even wearable biometric devices.

Image: Automated teachers & augmented reality classrooms, by Josan Gonzalez

These are the kinds of ideas that have been taken up and endorsed very enthusiastically by the World Economic Forum, which strongly promotes the use of ‘affective computing’ techniques in its imaginary vision. Affective computing is the term for systems that can interpret, emulate and perhaps even influence human emotion. The WEF idea is that affective computing innovations will allow systems to recognize, interpret and simulate human emotions, using webcams, eye-tracking, databases of expressions and algorithms to capture, identify and analyse human emotions and reactions to external stimuli. ‘This technology holds great promise for developing social and emotional intelligence,’ it claims.

And it specifically identifies Affectiva as an example. Originating from R&D at MIT Media Lab, Affectiva has built what it claims to be the world’s largest emotion database, which it’s compiled by analysing the ‘micro-expressions’ of nearly 5 million faces. Affectiva uses psychological emotion scales and physiological facial metrics to measure seven categories of emotions, then utilizes algorithms trained on massive amounts of data to accurately analyse emotion from facial expressions. ‘In education,’ claims Affectiva, ‘emotion analytics can be an early indicator of student engagement, driving better learning outcomes.’
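At a minimum, an ‘emotion analytics’ pipeline of this kind involves detecting faces in a video frame and passing each one to a trained classifier. The sketch below is an assumption-laden illustration: the model object and its predict method are hypothetical stand-ins, the label set simply echoes the seven categories mentioned above, and nothing here is Affectiva’s actual system.

```python
import cv2  # OpenCV handles face detection; the emotion model is hypothetical

EMOTIONS = ["anger", "contempt", "disgust", "fear", "joy", "sadness", "surprise"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def emotion_scores(frame, model):
    """Detect faces in one video frame and score each against the label set."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1))[0]  # hypothetical model
        results.append(dict(zip(EMOTIONS, probs)))
    return results
```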

Such systems, then, would involve facial vision algorithms determining student engagement from facial expressions, and then adapting to respond to their mood. Similarly, the Silicon Valley magazine for educational technology, EdSurge, recently produced a promotional article for the role of ‘emotive computing in the classroom.’

‘Emotionally intelligent robots,’ its author claimed, ‘may actually be more useful than human [teachers] … as they are not clouded by emotion, instead using intelligent technology to detect hidden responses. … Emotionally intelligent computing systems can analyse sentiment and respond with appropriate expressions … to deliver highly-personalized content that motivates children.’

Both the World Economic Forum and EdSurge also promote a variety of wearable biometric devices to measure mood in the blood and the body of a seemingly ‘transparent child’:

  • Transdermal Optical Imaging, using cameras to measure facial blood flow information and determine student emotions where visual face cues are not obvious
  • Wearable social-emotional intelligence prosthetics, which use a small camera to analyse facial expressions and head movements and detect affect in children in real time
  • Glove-like devices full of sensors to trace students’ arousal

This imaginary of affective or emotive computing in the classroom taps into the idea that automated, algorithmic systems are able to produce objective accounts of students’ emotional state. They can then personalize education by providing mood-optimized outputs which might actually nudge students towards more positive feelings.

In this last sense, affective computing is not just about making the emotions measurable, but about using automated systems to manipulate mood in the classroom, to make it more positive and preferable. Given that powerful organizations like the World Economic Forum and OECD are now seeking to make the language of social-emotional learning into the language of education policy, this appears to make it possible that politically preferred emotions could be engineered by the use of affective computing in education.

Cognizing systems

It is not only the non-cognitive aspects of learning that are being targeted by education data science, however. One of its other targets is cognition itself. In the last couple of years, IBM has begun to promote its ‘cognitive computing’ systems for use in a variety of sectors—finance, business and healthcare, but also education. These have been described as ‘cognitive technologies that can think like a human,’ based on neuroscientific insights into the human brain, technical developments in brain-inspired computing, and artificial ‘neural network’ algorithms. So IBM is claiming that it can, to some extent, ‘emulate the human brain’s abilities for perception, action and cognition.’

To put it simply, cognitive systems are really advanced big data processing machines that employ machine learning processes modelled on those of embrained cognition, but that then far exceed human capacities. These kinds of super-advanced, real-time big data processing and machine learning systems are what is often called artificial intelligence these days.
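For readers unfamiliar with what an artificial ‘neural network’ is underneath the marketing, here is a minimal sketch: a single sigmoid neuron adjusting its weights by gradient descent to learn a toy pattern. This illustrates only the basic idea; it is nothing like IBM’s neuromorphic systems in scale or design.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0   # two input weights and a bias, randomly initialized

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([0.0, 1.0, 1.0, 1.0])                           # target: logical OR

for _ in range(2000):                      # learn by gradient descent
    out = 1 / (1 + np.exp(-(X @ w + b)))   # the neuron's 'firing rate'
    grad = (out - y) * out * (1 - out)     # chain rule through the sigmoid
    w -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum()

print(np.round(1 / (1 + np.exp(-(X @ w + b))), 2))  # approaches [0, 1, 1, 1]
```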

The promise of IBM for education is to bring these brain-inspired technologies into the classroom, and to ‘bring education into the cognitive era.’ And it is seeking to do so through a partnership with Pearson announced late in 2016, which will embed ‘cognitive tutoring capabilities’ into Pearson’s digital courseware. Though this is only going to happen in limited college courses for now, both organizations have made it quite clear they see potential to take cognitive tutoring to scale across Pearson’s e-learning catalogue of courses.

Pearson itself has produced its own report on the possibilities of artificial intelligence in education, including the creation of ‘AI teaching assistants.’ Pearson claims to be ‘leveraging new insights in disciplines such as psychology and educational neuroscience to better understand the learning process, and build more accurate models that are better able to predict—and influence—a learner’s progress.’

Neuroscience is the important influence here. In recent years brain scientists have popularized ‘neuroplasticity’: the idea that the brain modifies itself in response to experience and the environment. The brain, then, is in a lifelong state of transformation as synaptic pathways ‘wire together.’

But the idea of brain plasticity has taken on other meanings as it has entered into popular knowledge. According to a critical social scientific book by Victoria Pitts-Taylor, the idea of neuroplasticity resonates with ideas about flexibility, multitasking and self-alteration in late capitalism. And it also underpins interventions aimed at cognitive modification and enhancement, which target the brain for ‘re-wiring.’

Tapping into the popular understanding of plasticity as the biological result of learning and experience, both IBM and Pearson view cognitive computing and artificial intelligence technologies as being based on the plastic brain. IBM’s own engineers have done a lot of R&D with ‘neuromorphic’ computing and ‘neurosynaptic chips,’ and have hired vast collaborative teams of neuroscientists, hardware engineers and algorithm designers to do so. But, they claim, cognitive and AI systems can also be potentially brain-boosting and cognition-enhancing, because they can interact with the plastic brain and ‘re-wire’ it.

The ambitions of IBM and Pearson to make classrooms into engines of cognitive enhancement are clearly put in a recent IBM white paper titled Computing, cognition and the future of knowing. The report’s author claims that:

  • Cognitive computing consists of ‘natural systems’ with ‘human qualities’
  • They learn and reason from their interactions with us and from their experiences with their environment
  • Cognitive systems are machines inspired by the human brain that will also inspire the human brain, increase our capacity for reason and rewire the ways we learn

So, Pearson and IBM are aiming to populate classrooms with artificial intelligences and cognitive systems that have been built to act like the brain, and then to act upon the brain to extend and magnify human cognition. Brain-inspired, but also brain-inspiring.

In some ways we shouldn’t see this as controversial. As computers get smarter, of course they might help us think differently, and act as cognitive extensions or add-ons. Just to anticipate one of our other keynotes at the conference, Katherine Hayles has written about how ‘cognitive systems’ are now becoming so embedded in our environments that we can say there is ‘cognition everywhere.’ We live and learn in extended cognitive networks. So, says Hayles, cognitive computing devices can employ learning processes that are modelled on those of embodied biological organisms, using their experiences to learn, achieve skills and interact with people. Therefore, when cognitive devices penetrate into human systems, they can potentially modify human cognitive functioning and behaviours through manipulating and changing the plastic brain.

As IBM and Pearson push such systems into colleges and schools, maybe they will make students cognitively smarter by re-wiring their brains. But the question here is smarter how? My concern is that students may be treated as if they too can be reduced to ‘machine learning’ processes. Cognitive psychology has been dogged for the past half-century by criticisms that it treats cognition like the functions of a computer. The brain has been viewed as hardware; the mind as software; memory as data retrieval; cognition as information-processing.

With this new turn to brain-enhancing cognitive systems, maybe cognition is being viewed as big data processing; the brain as neuromorphic hardware; mind as neural network algorithms and so on. As IBM’s report indicates, ‘where code and data go, cognition can now follow.’

Owning the means of knowledge production

So we’ve seen how education data science is seeking to embed its systems into schools, and how its aims are to modify students’ non-cognitive learning and embrained cognition alike. I want now to raise a couple of issues that I think will be relevant and important for all researchers of education, not just the few of us looking at this big data business.

The first is the issue of knowledge production. As I showed at the start, big data systems are making knowledge production into a more automated process. That doesn’t mean there are no engineers and analysts involved—clearly education data science involves education data scientists. But what it does mean is that knowledge is now being produced about education through the kinds of advanced technical systems that only a very few specialist education data science researchers can access or use.

What’s more, as many of my examples have shown, education data science is primarily being practiced outside of academic education research. It’s being done inside of AltSchool and by Pearson and IBM. And these organizations have business plans, investors and bank accounts to look after. AltSchool’s ‘operating system for education,’ as we saw, is being turned into a commercial offering, while IBM and Pearson are seeking to make cognitive tutoring into marketable products for schools and colleges to buy.

These products are also proprietorial, wrapped up in intellectual property and patents law. So education data science is now producing knowledge about education through proprietorial systems designed, managed and marketed by commercial for-profit organizations. These companies have the means for knowledge production in data-driven educational research. We could say they ‘own’ educational big data, since companies that own the data and the tools to mine it—the data infrastructure—possess great power to understand and predict the world.

De-politicized real-time policy analytics

And finally, there are policy implications here too, with big data being positioned as an accelerated and efficient source of evidence about education. One of these implications is spelled out clearly by Pearson, in its artificial intelligence report. It states that:

  • AIEd will be able to provide analysis about teaching & learning at every level, whether a subject, class, college, district, or country
  • Evidence about country performance will be available from AIEd analysis, calling into question the need for international testing

So in this imaginary, AI is seen as a way of measuring school system performance via automated, real-time data mining of students rather than discrete testing at long temporal intervals.

This cuts out the need for either national or international testing. And since much national education policymaking has been decided on the basis of test-based systems in recent decades, then we can see how policy processes might be short-circuited or even circumvented altogether. When you have real-time data systems tracking, predicting and pre-empting students, then you don’t need cumbersome policy processes.

These technologies also appear de-politicized, because they generate knowledge about education from seemingly objective data, without the bias of the researcher or the policymaker. The decisions these technologies make are not based on politicized debates or ideologies, it is claimed anyway, but on algorithmic calculations.

A few years ago Dorothea Anagnostopoulos and colleagues edited an excellent book about the data infrastructure of test-based performance measurement in education. They made the key claim that test-based performance data was not just the product of governments but of a complex network of technology companies, technical experts, policies, computer systems, databases and software programs. They therefore argued that education is subject to a form of ‘informatic power.’

Informatic power, they argued, depends on the knowledge, use, production of, and control over measurement and computing technologies to produce performance measures that appear as transparent and accurate representations of the complex processes of teaching, learning, and schooling. And as they define who counts as ‘good’ teachers, students, and schools, these performance metrics shape how we practice, value and think about education.

If test-based data gives way to real-time big data, then perhaps we can say that informatic power is now mutating into algorithmic power. This new form of algorithmic power in education:

  • Relies on a big data infrastructure of real-time surveillance, predictive and prescriptive technologies
  • Depends on control over knowledge, expertise and technologies to monitor, measure, know and intervene in possible behaviours
  • Changes the ways that cognitive and non-cognitive aspects of learning may be understood and acted upon in policy and practice
  • Is concentrated in a limited number of well-resourced academic education data science labs, and in commercial settings where it is protected by IP, patents and proprietorial systems.

This form of algorithmic power, or algorithmic governance as we encountered it earlier, is not just about performance measurement, but about active performance management of possible behaviours, and opens up possibilities for more predictive and pre-emptive education policy.

Conclusion

Although many applications of big data in education may still be quite imaginary, the examples I’ve shown you today hopefully indicate something of the direction of travel. We’re not teaching and learning in the Ocunet just yet, but its imaginary of greater digitization and datafication is already being materialized.

As educators and researchers of education, we do urgently need to understand how a big data imaginary is animating new developments, and how this may be leading to new forms of algorithmic governance in education.

We need more studies of the sites where education data science is being developed and deployed, of the psychological and neuroscientific assumptions they rely on, of the power of education data science to define how education is known and understood, and of its emerging influence in educational policy.


