We’re all familiar with the old idea that it’s better to teach a man to fish than just to give him fish. The implication is that it’s better to skill up society as a social scientist, than to give society pre-packaged social science conclusions.
Over the last few years, much has been done to make social science more engaged and impactful. There’s the Alliance for Useful Evidence and the ‘what works centres’ – now up to a dozen, with the next one set to be on children’s social care – all moving beyond creating repositories of evidence to ensuring that evidence is actually used.
There’s the REF and the various attempts – some better, some worse – to embed impact into assessment. There are new labs, like Nesta's Y Lab, linking universities to innovators in the public sector and civil society, and again, like here, the beginnings of social science parks that aim to make social science more engaged with the society around it.
And there’s a plethora of moves to change curricula in universities – recently collected in a useful survey from SIX, the Social Innovation Exchange – so that students are more engaged in problem-solving in their societies, not just observing.
My interest here is in what might happen next. What might contribute to a future social science that is significantly more meshed into the common sense of society, and helps society to better understand itself? And what could social science adopt from the very old saying that it’s better to teach a man to fish than just to give him fish?
There are two poles of belief about the proper role of social science. One sees a cadre of specialists, based in their own specialist institutions and disciplines, analysing and interpreting the world and then feeding conclusions into an essentially passive society and grateful policymakers. The other sees specialists in the academy, working much more in partnership with a society that is, itself, skilled in social science, able to generate hypotheses, gather data, experiment and draw conclusions.
I want to suggest that two powerful winds are blowing which could amplify this second view of social science. They could leave the language of impact - which originates in the military, and is very much a reflection of the traditional linear view - rather redundant, perhaps to be replaced by a language of usefulness.
The first of these winds of change is the easiest to see. Social science is society’s way of understanding itself: why societies cohere or fall apart; why some grow and others shrink; why some care and others hate; how big structural forces explain the apparently special facts of our own biographies.
We are now seeing an extraordinary explosion of new ways to observe social phenomena, which are bound to change how we ask social questions and how we answer them.
Already today, each of us has left a trail of who we talk to, what we eat and where we go. It’s easier than ever to survey people, to spot patterns, to scrape the web, to pick up data from sensors, to interpret moods from facial expressions. It’s easier than ever to gather perceptions and emotions as well as material facts – for example through sentiment analysis of public debates over issues like Brexit. And it’s easier than ever for organisations to practice social science – whether investment organisations analysing market patterns, HR departments using behavioural science, or local authorities using ethnography. These tools are certainly not the monopoly of professional social scientists.
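To make the sentiment-analysis point concrete, here is a minimal sketch of the lexicon-based approach: count emotionally loaded words to gauge the mood of public comments. Real systems are far more sophisticated; the word lists and example comments below are invented for illustration.

```python
import re

# Tiny illustrative lexicons (real sentiment lexicons contain thousands of entries).
POSITIVE = {"hope", "opportunity", "confident", "win", "stronger"}
NEGATIVE = {"fear", "crisis", "chaos", "weaker", "loss"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) - (# negative words) for a piece of text."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Invented examples of the kind of public comments one might scrape from a debate.
comments = [
    "A real opportunity to be stronger and more confident",
    "Nothing but fear, chaos and economic crisis ahead",
]
scores = [sentiment_score(c) for c in comments]  # positive vs negative mood
```

Aggregating such scores over thousands of posts, rather than reading any single one, is what lets analysts track the shifting mood of a debate like Brexit over time.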
That deluge of data is a big enough shift on its own. But it is also now being used to feed interpretive and predictive tools using AI – who is most likely to go to hospital, to end up in prison, which relationships are most likely to end in divorce.
The combination is already changing the behaviours of governments, who are gathering data, matching data sets, and developing their own predictive tools. It’s certainly changed Nesta’s own research work, with far more use of data, analysis and visualisation – like the Arloesiadur project in Wales, which is pioneering new ways of mapping innovation in the economy.
The same forces are also fuelling a shift to experimentalism – testing ideas in reality rather than on paper. Companies like Amazon do A/B testing on any new service. Governments in Finland, Canada, the UK and the UAE are also moving this way. Our own Innovation Growth Lab now brings together a dozen countries, using RCTs to find out what really works in driving innovation and entrepreneurship - and is pushing economics to become more empirical, and more self-critical, rather than just deducing conclusions from assumptions. They’re moving us closer to Karl Popper’s vision of ‘methods of trial and error, of inventing hypotheses which can be practically tested …’
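The logic behind an A/B test of the kind mentioned above can be sketched in a few lines: expose two groups to different variants, compare the outcome rates, and ask whether the difference could plausibly be chance. This is a simplified two-proportion z-test with invented counts, not any particular company's method.

```python
from math import sqrt, erf

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: returns (rate_a, rate_b, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)              # rate if variants were identical
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal-approximation p-value
    return p_a, p_b, p_value

# Invented example: 1,000 users see each variant of a sign-up page.
rate_a, rate_b, p = ab_test(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
```

A randomised controlled trial of a policy intervention follows the same logic at larger stakes: randomisation ensures the two groups differ only in the treatment, so a low p-value is evidence the intervention itself made the difference.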
The combination of more data and more experiment is shifting attitudes to prediction.
Prediction in complex environments is very hard, but attempting it is good for improving models and good for learning. As Philip Tetlock and others have shown, explicit prediction can show up experts as less reliable than they might like. But its virtue is that attempting predictions, and then analysing why they didn’t materialise, is an excellent device for genuine learning. More explicit short-, medium- and long-term forecasts – followed by shared learning from what actually happens – help to accelerate insight.
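Making forecasts explicit means they can be scored. One standard device, used in Tetlock's forecasting tournaments, is the Brier score: the mean squared gap between stated probabilities and what actually happened. The forecasts and outcomes below are invented for illustration.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0 is a perfect score; always saying 50% earns exactly 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical experts forecasting the same four events.
confident_expert = [0.9, 0.8, 0.9, 0.2]   # bold probability forecasts
hedging_expert   = [0.5, 0.5, 0.5, 0.5]   # never commits either way
happened         = [1, 1, 0, 0]           # what actually occurred
```

Here the bold forecaster scores slightly better despite one big miss, while the perpetual hedger earns exactly 0.25 – which is why scoring rewards forecasters who commit, and then forces them to confront the forecasts that failed.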
Just imagine how much could have been learned from the surprising economic results that followed the Brexit referendum; the surprising political results of the last election; or the surprising social effects of the new generations of social media.
Status, of course, tends to stand in the way of just such learning – and eminent social scientists generally prefer to ignore their own mistakes. But a culture where that wasn’t acceptable would be rather healthy. Keynes’ comment that when the facts changed he changed his views is often quoted. But it’s much less often emulated.
This revolution in data, experiment and prediction, and the spread of tools to observe, analyse and predict, brings with it all sorts of challenges. How to ensure reproducibility; how to ensure that enough data is open; how to get the right data, since many of the most important facts are not captured; how not to ignore the left behind; how to avoid algorithms reflecting the biases of past actions.
But I suspect that its most profound challenge will be how to ensure the development of the right concepts and theories to make sense of data and navigate the interaction of theory and information.
The rise of machine learning will show just how much we need better theory-making.
Indeed, theory-making is paradoxically the weak link in much social science at the moment – too much attention to impact squeezes it out, and leaves a fear of speculative, imaginative thinking – and a deluge of data could make it worse. We need, for example, much better theories of how large parts of economies can work without intellectual property; theories of place; theories to explain enduring inequalities; theories to explain unusual risks, and how social and economic systems can be prepared for the once-a-century or once-a-millennium events that may be coming more often.
We need theories that explain why some people or places buck the trends.
The key point, though, is that it’s hard to see how any part of social science will look the same in 20 years’ time, or how any social scientist – or faculty – can avoid embracing a new set of data and intelligence tools and the new sub-disciplines that may result.
The second revolution is less visible but could be no less profound. This is the hunger of many people to be creators of knowledge, not just users – generating information, running experiments, drawing conclusions. This is the idea of a social science shared between the academy and the public as a common good; not a new idea, but one that’s much easier to realise precisely because of the spread of social media, and because an ever higher share of citizens hold degrees.
At the moment, this shift to mass engagement in knowledge is most visible in neighbouring fields. Digital humanities projects mobilise many volunteers to input data and interpret texts – for example, making ancient Arabic texts machine readable. Even more striking is the growth of citizen science – eBird had 1.5 million reports last January; some 1.5 million people in the US monitor streams, rivers and lakes; SETI@home has 5 million volunteers.
Cancer Research UK’s experiment Cell Slider - classifying online images of cancer tumours - involved large numbers of citizen scientists, greatly accelerating the classification of data. And a University of Washington study recently estimated the economic value of citizen science at over $2.5 billion each year.
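The mechanism that makes projects like Cell Slider reliable is simple aggregation: many untrained volunteers classify the same image, and the majority verdict tends to be far more accurate than any single amateur judgment. A minimal sketch, with invented labels and votes:

```python
from collections import Counter

def majority_label(votes):
    """Return the most common label among volunteer classifications for one item."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical volunteer classifications of two tumour images.
image_votes = {
    "img_001": ["cancerous", "cancerous", "healthy", "cancerous"],
    "img_002": ["healthy", "healthy", "cancerous"],
}
labels = {img: majority_label(v) for img, v in image_votes.items()}
```

The same pattern – redundant amateur judgments reconciled by majority or weighted vote – would carry straight over to social observation, such as classifying instances of hate speech or mapping local economic activity.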
This drive to people becoming creators of knowledge that’s relevant to them is very evident in healthcare, where patient groups are now large, funding their own research and gathering data – as showcased a few months ago at our People Powered Health conference. At the event, Sharon Terry of the Genetic Alliance showcased the work of her organisation, which represents patients with rare conditions.
So far, however, there has been much less of this in social science, despite traditions like Mass Observation, and despite the fact that it is in many ways easier for people to observe and classify social phenomena. Yet there are obvious parallels and no shortage of fascinating possibilities: tracking what’s happening on the streets; the prevalence of hate crime or hate speech; or the emergence of new kinds of economic life.
If social science could become more embedded in daily life, society could itself become more of a lab, with more citizens becoming social scientists; a future in which the role of the specialist mutates into that of a coach and a partner, an aide to an intelligent society more than a caste apart.
This shift, greatly amplified by technology, casts a new light on some older social science traditions. John Stuart Mill’s belief in experimental progress. John Dewey’s emphasis on how societies learn. And from the 1960s, the work of Donald Campbell who advocated a truly experimental society, as he put it "a process utopia, not a utopian social structure per se… [that] seeks to implement Popper’s recommendation of a social technology for piecemeal social engineering…"
Within some professions this is taking hold, blurring the boundaries between research and action. There’s already a network of police officers using experimental methods – the Society of Evidence Based Policing – determined to generate useful knowledge. In some countries, schoolteachers see their role as both teacher and researcher, using rough and ready devices with their peers to try out variations to curriculum or teaching methods. Thousands of charities now try to work out their ‘theory of change’ and collect data to make sense of their impact.
Imagine these multiplied 10 or a hundred fold, and we start to see an emancipatory vision of a society that is hungry to know itself and is helped by the academy to be reflexive.
Add these two revolutions together – the one based on data and machine intelligence, the other on collective intelligence – and a rather different model of social science comes into view: more embedded, more engaged in the society around it, drawing on inputs and data, constructing hypotheses, both more rigorous and more creative at the same time.
I’ve shown elsewhere the powerful results being achieved by what I call collective intelligence assemblies – in fields as diverse as climate change, epidemics, cancer and labour markets.
These bring together observation, analysis, prediction, interpretation and action – with powerful implications for almost every field of life and for the daily work of social science.
We need these assemblies to help us solve big difficult problems. And we need their combination of human and machine intelligence to avoid the risks of new pathologies – false facts, false interpretations, dangerous habits amplified by algorithms.
Where they take us is to a very different view from the traditional notions of impact – which assumed a linear passage from research into society, politics and government. The alternative is a society with the skills and knowledge to experiment on itself, generating knowledge, guided by theory and method, working in partnership with specialists. The world, in other words, as a lab where the creation and use of social science is embedded into daily life.
We’re all familiar with the old idea that it’s better to teach a man to fish than just to give him fish. The implication is that it’s better to skill up society as a social scientist, than to give society pre-packaged social science conclusions.
When that starts to happen, today’s notions of impact may look rather thin and unsatisfying, reflecting their origins in the military. The alternative is to evolve social sciences that are both more technologically literate and more truly social: disciplines that are not just done for society, but also become skilled at acting with society too.
Geoff Mulgan's book ‘Big Mind’ will be published in late 2017 by Princeton University Press