Here I set out ideas on reshaping social science to make it more suited to the challenges and tools of the 2020s: more data driven, more experimental and fuelled by more dynamic feedback between theory and practice.
Social science at its grandest is the way societies understand themselves: why they cohere or fall apart; why some grow and others shrink; why some care and others hate; how big structural forces explain the apparently special facts of our own biographies. It observes but also shapes action, and then learns from those actions.
Starting with the idea of social science as collective self-knowledge, I describe how new approaches to intelligence of all kinds could help to reinvigorate it. I begin with data and computational social science and then move on to cover the idea of social research & development (R&D) and experimentation; new ways for universities to link into practice including social science parks, accelerators tied to social goals, challenge-based methods and social labs of all kinds; before concluding with the core argument: an account of how social science can engage with the emerging field of intelligence design. This is, I hope, a plausible, desirable and indeed essential direction of travel. I hope at the very least it will prompt comment and argument.
The first current is perhaps the most familiar—social scientists embracing the new capabilities of data-fed computing. We are all familiar with the extraordinary explosion of new ways to observe social phenomena, which are bound to change how we ask social questions and how we answer them. Each of us leaves a trail of who we talk to, what we eat and where we go. It’s easier than ever to survey people, to spot patterns, to scrape the web, to pick up data from sensors, to interpret moods from facial expressions. It’s easier than ever to gather perceptions and emotions as well as material facts—for example, through sentiment analysis of public debates over issues like Brexit. And it’s easier than ever for organisations to practise social science—whether investment organisations analysing market patterns, HR departments using behavioural science, or local authorities using ethnography.
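To make the sentiment analysis example concrete, here is a minimal lexicon-based scorer. It is a toy sketch of the general technique, not any particular research pipeline; the word lists and example posts are invented for illustration.

```python
# Minimal lexicon-based sentiment scorer: an illustrative toy, not a
# production pipeline. The word lists and example posts are invented.
POSITIVE = {"hope", "opportunity", "prosper", "gain", "support"}
NEGATIVE = {"fear", "chaos", "decline", "lose", "crisis"}

def sentiment(text: str) -> float:
    """Score a text from -1 (wholly negative) to +1 (wholly positive)."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "Brexit is an opportunity to prosper and a source of hope.",
    "Brexit means chaos, decline and crisis.",
]
for p in posts:
    print(round(sentiment(p), 2), p)
```

Real tools replace the hand-built lexicon with learned models, but the underlying move is the same: turning free text into a number that can be aggregated across millions of posts.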
These tools are certainly not the monopoly of professional social scientists. In cities, for example, offices of data analytics link multiple data sets and governments use data to feed tools using AI, like Predpol or HART, to predict who is most likely to go to hospital or end up in prison. The opening up of administrative data is set to have a big impact through new programmes like LEO which links UK school data to tax records and could transform our understanding of social mobility. Surprising patterns invariably emerge when data is combined in new ways—like the police discovery that the best predictor of domestic homicide is a previous suicide attempt (by the perpetrator).
Within universities computational social science has a slightly narrower definition, usually referring to the use of social simulation, social network analysis, and social media analysis. Huge data sets are being gathered on everything from human history and archaeology to image creation and literature, building on the long tradition of longitudinal studies (from the Framingham Heart Study to the National Child Development Study). Large-scale computational social science projects include The Human Project and Social Science One, and very effective proponents of new research tools, such as Matthew Salganik, are now reaching large audiences.
Social media provide a particularly fertile area for research, and some big companies are beginning to open up their data for researchers, for example, to understand the impact of social media on elections. There are strong umbrella bodies, conferences and research programmes, and signs of a big shift coming in training for social scientists.
This revolution in data, experiment and prediction, and the spread of tools to observe, analyse and predict, bring with them all sorts of challenges, many of them ethical: how to ensure that enough data is open; how to get the right data, since many of the most important facts are not captured; how not to ignore the left behind; and how to stop algorithms reflecting, and then legitimising, the biases of past actions.
But I suspect that the most profound challenge will be to develop better concepts and theories to make sense of the data. We need, for example, much better theories of how large parts of economies can work without intellectual property; theories of place and belonging; theories to explain enduring inequalities; and theories to explain unusual risks and how social and economic systems can be prepared for the once-a-century or once-a-millennium events that may be coming more often.
In the natural sciences, some argued in the 2000s that the growth of data would obviate the need for theory. Data would automatically show patterns. Theorists would become redundant.
A counter view, though, is that it’s hard to make sense of any data without some models or hypotheses, and interestingly, recent analysis of human cognition confirms that we start with models and then feed data in rather than the other way around. Much as I welcome the way that disciplines like economics have become more empirical again, it’s crucial that their engagement with data fuels creative generation of new theories and hypotheses. Otherwise we may just be left with better informed confusion and the vice (which I describe later on) of being forever trapped in first loop learning.
A key insight of social innovation is that societal self-knowledge often comes from praxis: the interplay of action and analysis, theory and practice, rather than detached observation. Anything that doesn’t yet exist (whether a new model welfare state or a novel way of providing eldercare) cannot easily be designed on the basis of backward-facing knowledge and data: hence the inherent tension between social creativity on the one hand and orthodox social science on the other.
The field of social innovation, which claims to provide answers to this dilemma, has grown greatly over the last ten or twenty years around the world both in research and in practice. Social innovation is now supported by many new funds provided by governments and foundations, new legal forms, and capacity building programmes, courses and research programmes in universities (I’ve previously written about the field’s past, present and future). It’s also now seriously engaging with the potential of data (as set out in this survey).
Social innovation both feeds off traditional social science—for example, insights into the impact of early years education—and challenges it, since practice is often ahead of theory. This means that the task for universities has been to make sense of, critique and analyse what’s working in the real world, rather than following the models of traditional technology innovation in which basic theories are developed in universities and then spread out in a linear fashion into industry.
A key emerging strand within social innovation is social research and development (R&D). The idea that R&D could be systematically funded and organised crystallised in the late 19th century. Today, between 2 and 4 percent of GDP in most advanced economies is devoted to R&D, funded by governments, foundations or businesses, and carried out by universities, government labs and corporations of many kinds. We now take it for granted that systematic R&D is crucial to economic growth and prosperity, which is why it is supported by all sorts of subsidies and tax breaks. The basic idea is to do fundamental research and then, using experimental scientific methods, to turn those insights into new products and services which can be useful in the world, whether these are pharmaceutical drugs or new kinds of aeroplane.
The idea of social R&D, however, is much less common; indeed, most R&D funders around the world focus almost exclusively on hardware and on knowledge from the natural sciences rather than the social sciences.
At various points over the last century there have been attempts to apply R&D methods to social change (including from big US foundations like Ford and Rockefeller in the 1960s). In recent years Canada has been at the forefront of this, thinking through how public funders of research and big foundations could finance systematic research experimentation on social challenges such as homelessness, integration of refugees or youth unemployment.
The mechanics of doing this are not so different from traditional R&D: funding at multiple stages, running from fundamental research through the generation of practical ideas, testing, experiment and the gathering of evidence to, hopefully, the scaling and propagation of the models that work. However, there are still no examples of social R&D being done systematically and at scale, and this debate has hardly started in most countries.
There are many challenges in doing social R&D well. They include how to orchestrate experimentation; how to harvest insights and ensure they’re used, whether in government policies or in the practices of professions like teaching and social work; how to handle the ethical and political challenges of experiments involving people’s lives; and how to avoid the risks of distortion, such as ignoring lived experience.
Nesta has had some experience of applying R&D in new fields. Our digital R&D fund for the arts was a collaboration of a public research funder (the AHRC), the main arts funder (the Arts Council) and Nesta, with the aim of funding partnerships between innovators, universities and tech companies to develop new applications of digital technology to help arts organisations either find new audiences or experiment with art forms.
We’ve also shown the value of experimentalism—the much more systematic testing out of ideas in reality rather than just on paper—which forms a significant strand of any serious approach to social R&D. Experimentation has long been normal in parts of health and is now mainstream in many parts of business, with companies like Amazon and Google doing A/B testing on new services of all kinds. The last five years have brought much greater use of experimentalism in governments, led by Canada, Finland, the UAE and the UK, all of which in different ways have introduced more systematic approaches to testing out new policies on a small scale before they are implemented across the whole country.
Nesta’s Innovation Growth Lab (IGL) is an example of what could be possible in the future. It brings together a dozen national governments and foundations to use experimental methods to find out what really works in driving innovation and entrepreneurship. The IGL uses RCTs in a field where very little was known about whether the hundreds of billions spent on business support are actually effective, and in so doing is pushing economics to become more empirical and more self-critical rather than just deducing conclusions from assumptions. The Behavioural Insights Team (BIT)—which Nesta co-owns—uses similar methods in behavioural economics, running dozens of real-life experiments to find out what kinds of nudges actually work in encouraging people to pay their taxes on time, retrofit their homes or adopt healthier lifestyles.
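The arithmetic behind such trials is simple enough to sketch. Below is a hedged illustration of how a nudge RCT might be analysed with a two-proportion z-test, comparing on-time payment rates between a treatment letter and a standard letter; the counts and arm sizes are invented, not drawn from any real BIT or IGL trial.

```python
# Sketch of analysing a two-arm nudge trial with a two-proportion z-test.
# All counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p) for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal tail, via the error function
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical arms: social-norm reminder letter vs standard letter
z, p = two_proportion_z(success_a=1150, n_a=2000, success_b=1050, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The point of running this at scale across many governments, as the IGL does, is that single results like this one are fragile; repeated, pre-registered trials are what turn nudges into reliable knowledge.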
This new culture of experiment is influencing many professions and turning their members into social scientists. The shift is helped in the UK by a network of What Works centres (linked by the Alliance for Useful Evidence). There’s already a network of police officers using experimental methods—the Society of Evidence Based Policing—to generate useful knowledge. In some countries school teachers see their role as both teaching and research, working with their peers to try out variations to curriculum or teaching methods (and the EEF encourages and funds this). The new Children’s Social Care What Works Centre is mobilising thousands of social workers to generate and use evidence in a similar spirit. At Nesta we encourage the hundreds of charities we fund to work out their ‘theory of change’ and collect data to make sense of their impact, embedding in everyday practice Karl Popper’s vision of ‘methods of trial and error, of inventing hypotheses which can be practically tested’. There are of course large areas of government and social action that remain untouched by any of this. But systematic social R&D is no longer a pipedream.
So how should universities respond to this growing interest in learning by doing and active experiment? Here I summarise some emerging approaches that complement the classic activities of universities with others which generate insights through engagement with practice: social science parks, challenge-based learning, social labs and social accelerators.
The social science park
In the 1960s and the decades that followed, many universities created science parks next to them to provide a home for spin-off companies, larger businesses and laboratories. The notion was that science parks of this kind would help translate basic research from universities into business, and there are now thousands of them around the world. The social sciences, however, were reluctant to develop similar models. One recent exception is Cardiff University, which has committed to creating a social science research park in central Cardiff, bringing together the university, Nesta’s Y Lab and the What Works Centre for Public Policy in Wales. The idea is to create a space—which makes sense more in city centres than greenfield sites—where accelerators, labs and social ventures can grow, with active cross-pollination of practical knowledge and academic research.
Social labs
Over the last decade hundreds of new labs have been set up within governments and universities to pioneer public and social innovation. Nesta itself contains several, including the HealthLab. We’ve documented the many forms they take—some using data, others design, others still citizen ideas—and their varied relationship to formal structures, and helped set many up around the world. Some of the most interesting ones sit on the edge of universities, providing a space for praxis, and they also increasingly connect to each other, creating global networks for rapid sharing on topics like joblessness or transport design (documented in the monthly LabNotes).
A related trend is the rise of challenge-based university models. Here the idea is to base the work of the university more around problem solving than the propagation of established disciplines. These models mobilise undergraduates and graduates to work in teams, usually interdisciplinary, to solve real-life problems, whether in science and engineering or in the social life of the city. In our various reports on this topic we’ve documented the many models in use around the world, from Aalto in Finland to Stanford, Olin to Tsinghua, and shown how this method of working can be very powerful as a pedagogical tool, helping students not only to deepen their understanding of core disciplines but also to understand how the real world works, and how to collaborate and achieve change. With well over 150 million students in universities around the world, there is huge scope to mobilise many more of them to work on real-life problems, for example around the Sustainable Development Goals.
Over the last twenty or thirty years there has been a big expansion of business accelerators, some linked to universities and some in city centres, providing more systematic support to business startups of all kinds. Nesta has documented these accelerators in a succession of reports such as Startup Factories. We’ve analysed in depth what makes them work and not work, and through projects like Startup Europe we’ve helped many other countries learn from the trailblazers in creating ecosystems of support for accelerators. A more recent trend has been to apply similar models to achieving social impact, and again Nesta has been involved both in the funding of these projects and in the analysis. Bethnal Green Ventures in London was a pioneer, and there are many social accelerators dotted around the world which we’ve documented through reports like Good Incubation. These support startup social enterprises, charities or for-purpose commercial enterprises which can achieve a reasonable financial return and a social goal. The rigour of having to create a viable venture forces attention to evidence and results, and in the last five years universities have become increasingly interested in hosting accelerators of this kind as a way to put social science to work.
Each of these approaches encourages social science to be engaged, practical and experimental; and each inevitably challenges traditional disciplinary boundaries and currencies.
These new approaches offer new answers to the broader challenge for social science: how to truly live up to its role as society’s collective self-knowledge, providing insights into everything from jobs to families, war to happiness.
I believe the best answers lie in seeing social sciences through the lens of intelligence design and asking how well they orchestrate the different tools and elements that together make up a recognisably intelligent system or society.
If we look at intelligence in any serious large-scale system or organisation it includes some of the following elements, all of which should be vital to a society’s self-understanding:
Any new social science discipline or subdiscipline that was being invented today would surely need an account of how it aims to organise each of these functions (I showed in my book 'Big Mind', for example, how economics could be reinterpreted through the lens of intelligence). The various methods mentioned earlier in this piece—from computational social science to experiments—fit in as parts of such an approach, but lose much of their impact if they are seen only as methods in search of problems rather than starting with problems and working backwards to find the most suitable insights, theories and methods. Duncan Watts made a similar argument: that social science should work more on solutions than theories.
Unfortunately, however, most disciplines have a quite unbalanced approach—often very strong on some parts, like observation or memory, and very weak on others. Moreover, I’m not aware of any with a coherent account of how they should mirror the crucial property of intelligence in individual human brains: the ability to connect these functions, from observation to judgement and creativity, ideally in close to real time.
These weaknesses become even more apparent if we situate social sciences within a broader story of societal learning. Intelligence in practice also and always involves learning loops: first loop learning that fits new data into existing models, paradigms and frameworks; second loop learning that generates new concepts and categories; and third loop learning that develops new ways of thinking. These together provide a good summary of what a healthy social science should look like (computational social science is itself a good example of third loop learning, but only as good as the second loop learning it builds on). Yet some disciplines become trapped in the first loop, continuously seeking to feed new data into old models rather than generating new categories.
Thinking about social science as applied intelligence makes it more natural to straddle disciplinary boundaries, as many have advocated. For example, echoing E.O. Wilson and others, Nicholas Christakis has argued that, next to the data revolution and the rediscovery of experimentation, the key radical changes impacting the social sciences today are huge advances in the biological sciences: specifically, discoveries in physiology, neuroscience and genetics (which have led to the emergence of new fields such as sociogenomics and biosocial science). Others argue that it’s the ability to think systemically that is crucial to the future of social science, learning from ecology and evolution, or reinvigorating its capabilities for design and imagination, which were quite strong in the 19th century but largely squeezed out by analytical orthodoxy in the 20th.
Yet the individual social sciences tend to resist these challenges; to be fairly inward looking; attached to particular methods; protective of boundaries; untroubled when the dominant models visibly fail; and epistemologically conservative.
For social sciences, in particular, any interest in more conscious intelligence design quickly brings in questions of collective intelligence: how to harness social inputs, drawing on the hunger of many people to be creators of knowledge, not just users—generating information, running experiments and drawing conclusions. At the moment this shift to mass engagement in knowledge is most visible in neighbouring fields. Digital humanities mobilise many volunteers to input data and interpret texts—for example, making ancient Arabic texts machine readable. Even more striking is the growth of citizen science: eBird had 1.5 million reports last January; some 1.5 million people in the US monitor rivers, streams and lakes; SETI@home has 5 million volunteers. Cancer Research UK’s experiment Cell Slider, classifying online images of cancer tumours, involved large numbers of citizen scientists, greatly accelerating the classification of data. And a University of Washington study recently estimated the economic value of citizen science at over $2.5 billion each year (you can read more about it in this book).
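Projects in the Cell Slider mould typically show each item to several volunteers and keep only the labels on which enough of them agree. A minimal sketch of that aggregation step follows; the labels, image IDs and the 60 percent threshold are invented for illustration, not Cell Slider’s actual protocol.

```python
# Aggregating redundant citizen classifications by majority vote.
# Labels, IDs and the agreement threshold are invented for illustration.
from collections import Counter

def consensus(labels, threshold=0.6):
    """Return (label, agreement) if agreement meets the threshold,
    else (None, agreement) to flag the item for expert review."""
    top, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    return (top if agreement >= threshold else None, agreement)

image_labels = {
    "img-001": ["tumour", "tumour", "healthy", "tumour", "tumour"],
    "img-002": ["healthy", "tumour", "healthy", "tumour"],
}
for image_id, labels in image_labels.items():
    label, agreement = consensus(labels)
    print(image_id, label, f"{agreement:.0%}")
```

Redundancy is what makes non-expert labels usable: individual volunteers may err, but agreement across several of them is a strong signal, and the ambiguous residue can be routed to specialists.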
This drive for people to become creators of knowledge that’s relevant to them is very evident in healthcare, where patients’ groups—like the Genetic Alliance, representing patients with rare conditions—are now large, funding their own research and gathering data. But so far there has been much less of this in social science, despite traditions like Mass Observation and despite the fact that it is in many ways easier for people to observe and classify social phenomena than physical ones. Yet there are obvious parallels and no shortage of fascination with social facts that could prompt people to track what’s happening on the streets; the prevalence of hate crime or speech; the emergence of new kinds of economic life.
If social science could become more embedded in daily life then society could itself become more of a lab and more citizens could become part-time social scientists. Here we see a possible future in which the role of the specialist mutates into more of a coach and a partner, an aide to an intelligent society more than a caste apart—a vision that would revive older social science traditions, including John Stuart Mill’s belief in experimental progress, John Dewey’s emphasis on how societies learn and, from the 1960s, Donald Campbell’s advocacy of a truly experimental society, ‘a process utopia, not a utopian social structure per se… [that] seeks to implement Popper’s recommendation of a social technology for piecemeal social engineering…’. We’re all familiar with the old idea that it’s better to teach a man to fish than just to give him fish. The implication of this long tradition is that it’s better to skill up society’s ability to do social science rather than only giving it already-packaged social science conclusions.
If this is how social science could connect to citizens, it can also apply collective intelligence and intelligence design concepts to its own work. One of the most promising developments of recent years is the proliferation of tools to support social scientists, allowing them to act in something more like a collective intelligence, many supported and documented by Sage Ocean. There are tools like Ureka to follow the latest research, as well as ResearchGate, Academia.edu and Iris.ai—research discovery with artificial intelligence; there’s Benchfly, helping video production for researchers; IN-PART, connecting researchers with industry; and others like SciLine, Linknovate, konfer, Pivot, Kolabtree, Academic Labs and Ohio Innovation Exchange. There is Ask Wonder, which you can email questions to and expert researchers will compile a list of resources for you; ThinkLab, where an algorithm distributes resources for comment and discussion by researchers and others, rewarding engagement with difficult topics; Real Scientists, a Twitter account where researchers and science journalists take over and talk about their lives and outputs; and The Conversation, providing news. Finally, in the next few months there will be Nesta’s own Rhodonite and Clio search engines—analysing new trends in innovation, technology and social science at a global scale.
The main message of this piece is that social sciences should more often start with problems rather than specific disciplinary approaches; should situate their own tools within a broader theory of intelligence; and should build up a range of complementary methods alongside the classic ones of the academy (the peer-reviewed journal, the lecture, and so on).
To me this seems an obvious direction of travel. But it rubs against tradition; inertia; and the pull of status. The social sciences have quite a lot to say about why systems so often resist change. But they also show how, in time, new generations tend to break down the barriers.