Three ideas for blending digital and deliberative democracy
In recent months there’s been a surge of interest in deliberative methods of public engagement in the UK. The Royal Society for the Encouragement of Arts, Manufactures and Commerce (RSA) and Involve are leading a campaign for deliberative democracy, UK Select Committees have begun to experiment, and the UK Government has announced funding for its own programme of so-called ‘mini-publics’ to be tested with local authorities across England.
Most of these methods are offline, face-to-face forms of engagement. A citizens’ assembly, to take one example, typically involves randomly selecting a group of between 50 and 250 people, inviting them to a venue, giving them access to facts and evidence, allowing them to question experts, and then having them discuss the issue and produce recommendations for policymakers to respond to or implement.
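The selection step is, at its core, stratified random sampling: draw names at random, but allocate seats so the assembly mirrors the wider population on key demographics. Here is a minimal sketch in Python, assuming a hypothetical invitee pool with a single `age_band` attribute (real sortition services stratify on several variables at once, and the function names here are illustrative, not from any real tool):

```python
import random
from collections import defaultdict

def select_assembly(pool, strata_key, target_size, seed=None):
    """Randomly pick participants while keeping each demographic
    stratum's share of seats close to its share of the pool."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[strata_key(person)].append(person)

    selected = []
    for members in by_stratum.values():
        # Allocate seats in proportion to the stratum's size in the pool
        quota = round(target_size * len(members) / len(pool))
        selected.extend(rng.sample(members, min(quota, len(members))))

    # Proportional rounding can land slightly off target, so shuffle and trim
    rng.shuffle(selected)
    return selected[:target_size]

# Hypothetical pool of 300 respondents across three age bands
pool = [{"id": i, "age_band": band}
        for i, band in enumerate(["18-34", "35-54", "55+"] * 100)]
assembly = select_assembly(pool, lambda p: p["age_band"], 60, seed=1)
print(len(assembly))  # prints 60
```

In this toy pool each age band holds a third of the 300 respondents, so each receives a third of the 60 seats; in practice quotas would come from census data rather than the pool itself.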
Deliberative methods require some slow brain work - often over a period of several days or weekends - and tackle areas where an understanding of complexity and trade-offs is necessary to make a decision. While this is an important condition for their success, it also means that they tend to be expensive and limited to the people in the room, which raises questions around community ownership and legitimacy.
More people are now asking how digital innovations might supplement or enhance deliberative methods, but ideas and details on what this could look like, or why it might be useful, are lacking. For instance, the recent Innovations in Democracy programme from the UK Government (for which I am currently serving on the advisory board) aims to pilot eight to ten citizens’ assemblies across England over the next year, including a remit to experiment with “online civic tech tools to increase broad engagement”. However, it’s still unclear what this will look like in practice.
In what follows I’ll present three options for experimentation. I focus on the potential benefits of blending offline and online approaches, looking in particular at the potential for digital tools to broaden participation in deliberation; to increase the diversity of ideas feeding into the process; and to boost transparency and trust.
1) Extending online dialogue: broadening the process of deliberation outside the room
One of the early promises of digital democracy was that digital platforms could extend the reach of the deliberative process, going further than just the people in the room, and allowing deep interaction among participants to take place at scale and across a region or an entire country.
Examples of this include the Australian Citizens’ Parliament (ACP), which used an ‘Online Parliament’ to bring 300 people together to make recommendations on how to strengthen Australia’s democracy. Participants were randomly selected to ensure representativeness and then invited to join a curated, closed group which did its best to replicate the conditions for deliberation online. Other examples, like Common Ground for Action, ask participants to rank and filter ideas that have already been discussed by a face-to-face group: participants are curated and invited to rank a series of options, before debating and reflecting on other people’s opinions in a safe online environment. Each of these approaches provides a useful way for the offline participants to gauge the opinions of people outside the room, as well as facilitating deeper understanding of the issues among a broader group.
Research in this area generally doesn’t consider whether online methods deliver more benefit on their own or when blended with traditional face-to-face approaches. If they are blended, there’s a practical challenge in deciding precisely how the online output will feed into the offline discussion.
One option would be for the online exercise to happen first, and the recommendations used as the starting point for the offline discussion where participants build on, or add to, them over time. Another option would be for online discussions to be summarised daily and fed into the process at clearly defined points. Both of these raise challenges around impartiality (‘who is deciding how to summarise the online discussion?’) and expectations management (‘is my input online actually being used in any meaningful way?’). Ideally the task of sorting through and responding to online proposals should be assigned to the randomly selected offline group to preserve impartiality, rather than some unknown moderator.
There will inevitably be a challenge around motivation - it’s harder to hold people’s attention online for more than ten minutes, let alone engage them over several days or weekends. This issue is generally reflected in participation numbers for online experiments in deliberation. In the case of the ACP mentioned above, only 300 people across the whole country participated in the Online Parliament (out of 8,000 contacted) - a figure not staggeringly higher than the 150 invited to join the physical event.
Other attempts to foster a culture of online deliberation include the German Pirate Party’s failed experiments with Liquid Feedback, where an overly complex tool worked well as a method of communication within the party for already-engaged members, but struggled to attract participants when targeted at the wider electorate. This reminds us that people who are confident with technology and already politically active will be far more likely to volunteer to participate online than other groups, and that different user experiences will yield very different participation rates.
Deliberative democracy experts will also argue there’s a palpable shift in the quality of engagement when people from different backgrounds are brought together in the flesh, including a higher sense of responsibility and civic duty which is hard to replicate digitally.
In response, some are trying to reintroduce the face-to-face element online. James Fishkin is working on a prototype for scaling online deliberation in which participants see one another and interact across a series of live video sessions, while an automated moderator asks questions to prompt reflection and ensure that all voices are heard equally. Other approaches to motivating higher levels of engagement include the use of games, like Engagement Lab’s @Stake, which has been shown to improve levels of empathy and creativity in the civic process.
Experimental approaches like these aside, a good starting point for practitioners would be to look carefully at the evidence of what works in online deliberation before rolling out something new. Projects like Cornell University’s Regulation Room provide a set of generalised, evidence-based approaches (and software tools) for designing a productive deliberative conversation online and lowering barriers to participation.
2) More viral engagement: improving the diversity of ideas that feed into the process
A rather different school of thought argues that online forms of participation should be less about deep deliberation, and more about creating more viral, large-scale forms of engagement like crowdsourcing.
Done well, this can be deployed to powerful effect. The large-scale participation achieved by political parties like Podemos on Reddit in 2015, or by the German party Aufstehen’s recent opinion-mapping exercise with 34,000 participants, demonstrates how inviting large numbers of people to submit, rank and filter ideas can generate collective intelligence that helps policymakers understand the key points of division among a population. In these examples participation is generally quicker and less reflective than in more deliberative methods. But it serves a rather different purpose: to improve understanding of the issues that matter most, or to highlight pain-points which need to be fleshed out in more detail in a face-to-face environment.
In practice, combining this with offline deliberation might look something like Madrid City Council’s recent announcement, whereby thousands of ideas collected on its digital democracy platform Decide Madrid will be sorted by a randomly selected citizens’ assembly. The idea here is for the online crowdsourcing platform to set the parameters for the citizens’ assembly, which takes on the role of sifting through online proposals and deciding which ones to turn into implementable policies. A similar system, where the offline group is tasked with filtering and responding to online submissions, was used in Ireland for the 2017 citizens’ assembly on abortion.
Technology can also help us filter the noise more efficiently when inviting masses of online submissions. Wikisurveys like Pol.is constrain what people are able to input, and use dimensionality reduction to find the key issues and present the widest variety of opinions to policymakers, preserving minority voices rather than amplifying the loudest ones - an area where traditional social media has failed.
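Pol.is itself does this with PCA and clustering over the full participant-by-statement vote matrix. As a toy illustration of the underlying idea - group participants by vote similarity, then rank statements by their support in every group, so that minority opinions aren’t drowned out - here is a pure-Python sketch (the vote data and function names are hypothetical, not Pol.is’s actual API or algorithm):

```python
from itertools import combinations

# Hypothetical votes: participant -> {statement_id: +1 agree, -1 disagree, 0 pass}
votes = {
    "p1": {0: 1, 1: 1, 2: -1},
    "p2": {0: 1, 1: 1, 2: -1},
    "p3": {0: 1, 1: -1, 2: 1},
    "p4": {0: 1, 1: -1, 2: 1},
}

def similarity(a, b):
    """Fraction of commonly-voted statements on which two participants agree."""
    shared = set(a) & set(b)
    return sum(a[s] == b[s] for s in shared) / len(shared)

def two_opinion_groups(votes):
    """Split participants into two groups, seeded by the most dissimilar pair."""
    people = list(votes)
    seed_a, seed_b = min(combinations(people, 2),
                         key=lambda p: similarity(votes[p[0]], votes[p[1]]))
    groups = {seed_a: [seed_a], seed_b: [seed_b]}
    for person in people:
        if person not in (seed_a, seed_b):
            closest = max((seed_a, seed_b),
                          key=lambda s: similarity(votes[person], votes[s]))
            groups[closest].append(person)
    return list(groups.values())

def consensus_scores(votes, groups):
    """Score each statement by its *minimum* support across groups, so a
    statement only ranks highly if every opinion group backs it."""
    statements = {s for v in votes.values() for s in v}
    return {s: min(sum(votes[p].get(s, 0) > 0 for p in g) / len(g)
                   for g in groups)
            for s in statements}

groups = two_opinion_groups(votes)
scores = consensus_scores(votes, groups)
# Statement 0 scores 1.0 (backed by both groups); 1 and 2 score 0.0 (divisive)
```

Real wikisurvey tools replace this greedy grouping with dimensionality reduction plus clustering, but the principle is the same: surface statements with cross-group support rather than the loudest or most numerous ones.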
Facilitators might also look into commissioning methods that try to automate the process of online crowdsourcing. Chatbots, like those used by the Government of Jersey, go to where audiences already are (e.g. Facebook Messenger) and ask questions that people can answer as if responding to a message from a friend. It’s also not uncommon for citizens’ assemblies to commission external polling or market research companies to survey the opinions of the wider population (though these aren’t cheap), which can then be used as briefing materials during the offline deliberations.
Where fewer people are expected to participate - say, if the issue being addressed is highly local - then online forum tools and idea generation platforms could be used to widen participation beyond those in the room. Examples of tools include those like Your Priorities, Consul or Discourse, some of which were tested during the Nesta-led D-CENT project a few years ago.
Again, expectations management and fairness will need to be considered carefully, and there’s the same challenge of finding a way for this to integrate constructively into the offline process. The examples above demonstrate that this type of online engagement can work well before the offline process starts, whether by generating a set of crowdsourced ideas to be used at the beginning of the deliberative process, by narrowing or focusing the agenda, or even by deciding which issues are chosen for deliberation in the first place.
A more obvious objection here (with the exception of a representative opinion poll) is that participants are self-selecting and therefore will inevitably be less representative of the wider population than in the more curated methods mentioned above.
In some ways this is a deliberate design choice. Many of these methods embrace the fact that online engagement is quicker and scrappier. They typically aim to lower the barriers to participation, for instance by reducing the amount of personal details required to participate. This makes the process of participation simpler and more inviting, but it also means that it’s more difficult to maintain a representative sample or carefully control who is taking part.
In response one could look to the newDemocracy Foundation’s guidelines on how a self-selecting group might operate alongside more representative offline groups. nDF suggests that crowdsourcing activities can be organised around ‘proposal teams’, whose role is to gather facts and ideas from the wider populace for the offline group to formally consider as part of its ongoing deliberations. As nDF puts it:
"The policy jury and review panel [i.e. the offline group], responsible for reviewing proposals, deliberating, and making the final recommendation or decision, would have the advantages of being representative, deliberative, and corruption-resistant. The proposal teams, responsible for generating ideas for the mini-publics to consider, would foster broader participation, a more diverse range of ideas, and increased public support for the process and outcomes of the deliberation."
In other words, proposal teams offer an alternative opportunity for interested outsiders to take part, but it is the randomly selected offline group that remains the final arbiter of decisions and the ultimate source of legitimacy. nDF also goes on to say that, in order to improve the quality and reduce the bias of contributions from self-selected proposal teams, organisers should ensure a very wide representation of opinions. In practice this might include planning a basic marketing campaign at the beginning of the process, reaching out to and inviting a range of different advocacy groups to take part in order to make the crowdsourcing as inclusive as possible. Proposal teams could also take a hybrid form, gathering ideas via a structured questionnaire or survey that could be delivered online or offline. Examples include participatory budgeting in Paris, where a digital platform operates in parallel to offline activities such as pop-up street stalls that collect ideas from pedestrians all around the city.
3) Boosting transparency using digital tools
A more straightforward use for digital tools would be to make the offline process of deliberation more transparent. There are examples where journalists have harshly criticised citizens’ assemblies, either because of perceptions of bias or the sense that highly important decisions were being made behind closed doors.
This isn’t rocket science - it can be as simple as creating a basic website, providing live video streams, or publishing recommendations online. In the case of the 2017 Irish citizens’ assembly on abortion, this material became a valuable asset for journalists, as well as a useful educational resource for the wider public in the run-up to the 2018 referendum.
For a more radical approach we could look to vTaiwan - a radically transparent consultation process developed by activists in Taiwan, and later adopted by the government. The process starts by asking all expert stakeholders (including government departments) to provide their own official statements and to publish any raw facts or data related to the topic. These statements are then published via SlideShare on the vTaiwan website, under strict criteria of accessible language and readability. Forum comments, survey data, videos and discussions throughout the consultation, even in preparatory meetings, are also made publicly available and, where possible, transcribed and published using tools like SayIt - an open source transcription tool created by mySociety which publishes text as structured, searchable data.
By turning the entire process of consultation into a publicly accessible audit trail of information, anyone can go back and see exactly where a particular decision was made and on what facts or opinions it was based, in turn helping outsiders to gain more trust in the process. It also improves collective awareness and makes conversations more productive. For example, facilitators may point to where a conversation or debate was already resolved at an earlier point in the consultation, in order to avoid duplication or wasted time discussing similar issues more than once.
By opening up the process, facilitators risk putting off people who may not wish to be identified, so a careful balance needs to be struck between transparency and respect for participants’ privacy. This might include only filming the experts who join the process (i.e. not the citizens themselves), or using audio recordings or published transcriptions instead of raw video. There’s also the issue of cost: resources for transcribing, uploading and summarising all need to be factored in, and this could add to an already expensive process.
Being proactive about experimentation and learning
As the discussions above make clear, there’s still a lot of room to experiment and tweak the parameters of how offline and online engagement can be blended to the best effect. There are no clear answers or flat-pack solutions. As we’ve argued before, the field of online engagement needs to get a lot smarter about how it thinks about experimental design and evaluating the impact of different approaches.
The UK Government’s new Innovations in Democracy programme mentioned above is an opportunity to do just that. Eight to ten citizens’ assemblies will be piloted across the country in the coming months, and there will be a learning coordinator whose role is to capture what worked and why. With this comes a good chance to compare different approaches, and to understand how different design choices lead to different outcomes and levels of participation. For instance, does online crowdsourcing of ideas before the assembly significantly enhance the quality of face-to-face deliberation? Do deliberative online forums foster higher participation rates than other types of crowdsourcing? How do attempts to boost transparency or invite wider participation affect people’s perception of the process?
If you have any experience of blending online engagement with deliberative democracy methods we’d love to hear from you in the comments below.
Thanks to Sarah Allen, Isaac Stanley and Tom Symons for useful comments and conversations.