Crowdsourcing for democracy using Wikisurveys

Many examples of online engagement - whether on social media or other platforms - have shown a tendency to polarise opinion rather than bridge divides, as false information circulates and people gravitate towards others who share their political affiliations.

Wikisurveys are one promising alternative for online public engagement, offering a simple and engaging way to gauge the ideas, opinions and feelings of a large group of people on a given topic or policy area. We first came across examples in our digital democracy research around two years ago, and although they have been around for a while their use has gradually expanded in recent years.

When it comes to digital democracy, we’ve often argued against focusing on flashy tools rather than on the difficult task of changing institutions and cultures. We obviously don’t see Wikisurveys as a panacea for all our democratic woes. But in a world where traditional social media increasingly become channels for directing toxicity and abuse towards our politicians, Wikisurveys could be one way to start building a healthier public sphere on the internet.

As I’ll argue below, Wikisurveys have the potential to improve the quality of feedback and level of interaction that politicians, parliamentarians and political parties have with people on a large scale. I’ll define what they are, give some example use cases, and provide some guidance on how they can be deployed in the context of democratic engagement.

What are Wikisurveys?

One way to think about Wikisurveys is as a survey created by the people taking it. Any participant can add a question, or “statement”. This is added to a pool of statements, which are randomly presented back for individual participants to respond to or rank. Over time, as participants react to one another’s submissions, the Wikisurvey builds an increasingly accurate picture of the most popular and unpopular statements.

There may also be a moderator, who frames the discussion (e.g. by providing one overarching question that starts the conversation) or inserts “seed” statements to populate the survey at the beginning and help guide the conversation.
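To make the mechanics concrete, here’s a minimal sketch of that core loop in Python. Everything in it - the class, the method names, the net-agreement scoring rule - is an illustrative assumption of mine, not the implementation of any particular platform:

```python
import random
from collections import defaultdict

class Wikisurvey:
    """Toy model of the core Wikisurvey loop (illustrative only)."""

    def __init__(self, seed_statements=()):
        # A moderator can pre-populate the pool with "seed" statements.
        self.statements = list(seed_statements)
        self.votes = defaultdict(lambda: {"agree": 0, "disagree": 0, "pass": 0})

    def add_statement(self, text):
        # Any participant can add a statement to the shared pool.
        self.statements.append(text)

    def next_statement(self):
        # Statements are presented back at random for others to respond to.
        return random.choice(self.statements)

    def respond(self, statement, response):
        assert response in ("agree", "disagree", "pass")
        self.votes[statement][response] += 1

    def results(self):
        # Rank statements by net agreement across all responses so far.
        return sorted(
            self.statements,
            key=lambda s: self.votes[s]["agree"] - self.votes[s]["disagree"],
            reverse=True,
        )

survey = Wikisurvey(seed_statements=["Add more cycle lanes"])
survey.add_statement("Pedestrianise the high street")
survey.respond(survey.next_statement(), "agree")
print(survey.results())  # most agreed-with statements first
```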

A picture of the All Our Ideas platform

Example of a Wikisurvey user interface on allourideas.org, hosted by New York City Council: https://www.allourideas.org/planyc_example?guides=true

Wikisurveys are unique in that they transcend the limitations of closed research methods (e.g. surveys) and open research methods (e.g. qualitative research, interviews), landing somewhere in the middle. They respond to several problems with traditional approaches:

Quantitative online engagement (e.g. surveys, polls)

Consultations usually operate like a black box. In traditional digital engagement exercises (e.g. a survey hosted on a government website) you rarely find out where your responses go, and it’s seldom possible to understand how the results of the consultation were reflected in the final decision. Wikisurveys, by contrast, are visual: everyone can see the results evolve as the survey takes place. This improves participant satisfaction and boosts the legitimacy of the process, as people can immediately see the results of participation, how well their statements fared in relation to others, and the aggregate areas of consensus and disagreement.

A list of ideas generated by a typical All Our Ideas Wikisurvey

Example of live results shown on a separate page, from a Wikisurvey on allourideas.org run by Nesta, at the Collective Intelligence Conference in October 2018.

Traditional surveys are too prescriptive. Surveys and quantitative research methods are pre-defined by the moderator or researcher. By contrast, Wikisurveys are adaptive, allowing people to respond to the results over time. Participants can add new questions of their own, shaping the conversation and changing its course.

When the Taiwanese government used a Wikisurvey to crowdsource a set of new regulations on the ridesharing economy, they found that participants began to compete to make more inclusive and nuanced statements over time. This is an interesting contrast with traditional social media platforms, where the incentive is to make provocative statements in order to generate the most shares or likes.

Qualitative online engagement (e.g. digital forums, free-text, interviews)

Most tools make online engagement noisy. Traditional online engagement forums haven’t evolved much beyond the basic web forums of the 90s. For researchers and policymakers, this design is susceptible to what Mark Klein refers to as the “signal-to-noise” problem. Forums and online surveys create masses of text that are highly resource-intensive to sort through, and difficult to turn into digestible insights without being selective or biased. What’s more, people may suggest hundreds of duplicate ideas, or even suggest ideas that no-one ever sees (e.g. if they’re right at the bottom of a page).

Wikisurveys limit character counts, thereby discouraging long, text-based contributions. More importantly, in a Wikisurvey everyone responds to everyone else’s comments in a structured way - the user interface focuses participants’ attention on one or two statements at a time, and people are encouraged to reflect on other people’s opinions before adding their own ideas. This reduces duplication and creates more valuable data. It also makes it much easier for moderators or policymakers to visualise and interpret the results (more on visualisation later).

A series of dots, each representing one statement, clustered on a scale from left to right.

Some Wikisurveys summarise the results of a survey linearly, placing statements on a scale from those with the highest levels of consensus on the left to the most divisive on the right. Source: https://pol.is/report/r6xd526vyjyjrj9navxrj
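As a rough illustration of how such a scale might be computed, the sketch below scores each statement by how far its votes are from an even split. This metric is my own simplification for illustration; Pol.is calculates its own, more sophisticated statistics:

```python
def divisiveness(agree, disagree):
    """Return 0.0 for full consensus, 1.0 for an even 50/50 split.

    A hypothetical metric for illustration only - not the statistic
    Pol.is itself uses to order statements.
    """
    total = agree + disagree
    if total == 0:
        return 0.0
    # Invert the distance of the vote margin from an even split.
    return 1 - abs(agree - disagree) / total

# Toy (agree, disagree) tallies for two statements.
statements = {"More bike lanes": (90, 10), "Ban cars downtown": (52, 48)}
for text, (a, d) in sorted(statements.items(),
                           key=lambda kv: divisiveness(*kv[1])):
    print(f"{divisiveness(a, d):.2f}  {text}")  # consensus first, divisive last
```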

Traditional methods favour the most politically active. Evidence from e-participation research shows that low barriers to participation are an important factor in attracting and retaining more participants. For instance, in exercises that only allow text-based forms of participation, it’s often the most eloquent or confident who take part most actively.

Wikisurveys encourage both a “fat head” and a “long tail” of engagement - that is, they offer different levels of engagement, from higher-bar activities (submitting statements) to lower-bar ones (ranking or responding to statements). By providing these different “levels” of engagement, Wikisurveys make the process less intimidating and may therefore encourage higher levels of participation.

Online engagement frequently descends into argument. The most successful forms of online engagement have been those enabling support of political campaigns and mass mobilisation (think the 15-M Movement in Spain or the Arab Spring). Online forums and social media have changed the way campaigns and political groups are able to disseminate ideas and communicate with one another. But what they’ve been less good at is supporting empathy-building forms of engagement, namely those that require deliberation and consensus.

Platforms like Twitter and Facebook commonly encourage pointless arguments and exchanges where either the loudest or most emotional voices gain the most attention. In response to this challenge, Wikisurveys remove the “reply” function entirely, which means that responses cannot turn into spiralling threads or abusive exchanges between participants. Instead the incentive is to make stand-alone statements that appeal broadly to the whole group.

A Wikisurvey statement presented through a chatbot interface

Online engagement can be boring. Filling out a traditional government consultation or survey can feel either like writing an exam or like filling in a job application (“stage 1 out of 5”). In contrast, one distinctive aspect of Wikisurveys is that all qualitative statements become associated with a set of quantitative values (namely the sum of participants’ responses to each statement). This means that the information created during Wikisurveys is highly amenable to data analysis and visualisation, which can be translated into interesting user experiences. Results can be clearly summarised as a list, a grid, or a dynamic 2D or 3D visualisation (see Pol.is below).

Structuring the input in this way also makes it possible to adapt Wikisurveys into simpler, text-based formats, such as SMS messages or chatbots, which can be sent directly to people’s devices and answered as if writing a personal message to a friend.

Examples of Wikisurveys

There are a large number of examples out there, many of which are pitched as market research or collective intelligence tools for companies and large organisations. I’ll focus more specifically on examples from the field of democratic innovation.

Pol.is

Pol.is is an open source online survey tool which asks participants to react in a constructive way to one another’s opinions in order to find points of consensus. From the user’s perspective, anyone participating can make a statement, then all other participants are asked to “agree”, “disagree” or “pass” on each of those statements.

Since its creation in 2014, Pol.is has been used to crowdsource ideas about how to improve a local neighbourhood in Kentucky; to decide how to regulate the sharing economy in Taiwan; and to understand opinions and ideas among 34,000 people in a Wikisurvey run by a German political party.

Over time, as statements and responses are collected, the tool performs something called “dimensionality reduction”, where a machine learning algorithm clusters and visualises groups of participants according to the statements they agree or disagree with most.

Groups are represented on the basis of how divided they are, regardless of the number of people who agree or disagree with them. This means that you can have one group mobilising many people in favour of one idea (say, 1000 people flood the conversation and vote in favour of one statement), but minority ideas will still remain represented with equal space on the screen. This puts the onus on deliberation, not mobilisation.
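Here’s a minimal sketch of how such an opinion map might be produced, assuming the common recipe of dimensionality reduction (here, principal component analysis) followed by k-means clustering on the participant-by-statement vote matrix. Pol.is’s actual pipeline differs in its details:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Rows are participants, columns are statements:
# 1 = agree, -1 = disagree, 0 = pass / not yet seen (toy data).
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 1,  1,  0, -1],
])

# Project each participant onto a 2D "opinion map", then cluster.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Summarise how each opinion group feels about each statement.
for statement in range(votes.shape[1]):
    for g in np.unique(groups):
        members = votes[groups == g, statement]
        print(f"statement {statement}, group {g}: "
              f"mean agreement {members.mean():+.2f}")
```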

Visual depiction of people clustered around different opinion groupings on Pol.is

Screenshot from the Pol.is conversation on ridesharing by vTaiwan: https://vtaiwan.tw/uberx/

The visual depiction of ideas and opinions in pol.is is amenable to many different types of data visualisation and interactive design. It’s often the case that the only way to engage people with traditional methods like surveys and offline engagement is to pay them, so the importance of designing more enjoyable and fun engagement experiences shouldn’t be underestimated. Researchers in Taiwan are showing how data collected via pol.is can be transformed into a more fun and immersive experience using virtual avatars and VR.

Allourideas.org

All Our Ideas is a Wikisurvey tool developed by researchers at Princeton University in 2010. In contrast to Pol.is, where participants are presented with one statement at a time and three options (“agree”, “disagree” or “pass”), All Our Ideas uses something called pairwise comparison: participants are presented with two statements at a time and simply asked to choose whichever they prefer. Over time, All Our Ideas is able to rank ideas from most popular to least popular.
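To illustrate how a ranking can emerge from nothing but pairwise votes, the sketch below fits a Bradley-Terry model with a simple iterative update. This is a standard textbook method of my choosing, not the exact algorithm All Our Ideas uses:

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iters=100):
    """Estimate a strength score per idea from (winner, loser) votes.

    Fits a Bradley-Terry model with the standard iterative MM update;
    higher strength = more likely to win a pairwise matchup. Ideas
    with no wins converge to zero strength.
    """
    wins = defaultdict(int)         # total wins per idea
    pair_counts = defaultdict(int)  # comparisons per unordered pair
    ideas = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        ideas.update((winner, loser))

    strength = {i: 1.0 for i in ideas}
    for _ in range(n_iters):
        new = {}
        for i in ideas:
            denom = sum(
                pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                for j in ideas
                if j != i and pair_counts[frozenset((i, j))] > 0
            )
            new[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(new.values())   # normalise so strengths sum to 1
        strength = {i: s / total for i, s in new.items()}
    return strength

votes = [("bike lanes", "more parking"), ("bike lanes", "new stadium"),
         ("more parking", "new stadium"), ("bike lanes", "more parking")]
ranked = sorted(bradley_terry(votes).items(), key=lambda kv: -kv[1])
print(ranked)  # most popular idea first
```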

One of the best known examples of All Our Ideas was run by the New York City Mayor’s Office of Long-Term Planning and Sustainability, where top-ranked ideas were integrated into the city’s PlaNYC 2030 Sustainability Plan. Over four months, around 1,400 respondents provided nearly 32,000 votes and 464 new ideas, many of which the city had not previously considered. A number of other local governments and organisations, mainly in the US, have since begun to use Wikisurveys.

Pairwise comparison can be a useful way to ask a crowd to quickly filter through a large number of ideas, and more recently exciting new projects using this mechanism have begun to emerge. For instance, a number of more experimental collective intelligence tools like Remesh are using pairwise comparison, combined with natural language processing, to enable the effect of one person having a seamless conversation with a large crowd in real-time.

Guidance for when to use (and when not to use) Wikisurveys

There are of course limitations to using Wikisurveys. If you’re looking for deeper, qualitative insights, they may not provide the right answer. Equally, if you’re looking for strictly representative samples, a commissioned survey or poll might work better. The following guidance should help you get the most out of Wikisurvey experiments. These insights are based on lessons learned from speaking to political parties and policymakers who have used Wikisurveys, as well as our own experience running a handful of experiments with 250 or so colleagues here at Nesta.

Be clear about the purpose. Wikisurveys are good for crowdsourcing ideas and gauging key division points among large numbers of people. They could also be an effective method for “temperature checking” the mood or diversity of opinion among a population. This could be hugely useful (and simple) for MPs or local politicians, though to our knowledge none in the UK have tried it yet.

Wikisurveys play one role in a broader process. Wikisurveys often work well when blended with other forms of participation. Typically they should lead onto other forms of engagement (e.g. setting the scene for face-to-face meetings) where ideas can be fleshed out and turned into more concrete actions.

It’s not just about the tool: As touched on at the beginning of this post, a good participation exercise should start with a willingness to open up, to act on public opinion and give feedback to participants about how their input was used. This is aligned with our general guidance about the use of digital democracy tools: a tool won’t solve all your problems - you have to be willing to change internal processes and cultures as well.

Think about outreach. If you’re going to ask something more complicated or specific, it’s worth thinking carefully about how you might attract people with relevant technical knowledge or lived experience to participate. Online marketing like Facebook Ads can work very well (although it’s admittedly not that popular at the moment). It’s also worth reaching out to relevant advocacy groups and people with access to larger networks (trade union organisations, etc.).

Keep the question simple and relevant. The best Wikisurveys ask open-ended, accessible questions, without being too abstract or blue-sky. A good recent example asked local residents of a city: “How might we improve our local neighbourhood to make it a better place to live, work and spend time?” Another might be to ask teachers how they feel about a proposed education policy.

There’s a world beyond the internet. Any good public engagement exercise should start by trying to understand who the audiences are and how you’re going to access them. If there are groups you want to engage who likely won’t have access to your Wikisurvey, make active efforts to reach them by other means.

Give people time. Although Wikisurveys are simple and quick to set up, it may take several days or weeks before an accurate picture of the group’s opinions emerges.

Don’t be too precious: We need a much greater culture of trial and error when it comes to digital engagement. Many politicians we speak to are excited by the opportunities but too worried by the risks to actually try anything. Be bold, and start by trying Wikisurveys on a lower-risk topic. See if they work, and which questions work better than others.

There are still open research questions and challenges with using Wikisurveys. We know relatively little about how people respond to different types of visualisations, and without clear explanation people can find them off-putting or counter-intuitive rather than fun. It’s also hard to know whether Wikisurveys simply offer a tool for mapping existing opinions rather than encouraging genuine reflection on what other people are saying. As mentioned above, there are promising examples of people showing more deliberative behaviours using Wikisurveys, but there needs to be more effort to move beyond anecdotal evidence and test these questions more scientifically.

Final notes

If you’ve already used Wikisurveys, reach out to us - we’d love to add your example to our growing list of collective intelligence tools and projects. We can also offer advice or put you in touch with the right people to run one.

If you’re interested in joining an experiment into using Wikisurveys, UCONN and the University of Westminster have just launched weDialogue, trialling pol.is in the context of online news commenting.

Also if you have any other examples (successful or failed) of Wikisurveys or similar tools for democratic engagement feel free to let us know in the comments below.

Acknowledgements: Thanks to Colin Megill, Tom Saunders and Kathy Peach for useful comments and conversations.

Further reading

https://www.technologyreview.com/s/531696/inspired-by-wikipedia-social-scientists-create-a-revolution-in-online-surveys/

https://www.bitbybitbook.com/en/1st-ed/asking-questions/how/wiki/

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0123483

Author

Theo Bass

Senior Researcher, Government Innovation

Theo was a Senior Researcher in Nesta's Research, Analysis and Policy Team.
