Who was behind this experiment?

What was the experiment?

Many crowdsourcing tasks are performed by volunteers, but paid crowdworkers are playing an increasingly important role in analysing information during disasters. This experiment explored how different payment levels, task variety, task difficulty, and feedback affected task accuracy and crowdworker motivation.

The experiment had originally been designed to test how to maintain crowdworker engagement over the long term. As it proved difficult to engage a sufficiently large crowd of participants during the experiment, the researchers shifted their focus to short-term incentives for increasing crowd accuracy and motivation.

What did they find?

The experiment found that higher pay for crowdworkers did not always result in more work or higher-quality work, and in some cases even reduced labelling accuracy. In addition, crowdworkers favoured repetition over variation in tasks, as repetition allowed them to complete more tasks in less time, and they were more likely to respond to feedback from other crowdworkers than from experts.

Why is it relevant?

Thanks to new data sources such as social media posts and drone footage, aid agencies and local authorities have access to more data than ever before when assessing and coordinating their responses. In humanitarian emergencies, organisations increasingly rely on crowdsourced data analysis from volunteers. However, volunteer engagement is often short-lived. This can have problematic consequences for long-term recovery: as participation in crowdsourcing efforts declines, it becomes harder to keep maps and information up to date and to allocate resources effectively.