[Image: a GPS tracker and a map being held by people]

Who was behind this experiment?

What was the experiment?

The experiment tested four different recommendation algorithms on the citizen science platform SciStarter, each designed to match users with the projects that best suit their interests and capabilities. The aim was to find out which recommendation algorithm would most increase user engagement with projects.

What did they find?

When measuring how users interacted with the recommended projects, the experiment found that three of the four algorithms performed as well as or better than the control group, which represented the status quo on SciStarter. This indicates that users engage more with SciStarter and its projects when they receive personalised recommendations. The experiment also found that an algorithm based on a technique called matrix factorisation increased user engagement on the platform the most, partly by recommending less well-known projects.
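To illustrate the technique mentioned above, here is a minimal sketch of matrix factorisation for recommendations, written in Python with NumPy. It is not SciStarter's implementation; the function name, the toy interaction matrix, and all parameters are invented for illustration. The idea is to factorise a user-project interaction matrix into low-dimensional user and project factors, then use the reconstructed matrix to score projects a user has not yet tried:

```python
import numpy as np

def matrix_factorise(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Factorise an interaction matrix R (0 = no interaction) into
    user factors U and project factors V so that R is approximately U @ V.T.
    Zero entries are treated as unknown and skipped during training."""
    rng = np.random.default_rng(seed)
    n_users, n_projects = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_projects, k))
    mask = R > 0
    for _ in range(steps):
        err = mask * (R - U @ V.T)        # error only on observed entries
        U += lr * (err @ V - reg * U)     # gradient step with L2 regularisation
        V += lr * (err.T @ U - reg * V)
    return U, V

# Hypothetical toy data: rows are users, columns are citizen science
# projects; values are interaction strengths (0 = never interacted).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

U, V = matrix_factorise(R)
scores = U @ V.T                # predicted affinity for every user/project pair
# Recommend, for user 0, the unseen project with the highest predicted score.
unseen = np.where(R[0] == 0)[0]
best = unseen[np.argmax(scores[0, unseen])]
```

Because the factors are learned from everyone's interactions, the model can surface projects a user has never seen but that similar users engaged with, which is one plausible reason such an approach can recommend less well-known projects.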

Why is it relevant?

Citizen science has been remarkably successful in advancing scientific knowledge. It has helped to speed up data analysis, enlarge sample collections, and sometimes even led to historic scientific breakthroughs. Citizen science platforms offer thousands of projects covering everything from astronomy to marine debris and linguistics. With such a wide variety on offer, citizen scientists can find it difficult to identify the projects that match their skills and interests, which can affect their motivation and satisfaction.

Follow SciStarter on Twitter and stay up to date: @SciStarter