Localising AI for crisis response
Putting power back in the hands of frontline humanitarians and local communities.
This report documents the results of a year-long project to design and evaluate new proof-of-concept Collective Crisis Intelligence tools. These are tools that combine data from crisis-affected communities with the processing power of AI to improve humanitarian action.
We developed two prototype collective crisis intelligence tools:
- NFRI-Predict: a tool that predicts which non-food relief items (NFRIs) are most needed by different types of households in different regions of Nepal after a crisis.
- Report and Respond: a French language SMS-based tool that allows Red Cross volunteers in Cameroon to check the accuracy of COVID-19 rumours or misinformation they hear from the community while they’re in the field, and receive real-time guidance on appropriate responses.
Both tools were developed using Nesta’s Participatory AI methods, which aimed to address some of the risks associated with humanitarian AI by involving local communities in the design, development and evaluation of the new tools.
The project was a partnership between Nesta’s Centre for Collective Intelligence Design (CCID) and Data Analytics Practice (DAP), the Nepal Red Cross and Cameroon Red Cross, IFRC Solferino Academy, and Open Lab Newcastle University, and it was funded by the UK Humanitarian Innovation Hub.
We found that collective crisis intelligence:
- has the potential to make local humanitarian action more timely and appropriate to local needs.
- can transform locally-generated data to drive new forms of (anticipatory) action.
We found that participatory AI:
- can address several critiques and limitations of AI, as well as help to improve model performance.
- helps to surface tensions between the assumptions and standards set by AI gatekeepers versus the pragmatic reality of implementation.
- creates opportunities for building and sharing new capabilities among frontline staff and data scientists.
We also found evidence that collective crisis intelligence and participatory AI can help increase trust in AI tools, although more research is needed to untangle the factors responsible.
Our project demonstrates that it is possible to build AI with local infrastructure, local data and local talent, and to build AI that responds to local values and priorities. However, much more investment and experimentation is needed to realise a future where locally developed and owned AI becomes ‘business as usual’.
This report is published alongside two technical reports, which contain detailed methodologies for both projects, as well as links to GitHub repositories containing the code, prototypes and technical specifications, with recommendations for future development: