Citizens as sensors
There is growing excitement about crowdsourcing the process of collecting information about urban environments (see the BBC article on Tomorrow’s Cities).
Citizens can already use their smartphones to send pictures or texts to local authorities to report potholes, traffic jams, or illegal rubbish deposits. In the future, other measures could be collected automatically when users agree to turn their smartphone into a “sensor” for noise, air quality or road bumps. Simplifying to the extreme, this vast amount of data will then be cross-referenced, analysed and processed to allow better local services, prediction and intervention.
However appealing the idea of harnessing the immense digital potential of urban populations may be, treating citizens as sensors seems to me a terrible waste of human capabilities.
The advantage of having people detect and report events in their environment, rather than static or automatic sensors, lies in the unique human capacity to understand what is relevant to communicate. People, when they are intentionally communicating, think about who they are communicating with and how the information will be used, and are thus selective about what they say.
Communication works under a principle of relevance
When citizens report a pothole, they consider its significance for their neighbourhood and therefore use their local knowledge to signal a pothole that sits, for instance, at a particularly dangerous crossing; but they also try to second-guess what the local authorities deem important. In other words, most people will choose what to report by weighing what they believe are the priorities for their neighbourhood against the constraints of the people who will fix it. Communication, be it about potholes or anything else, works under a principle of mutual relevance and constitutes one of the hallmarks of human intelligence. The human ability to envisage others' points of view is at the origin of our capacity to communicate.
And here comes the crunch. In a configuration in which citizens are just moving around feeding an algorithmic system, how are they to know what is significant for the system and the local authorities? How exactly is the reported information being organised and analysed by the data management systems that collect the incoming texts, data points or pictures? What triggers an intervention? How many reports or events are needed? What is the relative weight of a human report as compared to the digital sensors they carry? Are some streets more critical? Is traffic an issue? Is the weather?
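To make the point concrete, here is a minimal sketch of the kind of hidden decision rule such a data management system might apply. Every number and name in it (the weights, the threshold, the street categories) is invented for illustration; no council publishes such a rule, which is precisely the problem: citizens feeding the system cannot see any of these parameters.

```python
# Hypothetical illustration only: the weights, threshold and street
# multipliers are invented. The point is that citizens reporting
# potholes have no way of knowing values like these.

HUMAN_REPORT_WEIGHT = 3.0   # assumed: one human report outweighs one sensor event
SENSOR_EVENT_WEIGHT = 1.0
STREET_MULTIPLIER = {"high_traffic": 2.0, "residential": 1.0}
INTERVENTION_THRESHOLD = 15.0

def should_intervene(human_reports: int, sensor_events: int,
                     street_type: str = "residential") -> bool:
    """Decide whether enough evidence has accumulated to dispatch a repair crew."""
    score = (human_reports * HUMAN_REPORT_WEIGHT
             + sensor_events * SENSOR_EVENT_WEIGHT)
    score *= STREET_MULTIPLIER.get(street_type, 1.0)
    return score >= INTERVENTION_THRESHOLD

# Two residents plus four automatic bump detections:
# on a busy road the score is (2*3 + 4*1) * 2.0 = 20.0 -> crew dispatched;
# on a residential street it is 10.0 -> nothing happens.
print(should_intervene(2, 4, "high_traffic"))  # True
print(should_intervene(2, 4, "residential"))   # False
```

The same six signals produce an intervention on one street and silence on another, and nothing in the reporting interface would tell the citizen why.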
While every local council will appreciate having a large, free, distributed and mobile source of data, one that is moreover capable of filtering and deciding intelligently, this comes at the cost of providing citizens with the tools to understand what is relevant.
As a cognitive scientist, I would argue that citizens can only become active partners in the process of making cities smart, and thus contribute to the collective intelligence of their environment, if they can understand and potentially define how the information they are providing is being assembled, interpreted and acted upon. Data visualisations, dashboards, and information on the cost of repairs and crew availability are all sources of knowledge about the city that would allow people to make intelligent, even expert, decisions about what is relevant to report. While this may be more than anyone would want to know about potholes, and in many cases citizens will be happy to automate the recording of their environment, total opacity will certainly not lead to increased smartness but only to exponential irrelevance.
For citizens to become active participants in managing and maintaining their urban environment, they cannot be relegated to the role of sensors.