The way to make robots co-operate is to learn from the animal kingdom, says Edmund Hunt - and working together will allow them to tackle otherwise impossible tasks. Swarm robotics is ready to come of age.
A meandering ant happens, by chance, upon a picnic basket: hundreds of her nestmates will follow, making up a transitory, self-organised assembly whose work ultimately benefits the entire colony. That’s the inspiration for a new branch of engineering – swarm engineering – which will upend attitudes and practices that have been with us since the start of the Industrial Revolution.
The appeal of swarm engineering lies in the transformative capabilities that emerge when large groups of simple agents are given free rein. We usually think of robots as solitary automatons, designed to perform a specific task in a specific way in a specific place. Rows of robotic arms on a factory production line, replacing rows of human workers. In future, we might expect a somewhat more humanoid robot with a few more capabilities – but just as predictable.
But that’s just one way of designing and deploying robots, and perhaps not even the one with the most potential. The more exciting prospect is that over the next decade we will start to see large swarms of robots cooperating.
Our scientific understanding of collectives in biology – from cells to animal groups – has greatly advanced over the past 20 years. The underlying theory of swarms, increasingly embraced by engineers as a powerful problem-solving framework, is now meeting enabling technologies in low-power sensors and computation, advanced batteries and manufacturing. Swarm robotics is at a tipping point: it will become part of our reality over the next ten years.
There’s more to swarming than many hands making light work: this isn’t about just having lots of robotic arms working at the same time. What distinguishes swarms from other multi-robot systems is their self-organisation, a mechanism which is pervasive in biology (Camazine et al. 2003). Unlike most technological systems, where control is exerted from the top down by an external user, self-organised systems have no leaders to direct what each member is doing at each point in time.
Instead, many chance interactions between component parts at a low level of organisation – the individual ants at our picnic basket, for example – lead to emergent structure at a higher level – the swarm of ants feasting on our cakes and sandwiches. The nature of these interactions is specified only by simple rules, based on locally available information – the behaviour of nearby neighbours, the state of the immediate environment – and not on the global pattern.
At the heart of self-organisation is feedback. This allows rapid adaptation to changing conditions. To continue with the picnic basket example, an ant’s chance encounter with food will lead it to recruit others nearby, who will recruit others in turn. This is positive feedback: a tiny scrap of local information escalates into a colony-wide change in foraging location. As the food gets eaten up, though, and as there are fewer colony members left to recruit, the number of ants on the patch stabilises. This is negative feedback, constraining an otherwise runaway process. You can imagine how a swarm of robot harvesters could use the same principles to pluck fruit from a tree.
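The interplay of positive and negative feedback described above can be captured in a toy model. This is a minimal sketch, not the author's model: the colony size, recruitment, eating and abandonment rates are all illustrative assumptions. Forager numbers first escalate through recruitment, then stabilise and decline as the food runs out.

```python
# Minimal model of ant recruitment at a food patch.
# Positive feedback (recruitment) is capped by negative feedback
# (finite nestmates, dwindling food). All parameters are illustrative.

def simulate(colony=100, food=500.0, recruit_rate=0.3,
             eat_rate=0.5, leave_rate=0.4, steps=80):
    foragers = 1.0            # one ant has chanced upon the picnic basket
    history = []
    for _ in range(steps):
        idle = colony - foragers
        # positive feedback: recruitment scales with current foragers,
        # limited by how many idle nestmates remain to be recruited
        gained = recruit_rate * foragers * (idle / colony)
        # negative feedback: ants abandon the patch once food is gone
        scarcity = 0.0 if food > 0 else 1.0
        lost = leave_rate * foragers * scarcity
        foragers = min(colony, max(0.0, foragers + gained - lost))
        food = max(0.0, food - eat_rate * foragers)
        history.append(foragers)
    return history

h = simulate()   # rises steeply, plateaus near the colony size, then falls
```

No single ant holds the plan: the colony-wide pattern emerges purely from the local gain and loss terms.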
Information does not have to be communicated directly between agents: it can also be mediated via the environment. This is known as stigmergy, a term deriving from the Greek words for “mark” and “action”. Consider termites building their gigantic mounds. There is no termite foreman telling the other termites what to do, nor do worker termites discuss how to get the job done. Rather, one termite lays down a piece of building material; a nestmate that happens to pass the partially-built wall is then stimulated to add to it. Construction robots could work according to similar principles, without needing to know their exact position within a grand plan.
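Stigmergy can be sketched in a few lines of code. In this illustrative toy (the grid size, deposit rule and step count are assumptions, not from any real termite model), agents land at random and deposit material only where material already exists nearby – the environment itself carries the building signal, and a structure grows from a single chance deposit.

```python
# Stigmergy sketch: deposits stimulate further deposits nearby.
# No agent sees the global plan; the part-built structure is the signal.
import random

random.seed(42)   # fixed seed so the toy run is repeatable

def build(width=21, steps=2000):
    grid = [[0] * width for _ in range(width)]
    grid[width // 2][width // 2] = 1          # a first, chance deposit
    for _ in range(steps):
        x, y = random.randrange(width), random.randrange(width)
        # local rule: deposit only if a neighbouring cell already
        # holds material
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        if any(0 <= i < width and 0 <= j < width and grid[i][j]
               for i, j in neighbours):
            grid[x][y] = 1
    return grid

mound = build()   # a connected structure grown from the central seed
```

Nothing in the rule mentions the final shape; the mound is an emergent consequence of many local encounters with the environment.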
So feedback and stigmergy can help us design swarms of robots that get jobs done in quite a different way to the top-down, carefully planned approaches we’re used to (Brambilla et al. 2013; Hamann, 2018). But why is this approach any better? Well, swarms have extremely valuable characteristics that are lacking in a conventionally organised robot team.
The first benefit of the swarm approach is scalability. The interaction ‘rules’ can stay the same regardless of the size of the swarm; and because the interactions are local, there is no need to keep all of the robots in perpetual contact with a base station.
Second, the reliance on interactions with the local environment means that swarms are highly flexible and able to respond quickly to changing work needs. Monolithic robots are famously incapable of adapting to even minor changes in their environment.
Third, swarms are robust because there is a profound equality among group members. Leaders only exist because of what they know, such as the location of a picnic basket, and for as long as that information is useful. That means individuals can be lost with little long-term cost to swarm function, which will be very helpful in remote or challenging environments.
Finally, there’s a more general benefit. The physicist Philip Anderson said, “more is different” (Anderson, 1972). Chemistry cannot be predicted from the properties of individual atoms. A single honeybee’s waggle dance doesn’t let us figure out where the colony will build its hive. And the capabilities of a single flying robot don’t reveal how it might prioritise search-and-rescue locations with a fleet of its peers. Even though the individual robots may be simple, the interplay of their interactions and feedback effects can result in surprising properties that amount to more than the sum of their parts.
The flying robot example is pertinent, because we are most likely to see robot swarms in the air (and water) before we see them on land: these are easier environments for robots to navigate. So we can already envisage swarms of flying drones being used in agriculture to monitor crop growth, identify damage or weeds, perhaps calling in nearby units to spray herbicides or replant seeds eaten by birds.
On land, of course, there are countless obstacles to navigate around – or get stuck on – but there are still environments where swarms will come into their own. Mapping and locating damage to subterranean pipes, for example, is challenging because it is hard for an operator to communicate with a conventional robot in an enclosed pipe; but robot swarms – using local, peer-to-peer communication to keep track of each other – can use ultrasound to locate and map cracks as soon as they appear. The dawn of ‘self-healing’ cities is nigh – and an end to costly and disruptive roadworks.
Navigating less controlled environments will demand more sensors (and hence more weight) and more planning (more computing power). This is a challenge to swarm engineering’s emphasis on individual simplicity (Hamann, 2018). Again, though, there are many animals we can look to for inspiration, from jumping fleas to scuttling cockroaches to sticky-footed geckos. Once locomotion has been mastered, ground-based swarms could work alongside firefighters to map out burning buildings looking for survivors. They could then lead people to safety, lighting up a safe exit route like the floor lights on a plane. Closer to home (literally) we may see our houses constructed brick-by-brick by a swarm of robot builders; or our groceries efficiently picked and packed in the warehouse by autonomous swarms, before delivery to our door by fleets of self-driving electric vans.
One key challenge for deploying any technology to the field is energy supply, especially for small, lightweight robots. But battery technology continues to progress, and as low-power electronics continue to advance, energy can usefully be collected from sunlight, or through microbial fuel cells – consuming, say, spilt oil or harmful algal blooms. Another nature-inspired approach might also help: just as social insects transfer food from mouth-to-mouth, well-charged robots could transfer energy to their ailing neighbours (Hamann, 2018).
This is unfamiliar but not necessarily expensive technology, particularly if we can use techniques like 3D printing with cheap raw materials. Remember that we are trying to build large numbers of relatively disposable units, not a few sophisticated and expensive machines: it shouldn’t really matter if a few fail. The swarm will still have to know how to deal with malfunctioning units, however – whether by ignoring or deactivating them – and to guard against malicious agents that may deliberately try to steer them in the wrong direction.
“Dead” robots will raise issues of their own: for example, it would be painfully ironic if robots deployed to safeguard the environment become litter or e-waste. One solution could be to build robots that are (mostly) biodegradable, safely decomposing rather than having to be rounded up and disposed of (Rossiter et al. 2016). That would be especially appealing for disaster response scenarios where urgent mapping or clean-up is required.
But as the research and development continues, getting the hardware right might turn out to be the easy part. The tricky part will be specifying the interaction rules that the robots must follow – particularly if we want to see surprising (and beneficial!) behaviours emerge from them in collective use. How do we “engineer emergence”, so that robot swarms dependably do what we want, but in ways we might not anticipate? (Winfield et al. 2004)
Once more we can follow nature’s lead. Biological evolution is a profligate process of experimentation: chance genetic mutations give rise to different individual-level behaviours. Over time, these give rise to significant behavioural diversity – but only some of this will be beneficial. Think of how schools of fish evade predators: that’s the result of natural selection picking out individual behaviours and local interaction rules that work effectively at the scale of the school.
We can simulate the evolution of swarm robot behaviours in a similar way, working through countless generations and scenarios, before deployment into real-world environments (Floreano & Mattiussi, 2008). Indeed, one approach to guiding the artificial evolutionary process that may prove particularly effective is to select for swarms that are good at collectively processing information (Walker & Davies, 2013), rather than just selecting for the particular task at hand. This may favour their adaptability in unpredictable environments, just as human general intelligence confers great advantage on our otherwise rather feeble bodies.
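The evolutionary approach can be illustrated with a deliberately tiny example. Here we "evolve" a single interaction parameter – a cohesion gain for a one-dimensional aggregation task – using a basic select-and-mutate loop. The task, the fitness function and all the settings are illustrative assumptions, far simpler than the simulations cited above, but the structure (evaluate, select, mutate, repeat) is the same.

```python
# Toy evolution of a swarm interaction rule: a single cohesion gain k
# is evolved for a 1-D aggregation task. All settings are illustrative.
import random

random.seed(0)

def fitness(k, agents=20, steps=30):
    # agents start scattered; each moves toward the group mean with
    # gain k, plus noise; a tight final grouping scores highly
    pos = [random.uniform(-10, 10) for _ in range(agents)]
    for _ in range(steps):
        mean = sum(pos) / agents
        pos = [p + k * (mean - p) + random.gauss(0, 0.1) for p in pos]
    return -(max(pos) - min(pos))          # smaller spread = fitter

def evolve(pop_size=20, generations=15):
    pop = [random.uniform(0, 1) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 4]              # selection
        pop = [random.choice(parents) + random.gauss(0, 0.05)
               for _ in range(pop_size)]               # mutation
    return max(pop, key=fitness)

best = evolve()   # a gain that reliably pulls the swarm together
```

Note that fitness is evaluated on noisy simulated trials, just as real evolutionary robotics must cope with stochastic environments; a gain of zero (no cohesion rule at all) scores far worse than the evolved one.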
A key concept in the domain of swarms is that of collective intelligence – that many individuals, cooperating as a group, can gain rapid, accurate insights into the true state of a changing world (Seeley, 2010). Such insight is the bedrock of good decision-making. Human and swarm intelligence will undoubtedly interact to help us make the right choices in the face of challenging problems, like how and where to expend our limited resources as we try to cope with the effects of a changing climate. First, though, progress will have to be made in another area: human-swarm interaction. Swarms will be mostly autonomous, but they will also need to respond to high-level directions from users – perhaps to move from one area to another or to switch priorities.
As swarm sizes ramp up from tens to hundreds and then thousands, it will become increasingly challenging for an operator to keep track of what is going on (Kolling et al. 2016). Organising control into sub-swarms may make it easier to track what large numbers of robots are doing on a macroscopic level, without having to worry about the micro-level details. Just as in thermodynamics, where the state of a gas comprising countless particles is summarised into values like temperature and pressure, estimating and communicating summary statistics for the swarm to a remote user will be a key challenge to meet. For smaller swarms, on-site interaction may be possible through voice or gesture commands.
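The thermodynamics analogy suggests what such macro-level telemetry might look like in practice. This is a minimal sketch under assumed conventions (robots report 2D positions; the chosen statistics are a centroid and a root-mean-square dispersion): instead of streaming every robot's state, the operator receives a handful of summary numbers.

```python
# Macro-level swarm telemetry sketch: summarise many robot positions
# into a centroid and a dispersion, much as temperature and pressure
# summarise countless gas particles. Format and statistics are
# illustrative assumptions.
import math

def swarm_summary(positions):
    """Reduce a list of (x, y) robot positions to centroid + spread."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    # root-mean-square distance from the centroid: one number
    # standing in for the micro-level detail of every robot
    dispersion = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                               for x, y in positions) / n)
    return (cx, cy), dispersion

centroid, spread = swarm_summary([(0, 0), (2, 0), (0, 2), (2, 2)])
```

A rising dispersion might tell the operator a sub-swarm has split off, without any need to inspect individual units.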
Finally, we have to think about how these swarms will be received by the general public. The word “swarm” itself carries some baggage, with connotations of pesky insects and science-fictional nightmares. It’s possible that people will find swarms of robots unsettling or uncanny, at least in their early encounters with them.
But the basic idea here is the same as in many benign, even beautiful, systems seen in nature: graceful schools of fish, say, or synchronised flocks of birds. A murmuration of starlings is one of the most life-affirming, humbling sights one can see. Swarm technology will be transformative: it gives us new power to meet humanity’s growing needs, such as food production, reliable urban infrastructure, exploration and monitoring of our oceans, or even to explore space.
It could also chime with the spirit of this century. The impulse for top-down control is seen in society as in traditional engineering: many today express a preference for strong leaders over quarrelsome legislatures. But embracing the power of the collective, free-flowing mind, over the regimented thoughts of an individual, is surely the way to a renaissance in both engineering and society. Between growing strains from urban complexity and population, and opportunities to explore our planet and its place in the solar system, we need engineering solutions offering scalability, flexibility and robustness. Swarm robotics is ready to come of age.
Dr Edmund Hunt is a research fellow working on translating insights from collective animal behaviour into applications in swarm robotics. He is based at the Bristol Robotics Laboratory and the University of Bristol, and has a PhD in behavioural ecology and complexity sciences.
Anderson, P. W. (1972). More is different. ‘Science’, Vol. 177, No. 4047, pp.393-396.
Brambilla, M., Ferrante, E., Birattari, M. and Dorigo, M. (2013). Swarm robotics: a review from the swarm engineering perspective. ‘Swarm Intelligence’, Vol. 7, No. 1, pp.1-41.
Camazine, S., Deneubourg, J. L., Franks, N. R., Sneyd, J., Bonabeau, E. and Theraulaz, G. (2003). ‘Self-organization in biological systems’. New Jersey: Princeton University Press.
Floreano, D. and Mattiussi, C. (2008). ‘Bio-inspired artificial intelligence: theories, methods, and technologies’. Cambridge, MA: MIT Press.
Hamann, H. (2018). ‘Swarm robotics: A formal approach’. Springer International Publishing.
Kolling, A., Walker, P., Chakraborty, N., Sycara, K. and Lewis, M. (2016). Human interaction with robot swarms: A survey. ‘IEEE Transactions on Human-Machine Systems’, Vol. 46, No. 1, pp.9-26.
Rossiter, J., Winfield, J. and Ieropoulos, I. (2016). Here today, gone tomorrow: biodegradable soft robots. In ‘Electroactive Polymer Actuators and Devices (EAPAD) April 2016’, Vol. 9798. International Society for Optics and Photonics.
Seeley, T. D. (2010). ‘Honeybee democracy’. New Jersey: Princeton University Press.
Walker, S. I. and Davies, P. C. (2013). The algorithmic origins of life. ‘Journal of the Royal Society Interface’, Vol. 10, No. 79, 20120869.
Winfield, A. F., Harper, C. J. and Nembrini, J. (2004). Towards dependable swarms and a new discipline of swarm engineering. In ‘International Workshop on Swarm Robotics’, pp. 126-142. Berlin, Heidelberg: Springer.