What worked well and what didn't
Central to this project was our desire to be experimental and try something new. We want to be as open and transparent as possible about what we learned, to support others developing similar games.
Our general approach: iterative and evidence-informed
Because we were working at a high level of abstraction (designing a board game that emulates the innovation policymaking process), a large part of the work involved researching and understanding what innovation policymakers do on a day-to-day basis, and identifying how we could transform this into an engaging set of basic rules – what game designers call ‘game mechanics’. However, this approach posed a few challenges.
Although a literal interpretation of policymaking processes helped us to develop the game’s theme and narrative, it limited our ability to turn these processes into engaging game mechanics. After the first few design iterations we decided to take a more flexible approach, creating prototypes that presented the policymaking process as a playful system. We used game mechanics that were more engaging to a general audience, such as collaborative problem solving, which aligned with parts of the policymaking process. This allowed us to swap between game mechanics based on players' responses.
Choosing the right level for our simulation
One of the first issues we encountered when approaching the game was that policymakers are dealing with increasingly complex issues in increasingly unpredictable environments. This means that there is no clear mechanism guaranteeing that policies will have the impact policymakers predict.
The world is a complex and open-ended system, so even a highly credible simulation is full of assumptions and simplifications. This becomes a particular problem when trying to turn the world into a game. One of the most important aspects of a game is how it communicates why and how a player’s action causes a certain reaction (the 'feedback mediation'). This mechanism is crucial for the player's experience of the game, allowing them to plan and do better the next time they play, or in other words, to learn while playing.
The real world tends to provide us with ‘fuzzy’ feedback that is unclear or even contradictory. So, giving policymakers a simulation that tells them precisely how their selected policies have contributed to a measurable effect on a broad phenomenon, such as reduced air pollution, would not be realistic. As a result, we decided against modelling a simulation at the societal level and instead focused on exploring certain aspects of the innovation policymaking process.
How the game evolved
Because the game was developed following an iterative design approach, several prototypes and versions of the game were created over six months. It was hugely helpful to test them with a range of stakeholders so that we could make the following important improvements to the game’s design.
In one of the first versions of the game, the player had to collect four of the same policy proposal cards to present a policy. Each subsequent card in a stack got more expensive – that is, you had to use an even larger number of game tokens representing currency or research budget. As a result, the last policy proposal card in the stack was very expensive.
Our original idea was to show that, at times, policymakers need to spend significant resources to reach stakeholders who might have good ideas but are hard to reach. The problem with this system was that it made the game more about luck than skill: depending on how the cards were shuffled, a player’s fourth card might end up at the bottom of the pile, making it hard to ‘prove’ which policy option is the most effective. In ‘play tests’, some teams who did very well in the first round did badly in the second because of the pricing ‘ratchet’, potentially sending players the wrong message about the benefits of collaboration and of reaching out to other stakeholders within the game.
We addressed this problem by introducing 'network' cards, which allowed us to vary the ‘price’ of the policy proposal cards more convincingly. The price fluctuated based on how many network cards a player had, so they could now use their skills to collect the right number of network cards to make ‘buying’ policy proposal cards cheaper. This shifted the game’s balance from luck towards skill.
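To make the idea concrete, here is a minimal sketch of how a network-card discount could work. The base price, discount per card, and price floor are illustrative assumptions, not the game's actual values.

```python
# Hypothetical sketch of the network-card pricing mechanic: the cost of
# the next policy proposal card falls as a player collects more network
# cards. All numbers below are assumptions for illustration.

BASE_PRICE = 8   # assumed base cost, in budget tokens
DISCOUNT = 2     # assumed discount per network card held
MIN_PRICE = 1    # the price never drops below one token

def proposal_price(network_cards_held: int) -> int:
    """Price of the next policy proposal card for this player."""
    return max(MIN_PRICE, BASE_PRICE - DISCOUNT * network_cards_held)
```

Under these assumed numbers, a player with no network cards pays 8 tokens, while a well-connected player with three cards pays only 2 – rewarding the skill of building a network rather than the luck of the shuffle.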
Making it engaging
After building network cards into the game, we needed a mechanism that would provide an engaging core to the game while still being rooted in skill. We explored a number of mathematical problems and puzzles to help us.
The two mathematical problems that best achieved this were ‘set packing’ and ‘maximum cover’. Set packing is the act of making particular combinations from given pieces, such as cards, while maximum cover is the act of covering as much ground as possible with the fewest pieces.
By distributing network cards among themselves, the players had to work together to get the right combination of cards to get maximum value.
Not only did this encourage more conversation and interaction, it also gave the game a strong problem to solve at its centre: to juggle contacts and resources to get as much evidence out of the world as possible, played out through mathematical-type puzzles. During play testing this system worked well because players understood the game better each time they played, which improved their performance overall.
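A toy version of the maximum-cover puzzle shows the kind of reasoning players do. The card names and the evidence sources they reach are invented for this sketch; the game's real cards and values will differ.

```python
from itertools import combinations

# Illustrative maximum-cover puzzle in the spirit of the game's card
# mechanic: each 'network' card reaches a set of evidence sources, and
# a team wants to cover as many sources as possible with a fixed hand
# size. Card names and sets here are invented for illustration.

cards = {
    "minister":   {"A", "B"},
    "university": {"B", "C", "D"},
    "industry":   {"D", "E"},
    "ngo":        {"A", "E", "F"},
}

def best_hand(cards, hand_size):
    """Exhaustively find the hand of `hand_size` cards that covers the
    most evidence sources (fine for a handful of cards)."""
    return max(
        combinations(cards, hand_size),
        key=lambda hand: len(set().union(*(cards[c] for c in hand))),
    )
```

With the sets above, pairing the "university" and "ngo" cards covers all six evidence sources, whereas any other pair covers at most four – exactly the kind of trade-off players debate when distributing network cards among themselves.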
Playing with the ‘rhetoric’
We used what game designers call ‘rhetorics’ to develop the game’s design, based on underlying mechanics rather than the theme itself. Rhetorics is a way of persuading players of the game’s narrative through rule-based representations and interactions. We looked at potential mechanical changes, what these expressed about the innovation policymaking process, and whether they were relevant.
The first design had no set solution. We cannot safely say there is a predetermined solution for each societal challenge, such as air pollution or congested public health systems, which could act as the game’s goal or objective. However, we know that the best policymakers collect enough high-quality evidence to help them lean towards one solution rather than another.
We also wanted to push against the idea that ‘policymakers know best’. To do this, we introduced a mechanism that allowed there to be an achievable solution each time the game is played. However, the ‘correct’ solution is redefined with each play and not dictated by any of the players.
We achieved this through the mechanism of a ‘shuffling algorithm’: every time the game is played, the cards are reshuffled and reordered so that there is a ‘set’ policy solution to the challenge. But there can be a different solution every time the game is played. By using their skills and collaborating, players can uncover the pattern of cards that is created by the algorithm for that specific play of the game.
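The idea behind the ‘shuffling algorithm’ can be sketched in a few lines. The policy card names and the size of the hidden solution are assumptions for illustration only.

```python
import random

# Minimal sketch of the 'shuffling algorithm' idea: each play, the
# deck is reshuffled and a subset of policy cards is secretly
# designated the 'set' solution, so the winning combination differs
# from game to game without being dictated by any player. The card
# names and the solution size are hypothetical.

POLICY_CARDS = ["tax credit", "grant scheme", "regulation",
                "procurement", "skills programme", "open data"]

def deal_new_game(rng: random.Random, solution_size: int = 3):
    """Reshuffle the deck and pick this game's hidden solution."""
    deck = POLICY_CARDS[:]
    rng.shuffle(deck)
    hidden_solution = set(deck[:solution_size])
    return deck, hidden_solution
```

Because the hidden solution is re-drawn from the shuffled deck each game, players must collaborate to uncover the pattern afresh every time, rather than memorising a single ‘correct’ answer.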
In one of the later play tests, a player suggested having private and public goals to balance different policymaking priorities (country, organisation or department, for instance). Framing the game mechanics like this would allow individual players to win at the expense of the wider system, discouraging collaboration and distracting players from keeping a clear sense of public good at the heart of their objectives. The game is ultimately about teamwork: teams can win only through good communication and collective planning.