Co-designing learning for evidence use and engagement

Creating innovation learning programmes is a delicate choreography. Our partners build capabilities and apply that knowledge while the world continues around them. Policies change, project goals shift, personnel flow in and out. In addition, many initiatives aimed at improving research-policy engagement lack clarity of aims, understanding of decision-making contexts, or responsiveness to current evidence use and engagement needs and practice.

Encouraging behaviour change meant we needed to ensure that the learning journey was easy, accessible, social and timely. It also required a level of pragmatism, responsiveness and contextualisation to ensure learning was fit for purpose and aligned with the goals of decision-makers. We also needed to make sure it was situated within the complexities of policy making in government and inclusive of the wide range of understandings of what evidence and expertise mean, to whom and for what purpose.

What did we do?

We wanted to put users at the centre of our research, so we adopted a co-designed approach throughout. Increasingly, co-approaches are seen as a promising way to promote inclusivity, distribute accountability and unearth richer insights when seeking to make partnership working meaningful. They are also seen as a promising means of supporting context-specific evidence use and knowledge mobilisation practices through the collaborative generation of evidence. In the pilot, we were able to combine emerging evidence on “what works” with a design thinking perspective, which allowed us to iterate, adapt, learn and evolve over the course of the programme.

It was essential that we use a collaborative approach to harmonise the programme plan with the objectives and realities of our diverse stakeholders and so ensure the relevance of the learning. We delivered 10 half-day workshops to 30 policymakers from two teams at DLUHC, with six-week design sprints between workshops that included 10 co-design sessions with the policy teams. Each interaction aimed to determine which methods, activities and engagement approaches worked well, could be used practically within the participants’ working environment and aligned with stages of the ROAMEF policy cycle. This agile approach allowed us to respond to context-specific barriers to evidence use and to adapt learning to the key needs and situations of the teams and their live policy challenges.

“I approached the learning pilot unsure of how relevant it would be to our day-to-day work, but regularly we would address topics that would be issues we were working on that week/month. I remember this in particular for the advisory group work and theory of change work. The relevance of the topics has been impressive – and has made the pilot even more accessible and helpful.”

Programme participant

About co-designed projects

To help demystify the co-design process, we have broken down our approach by elements of the pilot: the call for applications, the identification of learning objectives, the creation of learning environments, and the content, methods and expertise shared throughout.

Nesta worked closely with the CSA office to set up the call for participation in the pilot. This included jointly constructing goals and objectives, sharing the callout with the department and co-designing an application process that ensured senior sponsorship and an inclusive application review. Of the 10 policy teams that applied, two were invited to take part based on their motivations and their opportunities to engage with evidence and expertise through live policy work.

  • The Regeneration Group sought to leverage new methods, relationships and knowledge to understand how investment in regenerating the built environment can help people to prosper.

  • The Partnerships for People and Place team sought to enable policy co-creation at a community level by testing the hypothesis that better coordination within and between central government and local places can improve efficiency and outcomes of place-based policy.

Learning objectives were based on conversations and assessments undertaken with policy teams and team leaders. In addition, collaborative co-design sessions took place with nominated team representatives ahead of each workshop. These allowed us to sense-check how learning objectives shifted alongside work priorities – and informed the methodologies, content and expertise used in workshop delivery. They also provided important feedback for future iterations of content and design and for evolutions of the programme structure.

The pandemic accelerated existing trends in how we work and live, providing both opportunities and constraints when designing within a digital learning environment. Our co-design sessions were vital not only for understanding where teams were at a given point in the policy cycle, but also for creating briefs for the design of a blended learning approach, which combined slide decks with bespoke, engaging online activities to bring the learning to life.

With being in a room together no longer an option, online whiteboards such as Miro have become a common way to work together. They provide intuitive mechanisms for interacting, with a logical flow and clear visual cues that allow people to focus on the discussion and task at hand. They also allowed us, as facilitators, to more easily ‘scaffold’ learning by linking activity components and resources into a single space that served as a knowledge management repository for the programme itself. While digital delivery allowed us to more easily draw on notes and digital resources to share live within sessions, we know we couldn’t fully read the room when not together in person. As hybrid working establishes itself, we will continue to add different ways of accessing and participating that are inclusive, diverse and engaging in both online and in-person contexts.

As we sought to co-design different learning content, one foundation stone was our use of language. We know that how evidence is defined and used varies depending on context, that perceptions of expertise can range from lived experience to that of a tenured academic and that within methodologies there can be a plethora of best-practice approaches to quality assurance. Meeting participants where they are within their own current conceptions and capabilities allowed us to co-define what ‘good’ and ‘good enough’ evidence use and engagement for decision-making looks like in practice. It also helped us to show the different epistemologies and identities that exist within academia and bring them to life for participants through activities such as Personas, Expert Advisory Group Design, and Engagement Simulation activities.

What did we learn?

The best solution is several solutions.

There is a wealth of information available when it comes to evidence types, sources, methodologies and the quality assurance underpinning them. Official government guidance on methods has been captured in key resources such as HM Treasury’s Magenta Book, Green Book and Aqua Book, while efforts to increase the usability of evidence have been supported through evidence frameworks, such as the UK Standards of Evidence, that help communicate the validity and trustworthiness of an evidence base.

Democratising the design process helped dissolve the boundaries between knowledge giver and receiver. Every key moment of the pilot incorporated honest discussions and feedback amongst facilitators and participants. “Action learning loops” ensured that we were acting on this new knowledge to refine upcoming sessions and evolve the programme as a whole. Participants also reflected on how the pilot supported the iterative process that is both policymaking and evidence use and generation.

“The pilot brought a better understanding of the importance of ROAMEF cycle stages. Taking a more proactive approach to using evidence and academic expertise and iterating through the ROAMEF cycle to help develop new and ongoing policy.”

Programme participant

Everyone involved was learning about how different approaches to problem framing, evidence searching and academic engagement could usefully inform policy and practice. This mutual learning was also reflected in our approach to methodologies for evidence use and expert engagement. For example, in our final workshop, focused on embedding and legacy, we presented Most Significant Change as a method of qualitative programme evaluation for situations where quantitative indicators may be hard to define, while also using it as an evaluation methodology for the learning pilot itself.

In many ways our role within the learning journey – both as Nesta and as individuals – was that of knowledge mobilisers. We helped to identify and synthesise relevant evidence and expertise from a range of different sources and disciplines and to co-design curricula that met the needs, capacities and capabilities of policy teams. This required a range of competencies that enabled us to cultivate empathy, understanding and relationship building between several disparate decision-making systems: those of government, academia and the third sector. It also required a wide understanding of methods for evidence use and generation, reflexivity about our own preferences, the ability to manage knowledge, and the patience and collaboration it takes to work through these elements.

Throughout the process, we encountered some of the same barriers and facilitators that shape the internal and external exchange of knowledge. These include the realities of aligning business processes across different institutions and the subjective and systemic biases that come into play within our own evidence use and engagement practice. Mobilisation takes people, not papers – creating a responsive, agile learning journey requires a high level of resource and time to build trust and mobilise the right evidence and expertise towards the right purposes.

What's next?

The pilot’s workshops became not the driver or centrepiece of the programme, but punctuation of a learning journey: moments to reflect, recharge and bring on board new thinking to tackle upcoming policy challenges. The workshops and the co-design sessions provided a space away from the day-to-day demands of the job, and a collaborative space for learning was frequently noted as a “next step” stemming from the pilot. A key opportunity was the protected time to step back, spend quality time with colleagues to create things together and think about the bigger picture and how diverse forms of evidence and expertise fit within it. Here the journey is the means, but also the end.

“The news article task, and other related activities helped me to take a step back and reflect on the programme and team's wider objectives. This has led to setting up regular reset days where the team comes together to mentally reset: celebrating progress and re-envisioning goals.”

Programme participant

At the heart of evidence use and engagement for policy is learning: from diverse forms of expertise, from different evidence types and sources, and about how to share learning to benefit others. CAPE has allowed us the time and space to reflect critically, collaborate, share insights on the ‘how’ of research-policy engagement, and creatively implement new approaches to fostering evidence capability. A co-designed learning journey allowed us to situate innovative practice within real journeys and motivations. It helped to make building evidence capability something that moves beyond the merits of one method over another and towards an inclusive determination of what the right evidence and expertise is, how it is defined and acquired, and how it supports diverse policy goals at different stages of the policy making process.

We see ourselves at the beginning of a broader learning journey within the sector. Additional investment and attention are needed to generate and share learning on what works best in research-policy engagement. Nesta will be developing and sharing additional insights stemming from the pilot to help inform future iterations of evidence capability learning programmes. We’ll also be creating a learning toolkit on how to improve evidence capabilities – set to launch in Autumn 2022 – to support the replication of learning journeys in practice. If you’re interested in learning more about the pilot programme or in learning for improving evidence-informed decision-making, reach out to [email protected].

The opinions expressed in this publication are those of the author. For more information, view our full statement on external contributors.

Author

Kuranda Morgan

Evidence Lead, Evidence and Experimentation

Kuranda worked as the evidence lead in the Evidence and Experimentation team, championing efforts to improve evidence use and academic engagement capability for Nesta and our partners.

Steve Lee

Steve Lee is the CAPE Learning Associate, based in the UK.