Three Things You Can Do Before Launching Your Next Experiment

Experimentation Works
4 min read · May 24, 2019


Image from rawpixel.com

In April 2018, Experimentation Works was launched, bringing together public servants from across the Government of Canada to deepen our knowledge of experimentation, inspire us to learn by doing, and encourage us to share the lessons learned along the way. Since then, a number of departments have launched their first experiments, testing interventions in forms, informational guides, and other channels. We have learned a great deal over the past year, but the journey doesn’t end here. We recognize that there are many more opportunities where we can leverage experimentation to improve our policies, programs, and services for Canadians. So, how might we lay the groundwork for our upcoming experiments as we grow and mature further in this area?

In this post, I will share with you three strategies that my colleagues and I have found useful at the early exploration/design phase of experimentation (in addition to conducting research reviews and other desk research).

1. Leveraging existing data sources

Data infrastructure is a key enabling factor for experimentation. Before designing your next experiment, ask yourself whether you can use data that you already have to better understand your experimentation context. Designing an A/B test on your website? Consider checking existing analytics for the website to learn about baselines and existing trends. Using administrative data to track outcomes in an experiment? Consider getting familiar with existing data before running your experiment. Many departments have a wealth of data sources that can be explored at an early stage to help you refine your research questions and inform your experimentation initiatives.
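As a rough sketch of this idea, suppose your existing web analytics show how many visitors currently complete a form. That baseline rate lets you run a quick power calculation (the standard two-proportion normal approximation) to see how much traffic an A/B test would need. The function names and figures below are purely illustrative, not drawn from any department’s actual data.

```python
import math
from statistics import NormalDist


def baseline_rate(completions, visits):
    """Baseline conversion rate from existing analytics counts."""
    return completions / visits


def ab_sample_size(p_baseline, min_lift, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-proportion A/B test.

    p_baseline is the rate observed in existing analytics; min_lift is
    the smallest absolute change you want to be able to detect.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = p_baseline, p_baseline + min_lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / min_lift ** 2)
    return math.ceil(n)


# Hypothetical analytics: 2,000 of 10,000 visitors completed the form.
p0 = baseline_rate(completions=2_000, visits=10_000)
print(p0)                                  # baseline completion rate
print(ab_sample_size(p0, min_lift=0.05))   # visitors needed per group
```

Even a rough calculation like this, run against data you already have, can tell you early on whether your website gets enough traffic to detect the effect you care about, before you invest in designing the experiment.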

2. Using qualitative research and design thinking approaches to better understand clients’ perspectives

Before designing your next experiment, consider conducting qualitative research (such as interviews) to understand clients’ journeys and identify their pain points. If you have the expertise in your department, you can leverage design thinking methods such as journey mapping to better capture clients’ experiences and understand their perspectives. This deeper understanding can help you design better interventions that address the key barriers and pain points creating bottlenecks for clients navigating a given system or process. Using such a mixed-methods approach (combining qualitative and quantitative research methods) can increase the chances that your intervention will be effective. For a concrete example, read about Employment and Social Development Canada’s experience with this approach here.

You may also consider getting feedback from a small group of clients on the interventions that you’re designing (e.g., the new form or guide that you plan to use in your experiment) before running your experiment. At Immigration, Refugees, and Citizenship Canada (IRCC), we created a dedicated usability space in our local Immigration Office (where clients come in to access a range of services such as writing their citizenship test or completing their permanent residence ‘landing’) where we can talk to clients and invite them to test out new products and other potential changes to service offerings. It’s like opening a window to our clients’ experiences, and it reminds us constantly that we are not the end user; clients’ perspectives may be vastly different from our own, and considering their perspectives at early design stages can be invaluable. If you’re unable to talk to clients directly, consider incorporating the perspectives of those who work closely with clients, such as front-line service employees and call center agents who respond directly to client inquiries.

3. Conducting a ‘pre-mortem’ to maximize the learning potential

Every experiment is a learning opportunity, whether the impacts of the interventions tested are positive, negative, or null. However, you can design your experiments to maximize that learning potential, especially when you’re working with more costly, resource-intensive interventions. Consider conducting a pre-mortem exercise with the team before running your experiment. To do this, the team imagines one or more of the following scenarios unfolding and brainstorms all the possible reasons each scenario might happen:

  • What if the implementation of your experiment falls through or is not carried out properly (i.e., according to the experimentation protocol you’ve designed)?
  • What if the experiment is implemented successfully, but you learn that the impacts of the interventions tested are negative or null?[i]

A pre-mortem allows you to unpack and dig deeper into the potential reasons that your experimentation initiative may not work out as expected. This type of thinking, although counterintuitive to most of us, can help you address some of the limitations of your experimental design and reduce the chances of unsuccessful outcomes. It can also help you discover strategies for learning from potential negative or unexpected results. For example, it can shed light on additional data that you may want to collect as part of your experiment to maximize its value regardless of the outcome. It can also highlight key questions about your operational environment that you may need to ask before jumping into implementation.

Using some of these strategies to lay the groundwork for experimentation can enable us to move forward with steady steps and maximize our learning. Looking forward to many more years of Experimentation Works! There are exciting opportunities ahead, and many more lessons for us to learn as we continue to build a solid evidence base together on what works and what does not.

Have you used some of these strategies in your experimentation projects already? Do you have other strategies that you would like to share?

We would love to hear from you!

Post by Monica Soliman Ph.D., Behavioural Scientist at Immigration, Refugees, and Citizenship Canada (EW Expert)
monica.soliman@cic.gc.ca

[i] Personally, I don’t consider this second scenario to be a ‘failure’. When it comes to experimentation, finding out what does not work is a feature, not a bug, in the process, as long as we’ve done our due diligence in the exploration and experiment design stages.

This article is also available in French: https://medium.com/@exp_oeuvre



Written by Experimentation Works

Showcasing experimentation across the Government of Canada: https://linktr.ee/GCExperimentation | Follow our journey en français: https://exp-oeuvre.medium.com/
