Revisiting experimentation (and why we do what we do)
Later this week we’ll be posting about our second cohort of Experimentation Works, about which we’re very excited. First, though, we wanted to take a step back and revisit a question we’ve often talked about over the past few years: what exactly is experimentation?
Defining experimentation
There are a range of definitions used in various countries and by different international organizations, and we’ve tried to land on one that is simple and inclusive, while still reflecting the need for rigorous measurement that is at the heart of experimentation.
So when we talk about experimentation, we look for two things: a rigorous way to measure impact, and a way to compare multiple possible approaches. Many of you may think of randomized controlled trials, and they definitely count, but so do a range of other approaches, including quasi-experimental designs.
As importantly, good experimentation is paired with other approaches, such as ethnographic research and user-centred design, which we often refer to as exploratory: if you haven’t done the work up front to understand your problem and your users, you haven’t laid the foundation needed to run a successful experiment.
These exploratory approaches also help contextualise the results once the experiment produces them. In other words, if you run an experiment while in the dark about the broader context, you not only risk running a problematic trial, you also won’t be in a good position to interpret the evidence meaningfully and tie the results to a decision. We think the latter is particularly important in a public service context. The exploratory work will also help position the trial for replication, and will help others determine whether the results apply to their own environments (by comparing basic contextual information).
Experimentation can of course be useful whether you are continually improving the implementation of existing programs, or testing new, innovative ways of doing things to deliver better value for your users (in our case, Canadians).
Experimentation Works — an iterative experience
We think the above definition helps explain why we chose the projects we did for this second cohort of Experimentation Works: we knew it was important to have a healthy mix of exploratory and experimental projects, as well as participants who weren’t quite ready for those stages but who wanted to learn nonetheless as invited observers.
That definition might also offer a hint as to why we thought running a second cohort of EW was so important: we think of EW as an opportunity to work with colleagues across the Government of Canada, complementing the traditional central agency role of providing instructions on what needs to be done. The key element we continue to emphasize is that you can’t just talk about experimentation: you have to learn by doing, and continually get better.
And that was built into the design of EW: we started small, but had scalability in mind, aiming to grow bigger if the model proved effective.
We also wanted to use EW as a vehicle for piloting new features that could eventually scale across the wider system of the Government of Canada. One example is the beta release of an Experimentation Inventory, developed in partnership with our colleagues in Open Government. The platform is one of the tools we are user-testing with the EW2 cohort ahead of a wider release; it provides a space where teams can pre-register their research questions and plans, and then publish what they find.
The approach we took to the inventory’s release is what experimental government is all about. It is good science, reflecting many best practices: reliable data, transparency, sharing lessons learned, and admitting failure.
As we incorporated lessons learned from the first EW cohort, we knew there were many elements that could be evolved and built upon. For example, we moved to a Government of Canada-wide call for proposals; we bolstered the learning aspects of the experience and made them more regular; we increased the number of partners inside and outside our organization; and, importantly, we built in more time for the experimenting itself.
Still to come, later this week: our EW2 cohort!
Post by the TBS EW project team: Dan Monafu, Sarah Chan, Pierre-Olivier Bédard.
Article also available in French here: https://medium.com/@exp_oeuvre