Fears and assumptions
As we started planning for EW, we had a number of ideas and assumptions about how it would work. And, to be frank, we had a number of fears as well (which, really, are just negative assumptions).
One of our initial assumptions was that teams from departments across the government would be fully willing and able to fill out our proposal template, which asked for three things: their proposed research question, how they wanted to structure their experiment to answer that question, and the capacity their team had to actually run it. We could then select the best proposals and bring in experts to work with the chosen teams, ironing out details and refining ideas as they went.
Our original fears were twofold: first, that this process would work so well that we would be inundated with proposals from departments wanting to run experiments; and second, that we wouldn't have enough experimental expertise to support them all.
Both of these assumptions turned out not to be quite true. We were pleasantly surprised by the number of experimentation experts we were able to recruit from across government. However, we found that many teams, even those very interested in running an experiment, did not have the capacity to answer the questions in our template in enough depth for us to properly assess their proposals. While participant teams are clearly subject matter experts, they had less grounding in experimentation than we anticipated.
It would appear that our expectations in general, and our template specifically, were too onerous for departments to meet without guidance and support from experimentation experts. Our initial (and misplaced) fear that we would be overwhelmed with applications may have led us to over-build this part of the process.
The experts fully supported intervening earlier, before application submission. They argued that the sooner they could help teams shape and craft their experiments the better, since an uncorrected early mistake would likely have amplified negative effects down the line. We were able to provide this level of early support because another of our assumptions (that we wouldn't have the right ratio of expertise to interested departments) was also unfounded. We did have to emphasize an alternative to the usual government default of seeking executive approval on a project plan before the experts have had time to weigh in.
There was also a related but separate issue of timing: some interesting projects or teams were unable to align their timelines with those of EW, either running experiments too early, coming online too late, or switching priorities after the first few promising conversations. This, however, is a very normal part of doing business in a co-created, partnership model, and it is unclear whether better planning or mitigation will ever fully solve it.
Lessons for the future
This part of the EW process has caused us to reflect on and question many of the ideas that underpin the project, and to develop some next steps for a future round of EW or similar projects:
- In the near term, it is important to bring the experts in very early, forgo or delay the full application process, and instead work closely with teams to translate their subject-matter expertise into 'experimentable' questions.
- An implication of the first lesson above is that the role of experts becomes more important than previously thought (and we already thought they were pretty important!). We need our experts more, and earlier, than we had imagined.
- In the medium to long term, it may be possible to return to the 'purer' up-front application system if departments' experimentation capacity increases across the board and we again fear that our expert-to-department ratio would be imperiled.
John Medcof is part of the executive team championing the EW project at TBS
This article is also available in French here: https://medium.com/@exp_oeuvre