Politics. Science. They are two worlds apart. One is about trying to understand and reveal the true nature of the universe using empirical evidence. The other is more invested in constructing its own reality, cherry-picking evidence that conforms to the desired perception of the universe. OK, so this is a gross simplification. Politicians by no means have an easy task: they are expected to make huge decisions on limited evidence and understanding. And whilst we all like the romantic idea that the science we do is empirical and unbiased, there are frequent counterexamples (such as the perils of the impact factor or sexism in peer review). We do understand, however, that evidence lies at the core of what we do. A good research paper will highlight what evidence has led to a conclusion or outcome, how that evidence was collected, and any uncertainties or limitations of that evidence. This is essential for transparency and reproducibility. What if we could introduce the same tools to politics?
In October 2017 I spent multiple hours reviewing government policy documents to assess just how well they were using evidence. I was contributing to the Sense about Science publication transparency of evidence: spot check. This document is the product of a 2015 collaboration between Sense about Science, the Institute for Government and the Alliance for Useful Evidence, in which the evidence transparency framework was proposed. The framework aims to encourage government to be transparent in its use of evidence. In November 2016, Sense about Science published the original transparency of evidence report, a trial application of the framework to a random selection of ‘publicly-available policy documents’. After feedback from the departments and participants involved, the framework was refined to produce the spot check.
The review involved a team of young scientists, including me, each assessing how a subset of around 10 of these policies was using evidence. At this stage the quality of the evidence, and whether the policy had merit based on the evidence presented, was not considered; the priority was to assess how transparently evidence was being used to shape policy. We scored each policy in four key areas, with a score out of 3 given for each:
- Diagnosis: The policymakers should outline all they know about a particular issue including its causes, impacts and scale with supporting evidence. Any uncertainties or weaknesses in the evidence base should be highlighted.
- Proposal: The policy should outline the chosen intervention with a clear statement of why this approach has been selected, as well as any negatives. It should also be made clear why other approaches have not been used and, if the chosen intervention has not been fully decided on, how the Government intends to make that decision. Once again, the strengths and weaknesses of the evidence base should be acknowledged and discussed.
- Implementation: If the method for implementing the proposal has not yet been decided, what evidence will be used to make that decision? If it has, why has this approach been selected over alternatives, and what negatives exist? As before, supporting evidence should be provided and assessed for its quality.
- Testing and Evaluation: Will there be a pilot or trial of the policy, and if not, why not? How will the impacts and outcomes of the policy be assessed? The testing methods and criteria for success should be made clear, with an accompanying timetable.
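The four-area rubric above can be sketched as a simple data structure. This is purely illustrative: the area names and the score-out-of-3 scale come from the framework as described, but the function and variable names are my own invention, not part of any Sense about Science tooling.

```python
# Illustrative sketch of the four-area transparency rubric.
# The area names and 0-3 scale follow the framework; everything
# else (names, validation) is a hypothetical reviewer's helper.

AREAS = ["Diagnosis", "Proposal", "Implementation", "Testing and Evaluation"]
MAX_PER_AREA = 3  # each area is scored out of 3

def total_score(scores):
    """Sum the per-area scores for one policy, validating the inputs."""
    for area in AREAS:
        if area not in scores:
            raise ValueError(f"missing score for {area!r}")
        if not 0 <= scores[area] <= MAX_PER_AREA:
            raise ValueError(f"score for {area!r} out of range 0-{MAX_PER_AREA}")
    return sum(scores[area] for area in AREAS)

# Example: a policy strong on diagnosis but silent on evaluation.
example = {
    "Diagnosis": 3,
    "Proposal": 2,
    "Implementation": 2,
    "Testing and Evaluation": 0,
}
print(total_score(example))  # 7 out of a possible 12
```

A pattern like the example above (a decent diagnosis, a thin evaluation plan) matches what the spot check found across many of the policies reviewed.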
For full details of this framework, refer to Appendix 1 of the transparency of evidence: spot check publication. Whilst the framework is fairly explicit, it was nevertheless challenging as a reviewer to provide a fair assessment of each policy. The policies ranged in content from cyber-security to packaging waste; some were a few pages long, some closer to 100 pages; some were still at the consultation stage and others were ready to implement. Furthermore, values and pragmatism are sometimes as important in policy making as the available evidence. Policies based on such values can still score highly, provided it is made explicit, and justified, why those values have taken priority over any available contradictory evidence.
The findings discussed within the report are consistent with what I found when reviewing the policies. In particular, whilst the inclusion of supporting evidence has improved since the original assessment, an approach of “info-dumping” seems to have been adopted, whereby evidence is provided without any explanation of why it is relevant or how it has been used. Similarly, references are often cited without it being clear why. Many policies also failed to make clear the chain of reasoning from diagnosis through to testing and evaluation. These complaints should not be unfamiliar to scientists! Finally, very few documents discussed how policies would be tested and evaluated. I hope that by this point it is clear why we as scientists can have a positive input: the same skills we use to produce high-quality research and papers can be used to produce transparent and testable policies.
We have established why a scheme to engage young researchers in assessing and improving the use of evidence in policy making has value; however, you may still be wondering why we should care. Linking back to the theme of this blog, in the next few years we are going to see a raft of policies worldwide designed to combat climate change in response to the Paris Agreement. As the people providing the evidence, climate scientists will have a role in scrutinising these policies and ensuring they will achieve the predicted outcomes. For this to happen, transparency of evidence is essential. Furthermore, we all exist as citizens outside of our research, and as citizens we should want the ability to hold government and other policy makers properly to account.