Can scientists improve evidence transparency in policy making?

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

Politics. Science. They are two worlds apart. One is about trying to understand and reveal the true nature of the universe using empirical evidence. The other is more invested in constructing its own reality, cherry-picking evidence that conforms to its desired perception of the universe. Ok, so this is a gross simplification. Politicians by no means have an easy task: they are expected to make huge decisions on limited evidence and understanding. Meanwhile, whilst we all like the romantic idea that the science we do is empirical and unbiased, there are frequent examples (such as the perils of the impact factor or sexism in peer review) to counter this. We do understand, however, that evidence lies at the core of what we do. A good research paper will highlight what evidence has led to a conclusion or outcome, how that evidence was collected, and any uncertainties or limitations of that evidence. This is essential for transparency and reproducibility. What if we could introduce the same tools to politics?

For effective public scrutiny of policies, transparency in how evidence is used is essential. Photo credit: Jamie Street, Unsplash

In October 2017 I spent multiple hours reviewing government policy documents to assess just how well they were using evidence. I was contributing to the Sense about Science publication transparency of evidence: spot check. This document is the product of a 2015 collaboration between Sense about Science, the Institute for Government and the Alliance for Useful Evidence, in which the evidence transparency framework was proposed. The framework aims to encourage government to be transparent in its use of evidence. In November 2016, Sense about Science published the original transparency of evidence report, a trial application of this framework to a random selection of ‘publicly-available policy documents’. After feedback from the departments and participants involved, the framework was refined to produce the spot check.

The review involved a team of young scientists, including me, each assessing how a subset of around 10 of these policies used evidence. At this stage the quality of the evidence, or whether a policy had merit based on the evidence presented, was not considered. The priority was to assess how transparently evidence was being used to shape policy. We scored each policy in four key areas (with a score out of 3 given for each area):

  • Diagnosis: The policymakers should outline all they know about a particular issue, including its causes, impacts and scale, with supporting evidence. Any uncertainties or weaknesses in the evidence base should be highlighted.
  • Proposal: The policy should outline the chosen intervention, with a clear statement of why this approach has been selected as well as any negatives. It should also be made clear why other approaches have not been used and, if the chosen intervention has not been fully decided, how the Government intends to make that decision. Once again, the strengths and weaknesses of the evidence base should be acknowledged and discussed.
  • Implementation: If the method for implementing the proposal has not yet been decided, what evidence will be used to make that decision? If it has, why has this approach been selected over alternatives, and what negatives exist? As previously, supporting evidence should be provided and assessed for its quality.
  • Testing and Evaluation: Will there be a pilot or trial of the policy, and if not, why not? How will the impacts and outcomes of the policy be assessed? The testing methods and criteria for success should be made clear, with an accompanying timetable.

For full details of this framework refer to Appendix 1 of the transparency of evidence: spot check publication. Whilst the framework is fairly explicit, it was nevertheless challenging as a reviewer to provide a fair assessment of each policy. The policies ranged in content from cyber-security to packaging waste; some were a few pages long, some closer to 100 pages; some were still at the consultation stage and others were ready to implement. Furthermore, values and pragmatism are sometimes as important in policy making as the available evidence. Policies based on such values can still score highly, provided it is made explicit and justified why these values have taken priority over any contradictory evidence available.

The findings discussed within the report are consistent with what I found when reviewing the policies. In particular, whilst the inclusion of supporting evidence has improved since the original assessment, an approach of “info-dumping” seems to have been adopted, whereby evidence is provided without being explicit about why it is relevant or how it has been used. Similarly, references are often cited without it being clear why. Many policies also failed to make clear the chain of reasoning from diagnosis through to testing and evaluation. These complaints should not be unfamiliar to scientists! Finally, very few documents discussed how policies would be tested and evaluated. I hope by this point it is clear why we as scientists can have a positive input: the same skills we use to produce high-quality research and papers can be used to produce transparent and testable policies.

We have established why a scheme to engage young researchers in assessing and improving the use of evidence in policy making has value; however, you may still be wondering why we should care. Linking back to the theme of this blog, in the next few years we are going to see a raft of policies worldwide designed to combat climate change in response to the Paris Agreement. As the people providing the evidence, climate scientists will have a role in scrutinising these policies and ensuring they will achieve the predicted outcomes. For this to happen, transparency of evidence is essential. Furthermore, we all exist as citizens outside of our research, and as citizens we should want the ability to properly hold government and other policy makers accountable.

Peer review: what lies behind the curtains?

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

For young researchers, one of the most daunting prospects is the publication of their first paper. A piece of work that somebody has spent months or even years preparing must be submitted for the process of peer review. Unseen gatekeepers cast their judgement, and the work is returned accepted, rejected or with revisions required. I attended the Sense about Science workshop entitled ‘Peer review: the nuts and bolts’, targeted at early career researchers (ECRs), with the intention of looking behind these closed doors. How are reviewers selected? Who can become a reviewer? Who makes the final decisions? This workshop provided an opportunity to interact directly with both journal editors and academics involved in the peer review process to obtain answers to such questions.

This workshop was primarily structured around a panel discussion consisting of Dr Amarachukwu Anyogu, a lecturer in microbiology at the University of Westminster; Dr Bahar Mehmani, a reviewer experience lead at Elsevier; Dr Sabina Alam, an editorial director at F1000Research; and Emily Jesper-Mir, the head of partnerships and governance at Sense about Science. In addition, there were also small group discussions amongst fellow attendees regarding advantages and disadvantages of peer review, potential alternatives, and the importance of science communication.

The panel of (L-R) Dr Sabina Alam, Dr Amarachukwu Anyogu, Dr Bahar Mehmani and Emily Jesper-Mir provided a unique insight into the peer review process from the perspective of both editor and reviewer. Photograph credited to Sense about Science.

Recent headlines have highlighted fraud cases where impersonation and deceit have been used to manipulate the peer review process. Furthermore, fears regarding bias and sexism remain high amongst the academic community. It was hence encouraging to see such strong awareness from both participants and panellists regarding the flaws of peer review. Post-publication review, open (named) reviews, and the submission of methods prior to the experiment are all approaches, either currently in use or proposed, to increase the accountability and transparency of peer review. Each method brings its own problems, however; for example, naming reviewers risks less critical responses, particularly from younger researchers not wanting to alienate more experienced academics with influence over their future career progression.

One key focus of the workshop was to encourage ECRs to become involved in the peer review process. At first this seems counterintuitive; surely the experience of academics further into their careers is crucial to providing high-quality reviews? However, ECRs do have the necessary knowledge: we work day to day with the same techniques and the same analyses as the papers we would review. In addition, a larger body of reviewers reduces the individual workload and improves the efficiency of the process, particularly as ECRs do not necessarily have the same time pressures. Increased participation also brings diversity of opinion and ensures that particular individuals do not become too influential in deciding which ideas are considered relevant or acceptable. There are also personal benefits to becoming a reviewer, including an improved ability to critically assess research. Dr Anyogu, for example, found that reviewing the work of others helped her gain a better perspective on criticism received on her own work.

Participants were encouraged to discuss the advantages and disadvantages of peer review and potential changes that could be made to address current weaknesses in the process. Photograph credited to Sense about Science.

One key message that I took away from the workshop is that peer review isn’t mechanical: humans are at the heart of decisions. Dr Alam was particularly keen to stress that editors will listen to grievances and reconsider decisions if strong arguments are put forward. However, it also follows that peer review is only as effective as those who participate in the process. If the quality of reviewers is poor, then the quality of the review process will be poor. Hence it can be argued that we, as members of the academic community, have an obligation to maintain high standards, not least so that the public can be reassured that the information we provide has been through a thorough quality-control process. At a time when phrases such as ‘fake news’ are proliferating, it is more crucial than ever to maintain public trust in the scientific process.

I would like to thank all the panellists for giving up their time to contribute to this workshop; the organisations* who provided sponsorship and sent representatives; Informa for hosting the event; and Sense about Science for organising this unique opportunity to learn more about peer review.

*Cambridge University Press, Peer Review Evaluation, Hindawi, F1000Research, Medical Research Council, Portland Press, Sage Publishing, Publons, Elsevier, Publons Academy, Taylor and Francis Group, Wiley.