Evidence Week, or why I chatted to politicians about evidence.

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

On a sunny Tuesday morning at 8.30 am I found myself passing through security to enter the Palace of Westminster. The home of MPs and peers is not obvious territory for a PhD student. However, I was here as a Voice of Young Science (VoYS) volunteer for the Sense about Science Evidence Week. Sense about Science is an independent charity that aims to scrutinise the use of evidence in the public domain and to challenge misleading or misrepresented science. I have written previously here about attending one of their workshops on peer review, and also here about contributing to a campaign assessing the transparency of evidence used in government policy documents.

The purpose of Evidence Week was to bring together MPs, peers, parliamentary services and people from all walks of life to generate a conversation about why evidence in policy-making matters. The week was held in collaboration with the House of Commons Library, the Parliamentary Office of Science and Technology and the House of Commons Science and Technology Committee, in partnership with SAGE Publishing. Individual events and briefings were contributed to by further organisations including the Royal Statistical Society, the Alliance for Useful Evidence and UCL. Each day focused on a different theme, including ‘questioning quality’ and ‘wicked problems’, i.e. superficially simple problems that turn out to be complex and multifaceted.

Throughout the week MPs, parliamentary staff and the public were welcomed to a stand in the Upper Waiting Hall to have conversations about why evidence is important to them. Photo credit: Sense about Science.

Throughout the parliamentary week, which lasts from Monday to Thursday, Sense about Science had a stand in the Upper Waiting Hall of Parliament. This location is right outside the committee rooms where members of the public give evidence to one of the many select committees. These are cross-party groups of MPs whose role is to oversee the work of government departments and agencies, though their evidence gathering and scrutiny can sometimes have significance beyond UK policy-making (for example, this story documenting one committee’s role in investigating the relationship between Facebook, Cambridge Analytica and the propagation of ‘fake news’). The aim of the stand was to catch the attention of the public, parliamentary staff and MPs, and to engage them in conversations about the importance of evidence. Alongside the stand, a series of events and briefings were held within Parliament on the topic of evidence. Titles included ‘making informed decisions about health care’ and ‘it ain’t necessarily so… simple stories can go wrong’.

Each day brought a new set of VoYS volunteers to the campaign, both to staff the stand and to document and help out with the various events of the week. Hence I found myself abandoning my own research for a day to contribute to Day 2 of the campaign, which focused on navigating data and statistics. I had a busy day; beyond chatting to people at the stand, I took over the VoYS Twitter account to document some of the day’s key events, attended a briefing about the 2021 census, and provided a video roundup of the day (which can be viewed here!). For conversations at the stand we were asked to focus on questions in line with the theme of the day, including ‘if a statistic is the answer, what was the question?’ and ‘where does this data come from?’

Wera Hobhouse, MP for Bath, had a particular interest in the pollution data for her constituency and the evidence for the most effective methods of improving air quality. Photo credit: Sense about Science.

Trying to engage people at the stand proved challenging; its location meant that passers-by were often in a rush to committee meetings. Occasionally the division bells announcing a parliamentary vote would ring, and a rush of MPs would flock past: great for trying to spot the more well-known MPs, but less good for convincing them to stop and talk about data and statistics. In practice this meant the other VoYS members and I had to adopt a very assertive approach to talking to people, a style that is generally not within the comfort zone of most scientists! It did, however, lead to some very interesting conversations, including one with a paediatric surgeon who was advocating to the health select committee for increased investment in research to treat tumours in children. He posed a very interesting question: given a finite amount of funding for tumour research, how much should be directed specifically towards improving the survival outcomes of younger patients, and how much towards older patients? We also asked MPs and members of the public to add any evidence questions they had to the stand. A member of the public wondered, ‘are there incentives to show what doesn’t work?’ and Layla Moran, MP for Oxford West and Abingdon, asked ‘how can politicians better understand uncertainty in data?’

Visitors, including MPs and peers, were asked to add any burning questions they had about evidence to the stand. Photo credit: Sense about Science.

The week proved to be a success. Over 60 MPs from across the parliamentary parties, including government ministers, interacted with some aspect of Evidence Week, accounting for around 10% of all MPs. A wider audience of parliamentary staff and members of the public also engaged with the stand. Sense about Science highlighted two outcomes of the week: the first was the opening event, where members of various community groups met over 40 MPs and peers and had the opportunity to explain why evidence mattered to them, whether their interest was in beekeeping, safe standing at football matches or IVF treatment; the second was the concluding round-table event on what people require from evidence gathering. SAGE will publish an overview of this round-table as a discussion paper in the autumn.

On a personal level, I had a very valuable experience. Firstly, it was a great opportunity to visit somewhere as imposing and important as the Houses of Parliament and to contribute to such an exciting and innovative week. I was able to have some very interesting conversations with both MPs and members of the public. I found that everybody was generally enthusiastic about the need for greater use and transparency of evidence in policy-making. The challenge, instead, is to ensure that both policy-makers and the general public have the tools they need to collect, assess and apply evidence.

Can scientists improve evidence transparency in policy making?

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

Politics. Science. They are two worlds apart. One is about trying to understand and reveal the true nature of the universe using empirical evidence. The other is more invested in constructing its own reality, cherry-picking evidence that conforms to its desired perception of the universe. OK, so this is a gross simplification. Politicians by no means have an easy task; they are expected to make huge decisions on limited evidence and understanding. Meanwhile, whilst we all like the romantic idea that the science we do is empirical and unbiased, there are frequent counterexamples (such as the perils of the impact factor or sexism in peer review). We do understand, however, that evidence lies at the core of what we do. A good research paper will highlight what evidence has led to a conclusion or outcome, how that evidence was collected, and any uncertainties or limitations of the evidence. This is essential for transparency and reproducibility. What if we could introduce the same tools to politics?

 

For effective public scrutiny of policies, transparency in how evidence is used is essential. Photo credit: Jamie Street, Unsplash.

In October 2017 I spent multiple hours reviewing government policy documents to assess just how well they were using evidence. I was contributing to the Sense about Science publication transparency of evidence: spot check. This document is the product of a 2015 collaboration between Sense about Science, the Institute for Government and the Alliance for Useful Evidence, in which the evidence transparency framework was proposed. The framework aims to encourage government to be transparent in its use of evidence. In November 2016, Sense about Science published the original transparency of evidence report, a trial application of this framework to a random selection of ‘publicly-available policy documents’. After feedback from the departments and participants involved, the framework was refined to produce the spot check.

The review involved a team of young scientists, including me, each assessing how a subset of around 10 of these policies used evidence. At this stage the quality of the evidence, or whether the policy had merit based on the presented evidence, was not considered; the priority was to assess how transparently evidence was being used to shape policy. We scored each policy in four key areas, with a score out of 3 given for each area (a rough sketch of how such scores might be recorded follows the list):

  • Diagnosis: The policymakers should outline all they know about a particular issue, including its causes, impacts and scale, with supporting evidence. Any uncertainties or weaknesses in the evidence base should be highlighted.
  • Proposal: The policy should outline the chosen intervention with a clear statement of why this approach has been selected, as well as any negatives. It should also be made clear why other approaches have not been used and, if the chosen intervention has not been fully decided on, how the Government intends to make that decision. Once again, the strengths and weaknesses of the evidence base should be acknowledged and discussed.
  • Implementation: If the method for implementing the proposal has not been decided, what evidence will be used to make that decision? If it has, why has this approach been selected over alternatives, and what negatives exist? As previously, supporting evidence should be provided and assessed for its quality.
  • Testing and Evaluation: Will there be a pilot or trial of the policy and, if not, why not? How will the impacts and outcomes of the policy be assessed? The testing methods and criteria for success should be made clear, with an accompanying timetable.
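
To make the scoring concrete, here is a minimal sketch in Python of how one reviewer’s scores for a single policy might be recorded and totalled, assuming a simple 0-3 integer scale per area. The class, field names and example policy are hypothetical illustrations, not part of the Sense about Science framework itself:

```python
from dataclasses import dataclass
from typing import Dict

# The four areas of the spot-check framework, each scored out of 3.
AREAS = ("diagnosis", "proposal", "implementation", "testing_and_evaluation")


@dataclass
class PolicyAssessment:
    """One reviewer's transparency scores for a single policy document."""
    title: str
    scores: Dict[str, int]  # area name -> score from 0 to 3

    def __post_init__(self):
        # Require a valid 0-3 score for every area.
        for area in AREAS:
            score = self.scores.get(area)
            if score is None:
                raise ValueError(f"Missing score for area: {area}")
            if not 0 <= score <= 3:
                raise ValueError(f"Score for {area} must be 0-3, got {score}")

    @property
    def total(self) -> int:
        # Maximum possible total is 12 (4 areas x 3 points each).
        return sum(self.scores[area] for area in AREAS)


# Hypothetical example: one reviewer's scores for one policy document.
assessment = PolicyAssessment(
    title="Packaging waste consultation",
    scores={"diagnosis": 2, "proposal": 1,
            "implementation": 1, "testing_and_evaluation": 0},
)
print(f"{assessment.title}: {assessment.total}/12")
```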

For full details of this framework, refer to Appendix 1 of the transparency of evidence: spot check publication. Whilst the framework is fairly explicit, it was nevertheless challenging as a reviewer to provide a fair assessment of each policy. The policies ranged in content from cyber-security to packaging waste; some were a few pages long, some closer to 100 pages; some were still at the consultation stage and others were ready to implement. Furthermore, values and pragmatism are sometimes as important in policy-making as the available evidence. Policies based on such values can still score highly, provided it is made explicit, and justified, why these values have taken priority over any contradictory evidence.

The findings discussed within the report are consistent with what I found when reviewing the policies. In particular, whilst the inclusion of supporting evidence has improved since the original assessment, an approach of “info-dumping” seems to have been adopted, whereby evidence is provided without being explicit about why it is relevant or how it has been used. Similarly, references are often cited without it being clear why. Many policies also failed to make clear the chain of reasoning from diagnosis through to testing and evaluation. These complaints should not be unfamiliar to scientists! Finally, very few documents discussed how policies would be tested and evaluated. I hope by this point it is clear why we as scientists can have a positive input: the same skills we use to produce high-quality research and papers can be used to produce transparent and testable policies.

We have established why a scheme to engage young researchers in assessing and improving the use of evidence in policy-making has value; however, perhaps you are still wondering why we should care. Linking back to the theme of this blog, in the next few years we are going to see a raft of policies worldwide designed to combat climate change in response to the Paris Agreement. As the people providing the evidence, climate scientists will have a role in scrutinising these policies and ensuring they will achieve the predicted outcomes. For this to happen, transparency of evidence is essential. Furthermore, we all exist as citizens outside of our research, and as citizens we should want the ability to hold government and other policy-makers properly to account.

Peer review: what lies behind the curtains?

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

For young researchers, one of the most daunting prospects is the publication of their first paper. A piece of work that somebody has spent months or even years preparing must be submitted for the process of peer review. Unseen gatekeepers cast their judgement, and work is returned accepted, rejected or with revisions required. I attended the Sense about Science workshop entitled ‘Peer review: the nuts and bolts’, targeted at early career researchers (ECRs), with the intention of looking behind these closed doors. How are reviewers selected? Who can become a reviewer? Who makes the final decisions? The workshop provided an opportunity to interact directly with both journal editors and academics involved in the peer review process to obtain answers to such questions.

The workshop was structured primarily around a panel discussion featuring Dr Amarachukwu Anyogu, a lecturer in microbiology at the University of Westminster; Dr Bahar Mehmani, a reviewer experience lead at Elsevier; Dr Sabina Alam, an editorial director at F1000Research; and Emily Jesper-Mir, the head of partnerships and governance at Sense about Science. In addition, there were small-group discussions amongst fellow attendees regarding the advantages and disadvantages of peer review, potential alternatives, and the importance of science communication.

The panel of (L-R) Dr Sabina Alam, Dr Amarachukwu Anyogu, Dr Bahar Mehmani and Emily Jesper-Mir provided a unique insight into the peer review process from the perspectives of both editor and reviewer. Photo credit: Sense about Science.

Recent headlines have highlighted fraud cases where impersonation and deceit have been used to manipulate the peer review process. Furthermore, fears regarding bias and sexism remain high amongst the academic community. It was hence encouraging to see such strong awareness from both participants and panellists regarding the flaws of peer review. Post-publication review, open (named) reviews, and the submission of methods prior to the experiment are all approaches, either currently in use or proposed, to increase the accountability and transparency of peer review. Each method brings its own problems, however; for example, naming reviewers risks less critical responses, particularly from younger researchers not wanting to alienate more experienced academics with influence over their future career progression.

One key focus of the workshop was to encourage ECRs to become involved in the peer review process. At first this seems counterintuitive: surely the experience of academics further into their careers is crucial for providing high-quality reviews? However, ECRs do have the necessary knowledge: we work day to day with the same techniques and the same analyses as the papers we would review. In addition, a larger body of reviewers reduces the individual workload and improves the efficiency of the process, particularly as ECRs do not necessarily face the same time pressures. Increased participation brings diversity of opinion and prevents particular individuals from becoming too influential in deciding which ideas are considered relevant or acceptable. There are also personal benefits to becoming a reviewer, including an improved ability to critically assess research. Dr Anyogu, for example, found that reviewing the work of others helped her gain a better perspective on criticism received on her own work.

Participants were encouraged to discuss the advantages and disadvantages of peer review, and potential changes that could address current weaknesses in the process. Photo credit: Sense about Science.

One key message I took away from the workshop is that peer review isn’t mechanical: humans are at the heart of its decisions. Dr Alam was particularly keen to stress that editors will listen to grievances and reconsider decisions if strong arguments are put forward. It also follows, however, that peer review is only as effective as those who participate in it. If the quality of reviewers is poor, then the quality of the review process will be poor. Hence it can be argued that we as members of the academic community have an obligation to maintain high standards, not least so that the public can be reassured that the information we provide has been through a thorough quality-control process. At a time when phrases such as ‘fake news’ are proliferating, it is more crucial than ever to maintain public trust in the scientific process.

I would like to thank all the panellists for giving up their time to contribute to this workshop; the organisations* who provided sponsorship and sent representatives; Informa for hosting the event; and Sense about Science for organising this unique opportunity to learn more about peer review.

*Cambridge University Press, Peer Review Evaluation, Hindawi, F1000Research, Medical Research Council, Portland Press, Sage Publishing, Publons, Elsevier, Publons Academy, Taylor and Francis Group, Wiley.