Hierarchies of Models

With thanks to Inna Polichtchouk.

General circulation models (GCMs) of varying complexity are used in the atmospheric and oceanic sciences to study different atmospheric processes and to simulate the response of the climate system to climate change and other forcings.

However, Held (2005) warned the climate community that the gap between simulating and understanding atmospheric and oceanic processes is becoming wider. He stressed the use of model hierarchies for improved understanding of the atmosphere and oceans (Fig. 1). At the bottom of the hierarchy often lie the well-understood, idealized one- or two-layer models. In the middle lie multi-layer models, which omit certain processes such as land-ocean-atmosphere interactions or moist physics. And finally, at the top of the hierarchy lie the fully coupled atmosphere-ocean general circulation models that are used for climate projections. Such model hierarchies are already well developed in other sciences (Held 2005), such as molecular biology, where studying less complex organisms (e.g. mice) yields insight into more complex ones, such as humans, through their shared evolutionary history.

Figure 1: Model hierarchy of midlatitude atmosphere (as used for studying storm tracks). The simplest models are on the left and the most complex models are on the right. Bottom panels show eddy kinetic energy (EKE, contours) and precipitation (shading) with increase in model hierarchy (left-to-right): No precipitation in a dry core model (left), zonally homogeneous EKE and precipitation in an aquaplanet model (middle), and zonally varying EKE and precipitation in the most complex model (right). Source: Shaw et al. (2016), Fig. B2.

Model hierarchies have now become an important research tool for furthering our understanding of the climate system [see, e.g., Polvani et al. (2017), Jeevanjee et al. (2017), Vallis et al. (2018)]. This approach allows us to identify the processes most important for the circulation response to climate change (e.g. the mid-latitude storm track shift, the widening of the tropical belt), to perform hypothesis testing, and to assess the robustness of results across different configurations.

In my PhD, I have extensively used the model hierarchies concept to understand mid-latitude tropospheric dynamics (Fig. 1). One-layer barotropic and two-layer quasi-geostrophic models are often used as a first step to understand large-scale dynamics and to establish the importance of barotropic and baroclinic processes (also discussed in my previous blog post). Subsequently, more realistic “dry” non-linear multi-layer models with a simple treatment of the boundary layer and radiation [the so-called “Held & Suarez” setup, first introduced in Held and Suarez (1994)] can be used to study zonally homogeneous mid-latitude dynamics without complicating the setup with physical parametrisations (e.g. moist processes) or the full range of ocean-land-ice-atmosphere interactions. For example, I have successfully used the Held & Suarez setup to test the robustness of the annular mode variability (see my previous blog post) to different model climatologies (Boljka et al., 2018). I found that the timescale of the baroclinic annular mode, and its link to the barotropic annular mode, are sensitive to the model climatology. This can have an impact on climate variability in a changing climate.
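To give a flavour of what the Held & Suarez setup involves: the scheme replaces radiation and convection with Newtonian relaxation of temperature towards a prescribed equilibrium profile. Here is a minimal sketch of that equilibrium temperature, using the formula and parameter values from Held and Suarez (1994); the code itself is purely illustrative, not taken from any particular model.

```python
import math

def held_suarez_teq(lat_deg, p_hpa, p0=1000.0, delta_t_y=60.0,
                    delta_theta_z=10.0, kappa=2.0 / 7.0):
    """Radiative-equilibrium temperature (K) of Held and Suarez (1994),
    towards which the model temperature is relaxed."""
    phi = math.radians(lat_deg)
    t = (315.0
         - delta_t_y * math.sin(phi) ** 2
         - delta_theta_z * math.log(p_hpa / p0) * math.cos(phi) ** 2)
    t *= (p_hpa / p0) ** kappa
    return max(200.0, t)  # the 200 K floor stands in for a simple stratosphere

print(held_suarez_teq(0.0, 1000.0))   # 315.0 (warm equatorial surface)
print(held_suarez_teq(90.0, 1000.0))  # 255.0 (colder polar surface)
print(held_suarez_teq(45.0, 200.0))   # 200.0 (hits the stratospheric floor)
```

The model temperature is then relaxed towards this profile, dT/dt = -k_T (T - T_eq), with a relaxation rate k_T that is larger near the surface in the tropics; simple linear drag on the low-level winds stands in for the boundary layer.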

Additional complexity can be introduced into the multi-layer dry models by adding moist processes and physical parametrisations in the so-called “aquaplanet” setup [e.g. Neale and Hoskins (2000)]. The aquaplanet setup allows us to elucidate the role of moist processes and parametrisations in zonally homogeneous dynamics. For example, mid-latitude cyclones tend to be stronger in moist atmospheres.

To study the effects of zonal asymmetries on mid-latitude dynamics, localized heating or topography can be further introduced to the aquaplanet and Held & Suarez setups to force large-scale stationary waves, reproducing the south-west to north-east tilts of the Northern Hemisphere storm tracks (bottom right panel in Fig. 1). This setup has helped me elucidate the differences between zonally homogeneous and zonally inhomogeneous atmospheres, in which the planetary-scale (stationary) waves and their interplay with the synoptic eddies (cyclones) become increasingly important for mid-latitude storm track dynamics and variability on different temporal and spatial scales.

Even further complexity can be achieved by coupling atmospheric models to dynamic ocean, land and ice models (the coupled atmosphere-ocean or atmosphere-only GCMs in Fig. 1), all of which bring the model closer to reality. However, interpreting results from such complex models is very difficult without having first studied the hierarchy of models, as too many processes act simultaneously in fully coupled models. Further insights can also be gained by improving the theoretical (mathematical) understanding of atmospheric processes using a similar hierarchical approach [see e.g. Boljka and Shepherd (2018)].

References:

Boljka, L. and T.G. Shepherd, 2018: A multiscale asymptotic theory of extratropical wave–mean flow interaction. J. Atmos. Sci., 75, 1833–1852, https://doi.org/10.1175/JAS-D-17-0307.1 .

Boljka, L., T.G. Shepherd, and M. Blackburn, 2018: On the coupling between barotropic and baroclinic modes of extratropical atmospheric variability. J. Atmos. Sci., 75, 1853–1871, https://doi.org/10.1175/JAS-D-17-0370.1 .

Held, I. M., 2005: The gap between simulation and understanding in climate modeling. Bull. Am. Meteorol. Soc., 86, 1609–1614.

Held, I. M. and M. J. Suarez, 1994: A proposal for the intercomparison of the dynamical cores of atmospheric general circulation models. Bull. Amer. Meteor. Soc., 75, 1825–1830.

Jeevanjee, N., Hassanzadeh, P., Hill, S., and Sheshadri, A., 2017: A perspective on climate model hierarchies. J. Adv. Model. Earth Syst., 9, 1760–1771.

Neale, R. B., and B. J. Hoskins, 2000: A standard test for AGCMs including their physical parametrizations: I: the proposal. Atmosph. Sci. Lett., 1, 101–107.

Polvani, L. M., A. C. Clement, B. Medeiros, J. J. Benedict, and I. R. Simpson (2017), When less is more: Opening the door to simpler climate models. EOS, 98.

Shaw, T. A., M. Baldwin, E. A. Barnes, R. Caballero, C. I. Garfinkel, Y-T. Hwang, C. Li, P. A. O’Gorman, G. Riviere, I. R. Simpson, and A. Voigt, 2016: Storm track processes and the opposing influences of climate change. Nature Geoscience, 9, 656–664.

Vallis, G. K., Colyer, G., Geen, R., Gerber, E., Jucker, M., Maher, P., Paterson, A., Pietschnig, M., Penn, J., and Thomson, S. I., 2018: Isca, v1.0: a framework for the global modelling of the atmospheres of Earth and other planets at varying levels of complexity. Geosci. Model Dev., 11, 843-859.

How does plasma from the solar wind enter Earth’s magnetosphere?

Earth’s radiation belts are a hazardous environment for the satellites underpinning our everyday life. The behaviour of these high-energy particles, trapped by Earth’s magnetic field, is partly determined by the existence of plasma waves. These waves provide the mechanisms by which energy and momentum are transferred and particle populations physically moved around, and it’s some of these waves that I study in my PhD.

However, I’ve noticed that whenever I talk about my work, I rarely talk about where this plasma comes from. In schools it’s often taught that space is a vacuum, and while it is closer to a vacuum than anything we can make on Earth, there are enough particles to make it a dangerous environment. A significant number of particles escape from Earth’s ionosphere into the magnetosphere, but in this post I’ll focus on material entering from the solar wind. This constant outflow of hot particles from the Sun is a plasma: a fluid in which enough of the particles are ionised that the behaviour of the fluid is dominated by electric and magnetic fields. Since the charged particles in a plasma interact with each other, with external electric and magnetic fields, and also generate more fields by moving and interacting, this makes for some weird and wonderful behaviour.

Figure 1: The area of space dominated by Earth’s magnetic field (the magnetosphere) is shaped by the constant flow of the solar wind (a plasma predominantly composed of protons, electrons and alpha particles). Plasma inside the magnetosphere collects in specific areas; the radiation belts are particularly of interest as particles there pose a danger to satellites. Credit: NASA/Goddard/Aaron Kaas

When explaining my work to family or friends, I often describe Earth’s magnetic field as a shield to the solar wind. Because the solar wind is well ionised, it is highly conductive, and this means that, approximately, the magnetic field is “frozen in” to the plasma. If the magnetic field changes, the plasma follows this change. Similarly, if the plasma flows somewhere, the magnetic field is dragged along with it. (This is known as Alfvén’s frozen-in theorem: the magnetic flux through any surface moving with the plasma remains constant.) And this is why the magnetosphere acts as a shield to all this energy streaming out of the Sun – while the magnetic field embedded in the solar wind is topologically distinct from the magnetic field of the Earth, there is no plasma transfer across magnetic field lines, and it streams past our planet (although this dynamic pressure still compresses the plasma of the magnetosphere, giving it that typical asymmetric shape in Figure 1).
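For those who like to see the mathematics, the frozen-in behaviour can be read off the standard magnetohydrodynamic induction equation (a textbook result, not specific to my own work):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times (\mathbf{v} \times \mathbf{B}) + \eta \nabla^{2}\mathbf{B},
\qquad
R_m = \frac{UL}{\eta}
```

The first term carries the field along with the plasma; the second lets the field diffuse through the plasma with magnetic diffusivity η. When the magnetic Reynolds number R_m is very large, as it is for the enormous length scales L of the solar wind, diffusion is negligible and the field is frozen in. Where L shrinks to the thickness of a thin boundary layer, diffusion can compete with advection and the frozen-in approximation fails.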

Of course, the question still remains of how the solar wind plasma enters the Earth’s magnetic field if such a shielding effect exists. You may have noticed in Figure 1 that there are gaps in the shield that the Earth’s dipole magnetic field presents to the solar wind; these are called the cusps, and at these locations the magnetic field connects to the solar wind. Here, plasma can travel along magnetic field lines and impact us on Earth.

But there’s also a more interesting phenomenon occurring – on a small enough scale (i.e. the very thin boundaries between two magnetic domains) the assumptions behind the frozen-in theorem break down, and then we start to see one of the processes that make the magnetosphere such a complex, fascinating and dynamic system to study. Say we have two regions of plasma with opposing orientation of the magnetic field. Then in a middle area these opposing field lines will suddenly snap to a new configuration, allowing them to peel off and away from this tightly packed central region. Figure 2 illustrates this process – you can see that after pushing red and blue field lines together, they suddenly jump to a new configuration. As well as changing the topology of the magnetic field, the plasma at the centre is energised and accelerated, shooting off along the magnetic field lines. Of course even this is a simplification; the whole process is somewhat more messy in reality and I for one don’t really understand how the field can suddenly “snap” to a new configuration.

Figure 2: Magnetic reconnection. Two magnetic domains of opposing orientation can undergo a process where the field line configuration suddenly resets. Instead of two distinct magnetic domains, some field lines are suddenly connected to both, and shoot outwards and away, as does the energised plasma.

In the Earth’s magnetosphere there are two main regions where this process is important (Figure 3). The first is at the nose of the magnetosphere, where the dynamic pressure of the solar wind compresses the solar wind plasma against the magnetospheric plasma. When the interplanetary magnetic field is orientated southwards (i.e. opposite to the Earth’s dipole – about half the time), this reconnection can happen. At this point field lines that were connected solely to the Earth or solely to the solar wind are now connected to both, and plasma can flow along them.

Figure 3: There are two main areas where reconnection happens in Earth’s magnetosphere. Opposing field lines can reconnect, allowing a continual dynamic cycle (the Dungey cycle) of field lines around the magnetosphere. Plasma can travel along these magnetic field lines freely. Credits: NASA/MMS (image) and NASA/Goddard Space Flight Center- Conceptual Image Lab (video)

Then, as the solar wind continues to rush outwards from the Sun, it drags these field lines along with it, past the Earth and into the tail of the magnetosphere. Eventually the build-up of these field lines reaches a critical point in the tail, and boom! Reconnection happens once more. You get a blast of energised plasma shooting along the magnetic field (this gives us the aurora) and the topology has rearranged to separate the magnetic fields of the Earth and solar wind; once more, they are distinct. These dipole field lines move around to the front of the Earth again, to begin this dramatic cycle once more.

Working out when and how these kinds of processes take place is still an active area of research, let alone understanding exactly what we expect this new plasma to do when it arrives. If it doesn’t give us a beautiful show of the aurora, will it bounce around the radiation belts, trapped in the stronger magnetic fields near the Earth? Or if it’s not so high energy as that, will it settle in the cooler plasmasphere, to rotate with the Earth and be shaped as the magnetic field is distorted by solar wind variations? Right now I look out my window at a peaceful sunny day and find it incredible that such complicated and dynamic processes are continually happening so (relatively) nearby. It certainly makes space physics an interesting area of research.

Tropical Circulation viewed as a heat engine

Climate scientists have a lot of insight into the factors driving weather systems in the mid-latitudes, where the rotation of the Earth is an important influence. The tropics are less well understood, and this is a problem for global climate models, which do not capture many of the phenomena observed in the tropics very well.

What we do know about the tropics, however, is that despite significant contrasts in sea surface temperatures (Fig. 1) there is very little horizontal temperature variation in the atmosphere (Fig. 2), because the Coriolis force (due to the Earth’s rotation) that maintains such gradients in more temperate climates is weak there. We believe that the large-scale circulation acts to minimise the effect these surface contrasts have higher up. This suggests a model for the vertical wind which cools the air over warmer surfaces and warms it where the surface is cool, called the Weak Temperature Gradient (WTG) approximation, that is frequently used in studying the climate of the tropics.
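To make the WTG idea concrete: the heat budget reduces to a balance between diabatic heating and adiabatic cooling by vertical motion, so the vertical velocity can be diagnosed directly from the heating. A minimal sketch follows; the balance w·dθ/dz = Q is the standard WTG diagnostic, but the numbers below are illustrative, not from my experiments.

```python
def wtg_vertical_velocity(heating_k_per_day, dtheta_dz):
    """Diagnose the vertical velocity w (m/s) from the WTG balance
    w * dtheta/dz = Q, where Q is the diabatic heating rate (K/day)
    and dtheta/dz is the static stability (K/m)."""
    q = heating_k_per_day / 86400.0  # convert K/day to K/s
    return q / dtheta_dz

# Typical tropical values: 2 K/day of heating, stability of 4 K/km
w = wtg_vertical_velocity(2.0, 4.0e-3)
print(w)  # a gentle ascent of roughly half a centimetre per second
```

The diagnosed ascent is tiny compared with the winds inside convective clouds, which is consistent with the point below that the WTG circulation is weak yet thermodynamically important.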

Fig.1 Sea surface temperatures (K) at 0Z on 1/1/2000 (ERA-Interim)
Fig.2 Temperatures at 500 hPa (K) at 0Z on 1/1/2000 (ERA-Interim)

Thermodynamic ideas have been around for some 200 years. Carnot, a Frenchman worried about Britain’s industrial might underpinning its military potential(!), studied the efficiency of heat engines and showed that the maximum mechanical work generated by an engine is determined by the ratio of the temperatures at which energy enters and leaves the system. It is possible to treat climate systems as heat engines – for example Kerry Emanuel has used Carnot’s idea to estimate the pressure in the eye of a hurricane. I have been building on a recent development of these ideas by Olivier Pauluis at New York University, who shows how to divide up the maximum work output of a climate heat engine into the generation of wind, the lifting of moisture, and a lost component he calls the Gibbs penalty: the energetic cost of keeping the atmosphere moist. Typically, 50% of the maximum work output is gobbled up by the Gibbs penalty, 30% is the moisture lifting term and only 20% is used to generate wind.
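Carnot's bound, and the rough partition just quoted, are simple enough to compute directly. Here is a sketch with made-up input numbers, treating the 50/30/20 split as given rather than derived; the function names and values are illustrative only.

```python
def carnot_max_work(q_in, t_in, t_out):
    """Maximum rate of work (same units as q_in) from heat absorbed at
    temperature t_in and rejected at t_out (both in kelvin)."""
    return q_in * (1.0 - t_out / t_in)

def pauluis_split(w_max, gibbs_frac=0.50, lifting_frac=0.30):
    """Partition the maximum work into the Gibbs penalty, moisture
    lifting and wind generation, using the rough 50/30/20 fractions."""
    gibbs = gibbs_frac * w_max
    lifting = lifting_frac * w_max
    wind = w_max - gibbs - lifting
    return gibbs, lifting, wind

w_max = carnot_max_work(100.0, 300.0, 250.0)  # illustrative W/m^2
print(w_max)                 # about 16.7 W/m^2 of maximum work
print(pauluis_split(w_max))  # most of it goes to the Gibbs penalty
```

Note how modest the Carnot bound already is for temperature differences typical of the atmosphere, and how little of even that bound is left over for generating wind.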

For my PhD, I have been applying Pauluis’ ideas to a modelled system consisting of two connected tropical regions (one over a cooler surface than the other), which are connected by a circulation given by the weak temperature gradient approximation. I look at how this circulation affects the components of work done by the system. Overall there is no impact – in other words the WTG does not distort the thermodynamics of the underlying system – which is reassuring for those who use it. What is perhaps more interesting however, is that even though the WTG circulation is very weak compared to the winds that we observe in the two columns, it does as much work as is done by the cooler column – in other words its thermodynamic importance is huge. This suggests that further avenues of study may help us better express what drives the climate in the tropics.

Should we be ‘Leaf’-ing out vegetation when parameterising the aerodynamic properties of urban areas?

Email: C.W.Kent@pgr.reading.ac.uk

When modelling urban areas, vegetation is often ignored in an attempt to simplify an already complex problem. However, vegetation is present in all urban environments and it is not going anywhere… For reasons ranging from sustainability to improvements in human well-being, green spaces are increasingly becoming part of urban planning agendas. Incorporating vegetation is therefore a key part of modelling urban climates. Vegetation provides numerous (dis)services in the urban environment, each of which requires individual attention (Salmond et al. 2016). One of my research interests is how vegetation influences the aerodynamic properties of urban areas.

Two aerodynamic parameters can be used to represent the aerodynamic properties of a surface: the zero-plane displacement (zd) and aerodynamic roughness length (z0). The zero-plane displacement is the vertical displacement of the wind-speed profile due to the presence of surface roughness elements. The aerodynamic roughness length is a length scale which describes the magnitude of surface roughness. Together they help define the shape and form of the wind-speed profile which is expected above a surface (Fig. 1).
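The logarithmic profile sketched in Fig. 1 can be written down directly. A minimal sketch, assuming neutral stability and taking the friction velocity u* as a given input (the example surface values are invented for illustration):

```python
import math

VON_KARMAN = 0.40  # von Karman constant

def log_wind(z, u_star, z_d, z_0):
    """Neutral logarithmic wind speed (m/s) at height z (m) above a surface
    with zero-plane displacement z_d (m) and roughness length z_0 (m)."""
    if z <= z_d + z_0:
        raise ValueError("z must lie above z_d + z_0")
    return (u_star / VON_KARMAN) * math.log((z - z_d) / z_0)

# Example: u* = 0.3 m/s over a city-like surface with z_d = 14 m, z_0 = 2 m
print(log_wind(50.0, 0.3, 14.0, 2.0))  # wind speed at 50 m
```

Increasing z_0, or raising z_d, lowers the wind speed at a fixed height, which is exactly the lever through which buildings and vegetation enter wind-speed estimates.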

Figure 1: Representation of the wind-speed profile above a group of roughness elements. The black dots represent an idealised logarithmic wind-speed profile which is determined using the zero-plane displacement (zd) and aerodynamic roughness length (z0) (lines) of the surface.

For an urban site, zd and z0 may be determined using three categories of methods: reference-based, morphometric and anemometric. Reference-based methods compare the site to previously published pictures or look-up tables (e.g. Grimmond and Oke 1999); morphometric methods describe zd and z0 as a function of roughness-element geometry; and anemometric methods use in-situ observations. The aerodynamic parameters of a site may vary considerably depending upon which of these methods is used, but efforts are being made to understand which parameters are most appropriate for accurate wind-speed estimations (Kent et al. 2017a).
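As a concrete illustration of the morphometric category, the very simplest approach scales both parameters with the mean roughness-element height; fractions of roughly 0.7 and 0.1 are commonly quoted first guesses (discussed in Grimmond and Oke 1999). A minimal sketch, with the caveat that the fixed fractions are typical defaults, not universal constants:

```python
def rule_of_thumb(mean_height_m, f_d=0.7, f_0=0.1):
    """First-guess morphometric estimate of the zero-plane displacement
    z_d and roughness length z_0 as fixed fractions of mean height."""
    return f_d * mean_height_m, f_0 * mean_height_m

z_d, z_0 = rule_of_thumb(20.0)  # e.g. 20 m mean building height
print(z_d, z_0)
```

More sophisticated morphometric methods replace these fixed fractions with functions of the plan and frontal areas of the roughness elements, which is where a porosity correction for vegetation can enter.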

Within the morphometric category (i.e. using roughness-element geometry) sophisticated methods have been developed for buildings or vegetation only. However, until recently no method existed to describe the effects of both buildings and vegetation in combination. A recent development overcomes this, whereby the heights of all roughness elements are considered alongside a porosity correction for vegetation (Kent et al. 2017b). Specifically, the porosity correction is applied to the space occupied and drag exerted by vegetation.

The development is assessed across several areas typical of a European city, ranging from a densely-built city centre to an urban park. The results demonstrate that where buildings are the dominant roughness elements (i.e. taller and occupying more space), vegetation does not obviously influence the calculated geometry of the surface, the aerodynamic parameters or the estimated wind speed. However, as vegetation begins to occupy a greater amount of space and becomes as tall as (or taller than) the buildings, its influence is obvious. As expected, the implications are greatest in an urban park, where overlooking vegetation means that wind speeds may be slowed by up to a factor of three.

Up to now, experiments such as those in wind tunnels have focused upon buildings or trees in isolation. Future experiments which consider both buildings and vegetation together will certainly be valuable for continuing to understand the interaction within and between these roughness elements, in addition to assessing the parametrisation.

References

Grimmond CSB, Oke TR (1999) Aerodynamic properties of urban areas derived from analysis of surface form. J Appl Meteorol and Clim 38:1262-1292.

Kent CW, Grimmond CSB, Barlow J, Gatey D, Kotthaus S, Lindberg F, Halios CH (2017a) Evaluation of Urban Local-Scale Aerodynamic Parameters: Implications for the Vertical Profile of Wind Speed and for Source Areas. Boundary-Layer Meteorology 164: 183-213.

Kent CW, Grimmond CSB, Gatey D (2017b) Aerodynamic roughness parameters in cities: Inclusion of vegetation. Journal of Wind Engineering and Industrial Aerodynamics 169: 168-176.

Salmond JA, Tadaki M, Vardoulakis S, Arbuthnott K, Coutts A, Demuzere M, Dirks KN, Heaviside C, Lim S, Macintyre H (2016) Health and climate related ecosystem services provided by street trees in the urban environment. Environ Health 15:95.

Future of Cumulus Parametrization conference, Delft, July 10-14, 2017

Email: m.muetzelfeldt@pgr.reading.ac.uk

For a small city, Delft punches above its weight. It is famous for many things, including its celebrated Delftware (Figure 1). It was also the birthplace of one of the Dutch masters, Johannes Vermeer, who coincidentally painted some fine cityscapes with cumulus clouds in them (Figure 2). There is a university of technology with some impressive architecture (Figure 3). It holds the dubious honour of being the location of the first assassination using a pistol (or so we were told by our tour guide), when William of Orange was shot in 1584. To this list, it can now add hosting a one-week conference on the future of cumulus parametrization, and hopefully bringing about more of these conferences in the future.

Figure 1: Delftware.

Figure 2: Delft with canopy of cumulus clouds. By Johannes Vermeer, 1661.

Figure 3: AULA conference centre at Delft University of Technology – where we were based for the duration of the conference.

So what is a cumulus parametrization scheme? The key idea is as follows. Numerical weather and climate models work by splitting the atmosphere into a grid, with a corresponding grid length representing the size of each grid cell. By solving equations that govern how the wind, pressure and heating interact, models can be used to predict the weather days in advance, or to predict how the climate will react to any forcings over longer timescales. However, any phenomena that are substantially smaller than this grid scale will not be “seen” by the models. For example, a large cumulonimbus cloud may have a horizontal extent of around 2 km, whereas individual grid cells could be 50 km across in the case of a climate model. A cumulonimbus cloud will therefore not be explicitly modelled, but it will still have an effect on the grid cell in which it is located – in terms of how much heating and moistening it produces at different levels. To capture this effect, the clouds are parametrized: the vertical profiles of the heating and moistening due to the clouds are calculated based on the conditions in the grid cell, and these then affect the grid-scale values of those variables. A similar idea applies for shallow cumulus clouds, such as the cumulus humilis in Vermeer’s painting (Figure 2), or those over present-day Delft (Figure 3).
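To make the structure of such a scheme concrete, here is a deliberately crude single-column "parametrization" of my own invention (not any operational scheme): wherever the grid-scale lapse rate is convectively unstable, heat is shuffled upwards to nudge the profile back towards a critical lapse rate, mimicking the heating tendencies a real scheme would supply.

```python
def toy_convective_adjustment(temps, dz=500.0, gamma_crit=6.5e-3,
                              tau=3600.0, dt=600.0):
    """One timestep of a toy convective-adjustment scheme.

    temps: layer temperatures (K), index 0 = lowest layer
    dz: layer thickness (m); gamma_crit: critical lapse rate (K/m)
    tau: adjustment timescale (s); dt: model timestep (s)
    """
    adjusted = list(temps)
    for k in range(len(adjusted) - 1):
        lapse = (adjusted[k] - adjusted[k + 1]) / dz
        if lapse > gamma_crit:            # unstable: "convection" fires
            excess = (lapse - gamma_crit) * dz
            transfer = 0.5 * excess * dt / tau
            adjusted[k] -= transfer       # cool the lower layer...
            adjusted[k + 1] += transfer   # ...and warm the one above
    return adjusted

column = [302.0, 290.0, 285.0]  # very unstable near the surface
print(toy_convective_adjustment(column))  # total energy is conserved
```

Real schemes are of course far richer, handling moisture, entrainment, downdraughts and closure assumptions, but they occupy the same structural slot: take the grid-cell state, return sub-grid tendencies.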

These cumulus parametrization schemes are a large source of uncertainty in current weather and climate models. The conference was aimed at bringing together the community of modellers working on these schemes, and working out which might be the best directions to go in to improve these schemes, and consequently weather and climate models.

Each day was a mixture of listening to presentations, looking at posters and breakout discussion groups in the afternoon, as well as plenty of time for coffee and meeting new people. The presentations covered a lot of ground: from presenting work on state-of-the-art parametrization schemes, to looking at how the schemes perform in operational models, to focusing on one small aspect of a scheme and modelling how that behaves in a high resolution model (50m resolution) that can explicitly model individual clouds. The posters were a great chance to see the in-depth work that had been done, and to talk to and exchange ideas with other scientists.

Certain ideas for improving the parametrization schemes resurfaced repeatedly. The need for scale-awareness, where the response of the parametrization scheme takes into account the model resolution, was discussed. One idea for doing this was the use of stochastic schemes to represent the uncertainty of the number of clouds in a given grid cell. The concept of memory also cropped up – where the scheme remembers if it had been active at a given grid cell in a previous point in time. This also ties into the idea of transitions between cloud regimes, e.g. when a stratocumulus layer splits up into individual cumulus clouds. Many other, sometimes esoteric, concepts were discussed, such as the role of cold pools, how much tuning of climate models is desirable and acceptable, how we should test our schemes, and what the process of developing the schemes should look like.

In the breakout groups, everyone was encouraged to contribute, which made for an inclusive atmosphere in which all points of view were taken on board. Some of the key points of agreement from these were that it was a good idea to have these conferences, and we should do it more often! Hopefully, in two years’ time, another PhD student will write a post on how the next meeting has gone. We also agreed that it would be beneficial to be able to share data from our different high resolution runs, as well as to be able to compare code for the different schemes.

The conference provided a picture of what the current thinking on cumulus parametrization is, as well as which directions people think are promising for the future. It also provided a means for the community to come together and discuss ideas for how to improve these schemes, and how to collaborate more closely with future projects such as ParaCon and HD(CP)2.

Peer review: what lies behind the curtains?

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

For young researchers, one of the most daunting prospects is the publication of their first paper. A piece of work that somebody has spent months or even years preparing must be submitted for the process of peer review. Unseen gatekeepers cast their judgement, and work is returned accepted, rejected or with revisions required. I attended the Sense about Science workshop entitled ‘Peer review: the nuts and bolts’, targeted at early career researchers (ECRs), with the intention of looking behind these closed doors. How are reviewers selected? Who can become a reviewer? Who makes the final decisions? This workshop provided an opportunity to interact directly with both journal editors and academics involved in the peer review process to obtain answers to such questions.

This workshop was primarily structured around a panel discussion consisting of Dr Amarachukwu Anyogu, a lecturer in microbiology at the University of Westminster; Dr Bahar Mehmani, a reviewer experience lead at Elsevier; Dr Sabina Alam, an editorial director at F1000Research; and Emily Jesper-Mir, the head of partnerships and governance at Sense about Science. In addition, there were also small group discussions amongst fellow attendees regarding advantages and disadvantages of peer review, potential alternatives, and the importance of science communication.

The panel of (L-R) Dr Sabina Alam, Dr Amarachukwu Anyogu, Dr Bahar Mehmani and Emily Jesper-Mir provided a unique insight into the peer review process from the perspective of both editor and reviewer. Photograph credited to Sense about Science.

Recent headlines have highlighted fraud cases where impersonation and deceit have been used to manipulate the peer review process. Furthermore, fears regarding bias and sexism remain high amongst the academic community. It was hence encouraging to see such strong awareness from both participants and panellists regarding the flaws of the peer review. Post-publication review, open (named) reviews, and the submission of methods prior to the experiment are all ways either in use currently or proposed to increase the accountability and transparency of peer review. Each method brings its own problems however; for example, naming reviewers risks the potential for less critical responses, particularly from younger researchers not wanting to alienate more experienced academics with influence over their future career progression.

One key focus of the workshop was to encourage ECRs to become involved in the peer review process. At first this seems counterintuitive; surely the experience of academics further into their careers is crucial for providing high-quality reviews? However, ECRs do have the necessary knowledge: we work day to day with the same techniques and the same analysis as the papers we would then review. In addition, a larger body of reviewers reduces the individual workload and improves the efficiency of the process, particularly as ECRs do not necessarily face the same time pressures. Increased participation brings diversity of opinion and ensures particular individuals do not become too influential in deciding which ideas are considered relevant or acceptable. There are also personal benefits to becoming a reviewer, including an improved ability to critically assess research. Dr Anyogu, for example, found that reviewing the work of others helped her gain a better perspective on criticism received on her own work.

Participants were encouraged to discuss the advantages and disadvantages of peer review and potential changes that could be made to address current weaknesses in the process. Photograph credited to Sense about Science.

One key message that I took away from the workshop is that peer review isn’t mechanical. Humans are at the heart of decisions. Dr Alam was particularly keen to stress that editors will listen to grievances and reconsider decisions if strong arguments are put forward. However, it also then follows that peer review is only as effective as those who participate in the process.  If the quality of reviewers is poor, then the quality of the review process will be poor. Hence it can be argued that we as members of the academic community have an obligation to maintain high standards, not least so that the public can be reassured the information we provide has been through a thorough quality control process. In a time when phrases such as ‘fake news’ are proliferating, it is crucial more than ever to maintain public trust in the scientific process.

I would like to thank all the panellists for giving up their time to contribute to this workshop; the organisations* who provided sponsorship and sent representatives; Informa for hosting the event; and Sense about Science for organising this unique opportunity to learn more about peer review.

*Cambridge University Press, Peer Review Evaluation, Hindawi, F1000Research, Medical Research Council, Portland Press, Sage Publishing, Publons, Elsevier, Publons Academy, Taylor and Francis Group, Wiley.