Exploring the impact of variable floe size on the Arctic sea ice

Email: a.w.bateson@pgr.reading.ac.uk

The Arctic sea ice cover is made up of discrete units of sea ice area called floes. The size of these floes has an impact on several sea ice processes including the volume of melt produced at floe edges, the momentum exchange between the sea ice, ocean, and atmosphere, and the mechanical response of the sea ice to stress. Models of the sea ice have traditionally assumed that floes adopt a uniform size, if floe size is explicitly represented at all in the model. Observations of floes show that floe size can span a huge range, from scales of metres to tens of kilometres. Generally, observations of the floe size distribution (FSD) are fitted to a power law or a combination of power laws (Stern et al., 2018a).
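As an aside on what such a power-law fit involves: given a set of observed floe sizes, a first estimate of the exponent can be obtained from a straight-line fit to the binned number density in log-log space. The sketch below is only an illustration of this idea; the function name and binning choices are mine, and published FSD studies (e.g. Stern et al., 2018a) use more careful fitting procedures.

```python
import numpy as np

def fit_power_law_exponent(floe_sizes, n_bins=20):
    """Rough power-law exponent estimate from observed floe sizes (illustrative only)."""
    sizes = np.asarray(floe_sizes, dtype=float)
    # Logarithmically spaced bins spanning the observed floe sizes
    bins = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), n_bins + 1)
    counts, edges = np.histogram(sizes, bins=bins)
    widths = np.diff(edges)
    centres = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centres
    density = counts / widths                  # number of floes per metre of floe size
    valid = density > 0
    # The slope of log(density) against log(size) approximates the exponent
    slope, _ = np.polyfit(np.log(centres[valid]), np.log(density[valid]), 1)
    return slope
```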

The Los Alamos sea ice model, hereafter referred to as CICE, usually assumes a fixed floe size of 300 m. We can impose within CICE a simple FSD model derived from a power law to explore the impact of variable floe size on the sea ice cover. Figure 1 is a diagram of the WIPoFSD model (Waves-in-Ice module and Power law Floe Size Distribution model), which assumes a power law with a fixed exponent, \alpha, between a lower floe size cut-off, d_{min}, and an upper floe size cut-off, d_{max}. The model also incorporates a floe size variable, l_{var}, to capture the effects of processes that influence floe size. The processes represented are wave break-up of floes, melting at the floe edge, winter floe growth, and advection. The model includes a wave advection and attenuation scheme so that wave properties can be determined within the sea ice field, enabling the identification of wave break-up events. Full details of the WIPoFSD model and its implementation in CICE are available in Bateson et al. (2020). For the WIPoFSD model setup considered here, we explore the impact of the FSD on the lateral melt rate, i.e. the melt rate at the edge surfaces of floes. It is useful to define a new FSD metric to characterise the impact of the FSD on lateral melt. To do this we note that the lateral melt volume produced by a floe is proportional to the perimeter of the floe. The effective floe size, l_{eff}, is then defined as the fixed floe size that would produce the same lateral melt rate as a given FSD, for a fixed total sea ice area.
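Since lateral melt scales with floe perimeter, one way to think of l_{eff} is as the ratio of the total floe area to the total floe perimeter implied by the distribution (up to a geometric constant). The short Python sketch below computes l_{eff} for a truncated power-law FSD under the simple assumptions that floe area scales as the square of floe size and floe perimeter scales linearly with it; the function name and exact normalisation are illustrative, not necessarily the precise expression used in Bateson et al. (2020).

```python
import numpy as np
from scipy.integrate import quad

def effective_floe_size(alpha, d_min, d_max):
    """Effective floe size for a truncated power-law FSD.

    Assumes a number distribution n(l) ~ l**alpha between d_min and d_max,
    with floe area ~ l**2 and floe perimeter ~ l, so that l_eff is the ratio
    of the area-weighted to the perimeter-weighted moments of n(l).
    """
    area_moment, _ = quad(lambda l: l ** (alpha + 2), d_min, d_max)
    perimeter_moment, _ = quad(lambda l: l ** (alpha + 1), d_min, d_max)
    return area_moment / perimeter_moment

# Standard WIPoFSD parameters used below for stan-fsd
print(effective_floe_size(-2.5, 10.0, 30000.0))  # roughly 550 m under these assumptions
```

Under these assumptions the standard parameters give an l_{eff} of a few hundred metres, i.e. the same order as the 300 m fixed floe size used in standard CICE.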

Figure 1: A schematic of the imposed FSD model. The model is initialised by prescribing a power law with an exponent, \alpha, between the limits d_{min} and d_{max}. Within individual grid cells the variable FSD tracer, l_{var}, varies between these two limits. l_{var} evolves through lateral melting, wave break-up events, freezing, and advection.

Here we compare a CICE simulation incorporating the WIPoFSD model, hereafter referred to as stan-fsd, to a reference case, ref, which uses the CICE standard fixed floe size of 300 m. For the WIPoFSD model, d_{min} = 10 m, d_{max} = 30 km, and \alpha = -2.5; these values have been selected as representative of observations. The setup is initialised in 1990 and spun up until 2005, at which point the simulation either continues as ref or has the WIPoFSD model imposed to give stan-fsd, with both then evaluated over 2006–2016. All figures in this post are given as a mean over 2007–2016, such that 2005–2006 serves as a spin-up period for the incorporated WIPoFSD model.

In Figure 2, we show the percentage reduction in Arctic sea ice extent and volume for stan-fsd relative to ref. The differences in both extent and volume at the pan-Arctic scale evolve over an annual cycle, with maximum differences of -1.0 % in August and -1.1 % in September respectively. This annual cycle corresponds to the periods of melting and freeze-up and is a product of the nature of the imposed FSD: lateral melt rates are a function of floe size, but freeze-up rates are not, hence model differences grow during periods of melting but not during freeze-up. The difference in sea ice extent reduces rapidly during freeze-up because freeze-up is predominantly driven by ocean surface properties, which are strongly coupled to atmospheric conditions in areas of low sea ice extent. In comparison, whilst atmospheric conditions initiate vertical sea ice growth, this atmosphere-ocean coupling is rapidly lost once sea ice extends across the horizontal plane and insulates the warmer ocean from the cooler atmosphere. Hence a residual difference in sea ice thickness, and therefore volume, persists throughout the winter season. The interannual variability shows that the impact of the WIPoFSD model with standard parameters varies significantly from year to year.

Figure 2: Difference in sea ice extent (solid, red ribbon) and volume (dashed, blue ribbon) for stan-fsd relative to ref averaged over 2007–2016. The ribbon shows the region spanned by the mean value plus or minus two standard deviations for each simulation, giving a measure of the interannual variability over the 10-year period.

Although the pan-Arctic differences in extent and volume shown in Figure 2 are marginal, the differences are larger at smaller spatial scales. Figure 3 shows the spatial distribution of the changes in sea ice concentration and thickness in March, June, and September for stan-fsd relative to ref, alongside the spatial distribution of l_{eff} for stan-fsd in the same months. Reductions in sea ice concentration and thickness of up to 0.1 and 50 cm respectively are observed in the September marginal ice zone (MIZ). Within the pack ice, increases in sea ice concentration of up to 0.05 and in thickness of up to 10 cm can be seen. To understand this non-uniform spatial impact of the FSD, it is useful to look at the behaviour of l_{eff}. Regions with an l_{eff} greater than 300 m will experience less lateral melt than the equivalent location in ref (all other things being equal), whereas locations with an l_{eff} below 300 m will experience more lateral melt. In Figure 3 we see that l_{eff} only falls below 300 m within the MIZ; hence most of the sea ice cover experiences less lateral melting for stan-fsd compared to ref.

Figure 3: Difference in the sea ice concentration (top row, a-c) and thickness (middle row, d-f) between stan-fsd and ref and l_{eff} (bottom row, g-i) for stan-fsd averaged over 2007 – 2016. Results are presented for March (left column, a, d, g), June (middle column, b, e, h) and September (right column, c, f, i). Values are shown only in locations where the sea ice concentration exceeds 5 %.

For Figures 2 and 3, the parameters used to define the FSD have been set to fixed, standard values. However, these parameters vary significantly between different observed FSDs, so it is useful to explore the model sensitivity to them. For \alpha, values of -2, -2.5, -3 and -3.5 have been selected to span the general range reported in observations (Stern et al., 2018a). For d_{min}, values of 1 m, 20 m and 50 m are selected to reflect the different behaviours reported in studies, with some showing power-law behaviour extending down to 1 m (Toyota et al., 2006) and others showing a tailing off at the order of tens of metres (Stern et al., 2018b). For the upper cut-off, d_{max}, values of 1000 m, 10,000 m, 30,000 m and 50,000 m are selected, again to represent the distributions reported in different studies; 50 km is taken as the largest value for d_{max} as it is an upper limit to what can be resolved within an individual grid cell on a CICE 1° grid. A total of 19 sensitivity studies have been completed using different permutations of these parameter values. Figure 4 shows the change in mean September sea ice extent and volume relative to ref plotted against mean annual l_{eff}, averaged over the sea ice extent, for each of these sensitivity studies. The impacts range from a small increase in extent and volume to large reductions of -22 % and -55 % respectively, even within the parameter space defined by observations. Furthermore, there is almost a one-to-one mapping between mean l_{eff} and the reductions in extent and volume, suggesting that l_{eff} is a useful diagnostic for predicting the impact of a given set of floe size parameters. The system responds most strongly to changes in \alpha, but it is also particularly sensitive to d_{min}.
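To illustrate how strongly the implied lateral melt depends on these parameters, the loop below reuses the illustrative effective_floe_size sketch from earlier to evaluate l_{eff} over the same parameter ranges; the numbers it prints are only indicative of the sensitivity, not the gridded mean l_{eff} diagnosed from the simulations.

```python
# Sweep the illustrative effective_floe_size() sketch over the parameter
# ranges quoted above; larger (smaller) l_eff implies less (more) lateral melt.
for alpha in (-2.0, -2.5, -3.0, -3.5):
    for d_min in (1.0, 20.0, 50.0):
        for d_max in (1000.0, 10000.0, 30000.0, 50000.0):
            l_eff = effective_floe_size(alpha, d_min, d_max)
            print(f"alpha={alpha:5.1f}  d_min={d_min:4.0f} m  "
                  f"d_max={d_max:6.0f} m  ->  l_eff ~ {l_eff:9.1f} m")
```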

Figure 4: Relative change (%) in mean September sea ice volume over 2007–2016, plotted against mean l_{eff}, for simulations with different selections of parameters relative to ref. The mean l_{eff} is taken as the equally weighted average across all grid cells where the sea ice concentration exceeds 15 %. The colour of the marker indicates the value of \alpha, the shape indicates the value of d_{min}, and the three experiments using standard parameters but different d_{max} (1000 m, 10,000 m and 50,000 m) are indicated by a crossed red square. The parameters are selected to be representative of a parameter space for the WIPoFSD model that has been constrained by observations.

There are several advantages to the assumption of a fixed power law in modelling the sea ice floe size distribution. It provides a simple framework to explore the potential impact of an observed FSD on the sea ice mass balance, given that observations of the FSD are generally fitted to a power law. In addition, the use of a simple model makes it easier to constrain the mechanisms by which the model changes the sea ice cover. However, there are also significant disadvantages, including the high model sensitivity to poorly constrained parameters shown in Figure 4. In addition, there is evidence both that the exponent evolves over an annual cycle rather than taking a fixed value (Stern et al., 2018b) and that a power law is not a statistically valid description of the FSD over all floe sizes (Horvat et al., 2019). An alternative approach to modelling the FSD is the prognostic model of Roach et al. (2018, 2019). The prognostic model avoids any assumptions about the shape of the distribution and instead assigns sea ice area to a set of adjacent floe size categories, with individual processes parameterised at the floe scale. This approach carries its own set of challenges: if important physical processes are missing from the model, it will not be possible to simulate a physically realistic distribution, and the prognostic approach has a significant computational cost. In practice, the choice of FSD modelling approach will depend on the application.
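For concreteness, a minimal illustration of the category-based structure underlying a prognostic FSD is sketched below. The category bounds, area fractions and the crude break-up transfer are invented for illustration and are not the parameterisations of Roach et al. (2018, 2019); the point is simply that sea ice area is carried in adjacent floe size categories and processes act as transfers of area between them.

```python
import numpy as np

# Illustrative floe size category bounds (metres) and the fraction of the
# grid-cell ice area held in each of the four categories; values are invented.
category_bounds = np.array([0.1, 10.0, 100.0, 1000.0, 10000.0])
area_fraction = np.array([0.05, 0.25, 0.45, 0.25])

# Processes (lateral melt, welding, wave fracture, ...) are represented as
# transfers of area between categories. E.g. a crude wave break-up event
# moving 10 % of the area in the largest category into a smaller one:
transfer = 0.1 * area_fraction[-1]
area_fraction[-1] -= transfer
area_fraction[1] += transfer

assert np.isclose(area_fraction.sum(), 1.0)  # total ice area is conserved
print(area_fraction)
```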

Further reading
Bateson, A. W., Feltham, D. L., Schröder, D., Hosekova, L., Ridley, J. K. and Aksenov, Y.: Impact of sea ice floe size distribution on seasonal fragmentation and melt of Arctic sea ice, Cryosphere, 14, 403–428, https://doi.org/10.5194/tc-14-403-2020, 2020.

Horvat, C., Roach, L. A., Tilling, R., Bitz, C. M., Fox-Kemper, B., Guider, C., Hill, K., Ridout, A., and Shepherd, A.: Estimating the sea ice floe size distribution using satellite altimetry: theory, climatology, and model comparison, The Cryosphere, 13, 2869–2885, https://doi.org/10.5194/tc-13-2869-2019, 2019. 

Stern, H. L., Schweiger, A. J., Zhang, J., and Steele, M.: On reconciling disparate studies of the sea-ice floe size distribution, Elem. Sci. Anth., 6, p. 49, https://doi.org/10.1525/elementa.304, 2018a. 

Stern, H. L., Schweiger, A. J., Stark, M., Zhang, J., Steele, M., and Hwang, B.: Seasonal evolution of the sea-ice floe size distribution in the Beaufort and Chukchi seas, Elem. Sci. Anth., 6, p. 48, https://doi.org/10.1525/elementa.305, 2018b. 

Roach, L. A., Horvat, C., Dean, S. M., and Bitz, C. M.: An Emergent Sea Ice Floe Size Distribution in a Global Coupled Ocean-Sea Ice Model, J. Geophys. Res.-Oceans, 123, 4322–4337, https://doi.org/10.1029/2017JC013692, 2018. 

Roach, L. A., Bitz, C. M., Horvat, C. and Dean, S. M.: Advances in Modeling Interactions Between Sea Ice and Ocean Surface Waves, J. Adv. Model. Earth Syst., 11, 4167–4181, https://doi.org/10.1029/2019MS001836, 2019.

Toyota, T., Takatsuji, S., and Nakayama, M.: Characteristics of sea ice floe size distribution in the seasonal ice zone, Geophys. Res. Lett., 33, 2–5, https://doi.org/10.1029/2005GL024556, 2006. 

Evidence week, or why I chatted to politicians about evidence.

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

On a sunny Tuesday morning at 8.30 am I found myself passing through security to enter the Palace of Westminster. The home of the MPs and peers is not obvious territory for a PhD student. However, I was there as a Voice of Young Science (VoYS) volunteer for the Sense about Science Evidence Week. Sense about Science is an independent charity that aims to scrutinise the use of evidence in the public domain and to challenge misleading or misrepresented science. I have written previously here about attending one of their workshops on peer review, and also here about contributing to a campaign assessing the transparency of evidence used in government policy documents.

The purpose of Evidence Week was to bring together MPs, peers, parliamentary services and people from all walks of life to generate a conversation about why evidence matters in policy-making. The week was held in collaboration with the House of Commons Library, the Parliamentary Office of Science and Technology and the House of Commons Science and Technology Committee, in partnership with SAGE Publishing. Individual events and briefings were also contributed to by further organisations including the Royal Statistical Society, the Alliance for Useful Evidence and UCL. Each day had a different theme, including ‘questioning quality’ and ‘wicked problems’, i.e. superficially simple problems which turn out to be complex and multifaceted.

Throughout the week MPs, parliamentary staff and the public were welcomed to a stand in the Upper Waiting Hall to have conversations about why evidence is important to them. Photo credit to Sense about Science.

Throughout the parliamentary week, which lasts from Monday to Thursday, Sense about Science had a stand in the Upper Waiting Hall of Parliament. This location is right outside the committee rooms where members of the public give evidence to one of the many select committees. These are collections of MPs from multiple parties whose role is to oversee the work of government departments and agencies, though their role in gathering evidence and scrutiny can sometimes have significance beyond UK policy-making (for example, this story documenting one committee’s role in investigating the relationship between Facebook, Cambridge Analytica and the propagation of ‘fake news’). The aim of the stand was to catch the attention of the public, parliamentary staff and MPs, and to engage them in conversations about the importance of evidence. Alongside the stand, a series of events and briefings were held within Parliament on the topic of evidence, with titles including ‘making informed decisions about health care’ and ‘it ain’t necessarily so… simple stories can go wrong’.

Each day brought a new set of VoYS volunteers to the campaign, both to staff the stand and to document and help out with the various events during the week. Hence I found myself abandoning my own research for a day to contribute to Day 2 of the campaign, which focused on navigating data and statistics. I had a busy day: beyond chatting to people at the stand, I took over the VoYS Twitter account to document some of the day’s key events, attended a briefing about the 2021 census, and provided a video round-up of the day (which can be viewed here!). For the conversations we had at the stand, we were asked to focus particularly on questions in line with the theme of the day, including ‘if a statistic is the answer, what was the question?’ and ‘where does this data come from?’

MP for Bath, Wera Hobhouse, had a particular interest in the pollution data for her constituency and the evidence for the most effective methods to improve air quality.  Photo credit to Sense about Science.

Trying to engage people at the stand proved challenging; its location meant that people passing by were often in a rush to committee meetings. Occasionally the division bells, announcing a parliamentary vote, would ring and a rush of MPs would flock past: great for trying to spot the more well-known MPs, but less good for convincing them to stop and talk about data and statistics. In practice this meant that I and the other VoYS members had to adopt a very assertive approach to talking to people, a style that is generally not within the comfort zone of most scientists! However, this did lead to some very interesting conversations, including one with a paediatric surgeon who was advocating to the health select committee for increased investment in research to treat tumours in children. He posed a very interesting question: given a finite amount of funding for tumour research, how much should be specifically directed towards improving the survival outcomes of younger patients and how much towards older patients? We also asked MPs and members of the public to add any evidence questions they had to the stand. A member of the public wondered, ‘are there incentives to show what doesn’t work?’, and Layla Moran, MP for Oxford West and Abingdon, asked ‘how can politicians better understand uncertainty in data?’

Visitors to the stand, including MPs and Peers, were asked to add any burning questions they had about evidence to the stand. Photo credit to Sense about Science.

The week proved to be a success. Over 60 MPs from across the parliamentary parties, including government ministers, interacted with some aspect of Evidence Week, accounting for around 10% of all MPs. A wider audience of parliamentary staff and members of the public also engaged with the stand. Sense about Science highlighted two outcomes after the event: the first was the opening event, where members of various community groups met over 40 MPs and peers and had the opportunity to explain why evidence was important to them, whether their interest was in beekeeping, safe standing at football matches or IVF treatment; the second was the concluding round-table event on what people require from evidence gathering. SAGE will publish an overview of this round-table as a discussion paper in the autumn.

On a personal level, I had a very valuable experience. Firstly, it was a great opportunity to visit somewhere as imposing and important as the Houses of Parliament and to contribute to such an exciting and innovative week. I was able to have some very interesting conversations with both MPs and members of the public, and I found that, in general, everybody was enthusiastic about the need for increased use and transparency of evidence in policy-making. The challenge, instead, is to ensure that both policy-makers and the general public have the tools they need to collect, assess and apply evidence.

Can scientists improve evidence transparency in policy making?

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

Politics. Science. They are two worlds apart. One is about trying to understand and reveal the true nature of the universe using empirical evidence. The other is more invested in constructing its own reality, cherry-picking evidence which conforms to the desired perception of the universe. OK, so this is a gross simplification. Politicians have by no means an easy task: they are expected to make huge decisions on limited evidence and understanding. Meanwhile, whilst we all like the romantic idea that the science we do is empirical and unbiased, there are frequent examples (such as the perils of the impact factor or sexism in peer review) to counter this. We do understand, however, that evidence lies at the core of what we do. A good research paper will highlight what evidence has led to a conclusion or outcome, how that evidence was collected, and any uncertainties or limitations of the evidence. This is essential for transparency and reproducibility. What if we could introduce the same tools to politics?

 

For effective public scrutiny of policies, transparency in how evidence is used is essential. Photo credit: Jamie Street, Unsplash.

In October 2017 I spent multiple hours reviewing government policy documents to assess just how well they were using evidence. I was contributing to the Sense about Science publication transparency of evidence: spot check. This document is the product of a 2015 collaboration between Sense about Science, the Institute for Government and the Alliance for Useful Evidence, in which the evidence transparency framework was proposed. This framework aims to encourage government to be transparent in its use of evidence. In November 2016, Sense about Science published the original transparency of evidence report, a trial application of this framework to a random selection of ‘publicly-available policy documents’. After feedback from the departments and participants involved, the framework was refined to produce the spot check.

The review involved a team of young scientists, including me, each assessing how a subset of around 10 of these policies used evidence. At this stage the quality of the evidence, or whether the policy had merit based on the presented evidence, was not considered; the priority was to assess the transparency of how evidence was being used to shape policy. We scored each policy in four key areas (with a score out of 3 given for each area):

  • Diagnosis: The policymakers should outline all they know about a particular issue, including its causes, impacts and scale, with supporting evidence. Any uncertainties or weaknesses in the evidence base should be highlighted.
  • Proposal: The policy should outline the chosen intervention, with a clear statement of why this approach has been selected as well as any negatives. It should also be made clear why other approaches have not been used and, if the chosen intervention has not been fully decided on, how the Government intends to make that decision. Once again the strengths and weaknesses of the evidence base should be acknowledged and discussed.
  • Implementation: If the method for implementing the proposal has not yet been decided, what evidence will be used to make that decision? If it has, why has this approach been selected over the alternatives, and what negatives exist? As previously, supporting evidence should be provided and assessed for its quality.
  • Testing and Evaluation: Will there be a pilot or trial of the policy, and if not, why not? How will the impacts and outcomes of the policy be assessed? The testing methods and criteria for success should be made clear, with an accompanying timetable.

For full details of this framework, refer to Appendix 1 of the transparency of evidence: spot check publication. Whilst the framework is fairly explicit, it was nevertheless challenging as a reviewer to provide a fair assessment of each policy. The policies ranged in content from cyber-security to packaging waste; some were a few pages long, others closer to 100 pages; some were still at the consultation stage and others were ready to implement. Furthermore, values and pragmatism are sometimes as important in policy-making as the available evidence. Policies based on such values can still score highly, provided it is made explicit and justified why these values have taken priority over any available contradictory evidence.

The findings discussed within the report are consistent with what I found when reviewing the policies. In particular, whilst the inclusion of supporting evidence has improved since the original assessment, an approach of “info-dumping” seems to have been adopted, whereby evidence is provided without it being made explicit why it is relevant or how it has been used. Similarly, references are often cited without it being clear why. Many policies also failed to make clear the chain of reasoning from diagnosis through to testing and evaluation. These complaints should not be unfamiliar to scientists! Finally, very few documents discussed how policies would be tested and evaluated. I hope that by this point it is clear why we as scientists can have a positive input: the same skills we use to produce high-quality research and papers can be used to produce transparent and testable policies.

We have established why a scheme to engage young researchers in assessing and improving the use of evidence in policy-making has value, but perhaps you are still wondering why we should care. Linking back to the theme of this blog, in the next few years we are going to see a raft of policies worldwide designed to combat climate change in response to the Paris Agreement. As the people providing the evidence, climate scientists will have a role in scrutinising these policies and ensuring they will achieve the predicted outcomes. For this to happen, transparency of evidence is essential. Furthermore, we all exist as citizens outside of our research, and as citizens we should want the ability to properly hold government and other policy-makers accountable.

Peer review: what lies behind the curtains?

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

For young researchers, one of the most daunting prospects is the publication of their first paper. A piece of work that somebody has spent months or even years preparing must be submitted for the process of peer review. Unseen gatekeepers cast their judgement, and the work is returned accepted, rejected or with revisions required. I attended the Sense about Science workshop entitled ‘Peer review: the nuts and bolts’, targeted at early career researchers (ECRs), with the intention of looking behind these closed doors. How are reviewers selected? Who can become a reviewer? Who makes the final decisions? The workshop provided an opportunity to interact directly with both journal editors and academics involved in the peer review process to obtain answers to such questions.

This workshop was primarily structured around a panel discussion consisting of Dr Amarachukwu Anyogu, a lecturer in microbiology at the University of Westminster; Dr Bahar Mehmani, a reviewer experience lead at Elsevier; Dr Sabina Alam, an editorial director at F1000Research; and Emily Jesper-Mir, the head of partnerships and governance at Sense about Science. In addition, there were also small group discussions amongst fellow attendees regarding advantages and disadvantages of peer review, potential alternatives, and the importance of science communication.

The panel of (L-R) Dr Sabina Alam, Dr Amarachukwu Anyogu, Dr Bahar Mehmani and Emily Jesper-Mir provided a unique insight into the peer review process from the perspective of both editor and reviewer. Photograph credited to Sense about Science.

Recent headlines have highlighted fraud cases in which impersonation and deceit have been used to manipulate the peer review process. Furthermore, fears regarding bias and sexism remain high amongst the academic community. It was hence encouraging to see such strong awareness from both participants and panellists regarding the flaws of peer review. Post-publication review, open (named) reviews, and the submission of methods prior to the experiment are all approaches, either currently in use or proposed, to increase the accountability and transparency of peer review. Each method brings its own problems, however; for example, naming reviewers risks less critical responses, particularly from younger researchers not wanting to alienate more experienced academics with influence over their future career progression.

One key focus of the workshop was to encourage ECRs to become involved in the peer review process. At first this seems counterintuitive; surely the experience of academics further into their careers is crucial for providing high-quality reviews? However, ECRs do have the necessary knowledge: we work day to day with the same techniques and the same analysis methods as the papers we would review. In addition, a larger body of reviewers reduces the individual workload and improves the efficiency of the process, particularly as ECRs do not necessarily face the same time pressures. Increased participation also ensures diversity of opinion and prevents particular individuals from becoming too influential in deciding which ideas are considered relevant or acceptable. There are also personal benefits to becoming a reviewer, including an improved ability to critically assess research. Dr Anyogu, for example, found that reviewing the work of others helped her gain a better perspective on criticism received on her own work.

Participants were encouraged to discuss the advantages and disadvantages of peer review and potential changes that could be made to address current weaknesses in the process. Photograph credited to Sense about Science.

One key message that I took away from the workshop is that peer review isn’t mechanical: humans are at the heart of decisions. Dr Alam was particularly keen to stress that editors will listen to grievances and reconsider decisions if strong arguments are put forward. However, it also follows that peer review is only as effective as those who participate in it. If the quality of reviewers is poor, then the quality of the review process will be poor. Hence we as members of the academic community have an obligation to maintain high standards, not least so that the public can be reassured that the information we provide has been through a thorough quality control process. At a time when phrases such as ‘fake news’ are proliferating, it is more crucial than ever to maintain public trust in the scientific process.

I would like to thank all the panellists for giving up their time to contribute to this workshop; the organisations* who provided sponsorship and sent representatives; Informa for hosting the event; and Sense about Science for organising this unique opportunity to learn more about peer review.

*Cambridge University Press, Peer Review Evaluation, Hindawi, F1000Research, Medical Research Council, Portland Press, Sage Publishing, Publons, Elsevier, Publons Academy, Taylor and Francis Group, Wiley.