Extra conference funding: how to apply and where to look

Shannon Jones – s.jones2@pgr.reading.ac.uk

The current PhD travel budget of £2000 doesn’t go far, especially if you have your eye on attending the AGU Fall Meeting in San Francisco. If the world ever goes back to normal (and fingers crossed it will – though hopefully with greener travel options, and remote participation in shorter conferences?) you might wonder how you are ever going to afford the conferences your supervisors suggest. Luckily, there are many ways you can supplement your budget. Receiving travel grants not only means more conferences (and more travel!), but it also looks great on your CV. In this blog post I share what I have learnt about applying for conference grants and list the main places to apply.

Sources of funding include…

Graduate School Travel Support Scheme

  • Open to 2nd and 3rd year PhD students at the university (or equivalent year if part-time) 
  • 1 payment per student of up to £200 
  • Usually 3 deadlines throughout the year 

There are two schemes open to all PhD students who are members of the Institute of Physics (IOP) – any PhD student who has a degree in physics or a related subject can apply to become a member.

Research Student Conference Fund

  • Multiple payments allowed, until you have received £300 in total
  • 4 deadlines throughout the year: 1st March, 1st June, 1st September and 1st December 
  • Note: you apply for funding from an IOP group, and the conference must be relevant to the group. For example, most meteorology PhD students would apply for conference funding from the Environmental Physics group. You get to choose which groups to join when you become an IOP member. 

CR Barber Trust

  • 1 payment per student of £100-£300 for an international conference depending on the conference location 
  • Apply anytime as long as there is more than a month before the proposed conference 

Legacies Fund

Conference/Meeting Travel Subsistence

From the conference organiser

  • Finally, many conferences offer their own student support, so it’s always worth checking the conference website to see what’s on offer 
  • Both EGU and AGU offer grants to attend their meetings each year 

Application Tips

Apply early!!!

Many of these schemes take months to let you know whether you have been successful. Becoming a member can also take a while, especially when societies only approve new members at certain times of the year. So, it’s good to talk to your supervisor and make a conference plan early on in your PhD, so you know when to apply. 

Writing your application

Generally, these organisations are keen to give away their funds; you just have to write a good enough application. Keep it simple and short: remember the person reading the application is very unlikely to be an expert in your research. It can be helpful to ask someone who isn’t a scientist (or doesn’t know your work well) to read it and highlight anything that doesn’t make sense to them. 

Estimating your conference expenses

You are usually expected to provide a breakdown of the conference costs with every application. The main costs to account for are: 

  • Accommodation: for non-UK stays you must apply for a quote through the university travel agent 
  • Travel: UK train tickets over £100 and all international travel must also be booked through the university 
  • Subsistence: i.e. food! University rules used to say this could be a maximum of £30 per day – check current guidelines 
  • Conference Fees: the conference website will usually list this 

The total cost will depend on where the conference is. You are generally expected to choose cheaper options, but there is some flexibility. As a rough guide: a 4-day conference within the UK cost me around £400 (in 2019) and a 5-night stay in San Francisco to attend AGU cost me around £2200 (in 2019).  
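To make the arithmetic concrete, here is a minimal sketch in Python of the kind of breakdown you might attach to an application. Every figure below is a made-up placeholder rather than a real quote – always get the actual numbers from the university travel agent and the conference website.

```python
# Illustrative only: all figures are made-up placeholders, not real quotes.
costs = {
    "accommodation": 4 * 90.00,    # 4 nights at an assumed ~£90 per night
    "travel": 120.00,              # e.g. a return train fare
    "subsistence": 4 * 30.00,      # assuming a £30/day cap - check current rules
    "conference_fee": 150.00,      # student registration fee from the conference website
}

for item, cost in costs.items():
    print(f"{item:>15}: £{cost:7.2f}")
print(f"{'total':>15}: £{sum(costs.values()):7.2f}")
```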

Reading PhD students at Union Square, San Francisco for AGU! 

Good luck! Feel free to drop me an email at s.jones2@pgr.reading.ac.uk if you have any questions 😊 

ECMWF/EUMETSAT NWP SAF Workshop on the treatment of random and systematic errors in satellite data assimilation for NWP

Devon Francis – d.francis@pgr.reading.ac.uk

The ECMWF/EUMETSAT NWP SAF Workshop (European Centre for Medium-Range Weather Forecasts/European Organisation for the Exploitation of Meteorological Satellites Numerical Weather Prediction Satellite Application Facility Workshop) was originally to be held at the ECMWF centre in Reading but, as with everything else in 2020, was moved online. The workshop was designed to be a place to share new ideas and theories for dealing with errors in satellite data assimilation: encompassing the treatment of random errors; biases in observations; and biases in the model.

Group photo of attendees of the ECMWF/EUMETSAT NWP SAF Workshop – Virtual Event: ECMWF/EUMETSAT NWP SAF Workshop on the treatment of random and systematic errors in satellite data assimilation for NWP.

It was held over four days, consisting of oral and poster presentations and panel discussions, and concluded on the final day with the participants split into groups to discuss which methods are currently in use and what needs to be addressed in the future.

Oral Presentations

The oral presentations were split into four sessions: scene setting talks; estimating uncertainty; correction of model and observation biases; and observation errors. The talks were held over Zoom for the main presenters and shown via a live broadcast on the workshop website. This worked well as the audience could only see the individual presenter and their slides, without having the usual worry of checking that mics and videos were off for other people in the call!

Scene Setting Talks

I found the scene setting talks by Niels Bormann (ECMWF) and Dick Dee (Joint Center for Satellite Data Assimilation – JCSDA) very useful, as they gave overviews of observation errors and biases respectively: both explaining the current methods as well as the evolution of different methods over the years. Both Niels and Dick are prominent names in the data assimilation literature, so it was interesting to hear explanations of the underlying theories from the experts in the field before moving onto the more focused talks later in the day.

Correction of Model and Observation Biases

The session about the correction of model and observation biases was of particular interest to me, as it discussed many new theoretical methods for disentangling model and observation biases which are beginning to be used in operational NWP.

The first talk, by Patrick Laloyaux (ECMWF), was titled Estimation of Model Biases and Importance of Scale Separation and looked at weak-constraint 4D-Var: a variational technique that includes a model error term, such that solving the cost function involves varying three variables: the state; the observation bias correction coefficients; and the model error. When the background and model errors have different spatial scales, and when there are sufficient reference observations, it has been shown in a simplified model that weak-constraint 4D-Var can accurately correct model and initial state errors. They argue that the background error covariance matrix contains small spatial scales and the model error covariance matrix contains large spatial scales, which means that the errors can be disentangled in the system. However, without this scale difference, separating the errors would be much harder, so this method can only be considered when there are clear differences between the spatial scales.
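For readers less familiar with the jargon, the cost function being described looks schematically like the textbook weak-constraint 4D-Var formulation with a bias-correction term added. The notation below is illustrative only and is not taken from the talk itself:

```latex
J(\mathbf{x}_0, \boldsymbol{\beta}, \boldsymbol{\eta}) =
  \tfrac{1}{2}(\mathbf{x}_0 - \mathbf{x}_b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}_0 - \mathbf{x}_b)
+ \tfrac{1}{2}(\boldsymbol{\beta} - \boldsymbol{\beta}_b)^{\mathrm{T}} \mathbf{B}_{\beta}^{-1} (\boldsymbol{\beta} - \boldsymbol{\beta}_b)
+ \tfrac{1}{2}\sum_i \boldsymbol{\eta}_i^{\mathrm{T}} \mathbf{Q}^{-1} \boldsymbol{\eta}_i
+ \tfrac{1}{2}\sum_i \mathbf{d}_i^{\mathrm{T}} \mathbf{R}_i^{-1} \mathbf{d}_i,
\qquad
\mathbf{d}_i = \mathbf{y}_i - \mathcal{H}_i(\mathbf{x}_i) - \mathbf{c}_i(\boldsymbol{\beta}),
\quad
\mathbf{x}_i = \mathcal{M}_i(\mathbf{x}_{i-1}) + \boldsymbol{\eta}_i
```

The three control variables mentioned above appear as the initial state x_0, the bias coefficients β and the model error η, with B, B_β, Q and R the corresponding background, bias, model error and observation error covariance matrices.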

On the other hand, the talk by Mark Buehner (Environment and Climate Change Canada) discussed an offline technique that performs a 3D-Var analysis every six hours using only unbiased, also known as "anchor", observations to reduce the effects of model bias. These analyses can then be used as reference states in the main 4D-EnVar assimilation cycle to estimate the bias in the radiance observations. This method was much discussed over the course of the workshop, as it is yet to be used operationally, and it was very interesting to see a completely different bias correction technique used to tackle the problem of disentangling model and observation biases.

Posters

Poster presentations were shown via individual pages on the workshop website, with a comments section for small questions and virtual rooms where presenters were available for a set two hours during the week. There were 12 poster presentations, ranging from the theoretical statistics behind errors to operational techniques for tackling these errors.

My poster focused on figures 1 and 2, which show the scalar state analysis error variances when (1) we vary the accuracy of the state background error variance, for cases where the bias background error variance is (a) underestimated and (b) overestimated; and (2) we vary the accuracy of the bias background error variance, for cases where the state background error variance is (a) underestimated and (b) overestimated.

I presented a poster on the work that I had been focussing on for the past few months, titled Sensitivity of VarBC to the misspecification of background error covariances. My work focused on the effects of wrongly specifying the state and bias background error covariances on the analysis error covariances for the state and the bias. This was the first poster that I had ever presented, so it was a steep learning curve in how to present detailed work clearly and in an aesthetically pleasing way. It was a useful experience, as it gave me a hard deadline to conclude my current work, and I had to really think about my next steps as well as why my work was important. Presenting online was a very different experience to presenting in person, as it involved a lot of waiting around in a virtual room by myself, but when people did come I was able to have some useful conversations, with the added bonus of being able to share relevant papers via screen sharing.
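For context, the generic VarBC-style cost function that the poster title refers to can be sketched as below. This is only the standard textbook form, shown to indicate where the state background error covariance (B_x) and the bias background error covariance (B_β) enter; it is not reproduced from the poster:

```latex
J(\mathbf{x}, \boldsymbol{\beta}) =
  \tfrac{1}{2}(\mathbf{x} - \mathbf{x}_b)^{\mathrm{T}} \mathbf{B}_x^{-1} (\mathbf{x} - \mathbf{x}_b)
+ \tfrac{1}{2}(\boldsymbol{\beta} - \boldsymbol{\beta}_b)^{\mathrm{T}} \mathbf{B}_{\beta}^{-1} (\boldsymbol{\beta} - \boldsymbol{\beta}_b)
+ \tfrac{1}{2}\big(\mathbf{y} - \mathcal{H}(\mathbf{x}) - \mathbf{c}(\mathbf{x}, \boldsymbol{\beta})\big)^{\mathrm{T}} \mathbf{R}^{-1} \big(\mathbf{y} - \mathcal{H}(\mathbf{x}) - \mathbf{c}(\mathbf{x}, \boldsymbol{\beta})\big)
```

Misspecifying B_x or B_β changes the weight given to the background state and bias estimates relative to the observations, which in turn changes the analysis error covariances for the state and the bias.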

Working Groups

On the final day we split into four working groups to discuss two different topics: the treatment of biases and the treatment of observation errors. The goal was to discuss current methods, as well as what we thought needed to be researched in the future and the potential challenges that we would come across. This was hosted via the BlueJeans app, which provided a good space to talk and share screens, and had the useful option of choosing the ratio of participants’ videos to the presenter’s screen. Although I wasn’t able to contribute much, this was a really interesting day, as I was able to listen to the views of experts in the field and their discussions on what they believed to be the most important current issues, such as increasing discussion between the data centres receiving the data and the numerical weather prediction centres assimilating it, and disentangling biases from different sources. Unfortunately for me, some of them felt that we were focussing too much on the theoretical statistics behind NWP and not enough on the operational testing, but I guess that’s experimentalists for you!

Final Thoughts

Although I was exhausted by the end of the week, the ECMWF/EUMETSAT NWP SAF Workshop was a great experience and I would love to attend next time, regardless of whether it is virtual or in person. As much as I missed the opportunity to talk to people face to face, the organisers did a wonderful job of presenting the workshop online and there were many opportunities to talk to the presenters. There were also some benefits of the virtual workshop: people from across the globe could easily join; the presentations were recorded, so can easily be re-watched (all oral and poster presentations can be found via this link – https://events.ecmwf.int/event/170/overview); and resource sharing was easy via screen sharing. I wonder whether future workshops and conferences could be a mixture of online as well as in person, in order to get the best of both worlds? I would absolutely recommend this workshop, both for people who are just starting out in DA as well as for researchers with years of experience, as it encompassed presentations from big names who have been working in error estimation for many years as well as new presenters and new ideas from worldwide speakers.

Demonstrating as a PhD student in unprecedented times

Brian Lo – brian.lo@pgr.reading.ac.uk 

Just over a month ago in September 2020, I started my journey as a PhD student. Since then, have I spent all of my working hours solely on research – plotting radar scans of heavy rainfall events and coding up algorithms to analyse the evolution of convective cells?  Surely not! Outside my research work, I have also taken on the role of demonstrating this academic year. 

What is demonstrating? In the department, PhD students can sign up to help run tutorials, problem classes, and synoptic, instrument and computing laboratory classes. Equipped with a background in Physics and having taken modules as an MSc student in the department in the previous academic year, I signed up to run problem classes for this year’s Atmospheric Physics MSc module. 

I have observed quite a few lectures as a student: during my undergraduate education at Cambridge, my MSc programme at Reading, and a few Massive Open Online Courses (MOOCs). Each had its own unique mode of teaching. At Cambridge, equations were often presented on a physical blackboard in lectures, with problem sheet questions handed in 24 hours before each weekly one-hour "supervision" session as formative assessment. At Reading, there have been fewer students in each lecture, accompanied by problem classes that are longer and more relaxed, allowing for more informal discussion of problem sheet questions between students. These different forms of teaching were engaging to me in their own ways. I have also given a mix of good and not-so-good tutorial sessions for Year 7s to 13s. Good tutorials included interactive demonstrations, such as exploring parametric equations on an online graphing calculator, whereas the not-so-good ones had content pitched at too high a level. Based on these experiences, and having now demonstrated for 10 hours, I can hopefully share some tips on demonstrating by describing what one would call a "typical" 9am Atmospheric Physics virtual problems class. 

PhD Demonstrating 101 

You, a PhD student, have just been allocated the role of demonstrator on Campus Jobs and are excited about the £14.83 per hour pay. With the first problems class happening in just a week’s time, you start thinking about the tools you will need to give these MSc students the best learning experience. A pencil, paper, calculator and that handy Thermal Physics of the Atmosphere textbook would certainly suffice for face-to-face classes. The only difference this year: you will be running virtual classes! This means that the moist-adiabatic lapse rate equation you have quickly scribbled down on paper may not show up well on a pixelated video call due to a "poor (connection) experience" from Blackboard. How are you going to prevent this familiar situation from happening? 

Figure 1: Laptop with an iPad with a virtual whiteboard for illustrating diagrams and equations to be shown on Blackboard Collaborate. 

In my toolbox, I have an iPad and an Apple Pencil for drawing diagrams and writing equations. The iPad, with Google Jamboard running, is linked to the laptop so that its screen can be shared on Blackboard Collaborate. Here I offer my first tip: 

  1. Explore tools available to design workflows for content delivery and decide on one that works well 

Days before the problems class, you wonder whether you have done enough preparation. Have you read through and completed the problem sheet, ready to answer those burning questions from the students you will be demonstrating for? It is important you… 

Figure 2: Snippet of type-written worked solutions for the Atmospheric Physics MSc module. 

  2. Have your worked solutions to refer to during class 

A good way to ensure you are able to resolve queries about problem sheet questions is to have a version of your own working. This could be as simple as some written out points, or in my case, fully type-written solutions, just so I have details of each step on hand. In some of my fully worked solutions, I added comments for steps where I found the learning curve was quite steep and annotated places where students may run into potential problems. 

Students seem to take interest in these worked solutions, but here I must recommend… 

  3. Do not send out or show your entire worked solutions 

It is arguable whether worked solutions will help students who have attempted all problems seriously, but the bigger issue lies in those who have not even given the problems a try. As a demonstrator, I often explain the importance of struggling through the multiple steps needed to solve and understand a physics problem. My worked solutions usually present what I consider to be the quick and more refined way to the numerical solution, but usually are not the most intuitive route. On that note, how then are you supposed to help someone stuck on a problem? 

It may be tempting to show snippets of your solutions to help someone stuck on a certain part of a problem. Unfortunately, I found this did not work very well. Students can end up disregarding their own attempt and copy down what they regard as the “model answer”. (A cheeky student would have taken multiple screenshots while I scrolled through my worked solutions on the shared screen…) What I found worked better in breakout groups was for the student(s) to explain how they got stuck.  

For example, I once had a few students ask me how they should work out the boiling temperature from saturated vapour pressure using Tetens’ formula. However, my worked solutions solved this directly using the Clausius-Clapeyron equation. Instead of showing them my answer, I arrived at the point where they got stuck (red in Figure 3), essentially putting myself in their shoes. From that point, I was able to give small hints in the correct direction. Using their method, we worked together towards a solution for the problem (black in Figure 3). Here is another tip: 

  4. Work through the problem from your students’ perspective 

Figure 3: Google Jamboard slide showing how Tetens’ formula is rearranged. Red shows where some students got up to in the question, whereas black is further working to reach a solution. 
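As a concrete illustration of the rearrangement sketched in Figure 3, here is a minimal Python version. It assumes the commonly quoted form of Tetens’ formula, with saturation vapour pressure in hPa and temperature in °C; the constants are the standard textbook ones and may differ slightly from those used in the module notes.

```python
import numpy as np

def tetens_esat(temp_c):
    """Saturation vapour pressure (hPa) from temperature (deg C), Tetens' formula."""
    return 6.1078 * np.exp(17.27 * temp_c / (temp_c + 237.3))

def tetens_temperature(esat_hpa):
    """Rearranged Tetens' formula: temperature (deg C) at which the
    saturation vapour pressure equals esat_hpa."""
    x = np.log(esat_hpa / 6.1078)
    return 237.3 * x / (17.27 - x)

# Boiling occurs when the saturation vapour pressure equals the ambient pressure,
# e.g. roughly 700 hPa near the top of a 3 km mountain (note Tetens is only an
# approximation this far from typical screen-level temperatures).
print(tetens_temperature(700.0))   # ~90 deg C
```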

This again illustrates the point on there being no “model answer”. As in many scientific fields, there exist multiple path functions that get you from a problem to a plausible solution, and the preference for such a path is unique to us all. 

There will always be a group of diligent students who gave the problem sheet a serious attempt prior to the class. You will find that they need less than 30 minutes to check their understanding and numerical solutions with you, and they might do their own thing afterwards. This is the perfect opportunity to… 

  5. Present bonus material to stretch students further 

Some ideas include asking for a physical interpretation from their mathematical result, or looking for other (potentially more efficient) methods of deriving their result. For example, I asked students to deduce a cycle describing the Stirling engine on a TS diagram, instead of the pV diagram they had already drawn out as asked by the problem sheet.  

Figure 4: A spreadsheet showing the content coverage of each past exam question 

I also have a table of past exam questions, with traffic light colours indicating which parts of the syllabus they cover. If a student would like to familiarise themselves with the exam style, I could recommend one or two questions using this spreadsheet. 

On the other hand, there may be the occasional group who have no idea where equation (9.11) on page 168 of the notes came from, or a student who would like the extra reassurance of more mathematical help on a certain problem. As a final tip, I try to cater to these extra requests by… 

  6. Staying a little longer to answer a final few questions 

The best demonstrators are approachable, and go the extra mile to cater to the needs of the whole range of students they teach, with an understanding of their perspectives. After all, being a demonstrator is not only about students’ learning from teaching, but also your learning by teaching! 

I would welcome your ideas about demonstrating as a PhD. Feel free to contact me at brian.lo@pgr.reading.ac.uk if you would like to discuss! 

Visiting Scientist Week Preview: Laure Zanna

Kaja Milczewska – k.m.milczewska@pgr.reading.ac.uk

As per annual tradition in the Meteorology Department, PhD students have chosen a distinguished scientist to visit the department for one week. Previous years’ visitors include Prof. Tapio Schneider (Caltech), Prof. Olivia Romppainen-Martius (University of Bern), and Prof. Cecilia Bitz (University of Washington). This year’s winning vote was New York University’s Prof. Laure Zanna, who will be visiting the department virtually¹ between 2nd and 6th November. 

Laure is an oceanographer and climate scientist whose career so far has spanned three continents, won her an American Meteorological Society (AMS) Early Careers’ award for “exceptionally creative” science this year, and netted her 600 citations in the last two years.  Her research interests encompass ocean turbulence, climate dynamics, predictability, machine learning and more. Some of the many topics of her published papers include the uncertainty in projections of ocean heat uptake; ocean turbulence parametrisations; predictions of seasonal to decadal sea surface temperatures in the Atlantic using simple statistical models and machine learning to inform prediction of extreme events. Besides being an exceptional scientist, speaker and educator, Laure is a down-to-Earth and friendly person, described by the Climate Scientists podcast’s Dan Jones as ‘a really great person who helps to tie the whole community together’.

As someone who had received their PhD only just over a decade ago, we thought Laure would be the perfect candidate to inspire us and our science through sharing some of her academic experiences with us. Before her visit next week, Laure kindly answered some interview-style questions for this week’s Social Metwork blog post.

Q: What inspired you to research oceanography and climate in the first place?

A: I always enjoyed math and physics. The possibility of using these disciplines to study scientific problems that I could “see” was very appealing.

Q: Why were you drawn to machine learning?

A: The power of machine learning (ML) to advance fields such as natural language processing or computer science is indisputable. I was excited by the premise of ML for climate science. In particular, can ML help deepen our understanding of certain aspects of the climate system (e.g. interactions between scales or interactions between the ocean and atmosphere)? Can ML improve the representation of small-scale processes in climate models? ML, by itself, is not enough but combined with our physical understanding of the climate system could push the field forward.

Q: Can you give us an idea of what’s the most exciting research you are working on right now?

A: This is impossible. I work on 2 main areas of research right now: understanding and parameterizing ocean mesoscale eddies and understanding the role of the oceans in climate. I am passionate and excited about both topics. Hopefully, you will hear about both of them during the week.

Q: When did you realise/decide you were going to remain in academia?

A: I decided that I wanted to try and stay in academia in the last year of my PhD.  I was lucky enough to be able to.

Q: What is your favourite part of your job?

A: Working with my group!  The students and postdocs in the group have different expertise but all are passionate about their research. They make the work and the research more fun, more challenging, and more inspiring.

We are honoured to have our invitation accepted by Laure and are eagerly anticipating answers to more of these kinds of questions throughout next week’s conversations. Laure will be presenting a seminar titled "Machine learning for physics-discovery and climate modelling" during the Monday Departmental Seminar series, as well as another seminar in the Climate and Ocean Dynamics research group, titled "Understanding past and future ocean warming". She will also give a career-focused session at PhD group and, of course, engage with both PhD students and staff on an individual basis during one-to-one meetings. We are grateful and delighted to be able to welcome Laure to the Meteorology Department despite the various difficulties the year 2020 has posed for everyone, so come along to next week’s events!


¹In true 2020 curve-ball style, of course.

Organising a virtual conference

Gwyneth Matthews – g.r.matthews@pgr.reading.ac.uk

A Doctoral Training Partnership (DTP) provides funding, training, and opportunities for many PhD students in our department. Every year, three environmentally focused DTPs – the SCENARIO NERC DTP, the London NERC DTP, and the Science and Solutions for a Changing Planet (SSCP) DTP – combine forces to hold a conference bringing together hundreds of PhD students to present their work and to network. As for many conferences in 2020, COVID-19 disrupted our plans for the Joint DTP conference. Usually the conference is hosted at one of the universities involved with a DTP; however, this year it was held virtually using a mixture of Zoom and Slack. 

The decision to go virtual was difficult. We had to decide early in the pandemic when we didn’t know how long the lockdown would last nor what restrictions would be in place in September. If possible, we wanted to keep the conference in-person so that attendees got the full experience as it’s often the first time the new cohort meet and one of the few chances for the DTPs to mingle. However, as meeting and mingling was, and is, very much discouraged, making the decision to go virtual early on meant we had time to re-organise.  

Figure 1 – It was initially planned to hold the conference at the University of Surrey campus, which is located in Guildford, Surrey and hosts some students from the SCENARIO NERC DTP. The conference was instead held on Slack, an online communication platform that allows content to be divided into channels, and presentation sessions were hosted on Zoom.

When we thought we were organising a conference to be held at the University of Surrey, the main theme was “Engaging Sustainability” with the aim of making the conference as sustainable as possible. Since one of the often-made criticisms of conferences, especially those within the environmental fields, is the impact of large numbers of people travelling to one place, a virtual conference has obvious environmental benefits. An additional benefit was that we could invite guest speakers, such as Mya-Rose Craig (aka Bird Girl @birdgirluk), who may not have been able to attend if the event was held in person. It was also easier for some participants who had other commitments, such as childcare, to attend, although poor internet connection was an issue for others. 

The pandemic exposed, and often enhanced, many issues within academia and society in general. A questionnaire sent out before the event showed that most attendees were finding working from home and all the other pandemic-induced changes exhausting and mentally challenging. The recent Black Lives Matter protests around the world and the disproportionate impact of COVID on ethnic minority communities highlighted both the overt and systemic racism that is still prevalent in society. The UK Research and Innovation COVID funding controversy, and an increased focus on the challenges faced by LGBTQ+ researchers, emphasised the inequalities and poor representation specifically experienced in academia. Scientists working at the forefront of the pandemic response faced the challenge of providing clear information to enable people and policy makers to take life-disrupting actions before they are directly impacted – a challenge familiar to climate and environmental scientists. These issues gave us our topics for the external sessions, which focused on wellbeing, inclusivity and diversity in academia, and communicating research.  

Barring technical difficulties, oral presentations are easy to replicate online; however, virtual conferences held earlier this year often had issues with recreating poster sessions. Attempting to learn from these snags, instead of replicating an in-person poster session and possibly producing a poor-quality knock-off, participants were asked to create an animated "Twitter poster". These were required to describe the key points of their research in a simple format that could be shared on social media and was accessible to a non-expert. The posters were available for comments and questions throughout the two days in one easy-to-find location. Many of the participants shared their posters on Twitter after the conference using the conference hashtag #JointDTPCon.  

Another issue we faced was how to run a social and networking event. We kept the social event simple. A quiz. A pandemic classic, with a fantastic double act as hosts. Randomly assigned teams meant that new connections could be made. However, the quiz was held online, and after a full day of video calls most people didn’t want to spend their evenings also staring at a screen.  

Fig 2 – Jo Herschan and Lucinda King, members of the SCENARIO DTP and on the conference organising committee, hosted an entertaining quiz on the first night of the conference. An ethical objects photo round linked the quiz to the conference’s main theme.

With everyone having stayed at home and everything being conducted virtually for a few months by the time of our conference, Zoom fatigue was an issue we were aware could occur and tried to counter as much as possible during the day without losing any of the exciting new research being presented. In the weeks running up to the conference we had several discussions about how to encourage people to move throughout the two days without missing any of the sessions they wanted to attend. We decided on two ideas: a yoga session and a walking challenge. The yoga session was a success and not only gave participants an opportunity to stretch in the middle of the day but also linked strongly to our theme of researcher wellbeing. The walking challenge was not as successful. The aim was that collectively the conference participants would walk the distance from Land’s End to John O’Groats. We did not make it that far; but we did make it out of Cornwall. 

Fig 3 – Using World Walking to track the distance, we intended to collectively walk the 1,576 km (or 2,299,172 steps) from Land’s End to John O’Groats. This may have been an optimistic endeavour as we only achieved 235 km (343,311 steps).  

Helping to organise a virtual conference as part of an enthusiastic committee was a lot of fun and attending the conference and learning about the research being undertaken (from fungi in Kew Gardens to tigers in North Korea) was even more fun. There is still enormous room for improvement in virtual conferences, but since they aren’t as well established as traditional in-person conferences there’s also a lot of flexibility for each conference to be designed differently. Once we’re through the pandemic and in-person conferences return it’d be nice for some of these benefits to be maintained as hybrid conferences are designed.   

The visual complexity of coronal mass ejections follows the solar cycle

Shannon Jones – s.jones2@pgr.reading.ac.uk

Coronal Mass Ejections (CMEs), or solar storms, are huge eruptions of particles and magnetic field from the Sun. With the help of 4,028 citizen scientists, my supervisors and I have just published a paper, showing that the appearance of CMEs changes over the solar cycle, with CMEs appearing more visually complex towards solar maximum.

We created a Zooniverse citizen science project in collaboration with the UK Science Museum called ‘Protect our planet from solar storms’, where we showed pairs of images of CMEs from the Heliospheric (wide-angle white-light) Imagers on board the twin STEREO spacecraft, and asked participants to decide whether the left or right CME looked more complicated, or complex (Jones et al. 2020). We used these data to rank 1,110 CMEs in order of their relative visual complexity by fitting a Bradley-Terry model: a statistical model widely used by psychologists to rank items by human preference. Figure 1 shows three example storms from across the ranking (see figshare for an animation with all CMEs). When we asked the citizen scientists how they chose the most complex CME, they described complex CMEs as "big", "messy" and "bright", with complicated "waves", "patterns" and "shading".

Figure 1. Example images showing three example CMEs in ranked order of subjective complexity increasing from low (left-hand image) through to high (right-hand image).
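The ranking behind Figure 1 comes from the Bradley-Terry fit described above. As a rough illustration of how such a model can be fitted to pairwise "which looks more complex?" judgements, here is a minimal sketch using the classic Zermelo/MM iteration; this is not the code used in the study, and the comparison matrix below is made up.

```python
import numpy as np

def bradley_terry(wins, n_iter=500, tol=1e-10):
    """Fit Bradley-Terry scores from pairwise comparisons.

    wins[i, j] = number of times item i was judged more complex than item j.
    Returns scores p such that P(i preferred over j) = p[i] / (p[i] + p[j]).
    """
    p = np.ones(wins.shape[0])
    n_ij = wins + wins.T                 # total comparisons for each pair
    w = wins.sum(axis=1)                 # total "wins" for each item
    for _ in range(n_iter):
        denom = n_ij / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)     # no self-comparisons
        p_new = w / denom.sum(axis=1)
        p_new /= p_new.sum()             # remove the arbitrary overall scale
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Tiny made-up example with three images:
wins = np.array([[0, 4, 1],
                 [6, 0, 3],
                 [9, 7, 0]])
print(np.argsort(bradley_terry(wins))[::-1])   # indices, most to least complex
```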

Figure 2 shows the relative complexity of all 1,110 CMEs, with CMEs observed by STEREO-A shown by pink dots, and CMEs observed by STEREO-B shown by blue dots. The lower panel shows the daily sunspot number over the same time period, using data from SILSO World Data Center. This shows that the annual average complexity values follow the solar cycle, and that the average complexity of CMEs observed by STEREO-B is consistently lower than the complexity of CMEs observed by STEREO-A. This might be due to slight differences between the imagers: STEREO-B is affected by pointing errors, which might blur smaller-scale features within the images.

Figure 2. Top panel: relative complexity of every CME in the ranking plotted against time. Pink points represent STEREO-A images, while blue points represent STEREO-B images. Annual means and standard deviations are over plotted for STEREO-A (red dashed line) and STEREO-B (blue dashed line) CMEs. Bottom panel: Daily total sunspot number from SILSO shown in yellow, with annual means over plotted (orange dashed line).

If a huge CME were to hit Earth, there could be serious consequences such as long-term power cuts and satellite damage. Many of these impacts could be reduced if we had adequate warning that a CME was going to hit. Our results suggest that there is some predictability in the structure of CMEs, which may help to improve future space weather forecasts.

We plan to continue our research and quantitatively determine which CME characteristics are associated with visual complexity. We also intend to investigate what is causing the CMEs to appear differently. Possible causes include: the complexity of the magnetic field at the CME source region on the Sun; the structure of the solar wind the CME passes through; or multiple CMEs merging, causing a CME to look more complex.

Please see the paper for more details, or email me at s.jones2@pgr.reading.ac.uk if you have any questions!

Jones, S. R., C. J. Scott, L. A. Barnard, R. Highfield, C. J. Lintott and E. Baeten (2020): The visual complexity of coronal mass ejections follows the solar cycle. Space Weather, https://doi.org/10.1029/2020SW002556.

My journey to Reading: Going from application to newly minted SCENARIO PhD student

George Gunn – g.f.gunn@pgr.reading.ac.uk 

Have you been thinking ‘I’ll never be good enough for a PhD’? Or perhaps you’ve been set on the idea of joining those who push the bounds of knowledge for quite some time, but are feeling daunted by the process? Well, keep reading. 

I started university with the hopes of stretching myself academically and gaining an undergraduate degree. As the degree progressed, I found myself increasingly improving in my marks and abilities. I enjoyed the coursework – researching a topic and the sense of discovery brought about by it. I became deeply interested in climate change and the impact humans have on the environment and was able to begin my dissertation research a year early because I was so motivated within my subject. 

In my final year of undergraduate studies, much of my time was taken up by my role as Student President: attending social events, board meetings, and lots of other things that didn’t involve a darkened room and a pile of books. I was very much a student who turned up, put the effort in, and then spent the rest of my time as I wished.  

Giving a speech at the Global Youth Strike for Climate, Inverness, as Student President. Extracurricular activities are a worthwhile addition to your application and were considered a lot during the interview! 

I began to look for opportunities for research degrees online, as well as asking almost anyone and everyone I knew academically if they had any ideas. Nothing came to fruition. That was until I received a Twitter notification from my lecturer drawing my attention to what looked to be an ideal PhD studentship. The snag? Applications were due to close within 3 hours of me checking the notification. 

By the time I had read the project particulars, accessed the cited literature and paced around my living room more than a few times, I had around 2 hours to submit an application. Due to my prior unsuccessful searches, I hadn’t previously submitted a PhD application and so had nothing to refer to – but proceed I did.  

Thankfully, the application was relatively straightforward. Standard job application information, details of the grades I had achieved and was predicted to achieve, and two academic references (for me, my personal academic tutor and climate change lecturer). What took time (I would advise anyone considering an application to prepare these earlier than I did!) was the statement of research interest and academic CV. My university careers service had excellent advice and resources to assist in that regard. 

Within minutes of the deadline, my application was in. I had almost forgotten about it by the time a week-or-so later I received an e-mail inviting me to Reading for an interview day. Shocked and excited were the emotions – little old me from the Highlands of Scotland, who hadn’t yet finished his undergraduate degree, was somehow being invited to one of the best Meteorology departments in the world to interview for a PhD studentship.  

No time to spare, my travel to and from Reading was booked. For the next couple of weeks, all I now had to worry about was how to do a PhD interview – though as will become clear, I need not have worried. I sought the advice of academic friends and colleagues (a calming influence for sure) and countless websites and forums (generally a source of unnecessary worry). 

Given the level of conflicting advice on PhD interviews, on arrival at Reading I wasn’t sure what to expect. At the front door I was provided with all the information that I needed for the day. I then made my way to a room with all the other candidates for a welcome talk and the opportunity to learn more about other projects on offer over lunch. 

The interview itself was very relaxed. No ‘stock’ PhD interview questions here – it was very much an opportunity to discuss my previous work and abilities, and how that might fit with the project. Importantly, it was an opportunity to meet my potential supervisors and ‘interview’ them too. If you’re going to spend 3-4 years working together, the connection needs to work well both ways. So, whilst the 30-minute interview slot seemed daunting on paper, the time flew by and it was soon time to leave. 

Fast forward a week or so and I was very surprised to receive an e-mail offering me the studentship that I had applied for: Developing an urban canopy model for improved weather forecasts in cities. And the rest, as they say, is history. 

At my desk in the Department of Meteorology, University of Reading. 

I hope that this blog post has helped you to feel less daunted to begin your PhD journey. Please feel free to get in touch with me by e-mail if you would like to chat further about beginning a PhD, or indeed to let me know how your own interview goes. Good luck! 

The Scandinavia-Greenland Pattern: something to look out for this winter

Simon Lee, s.h.lee@pgr.reading.ac.uk

The February-March 2018 European cold wave, known widely as "The Beast from the East", occurred around 2 weeks after a major sudden stratospheric warming (SSW) event on February 12th. Major SSWs typically occur once every other winter and involve significant disruption to the stratospheric polar vortex (a planetary-scale cyclone which resides over the pole in winter). SSWs are important because their occurrence can influence the type and predictability of surface weather on longer timescales of between 2 weeks and 2 months. This is known as subseasonal-to-seasonal (S2S) predictability, and it "bridges the gap" between typical weather forecasts and seasonal forecasts (Figure 1).  

Figure 1: Schematic of medium-range, S2S and seasonal forecasts and their relative skill. [Figure 1 in White et al. (2017)] 

In general, S2S forecasts suffer from relatively low skill. While medium-range forecasts are an initial value problem (depending largely on the initial conditions of the forecast) and seasonal forecasts are a boundary value problem (depending on slowly-varying constraints to the predictions, such as the El Niño-Southern Oscillation), S2S forecasts lie somewhere between the two. However, certain “windows of opportunity” can occur that have the potential to increase S2S skill – and a major SSW is one of them. Skilful S2S forecasts can be of particular benefit to public health planners, the transport sector, and energy demand management, among many others.  

Following an SSW, the eddy-driven jet stream tends to weaken and shift equatorward. This is characteristic of the negative North Atlantic Oscillation (NAO) and negative Arctic Oscillation (AO), and during these patterns the risk of cold air outbreaks significantly increases in places like northwest Europe. So, by knowing this, S2S forecasts issued during the major SSW were able to highlight the increased risk of severely cold weather.  

Given that we know that following an SSW certain weather types are more likely for several weeks, and forecasts may be more skilful, it might seem advantageous to know an SSW was coming at a long lead-time in order to really push the boundaries of S2S prediction. So, what about in 2018?  

In the first paper from my PhD, published in July 2019 in JGR-Atmospheres, we explored the onset of predictions of the February 2018 SSW. We found that, until about 12 days beforehand, extended-range forecasts that contribute to the S2S database (an international collaboration of extended-range forecast data) did not accurately predict the event; in fact, most predictions indicated the vortex would remain unusually strong! 

We diagnosed that anticyclonic wave breaking in the North Atlantic was a crucial synoptic-scale “trigger” event for perturbing the stratospheric vortex, by enhancing vertically propagating Rossby waves (which weaken the vortex when they break in the stratosphere). Forecasts struggled to predict this event far in advance, and thus struggled to predict the SSW. We called the pattern the “Scandinavia-Greenland (S-G) dipole” – characterised by an anticyclone over Scandinavia and a low over Greenland (Figure 2), and we found it was present before 35% of previous SSWs (1979-2018). The result agrees with several previous studies highlighting the role of blocking in the Scandinavia-Urals region, but was the first to suggest such a significant impact of a single tropospheric event.  

Figure 2: Correlation between mean sea level pressure forecasts over 3-5 February 2018 and subsequent forecasts of 10 hPa 60°N zonal-mean zonal wind on 9-11 February, in (a) NCEP and (b) ECMWF ensembles launched between 29 January and 1 February 2018. White lines (dashed negative) indicate correlations exceeding +/- 0.7, while the black dashed lines indicate the nodes of the S-G dipole. [Figure 3 in Lee et al. (2019)] 

So, we had established the S-G dipole was important in the predictability onset in 2018, and important in previous cases – but how well do S2S models generally capture the pattern?  

That was the subject of our recent (open-access) paper, published in August in QJRMS. We define a more generalised pattern by performing empirical orthogonal function (EOF) analysis on mean sea-level pressure anomalies in a region of the northeast Atlantic during November-March in ERA5 reanalysis (Figure 3).  While the leading EOF (the “zonal pattern”) resembles the NAO, the 2nd EOF resembles the S-G dipole from our previous paper – so we call it the “S-G pattern”.  

Figure 3: The first two leading EOFs of MSLP anomalies in the northeast Atlantic during November-March in ERA5, expressed as hPa per standard deviation of the principal component timeseries. The percentage of variance explained by the EOF is also shown. [Figure 1 in Lee et al. (2020)] 
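For anyone wanting to reproduce this kind of analysis, the EOF step itself is compact. The sketch below (not the code used in the paper) computes area-weighted EOFs of an MSLP anomaly array with an SVD; msl and lat are hypothetical inputs.

```python
import numpy as np

def leading_eofs(msl, lat, n_modes=2):
    """EOFs of MSLP anomalies.

    msl: anomalies with shape (ntime, nlat, nlon), already deseasonalised.
    lat: latitudes in degrees, used for sqrt(cos(lat)) area weighting.
    Returns spatial patterns, principal components and variance fractions.
    """
    ntime, nlat, nlon = msl.shape
    weights = np.sqrt(np.cos(np.deg2rad(lat)))[None, :, None]
    X = (msl * weights).reshape(ntime, nlat * nlon)
    X = X - X.mean(axis=0)                      # remove the time mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)              # variance explained per mode
    eofs = Vt[:n_modes].reshape(n_modes, nlat, nlon)
    pcs = U[:, :n_modes] * s[:n_modes]          # PC time series
    return eofs, pcs, var_frac[:n_modes]
```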

We then establish, through lagged linear regression analysis, that the S-G pattern is associated with enhanced vertically propagating wave activity (measured by zonal-mean eddy heat flux) into the stratosphere, and a subsequently weakened stratospheric vortex for the next 2 months. Thus, it supports our earlier work, and motivates considering how the pattern is represented in S2S models. To do this, we look at hindcasts – forecasts initialised for dates in the past – from 10 different prediction systems from around the world.  
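As an aside, the lagged regression step is conceptually simple: the stratospheric heat flux is regressed onto the S-G index from several days earlier. A minimal sketch (with hypothetical daily time series, not the paper's data handling) is:

```python
import numpy as np

def lagged_regression(sg_index, heat_flux, lag_days):
    """Regress heat_flux at time t + lag_days onto sg_index at time t."""
    x = sg_index[:-lag_days] if lag_days > 0 else sg_index
    y = heat_flux[lag_days:] if lag_days > 0 else heat_flux
    slope, intercept = np.polyfit(x, y, 1)
    corr = np.corrcoef(x, y)[0, 1]
    return slope, corr

# e.g. 100 hPa eddy heat flux three days after the surface pattern:
# slope, corr = lagged_regression(sg_index, heat_flux_100hpa, lag_days=3)
```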

We find that while all the S2S models represent the spatial pattern of these two EOFs very well, some have biases in the variance explained by the EOFs, particularly at weeks 3 and 4 (Figure 4). Broadly, all the models have more variance explained by their first EOF compared with ERA5, and less by the second EOF – but this bias is particularly large for the three models with the lowest horizontal resolution (BoM, CMA, and HMCR).  

Figure 4: Weekly-mean ratio between the variance explained by the EOFs in each model and the ERA5 EOF. [Figure 6 in Lee et al. (2020)] 

Additionally, we find that the deterministic prediction skill for the S-G pattern (measured by the ensemble-mean correlation) extends only to 5-6 days in the BoM model, and only to around 11 days in the higher-resolution models. Extending this to probabilistic skill, we find the models have only limited (if any) skill above climatology in weeks 3 and 4 (and much less than the skill for the leading EOF, the NAO-like zonal pattern).  

Furthermore, we find that the relationship between the S-G pattern and the enhanced heat flux in the stratosphere decays with lead-time in most S2S models, even in the higher-resolution models (Figure 5). Thus, this suggests that the dynamical link between the troposphere and stratosphere weakens with lead time in these models – so even a correct tropospheric prediction may not, in these cases, have a subsequently accurate extended-range stratospheric forecast. 

Figure 5: Weekly mean regression coefficients between the S–G index and the corresponding eddy heat flux anomalies at (a) 300 hPa on the same day, (b) 100 hPa three days later, and (c) 50 hPa four days later. The lags correspond to days with maximum correlation in ERA5. Stippled bars indicate a significant difference from ERA5 at the 95% confidence level. [Figure 11 in Lee et al. (2020)] 

So, when taking this all together, we have: 

  • The S-G pattern is the second-leading mode of MSLP variability in the northeast Atlantic during winter. 
  • It is associated with enhanced vertically propagating wave activity into the stratosphere and a weakened polar vortex in the following weeks to months. 
  • S2S models represent the spatial patterns of the two leading EOFs well. 
  • Most S2S models have a zonal variability bias, with relatively more variance explained by the leading EOF and correspondingly less in the second EOF.  
  • This bias is largest in the lowest-resolution models in weeks 3 and 4.  
  • Extended range skill in the S-G pattern is low, and lower than for the NAO-like zonal pattern. 
  • The linear relationship between the S-G pattern and eddy heat flux in the stratosphere decays with lead-time in most S2S models.  

The zonal variance bias is consistent with S2S model biases in Rossby wave breaking and blocking, while these biases have been widely found to be largest in the lowest resolution models. The results suggest that the poor prediction of the S-G event in February 2018 is not unique to that case, but a more generic issue. Overall, the combination of the variability biases, the poor extended-range predictability, and the poor representation of its impact on the stratospheric vortex at longer lead-times likely contributes to limiting skill at predicting major SSWs on S2S timescales – which remains low, despite the stratosphere’s much longer timescales. Correcting the biases outlined here will likely contribute to improving this skill, and subsequently increasing how far we are able to predict real-world weather.   

How Important are Post-Tropical Cyclones to European Windstorm Risk?

Elliott Sainsbury, e.sainsbury@pgr.reading.ac.uk

To date, the 2020 North Atlantic hurricane season has been the most active on record, producing 20 named storms, 7 hurricanes, and a major hurricane which caused $9 billion in damages across the southern United States. With the potential for such destructive storms, it is understandable that a large amount of attention is paid to the North Atlantic basin at this time of year. Whilst hurricanes have been known to cause devastation in the tropics for centuries, until recently there was little appreciation for the destructive potential of these systems across Europe.

As tropical cyclones such as hurricanes move poleward, away from the tropics and into regions of lower sea surface temperatures and higher vertical wind shear, they undergo a process called extratropical transition (Klein et al., 2000): over a period of time, the cyclones change from symmetric, warm-cored systems into asymmetric, cold-cored systems fuelled by horizontal temperature gradients, as opposed to latent heat fluxes (Evans et al., 2017). These systems, so-called post-tropical cyclones (PTCs), often reintensify in the mid-latitude Atlantic with consequences for land masses downstream – often Europe. This was highlighted in 2017, when ex-hurricane Ophelia impacted Ireland, bringing with it the strongest winds Ireland had seen in 50 years (Stewart, 2018). Three people were killed, and 360,000 homes were left without power.

In a recent paper, we quantify the risk associated with PTCs across Europe relative to mid-latitude cyclones (MLCs) for the first time – in terms of both the absolute risk (i.e. what fraction of high impact wind events across Europe are caused by PTCs?) and also the relative risk (for a given PTC, how likely is it to be associated with high-impact winds, and how does this compare to a given MLC?). By tracking all cyclones impacting a European domain (36-70N, 10W-30E) in the ERA5 reanalysis (1979-2017) using a feature tracking algorithm (Hodges, 1994, 1995, 1999), we identify the post-tropical cyclones using spatiotemporal matching (Hodges et al., 2017) with the observational record, IBTrACS (Knapp et al., 2010).

Figure 1: Distributions of the maximum intensity (maximum wind speed, minimum MSLP) attained by each PTC and MLC inside (a-c) the whole European domain (36-70N, 10W-30E), (d-f) the Northern Europe domain (48-70N, 10W-30E) and (g-i) the Southern Europe domain (36-48N, 10W-30E), using cyclones tracked through the ERA5 reanalysis all year round, 1979-2017. [Figure 2 in Sainsbury et al. 2020].

Figure 1 shows the distributions of maximum intensity for PTCs and MLCs across the entire European domain (top), Northern Europe (48-70°N, 10°W-30°E; middle) and Southern Europe (36-48°N, 10°W-30°E; bottom), using all cyclone tracks all year round. The distribution of PTC maximum intensities is shifted towards higher intensities (stronger winds and lower MSLP) than that of MLCs, particularly across Northern Europe. The difference between the maximum intensity distributions of PTCs and MLCs across Northern Europe is statistically significant (99%). PTCs are also present in the highest intensity bins, indicating that the strongest PTCs have intensities comparable to strong wintertime MLCs.

Whilst Figure 1 shows that PTCs are stronger than MLCs even when considering MLCs forming all year round (including the often much stronger wintertime MLCs), it is also useful to compare the risks posed by PTCs relative to MLCs forming at the same time of the year – during the North Atlantic hurricane season (June 1st-November 30th).

Figure 2 shows the fraction of all storms, binned by their maximum intensity in their respective domains, which are PTCs. For storms with weak-moderate maximum winds (first three bins in the figure), <1% of such events are caused by PTCs (with the remaining 99% caused by MLCs). For stronger storms, particularly those of storm force (>25 ms-1 on the Beaufort scale), this percentage is much higher. Despite less than 1% of all storms impacting Northern Europe during hurricane season being PTCs, almost 9% of all storms with storm-force winds which impact the region are PTCs, suggesting that a disproportionate fraction of high-impact windstorms are PTCs. 8.2% of all Northern Europe impacting PTCs which form during hurricane season impact the region with storm-force winds. This fraction is only 0.8% for MLCs, suggesting that the fraction of PTCs impacting Northern Europe with storm-force winds is approximately 10 times greater than MLCs.

Figure 2: The fraction of cyclones impacting Europe which are PTCs as a function of their maximum 10m wind speed in their respective domain. Lower bound of wind speed is shown on the x axis, bin width = 3. Error bars show the 95% confidence interval. All cyclone tracks forming during the North Atlantic hurricane season are used. [Figure 4 in Sainsbury et al. 2020].

Here we have shown that PTCs, at their maximum intensity over Northern Europe, are stronger than MLCs. However, the question still remains as to why this is the case. Warm-seclusion storms post-extratropical transition have been shown to have the fastest rates of reintensification (Kofron et al., 2010) and typically have the lowest pressures upon impacting Europe (Dekker et al., 2018). Given the climatological track that PTCs often take over the warm waters of the Gulf stream, along with the contribution of both baroclinic instability and latent heat release for warm-seclusion development (Baatsen et al., 2015), one hypothesis may be that PTCs are more likely to develop into warm seclusion storms than the broader class of mid-latitude cyclones, potentially explaining the disproportionate impacts they cause across Europe. This will be the topic of future work.

Despite PTCs disproportionately impacting Europe with high intensities, they are a relatively small component of the total cyclone risk in the current climate. However, only small changes are expected in MLC number and intensity under RCP 4.5 (Zappa et al., 2013). Conversely, the number of hurricane-force (>32.6 ms-1) storms impacting Norway, the North Sea and the Gulf of Biscay has been projected to increase by a factor of 6.5, virtually all of which originate in the tropics (Haarsma et al., 2013). Whilst the absolute contribution of PTCs to hurricane season windstorm risk is currently low, PTCs may make an increasingly significant contribution to European windstorm risk in a future climate.

Interested to read more? Read our paper, published in Geophysical Research Letters.

Sainsbury, E. M., R. K. H. Schiemann, K. I. Hodges, L. C. Shaffrey, A. J. Baker, K. T. Bhatia, 2020: How Important Are Post‐Tropical Cyclones for European Windstorm Risk? Geophysical Research Letters, 47(18), e2020GL089853, https://doi.org/10.1029/2020GL089853

References

Baatsen, M., Haarsma, R. J., Van Delden, A. J., & de Vries, H. (2015). Severe Autumn storms in future Western Europe with a warmer Atlantic Ocean. Climate Dynamics, 45, 949–964. https://doi.org/10.1007/s00382-014-2329-8

Dekker, M. M., Haarsma, R. J., Vries, H. de, Baatsen, M., & Delden, A. J. va. (2018). Characteristics and development of European cyclones with tropical origin in reanalysis data. Climate Dynamics, 50(1), 445–455. https://doi.org/10.1007/s00382-017-3619-8

Evans, C., Wood, K. M., Aberson, S. D., Archambault, H. M., Milrad, S. M., Bosart, L. F., et al. (2017). The extratropical transition of tropical cyclones. Part I: Cyclone evolution and direct impacts. Monthly Weather Review, 145(11), 4317–4344. https://doi.org/10.1175/MWR-D-17-0027.1

Haarsma, R. J., Hazeleger, W., Severijns, C., De Vries, H., Sterl, A., Bintanja, R., et al. (2013). More hurricanes to hit western Europe due to global warming. Geophysical Research Letters, 40(9), 1783–1788. https://doi.org/10.1002/grl.50360

Hodges, K., Cobb, A., & Vidale, P. L. (2017). How well are tropical cyclones represented in reanalysis datasets? Journal of Climate, 30(14), 5243–5264. https://doi.org/10.1175/JCLI-D-16-0557.1

Hodges, K. I. (1994). A general method for tracking analysis and its application to meteorological data. Monthly Weather Review, 122(11), 2573–2586. https://doi.org/10.1175/1520-0493(1994)122<2573:AGMFTA>2.0.CO;2

Hodges, K. I. (1995). Feature Tracking on the Unit Sphere. Monthly Weather Review, 123(12), 3458–3465. https://doi.org/10.1175/1520-0493(1995)123<3458:ftotus>2.0.co;2

Hodges, K. I. (1999). Adaptive constraints for feature tracking. Monthly Weather Review, 127(6), 1362–1373. https://doi.org/10.1175/1520-0493(1999)127<1362:acfft>2.0.co;2

Klein, P. M., Harr, P. A., & Elsberry, R. L. (2000). Extratropical transition of western North Pacific tropical cyclones: An overview and conceptual model of the transformation stage. Weather and Forecasting, 15(4), 373–395. https://doi.org/10.1175/1520-0434(2000)015<0373:ETOWNP>2.0.CO;2

Knapp, K. R., Kruk, M. C., Levinson, D. H., Diamond, H. J., & Neumann, C. J. (2010). The international best track archive for climate stewardship (IBTrACS). Bulletin of the American Meteorological Society, 91(3), 363–376. https://doi.org/10.1175/2009BAMS2755.1

Kofron, D. E., Ritchie, E. A., & Tyo, J. S. (2010). Determination of a consistent time for the extratropical transition of tropical cyclones. Part I: Examination of existing methods for finding “ET Time.” Monthly Weather Review, 138(12), 4328–4343. https://doi.org/10.1175/2010MWR3180.1

Stewart, S. R. (2018). Tropical Cyclone Report: Hurricane Ophelia. National Hurricane Center, 1–32.

Zappa, G., Shaffrey, L. C., Hodges, K. I., Sansom, P. G., & Stephenson, D. B. (2013). A multimodel assessment of future projections of North Atlantic and European extratropical cyclones in the CMIP5 climate models. Journal of Climate, 26(16), 5846–5862. https://doi.org/10.1175/JCLI-D-12-00573.1

Exploring the impact of variable floe size on the Arctic sea ice

Email: a.w.bateson@pgr.reading.ac.uk

The Arctic sea ice cover is made up of discrete units of sea ice area called floes. The size of these floes has an impact on several sea ice processes including the volume of melt produced at floe edges, the momentum exchange between the sea ice, ocean, and atmosphere, and the mechanical response of the sea ice to stress. Models of the sea ice have traditionally assumed that floes adopt a uniform size, if floe size is explicitly represented at all in the model. Observations of floes show that floe size can span a huge range, from scales of metres to tens of kilometres. Generally, observations of the floe size distribution (FSD) are fitted to a power law or a combination of power laws (Stern et al., 2018a).
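As a toy illustration of what fitting an FSD to a power law involves (a minimal sketch, not the method of Stern et al.: the floe sizes are synthetic, all names are ours, and a simple log-log least-squares fit to the empirical cumulative distribution is used rather than the more careful approaches applied in the literature):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic floe diameters (m) sampled from a truncated power law n(d) ~ d**alpha,
# purely to illustrate the fitting idea.
alpha_true, d_min, d_max = -2.5, 10.0, 30_000.0
k = alpha_true + 1.0
u = rng.uniform(size=10_000)
d = (d_min**k + u * (d_max**k - d_min**k)) ** (1.0 / k)   # inverse-CDF sampling

# Empirical complementary cumulative distribution N(>d); for a power-law number
# distribution its log-log slope is approximately alpha + 1.
d_sorted = np.sort(d)
ccdf = 1.0 - np.arange(1, d.size + 1) / d.size
mask = ccdf > 0
slope, _ = np.polyfit(np.log(d_sorted[mask]), np.log(ccdf[mask]), 1)
print(f"estimated exponent: {slope - 1.0:.2f}")   # expect roughly -2.5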

The Los Alamos sea ice model, hereafter referred to as CICE, usually assumes a fixed floe size of 300 m. We can impose a simple FSD model, derived from a power law, into CICE to explore the impact of variable floe size on the sea ice cover. Figure 1 is a diagram of the WIPoFSD model (Waves-in-Ice module and Power law Floe Size Distribution model), which assumes a power law with a fixed exponent, \alpha, between a lower floe size cut-off, d_{min}, and an upper floe size cut-off, d_{max}. The model also incorporates a floe size variable, l_{var}, to capture the effects of processes that influence floe size. The processes represented are wave break-up of floes, melting at the floe edge, winter floe growth, and advection. The model includes a wave advection and attenuation scheme so that wave properties can be determined within the sea ice field, enabling the identification of wave break-up events. Full details of the WIPoFSD model and its implementation into CICE are available in Bateson et al. (2020).

For the WIPoFSD model setup considered here, we explore the impact of the FSD on the lateral melt rate, i.e. the melt rate at the edge surfaces of floes. It is useful to define an FSD metric that characterises the impact of the FSD on lateral melt. To do this we note that the lateral melt volume produced by a floe is proportional to the perimeter of the floe. The effective floe size, l_{eff}, is then defined as the fixed floe size that would produce the same lateral melt rate as a given FSD, for a fixed total sea ice area (a rough sketch of this calculation is given after Figure 1).

Figure 1: A schematic of the imposed FSD model. This model is initiated by prescribing a power law with an exponent, \alpha, and between the limits d_{min} and d_{max}. Within individual grid cells the variable FSD tracer, l_{var}, varies between these two limits. l_{var} evolves through lateral melting, wave break-up events, freezing, and advection.
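As a rough sketch of how l_{eff} can be computed for a truncated power law (an illustrative simplification rather than the exact WIPoFSD formulation, and the function name is ours: it assumes geometrically similar floes, so that total sea ice area scales with the second moment of the number distribution and total floe perimeter, and hence lateral melt, with the first moment):

import numpy as np

def toy_effective_floe_size(alpha, d_min, d_max, n=100_000):
    """Illustrative l_eff (m) for a number distribution n(d) ~ d**alpha on [d_min, d_max].

    Under the geometric-similarity assumption above, l_eff is the ratio of the
    second moment (proportional to total floe area) to the first moment
    (proportional to total floe perimeter, and hence lateral melt).
    """
    d = np.geomspace(d_min, d_max, n)   # log-spaced to resolve the many small floes
    w = d ** alpha                      # unnormalised number distribution
    dd = np.diff(d)

    def trapz(y):
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * dd))

    return trapz(w * d**2) / trapz(w * d)

Note that the normalisation of the distribution cancels in the ratio, so under these assumptions only the exponent and the two cut-offs matter.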

Here we compare a CICE simulation incorporating the WIPoFSD model, hereafter referred to as stan-fsd, to a reference case, ref, which uses the CICE standard fixed floe size of 300 m. For the WIPoFSD model, d_{min} = 10 m, d_{max} = 30 km, and \alpha = -2.5; these values have been selected as representative of observations. The reference setup is initialised in 1990 and spun up until 2005; the simulation is then either continued as ref or has the WIPoFSD model imposed to create stan-fsd, and both are evaluated from 2006 – 2016. All figures in this post are given as a mean over 2007 – 2016, such that 2005 – 2006 serves as a period of spin-up for the incorporated WIPoFSD model.
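Plugging the standard parameters quoted above into the toy moment-ratio sketch from earlier gives an indicative value (this number comes from the simplified calculation, not from the model output):

import numpy as np

# Standard WIPoFSD parameters quoted in the text; toy moment-ratio estimate of l_eff.
alpha, d_min, d_max = -2.5, 10.0, 30_000.0
d = np.geomspace(d_min, d_max, 200_000)   # log-spaced floe sizes
w = d ** alpha                            # unnormalised number distribution
dd = np.diff(d)
area_moment = np.sum(0.5 * ((w * d**2)[1:] + (w * d**2)[:-1]) * dd)   # ~ total floe area
perim_moment = np.sum(0.5 * ((w * d)[1:] + (w * d)[:-1]) * dd)        # ~ total floe perimeter
l_eff = area_moment / perim_moment
print(f"illustrative l_eff: {l_eff:.0f} m")   # roughly 550 m under these assumptions

That this illustrative value sits above the 300 m fixed floe size is at least qualitatively consistent with the behaviour discussed below, where l_{eff} exceeds 300 m over most of the pack ice.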

In Figure 2, we show the percentage reduction in the Arctic sea ice extent and volume of stan-fsd relative to ref. The differences in both extent and volume over the pan-Arctic scale evolve over an annual cycle, with maximum differences of -1.0 % in August and -1.1 % in September respectively. The annual cycle corresponds to periods of melting and freeze-up and is a product of the nature of the imposed FSD: lateral melt rates are a function of floe size, but freeze-up rates are not, hence model differences only grow during periods of melting and not during periods of freeze-up. The difference in sea ice extent reduces rapidly during freeze-up because freeze-up is predominantly driven by ocean surface properties, which are strongly coupled to atmospheric conditions in areas of low sea ice extent. In comparison, whilst atmospheric conditions initiate vertical sea ice growth, this atmosphere-ocean coupling is rapidly lost once sea ice extends across the ocean surface and insulates the warmer ocean from the cooler atmosphere. Hence a residual difference in sea ice thickness, and therefore volume, persists throughout the winter season. The interannual variability shows that the impact of the WIPoFSD model with standard parameters varies significantly from year to year.

Figure 2: Difference in sea ice extent (solid, red ribbon) and volume (dashed, blue ribbon) of stan-fsd relative to ref, averaged over 2007–2016. The ribbon shows the region spanned by the mean value plus or minus 2 times the standard deviation for each simulation, giving a measure of the interannual variability over the 10-year period.

Although the pan-Arctic differences in extent and volume shown in Figure 2 are marginal, the differences are larger at smaller spatial scales. Figure 3 shows the spatial distribution of the changes in sea ice concentration and thickness in March, June, and September for stan-fsd relative to ref, in addition to the spatial distribution of l_{eff} for stan-fsd for the same months. Reductions in the sea ice concentration and thickness of up to 0.1 and 50 cm respectively are observed in the September marginal ice zone (MIZ). Within the pack ice, increases in the sea ice concentration of up to 0.05 and in ice thickness of up to 10 cm can be seen. To understand the non-uniform spatial impact of the FSD, it is useful to look at the behaviour of l_{eff}. Regions with an l_{eff} greater than 300 m will experience less lateral melt than the equivalent location in ref (all other things being equal), whereas locations with an l_{eff} below 300 m will experience more lateral melt. In Figure 3 the transition to values of l_{eff} smaller than 300 m occurs only within the MIZ; over the rest of the cover l_{eff} exceeds 300 m, hence most of the sea ice experiences less lateral melting for stan-fsd compared to ref.

Figure 3: Difference in the sea ice concentration (top row, a-c) and thickness (middle row, d-f) between stan-fsd and ref and l_{eff} (bottom row, g-i) for stan-fsd averaged over 2007 – 2016. Results are presented for March (left column, a, d, g), June (middle column, b, e, h) and September (right column, c, f, i). Values are shown only in locations where the sea ice concentration exceeds 5 %.

For Figures 2-3, the parameters used to define the FSD have been set to fixed, standard values. However, these parameters vary significantly between different observed FSDs, so it is useful to explore the model sensitivity to them. For \alpha, values of -2, -2.5, -3 and -3.5 have been selected to span the general range reported in observations (Stern et al., 2018a). For d_{min}, values of 1 m, 20 m and 50 m are selected to reflect the different behaviours reported in studies, with some showing power-law behaviour extending down to 1 m (Toyota et al., 2006) and others showing a tailing off at the order of tens of metres (Stern et al., 2018b). For the upper cut-off, d_{max}, values of 1000 m, 10,000 m, 30,000 m and 50,000 m are selected, again to represent the distributions reported in different studies; 50 km is taken as the largest value for d_{max} as this serves as an upper limit to what can be resolved within an individual grid cell on a CICE 1^{\circ} grid. A total of 19 sensitivity studies have been completed using different permutations of the stated values for the FSD model parameters. Figure 4 shows the change in mean September sea ice extent and volume relative to ref plotted against mean annual l_{eff}, averaged over the sea ice extent, for each of these sensitivity studies. The impacts range from a small increase in extent and volume to large reductions of -22 % and -55 % respectively, even within the parameter space defined by observations. Furthermore, there is almost a one-to-one mapping between mean l_{eff} and the extent and volume reduction, suggesting that l_{eff} is a useful diagnostic for predicting the impact of a given set of floe size parameters (an illustrative sweep using the simplified l_{eff} sketch is given after Figure 4). The system responds most strongly to changes in \alpha, but it is also particularly sensitive to d_{min}.

Figure 4: Relative change (%) in mean September sea ice volume over 2007 – 2016, plotted against mean l_{eff}, for simulations with different selections of parameters relative to ref. The mean l_{eff} is taken as the equally weighted average across all grid cells where the sea ice concentration exceeds 15%. The colour of the marker indicates the value of \alpha, the shape indicates the value of d_{min}, and the three experiments using standard parameters but different d_{max} (1000 m, 10,000 m and 50,000 m) are indicated by a crossed red square. The parameters are selected to be representative of a parameter space for the WIPoFSD model that has been constrained by observations.
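As an illustration of why l_{eff} works as a quick diagnostic, the sketch below tabulates the toy moment-ratio estimate of l_{eff} (the same simplification used earlier in this post, with names of our own, not the full WIPoFSD calculation) across combinations of the quoted parameter values. Note that the actual study used 19 selected permutations and the model's own per-grid-cell l_{eff}, not this full sweep:

import itertools
import numpy as np

def toy_l_eff(alpha, d_min, d_max, n=100_000):
    # Toy moment-ratio estimate of the effective floe size for n(d) ~ d**alpha.
    d = np.geomspace(d_min, d_max, n)
    w = d ** alpha
    dd = np.diff(d)
    area = np.sum(0.5 * ((w * d**2)[1:] + (w * d**2)[:-1]) * dd)
    perimeter = np.sum(0.5 * ((w * d)[1:] + (w * d)[:-1]) * dd)
    return area / perimeter

alphas = [-2.0, -2.5, -3.0, -3.5]
d_mins = [1.0, 20.0, 50.0]                         # m
d_maxs = [1000.0, 10_000.0, 30_000.0, 50_000.0]    # m

for a, lo, hi in itertools.product(alphas, d_mins, d_maxs):
    print(f"alpha={a:5.1f}  d_min={lo:4.0f} m  d_max={hi:6.0f} m  ->  l_eff {toy_l_eff(a, lo, hi):8.1f} m")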

There are several advantages to assuming a fixed power law when modelling the sea ice floe size distribution. It provides a simple framework to explore the potential impact of an observed FSD on the sea ice mass balance, given that observations of the FSD are generally fitted to a power law. In addition, the use of a simple model makes it easier to pin down the mechanism by which the model changes the sea ice cover. However, there are also significant disadvantages, including the high model sensitivity to poorly constrained parameters shown in Figure 4. In addition, there is evidence both that the exponent evolves over an annual cycle rather than taking a fixed value (Stern et al., 2018b) and that the power law is not a statistically valid description of the FSD over all floe sizes (Horvat et al., 2019). An alternative approach to modelling the FSD is the prognostic model of Roach et al. (2018, 2019). The prognostic model avoids any assumptions about the shape of the distribution and instead assigns sea ice area to a set of adjacent floe size categories, with individual processes parameterised at the floe scale. This approach carries its own set of challenges: if important physical processes are missing from the model, it will not be possible to simulate a physically realistic distribution, and the prognostic model also has a significant computational cost. In practice, the choice of FSD modelling approach will depend on the application.
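To make the contrast concrete, a minimal sketch of the data structure implied by the category-based approach is given below. The category edges and area fractions are hypothetical placeholders of our own, not the discretisation used by Roach et al.:

import numpy as np

# Hypothetical floe size category edges (m) and the fraction of a grid cell's
# sea ice area assigned to each category (placeholder values, for illustration only).
category_edges = np.array([0.0, 10.0, 100.0, 1000.0, 10_000.0, 50_000.0])
area_fraction = np.array([0.05, 0.30, 0.40, 0.20, 0.05])

assert area_fraction.size == category_edges.size - 1
assert np.isclose(area_fraction.sum(), 1.0)   # fractions of the ice-covered area sum to 1

# In a prognostic FSD, processes such as lateral melt, welding and wave break-up
# would transfer area between adjacent categories each time step, rather than
# assuming a fixed distribution shape.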

Further reading
Bateson, A. W., Feltham, D. L., Schröder, D., Hosekova, L., Ridley, J. K. and Aksenov, Y.: Impact of sea ice floe size distribution on seasonal fragmentation and melt of Arctic sea ice, Cryosphere, 14, 403–428, https://doi.org/10.5194/tc-14-403-2020, 2020.

Horvat, C., Roach, L. A., Tilling, R., Bitz, C. M., Fox-Kemper, B., Guider, C., Hill, K., Ridout, A., and Shepherd, A.: Estimating the sea ice floe size distribution using satellite altimetry: theory, climatology, and model comparison, The Cryosphere, 13, 2869–2885, https://doi.org/10.5194/tc-13-2869-2019, 2019. 

Stern, H. L., Schweiger, A. J., Zhang, J., and Steele, M.: On reconciling disparate studies of the sea-ice floe size distribution, Elem. Sci. Anth., 6, p. 49, https://doi.org/10.1525/elementa.304, 2018a. 

Stern, H. L., Schweiger, A. J., Stark, M., Zhang, J., Steele, M., and Hwang, B.: Seasonal evolution of the sea-ice floe size distribution in the Beaufort and Chukchi seas, Elem. Sci. Anth., 6, p. 48, https://doi.org/10.1525/elementa.305, 2018b. 

Roach, L. A., Horvat, C., Dean, S. M., and Bitz, C. M.: An Emergent Sea Ice Floe Size Distribution in a Global Coupled Ocean-Sea Ice Model, J. Geophys. Res.-Oceans, 123, 4322–4337, https://doi.org/10.1029/2017JC013692, 2018. 

Roach, L. A., Bitz, C. M., Horvat, C. and Dean, S. M.: Advances in Modeling Interactions Between Sea Ice and Ocean Surface Waves, J. Adv. Model. Earth Syst., 11, 4167–4181, https://doi.org/10.1029/2019MS001836, 2019.

Toyota, T., Takatsuji, S., and Nakayama, M.: Characteristics of sea ice floe size distribution in the seasonal ice zone, Geophys. Res. Lett., 33, 2–5, https://doi.org/10.1029/2005GL024556, 2006.