Models and Memories: Our NCAS CMSS 2025 Experience

Piyali Goswami: p.goswami@pgr.reading.ac.uk

Mehzooz Nizar: m.nizar@pgr.reading.ac.uk

This September, we attended the NCAS Climate Modelling Summer School (CMSS), held at the University of Cambridge from 8th to 19th September. Five of us from the University of Reading joined this two-week residential programme. It was an intense and inspiring experience, full of lectures, coding sessions, discussions, and social events. In this blog, we would like to share our experiences.

Picture 1: Group picture of students and teaching staff. One cohort, many time zones, zero dull moments…

About NCAS CMSS

The NCAS Climate Modelling Summer School (CMSS) is a visionary programme, launched in 2007 with funding originating from grant proposals led by Prof. Pier Luigi Vidale. Run by leading researchers from the National Centre for Atmospheric Science and the University of Reading, it’s an immersive, practice-driven programme that equips early-career researchers and PhD students with deeper expertise in climate modelling, Earth system science, and state-of-the-art computing. Held biennially in Cambridge, CMSS has trained 350 students from roughly 40 countries worldwide.

The CMSS 2025 brought together around 30 participants, including PhD students and professionals interested in the field of Climate Modelling. 

Long Days, Big Ideas: Inside Our Schedule

The school was full of activity from morning to evening. We started around 9:00 AM and usually wrapped up by 8:30 PM, with a good mix of lectures, practical sessions, and discussions that made the long days fly by.

Week 1 was led by Dr Hilary Weller, who ran an excellent series on Numerical Methods for Atmospheric Models. Mornings were devoted to lectures covering core schemes; afternoons shifted to hands-on Python sessions to implement and test the methods. Between blocks, invited talks from leading researchers across universities highlighted key themes in weather and climate modelling. After dinner, each day closed with a thought-provoking discussion on climate modelling, chaired by Prof. Pier Luigi Vidale, where participants shared ideas on improving models and their societal impact. 

The week concluded with group presentations summarising the key takeaways from Hilary’s sessions and our first collaborative activity that set the tone for the rest of the school. It was followed by a relaxed barbecue evening, where everyone finally had a chance to unwind, chat freely, and celebrate surviving our first week together. 

Picture 2: Working on our group projects. Looks like NASA, feels like: ‘what’s our team name?’

Week 2 was all about getting hands-on with a climate model and learning how to analyse its output. We moved into group projects using SpeedyWeather.jl, a global atmospheric model with simplified physics designed as a research playground, to design and run our own climate model experiments. One of the developers of SpeedyWeather.jl, Milan Klöwer, was with us throughout the week to guide and support our work. Each team explored a different question, from sensitivity testing to analysing the model outputs, and spent the afternoons debugging, plotting, and comparing results. Evenings featured talks from leading scientists on topics such as the hydrological cycle, land and atmosphere interactions, and the carbon cycle.

The week also included a formal dinner at Sidney Sussex, a welcome pause before our final presentations. On Friday 19th of September, every group presented its findings before we all headed home. Some slides were finished only seconds before presenting, but the atmosphere was upbeat and supportive. It was a satisfying end to two weeks of hard work, shared learning, and plenty of laughter. A huge thank you to the teaching team for being there through everything, from the “silly” questions to the stubborn bugs. Your patience, clarity, and genuine care made all the difference.

Picture 3: SpeedyWeather, as told by its favourite storyteller Milan. Picture 4: Pier Luigi, probably preparing for the next summer school…

Coffee, Culture, and Climate Chat

The best part of the summer school was the people. The group was diverse: PhD students and professionals from different countries and research areas. We spent nearly every moment together, from breakfast to evening socials, often ending the day with random games of “Would You Rather” or talking about pets. The summer school’s packed schedule brought us closer and sparked rich chats about science and life, everything from AI’s role in climate modelling to the policy levers behind climate action. We left with a lot to think about. Meeting people from around the world exposed us to rich cultural diversity and new perspectives on how science is practiced in different countries, insights that were both fresh and valuable. It went beyond training: we left with skills, new friends, and the seeds of future collaborations, arguably the most important part of research.

Picture 5: Barbecue evening after wrapping up the first week, Picture 6: Formal dinner at Sidney Sussex, one last evening together before the final presentations

Reflections and takeaways

We didn’t become expert modellers in two weeks, but we did get a glimpse of how complex and creative climate modelling can be. The group presentations were chaotic but fun. Different projects, different approaches, and a few slides that weren’t quite finished in time. Some of us improvised more than we planned, but the atmosphere was supportive and full of laughter. More than anything, we learned by doing and by doing it together. The long days, the discussions, and the teamwork made it all worthwhile.

If you ever get the chance to go, take it. You’ll come back with new ideas, good memories, and friends who make science feel a little more human.

For future participants

Applications for the NCAS CMSS usually open in early spring and close around June. With limited spots, selection is competitive and merit-based, evaluating both fit for the course and the expected benefit to the student.

Bring curiosity, enthusiasm, and a healthy dose of patience; you’ll need all three. But honestly, that’s what makes it fun. You learn quickly, laugh a lot, and somehow find time to celebrate when a script finally runs without error. By the end, you’ll be tired, happy, and probably a little proud of how much you managed to do (and you’ll probably have a few new friends who helped you debug along the way).

Preparing for the assimilation of future ocean-current measurements

By Laura Risley

Ocean data assimilation (DA) is vital. First, it is essential for improving forecasts of ocean variables. Beyond that, the interaction between the ocean and atmosphere is key to numerical weather prediction (NWP), and coupled ocean–atmosphere DA schemes are used operationally.

At present, observations of the ocean currents are not assimilated operationally. This is all set to change, as satellites are being proposed to measure these ocean currents directly. Unfortunately, the operational DA systems are not yet equipped to handle these observations due to some of the assumptions made about the velocities. In my work, we propose the use of alternative velocity variables to prepare for these future ocean current measurements. These will reduce the number of assumptions made about the velocities and are expected to improve NWP forecasts.

What is DA? 

DA combines observations and a numerical model to give a best estimate of the state of our system – which we call our analysis. This will lead to a better forecast. To quote my lunchtime seminar: ‘Everything is better with DA!’

Our model state usually comes from a prior estimate which we refer to as the background. A key component of data assimilation is that the errors present in both sets of data are taken into consideration. These uncertainties are represented by covariance matrices. 

I am particularly interested in variational data assimilation, which formulates the DA problem into a least squares problem. Within variational data assimilation the analysis is performed with a set of variables that differ from the original model variables, called the control variables. After the analysis is found in this new control space, there is a transformation back to the model space. What is the purpose of this transformation? The control variables are chosen such that they can be assumed approximately uncorrelated, reducing the complexity of the data assimilation problem.
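For reference, a standard incremental variational cost function and its control variable transform can be written schematically (this is the textbook form, not the specific NEMOVAR formulation) as

J(\delta \mathbf{x}) = \frac{1}{2}\,\delta \mathbf{x}^{\mathrm{T}} \mathbf{B}^{-1} \delta \mathbf{x} + \frac{1}{2}\,(\mathbf{d} - \mathbf{H}\,\delta \mathbf{x})^{\mathrm{T}} \mathbf{R}^{-1} (\mathbf{d} - \mathbf{H}\,\delta \mathbf{x}), \qquad \delta \mathbf{x} = \mathbf{U}\mathbf{v}, \quad \mathbf{B} \approx \mathbf{U}\mathbf{U}^{\mathrm{T}},

where \delta \mathbf{x} is the increment to the background, \mathbf{d} is the innovation (observation minus background), \mathbf{B} and \mathbf{R} are the background and observation error covariance matrices, \mathbf{H} is the linearised observation operator and \mathbf{v} is the control vector. Choosing \mathbf{U} so that the elements of \mathbf{v} are approximately uncorrelated means the background term reduces to \frac{1}{2}\mathbf{v}^{\mathrm{T}}\mathbf{v}, which is exactly why the transformation is worthwhile.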

Velocity variables in the ocean 

My work is focused on the treatment of the velocities in NEMOVAR, the data assimilation software for the NEMO ocean model, which is used operationally at the Met Office and ECMWF. In NEMOVAR the velocities are transformed to their unbalanced components, and these are then used as control variables. The unbalanced components of the velocities are highly correlated, therefore contradicting the assumption made about control variables. This would result in suboptimal assimilation of future surface current measurements – therefore we seek alternative velocity control variables.

The alternative velocity control variables we propose for NEMOVAR are unbalanced streamfunction and velocity potential. This would involve transforming the current control variables, the unbalanced velocities, to these alternative variables using Helmholtz Theorem. This splits a velocity field into its nondivergent (streamfunction) and irrotational (velocity potential) parts. These parts have been suggested by Daley (1993) as more suitable control variables than the velocities themselves. 
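For reference, the two-dimensional form of this decomposition (the standard textbook relations, independent of the NEMOVAR implementation details) is

u = \frac{\partial \chi}{\partial x} - \frac{\partial \psi}{\partial y}, \qquad v = \frac{\partial \chi}{\partial y} + \frac{\partial \psi}{\partial x},

so that the streamfunction \psi and velocity potential \chi satisfy \nabla^2 \psi = \zeta (the relative vorticity) and \nabla^2 \chi = \delta (the divergence); the \psi part of the flow is nondivergent and the \chi part is irrotational.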

Numerical Implications of alternative variables 

We have performed the transformation to these proposed control variables using the shallow water equations (SWEs) on a 𝛽-plane. To do so we discretised the variables on the Arakawa-C grid. The traditional placement of streamfunction on this grid causes issues with the boundary conditions. Therefore, Li et al. (2006) proposed placing streamfunction in the centre of the grid, as shown in Figure 1. This circumvents the need to impose explicit boundary conditions on streamfunction. However, using this grid configuration leads to numerical issues when transforming from the unbalanced velocities to unbalanced streamfunction and velocity potential. We have analysed these theoretically and here we show some numerical results.

Figure 1: The left figure shows the traditional Arakawa-C configuration (Lynch (1989), Watterson (2001)) whereby streamfunction is in the corner of each grid cell. The right figure shows the Arakawa-C configuration proposed by Li et al. (2006) where streamfunction is in the centre of the grid cell. The green shaded region represents land. 

Issue 1: The checkerboard effect 

The transformation from the unbalanced velocities to unbalanced streamfunction and velocity potential involves averaging derivatives, due to the location of streamfunction in the grid cell. This process causes a checkerboard effect – whereby numerical noise enters the variable fields due to a loss of information. This is clear to see numerically using the SWEs. We use the shallow water model to generate a velocity field. This is transformed to its unbalanced components and then to unbalanced streamfunction and velocity potential. Using Helmholtz Theorem, the unbalanced velocities are reconstructed. Figure 2 shows the checkerboard effect clearly in the velocity error.

Figure 2: The difference between the original ageostrophic velocity increments, calculated using the SWEs, and the reconstructed ageostrophic velocity increments. These are reconstructed using Helmholtz Theorem, from the ageostrophic streamfunction and velocity potential increments. On the left we have the zonal velocity increment error and on the right the meridional velocity increment error. 
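A minimal one-dimensional sketch makes the loss of information explicit (purely illustrative, and not the NEMOVAR or shallow water code): the grid-scale checkerboard mode lies in the null space of an averaged staggered difference, so it cannot be constrained by the transform and reappears as noise when the transform is inverted.

```python
import numpy as np

# Illustrative 1-D sketch (assumed setup, not the NEMOVAR discretisation):
# a staggered (compact) difference "sees" the grid-scale checkerboard mode,
# but averaging that derivative back to the original points maps it to zero,
# so the mode carries no information through the transform.

n = 16
f = (-1.0) ** np.arange(n)                    # checkerboard field: +1, -1, +1, ...

staggered_diff = np.roll(f, -1) - f           # derivative at half-points (dx = 1)
averaged_diff = 0.5 * (staggered_diff + np.roll(staggered_diff, 1))  # averaged back

print(np.abs(staggered_diff).max())           # 2.0 -> staggered derivative is non-zero
print(np.abs(averaged_diff).max())            # 0.0 -> averaged derivative vanishes
```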

Issue 2: Challenges in satisfying the Helmholtz Theorem 

Helmholtz Theorem splits the velocity into its nondivergent and irrotational components. We discovered that although the velocity field generated by streamfunction should be nondivergent and the field generated by velocity potential should be irrotational, this is not the case at the boundaries, as can be seen in Figure 3. This implies the proposed control variables are able to influence each other on the boundary. This would lead to them being strongly coupled and therefore correlated near the boundaries. This directly conflicts with the assumption that our control variables are uncorrelated.

Figure 3: Issues with Helmholtz Theorem near the boundaries. The left shows the divergence of the velocity field generated by streamfunction. The right shows the vorticity of the velocity field generated by velocity potential. 

Overall, in my work we propose the use of alternative velocity control variables in NEMOVAR, namely unbalanced streamfunction and velocity potential. The use of these variables, however, leads to several numerical issues that we have identified and discussed. A paper on this work is in preparation, where we discuss some of the potential solutions. Our next step will be to extend this investigation to a more complex domain and assess our proposed control variables in assimilation experiments.

References: 

Daley, R. (1993) Atmospheric data analysis. No. 2. Cambridge University Press.

Li, Z., Chao, Y. and McWilliams, J. C. (2006) Computation of the streamfunction and velocity potential for limited and irregular domains. Monthly Weather Review, 134, 3384–3394.

Lynch, P. (1989) Partitioning the wind in a limited domain. Monthly Weather Review, 117, 1492–1500.

Watterson, I. (2001) Decomposition of global ocean currents using a simple iterative method. Journal of Atmospheric and Oceanic Technology, 18, 691–703.

The Mystery of Coarse Dust Transport in Observations and Models​

Natalie Ratcliffe – n.ratcliffe@pgr.reading.ac.uk

On Tuesday 23rd April 2024, I presented my PhD work to the department at the lunchtime seminar. The talk drew together much of the work I have done during the three and a half years of my PhD. This blog post will be a brief overview of the work discussed.

Every year, between 300 and 4000 million tons of mineral dust are lofted from the Earth’s surface (Huneeus et al., 2011; Shao et al., 2011). This dust can travel vast distances, affecting the Earth’s radiative budget, water and carbon cycles, and fertilisation of land and ocean surfaces, as well as aviation, among other impacts. Observations from recent field campaigns have revealed that we underestimate the amount of coarse particles (>5 µm diameter) which are transported long distances (Ryder et al., 2019). Based on our understanding of gravitational settling, some of these particles should not physically be able to travel as far as they do. This results in an underestimation of these particles in climate models, as well as a bias towards modelling finer particles (Kok et al., 2023). Furthermore, fine particles have different impacts on the Earth than coarse particles, for example on the radiative budget at the top of the atmosphere; including more coarse particles in a model reduces the cooling effect that dust has on the Earth.
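As a rough orientation (a back-of-the-envelope Stokes settling estimate with assumed values, not the deposition scheme used in the model), the sketch below shows why coarse particles "should" drop out well before crossing the Atlantic if nothing counteracts their fall speed.

```python
# Back-of-the-envelope Stokes settling estimate (assumed values, illustrative only):
# how far could a dust particle be carried before falling out of a ~5 km deep
# dust layer, if nothing (e.g. vertical winds, mixing) counteracts its fall speed?

rho_p = 2650.0     # dust particle density [kg m-3] (assumed)
mu = 1.8e-5        # dynamic viscosity of air [kg m-1 s-1]
g = 9.81           # gravitational acceleration [m s-2]

def stokes_velocity(diameter_m):
    """Terminal fall speed of a small sphere in still air (Stokes regime)."""
    return rho_p * g * diameter_m**2 / (18.0 * mu)

layer_depth = 5000.0   # depth of the dust layer [m] (assumed)
wind_speed = 7.0       # typical transport wind speed [m s-1] (assumed)

for d_um in (5, 10, 20, 60):
    v_t = stokes_velocity(d_um * 1e-6)
    fall_time = layer_depth / v_t
    print(f"{d_um:>3} um: v_t = {100 * v_t:5.2f} cm/s, "
          f"falls out in {fall_time / 86400:5.1f} days, "
          f"travels ~{wind_speed * fall_time / 1000:6.0f} km")
```

With these illustrative numbers, a 20 µm particle only manages on the order of 1000 km before settling out, far short of a trans-Atlantic crossing.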

Thus, my PhD project was born! We wanted to try and peel back the layers of the dusty onion. How are these coarse particles travelling so far?

Comparing a Climate Model and Observations

First, we compared in-situ aircraft observations to a climate model simulation to assess the degree to which the model was struggling to represent coarse particle transport from the Sahara across the Atlantic to the Caribbean. Measuring particles up to 300 µm in diameter, the Fennec, AER-D and SALTRACE campaigns provide observations at three stages of transport throughout the lifetime of dust in the atmosphere (near emission, moving over the ocean and at distance from the Sahara; Figure 1). Using these observations, we assess a Met Office Unified Model HadGEM3 configuration. This model has six dust size bins, ranging from 0.063 to 63.2 µm in diameter. This is a much larger upper bound than most climate models, which tend to have an upper bound at 10–20 µm.

Figure 1: Map showing the location of the flight tracks which were taken when the observations were measured.

We found that the model significantly underestimates the total mass of mineral dust in the atmosphere, as well as the fraction of dust mass made up of coarse particles. This happens at all locations, including at the Sahara: firstly, this suggests that the model is not emitting enough coarse particles to begin with, and secondly, the growing model underestimation with distance suggests that the coarse particles are being deposited too quickly. By looking further into the model, we found that the coarsest particles (20–63.2 µm) were lost from the atmosphere very quickly, barely surpassing Cape Verde in their westwards transport, whereas in the observations these coarsest particles were still present at the Caribbean, representing ~20% of the total dust mass. We also found that the distribution of coarse particles in the model tended to have a stronger dependence on altitude than in the observations, with fewer particles at higher altitudes. This work has been written up into a paper which is currently under review and is available as a preprint (Ratcliffe et al., 2024).

Sensitivity Testing of the Model

Now that we have confirmed that the model is struggling to retain coarse particles for long-range transport, we want to work out if any of the model processes involved in transport and deposition could be over- or under-active in coarse particle transport. This involved turning off individual processes one at a time and seeing what impact each had on the dust transport. As we wanted to focus on the impact on coarse particle transport, we needed to start with an improved emission distribution at the Sahara, so we tuned the model to better match the observations from the Fennec campaign.

In our first tests we decided to ‘turn off’ or reduce gravitational settling of dust particles in the model to see what happens if we eliminate the greatest removal mechanism for coarse particles. Figure 2 shows the volume size distribution of these gravitational settling model experiments against the observations. We found that completely removing gravitational settling increased the mass of coarse particles too much, while having little to no effect on the fine particles. We found that to bring the model into better agreement with the observations, sedimentation needs to be reduced by ~50% at the Sahara and more than 80% at the Caribbean.

Figure 2: Mean volume size distribution between 2500–3000 m in the Fennec (red), AER-D (orange) and SALTRACE (yellow) observations, the control model simulation (black) and the reduced dust sedimentation experiments (blue shades).

We also tested the sensitivity of coarse dust transport to turbulent mixing, convective mixing and wet deposition; however, these experiments did not have as great an impact on coarse transport as the sedimentation experiments. We found that removing the mixing mechanisms resulted in decreased vertical transport of dust, which tended to reduce the horizontal transport. We also carried out an experiment where we doubled the convective mixing, and this did show improved vertical and horizontal transport. Finally, when we removed wet deposition of dust, we found that it had a greater impact on the fine particles and less so on the coarse particles, suggesting that wet deposition is the main removal mechanism for the four finest size bins in the model.

Final Experiment

Now that we know our coarse particles are settling out too quickly and sit a bit too low in the atmosphere, we come to our final set of experiments. Let’s say that our coarse particles in the model and our dust scheme are actually set up perfectly; could it then be the meteorology in the model which is wrong? If the coarse particles were mixed higher up at the Sahara, would they reach faster horizontal winds and travel further across the Atlantic? To test this theory, I hacked the files which the model uses to start a simulation and put all the dust over the Sahara up to the top of the dusty layer (~5 km). We found this increased the lifetime of the coarsest particles, so that it took twice as long to lose 50% of the starting mass. Unfortunately, this only slightly improved the transport distance, as the particles were still lost relatively quickly. After checking the vertical winds in the model, we found that they were an order of magnitude smaller at the Sahara, the Canaries and Cape Verde than those observed during the field campaigns. This suggests that if the vertical winds were stronger, they could initially raise the dust higher and keep the coarse particles raised for longer, extending their atmospheric lifetime.

Summarised Conclusions

To summarise what I’ve found during my PhD:

  1. The model underestimates coarse mass at emission and the underestimation is exacerbated with westwards transport.
  2. Altering the settling velocity of dust in the model brings the model into better agreement with the observations.
    a. Turbulent mixing, convective mixing and wet deposition have minimal impact on coarse transport.
  3. Lofting the coarse particles higher initially improves transport minimally.
    a. Vertical winds in the model are an order of magnitude too small.

So what’s next?

If we’ve found that the coarse particles are settling out of the atmosphere too quickly (by potentially more than 80%), would that suggest that the deposition equations are wrong and are overestimating particle deposition? So, we change those and everything’s fixed, right? I wish. Unfortunately, the deposition equations are one of the things that we are more scientifically sure of, so our results mean that there’s something happening to the coarse particles that we aren’t modelling which is able to counteract their settling velocity by a very significant amount. Our finding that the vertical winds are too small could be a part of this. Other recent research suggests that processes such as particle asphericity, triboelectrification, vertical mixing and turbulent mixing (which has been shown to help in a higher-resolution, non-climate model) could enhance coarse particle transport.

Huneeus, N., Schulz, M., Balkanski, Y., Griesfeller, J., Prospero, J., Kinne, S., Bauer, S., Boucher, O., Chin, M., Dentener, F., Diehl, T., Easter, R., Fillmore, D., Ghan, S., Ginoux, P., Grini, A., Horowitz, L., Koch, D., Krol, M. C., Landing, W., Liu, X., Mahowald, N., Miller, R., Morcrette, J.-J., Myhre, G., Penner, J., Perlwitz, J., Stier, P., Takemura, T., and Zender, C. S. 2011. Global dust model intercomparison in AeroCom phase I. Atmospheric Chemistry and Physics. 11(15), pp. 7781-7816

Kok, J. F., Storelvmo, T., Karydis, V. A., Adebiyi, A. A., Mahowald, N. M., Evan, A. T., He, C., and Leung, D. M. 2023. Mineral dust aerosol impacts on global climate and climate change. Nature Reviews Earth & Environment, pp. 1–16. url: https://www.nature.com/articles/s43017-022-00379-5

Ratcliffe, N. G., Ryder, C. L., Bellouin, N., Woodward, S., Jones, A., Johnson, B., Weinzierl, B., Wieland, L.-M., and Gasteiger, J.: Long range transport of coarse mineral dust: an evaluation of the Met Office Unified Model against aircraft observations, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2024-806, 2024

Ryder, C. L., Highwood, E. J., Walser, A., Seibert, P., Philipp, A., and Weinzierl, B. 2019. Coarse and giant particles are ubiquitous in Saharan dust export regions and are radiatively significant over the Sahara. Atmospheric Chemistry and Physics. 19(24), pp. 15353–15376

Shao, Y., Wyrwoll, K.-H., Chappell, A., Huang, J., Lin, Z., McTainsh, G. H., Mikami, M., Tanaka, T. Y., Wang, X., and Yoon, S. 2011. Dust cycle: An emerging core theme in Earth system science. Aeolian Research. 2(4), pp. 181–204

WesCon 2023: From Unexpected Radiosondes to Experimental Forecasts

Adam Gainford – a.gainford@pgr.reading.ac.uk

Summer might seem like a distant memory at this stage, with the “exact date of snow” drawing ever closer and Mariah Carey’s Christmas desires broadcasting to unsuspecting shoppers across the country. But cast your minds back four-to-six months and you may remember a warmer and generally sunnier time, filled with barbeques, bucket hats, and even the occasional Met Ball. You might also remember that, weather-wise, summer 2023 was one of the more anomalous summers we have experienced in the UK. This summer saw 11% more rainfall recorded than the 1991-2020 average, despite June being dominated by hot, dry weather. In fact, June 2023 was also the warmest June on record and yet temperatures across the summer turned out to be largely average. 

Despite being a bit of an unsettled summer, these mixed conditions provided the perfect opportunity to study a notoriously unpredictable type of weather: convection. Convection is often much more difficult to forecast accurately compared to larger-scale features, even using models which can now explicitly resolve these events. As a crude analogy, consider a pot of bubbling water which has been brought to the boil on a kitchen hob. As the amount of heat being delivered to the water increases, we can probably make some reasonable estimates of the number of bubbles we should expect to see on the surface of the water (none initially, but slowly increasing in number as the temperature of the water approaches the boiling point). But we would likely struggle if we tried to predict exactly where those bubbles might appear.

This is where the WesCon (Wessex Convection) field campaign comes in. WesCon participants spent the entire summer operating radars, launching radiosondes, monitoring weather stations, analysing forecasts, piloting drones, and even taking to the skies — all in an effort to better understand convection and its representation within forecast models. It was a huge undertaking, and I was fortunate enough to be a small part of it. 

In this blog I discuss two of the ways in which I was involved: launching radiosondes from the University of Reading Atmospheric Observatory and evaluating the performance of models at the Met Office Summer Testbed.

Radiosonde Launches and Wiggly Profiles

A core part of WesCon was frequent radiosonde launches from sites across the south and south-west of the UK. Over 300 individual sondes were launched in total, with each one requiring a team of two to three people to calibrate the sonde, record station measurements and fill balloons with helium. Those are the easy parts – the hard part is making sure your radiosonde gets off the ground in one piece.

You can see in the picture below that the observatory is surrounded by sharp fences and monitoring equipment which can be tricky to avoid, especially during gusty conditions. On the rare occasions when the balloon experienced “rapid unplanned disassembly”, we had to scramble to prepare a new one so as not to delay the recordings for too long.

The University of Reading Atmospheric Observatory, overlooked by some mid-level cloud streets. 

After a few launches, however, the procedure becomes routine. Then you can start taking a cursory look at the data being sent back to the receiving station.

During the two weeks I was involved with launching radiosondes, there were numerous instances of elevated convection, which were a particular priority for the campaign given the headaches these cause for modellers. Elevated convection is where the ascending airmass originates from somewhere above the boundary layer, such as on a frontal boundary. We may therefore expect profiles of elevated convection to include a temperature inversion of some kind, which would prevent surface airmasses from ascending above the boundary layer. 

However, what we certainly did not expect to see were radiosondes appearing to oscillate with height (see my crude screenshot below). 

“The wiggler”! Oscillating radiosondes observed during elevated convection events.

Cue the excited discussions trying to explain what we were seeing. Sensor malfunction? Strong downdraughts? Not quite. 

Notice that the peak of each oscillation occurs almost exactly at 0°C. Surely that can’t be coincidental! It turns out these “wiggly” radiosondes have been observed before, albeit infrequently, and the behaviour is attributed to snow building up on the surface of the balloon, weighing it down. As the balloon sinks and returns to above-freezing temperatures, the accumulated snow gradually melts and departs the balloon, allowing it to rise back up to the freezing level and accumulate more snow, and so on.

That sounds reasonable enough. So why, then, do we see this oscillating behaviour so infrequently? One of the reasons discovered was purely technical. 

If you would like to read more about these events, a paper is currently being prepared by Stephen Burt, Caleb Miller and Brian Lo. Check back on the blog for further updates!

Humphrey Lean, Eme Dean-Lewis (left) and myself (right) ready to launch a sonde.

Met Office Summer Testbed

While not strictly a part of WesCon, this summer’s Met Office testbed was closely connected to the themes of the field campaign, and featured plenty of collaboration.

Testbeds are an opportunity for operational meteorologists, researchers, academics, and even students to evaluate forecast outputs and provide feedback on particular model issues. This year’s testbed was focussed on two main themes: convection and ensembles. These are both high priority areas for development in the Met Office, and the testbed provides a chance to get a broader, more subjective evaluation of these issues.

Group photo of the week 2 testbed participants.

Each day was structured into six sets of activities. Firstly, we were divided into three groups to perform a “Forecast Denial Experiment”, whereby each group was given access to a limited set of data and asked to issue a forecast for later in the day. One group only had access to the deterministic UKV model outputs, another group only had access to the MOGREPS-UK high-resolution ensemble output, and the third group had access to both datasets. The idea was to test whether ensemble outputs provide added value and accuracy to forecasts of impactful weather compared to deterministic outputs alone. Each group was led by one or two operational meteorologists who navigated the data and, generally, provided most of the guidance. Personally, I found it immensely useful to shadow the op-mets as they made their forecasts, and came away with a much better understanding of the processes which go into issuing a forecast.

After lunch, we would begin the ensemble evaluation activity which focussed on subjectively evaluating the spread of solutions in the high-resolution MOGREPS-UK ensemble. Improving ensemble spread is one of the major priorities for model development; currently, the members of high-resolution ensembles tend to diverge from the control member too slowly, leading to overconfident forecasts. It was particularly interesting to compare the spread results from MOGREPS-UK with the global MOGREPS-G ensemble and to try to understand the situations when the UK ensemble seemed to resemble a downscaled version of the global model. Next, we would evaluate three surface water flooding products, all combining ensemble data with other surface and impact libraries to produce flooding risk maps. Despite being driven by the same underlying model outputs, it was surprising how much each product differed in the case studies we looked at.

Finally, we would end the day by evaluating the WMV (Wessex Model Variable) 300 m test ensemble, run over the greater Bristol area over this summer for research purposes. Also driven by MOGREPS-UK, this ensemble would often pick out convective structure which MOGREPS-UK was too coarse to resolve, but also tended to overdo the intensities. It was also very interesting to see that the objective metrics suggested WMV had much worse spread than MOGREPS-UK over the same area, a surprising result which didn’t align with my own interpretation of model performance.

Overall, the testbed was a great opportunity to learn more about how forecasts are issued and to get a deeper intuition for how to interpret model outputs. As researchers, it’s easy to look at model output as just abstract data to be verified and scrutinised, forgetting the impacts that the weather it represents can have on the people experiencing it. While it was an admittedly exhausting couple of weeks, I would highly recommend that more students take part in future testbeds!

Connecting Global to Local Hydrological Modelling Forecasting – Virtual Workshop

Gwyneth Matthews g.r.matthews@pgr.reading.ac.uk
Helen Hooker h.hooker@pgr.reading.ac.uk 

ECMWF – CEMS – C3S – HEPEX – GFP

What was it? 

The workshop was organised under the umbrella of ECMWF, the Copernicus services CEMS and C3S, the Hydrological Ensemble Prediction EXperiment (HEPEX) and the Global Flood Partnership (GFP). The workshop lasted 3 days, with a keynote speaker followed by a Q&A at the start of each of the 6 sessions. Each keynote talk focused on a different part of the forecast chain, from hybrid hydrological forecasting to the use of forecasts for anticipatory humanitarian action, and how the global and local hydrological scales could be linked. Each keynote was followed by speedy poster pitches from around the world, then poster presentations and discussion in the virtual ECMWF (Gather.town).

Figure 1: Gather.town was used for the poster sessions and was set up to look like the ECMWF site in Reading, complete with a Weather Room and rubber ducks. 

What was your poster about? 

Gwyneth – I presented Evaluating the post-processing of the European Flood Awareness System’s medium-range streamflow forecasts in Session 2 – Catchment-scale hydrometeorological forecasting: from short-range to medium-range. My poster showed the results of the recent evaluation of the post-processing method used in the European Flood Awareness System. Post-processing is used to correct errors and account for uncertainties in the forecasts and is a vital component of a flood forecasting system. By comparing the post-processed forecasts with observations, I was able to identify where the forecasts were most improved.  

Helen – I presented An evaluation of ensemble forecast flood map spatial skill in Session 3 – Monitoring, modelling and forecasting for flood risk, flash floods, inundation and impact assessments. The ensemble approach to forecasting flooding extent and depth is ideal due to the highly uncertain nature of extreme flooding events. The flood maps are linked directly to probabilistic population impacts to enable timely, targeted release of funding. The Flood Foresight System forecast flood inundation maps are evaluated by comparison with satellite based SAR-derived flood maps so that the spatial skill of the ensemble can be determined.  

Figure 2: Gwyneth (left) and Helen (right) presenting their posters shown below in the 2-minute pitches. 

What did you find most interesting at the workshop? 

Gwyneth – All the posters! Every session had a wide range of topics being presented and I really enjoyed talking to people about their work. The keynote talks at the beginning of each session were really interesting and thought-provoking. I especially liked the talk by Dr Wendy Parker about a fitness-for-purpose approach to evaluation which incorporates how the forecasts are used and who is using the forecast into the evaluation.  

Helen – Lots! All of the keynote talks were excellent and inspiring. The latest developments in detecting flooding from satellites include processing the data using machine learning algorithms directly onboard, before beaming the flood map back to Earth! If openly available and accessible (this came up quite a bit), this could substantially reduce the time it takes for flood maps to reach flood risk managers dealing with an incident, and speed up their use in improving flood forecasting models.

How was your virtual poster presentation/discussion session? 

Gwyneth – It was nerve-racking to give the mini-pitch to 200+ people, but the poster session in Gather.town was great! The questions and comments I got were helpful, but it was nice to have conversations on non-research-based topics and to meet some of the EC-HEPEXers (early career members of the Hydrological Ensemble Prediction Experiment). The sessions felt more natural than a lot of the virtual conferences I have been to.

Helen – I really enjoyed choosing my hairdo and outfit for my mini self. I’ve not actually experienced a ‘real’ conference/workshop but compared to other virtual events this felt quite realistic. I really enjoyed the Gather.town setting, especially the duck pond (although the ducks couldn’t swim or quack!). It was great to have the chance to talk about my work and meet a few people, and some thought-provoking questions are always useful.

The effect of surface heat fluxes on the evolution of storms in the North Atlantic storm track

Andrea Marcheggiani – a.marcheggiani@pgr.reading.ac.uk

Diabatic processes are typically considered a source of energy for weather systems and a primary contributing factor to the maintenance of mid-latitude storm tracks (see Hoskins and Valdes 1990 for some classical reading, but also more recent reviews, e.g. Chang et al. 2002). However, surface heat exchanges do not necessarily act as a fuel for the evolution of weather systems: the effects of surface heat fluxes and their coupling with the lower-tropospheric flow can be detrimental to the potential energy available for systems to grow. Indeed, the magnitude and sign of their effects depend on the time scales (e.g., synoptic, seasonal) and length scales (e.g., global, zonal, local) at which these effects unfold.


Figure 1: Composites for strong (a-c) and weak (d-f) values of the covariance between heat flux and temperature time anomalies.

Heat fluxes arise in response to thermal imbalances, which they attempt to neutralise. In the atmosphere, the primary thermal imbalances are the meridional temperature gradient caused by the differential radiative heating from the Sun between the equator and the poles, and the temperature contrasts at the air–sea interface, which essentially derive from the different heat capacities of the oceans and the atmosphere.

In the context of the energetic scheme of the atmosphere, first formulated by Lorenz (1955) and commonly known as the Lorenz energy cycle, the meridional transport of heat (or dry static energy) is associated with the conversion of zonal available potential energy to eddy available potential energy, while diabatic processes at the surface coincide with the generation of eddy available potential energy.

Figure 2: Phase portrait of FT covariance and mean baroclinicity. Streamlines indicate average circulation in the phase space (line thickness proportional to phase speed). The black shaded dot in the top left corner indicates the size of the Gaussian kernel used in the smoothing process. Colour shading indicates the number of data points contributing to the kernel average

The sign of the contribution from surface heat exchanges to the evolution of weather systems is not unequivocal, as it depends on the specific framework used to evaluate their effects. Globally, these exchanges have been estimated to have a positive effect on the potential energy budget (Peixoto and Oort, 1992), while locally the picture is less clear, as heating where it is cold and cooling where it is warm would lead to a reduction in temperature variance, which is essentially available potential energy.
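In the Lorenz framework this statement can be written schematically (a textbook relation, not a result specific to this study): the generation of eddy available potential energy scales with the covariance between diabatic heating anomalies and temperature anomalies,

G_E \propto \overline{Q'T'},

so a surface flux that heats anomalously cold air (Q' > 0 where T' < 0) makes this covariance negative and destroys eddy available potential energy, which is the local damping effect explored in this work.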

The first part of my PhD focused on assessing the role of local air–sea heat exchanges in the evolution of synoptic systems. To that end, we built a hybrid framework where the spatial covariance between time anomalies of sensible heat flux F and lower-tropospheric air temperature T is taken as a measure of the intensity of the air–sea thermal coupling. The time anomalies, denoted by a prime, are defined as departures from a 10-day running mean so that we can concentrate on synoptic variability (Athanasiadis and Ambaum, 2009). The spatial domain where we compute the spatial covariance extends from 30°N to 60°N and from 30°W to 79.5°W, which corresponds to the Gulf Stream extension region, and to focus on air–sea interaction we excluded grid points covered by land or ice.

This leaves us with a time series of the F’–T’ spatial covariance, which we also refer to as the FT index.
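In code, the calculation is straightforward; the hedged sketch below (file name, variable names and coordinate conventions are placeholders, not the actual analysis scripts, and the land/ice mask is omitted) computes synoptic anomalies as departures from a 10-day running mean and then forms the area-weighted spatial covariance over the Gulf Stream extension box at each time.

```python
import numpy as np
import xarray as xr

# Hedged sketch of an FT-index style calculation (placeholder dataset and
# variable names; land/ice masking omitted for brevity).
ds = xr.open_dataset("reanalysis_gulf_stream.nc")          # hypothetical input file
box = dict(latitude=slice(30, 60), longitude=slice(-79.5, -30))
shf = ds["surface_sensible_heat_flux"].sel(**box)          # hypothetical variable name
t850 = ds["temperature_850hPa"].sel(**box)                 # hypothetical variable name

def synoptic_anomaly(field, days=10):
    """Departure from a centred 10-day running mean (assumes daily data)."""
    return field - field.rolling(time=days, center=True).mean()

f_prime = synoptic_anomaly(shf)
t_prime = synoptic_anomaly(t850)

weights = np.cos(np.deg2rad(shf.latitude))                  # simple area weighting
dims = ("latitude", "longitude")

# Spatial covariance: average product of the anomalies about their spatial means.
f_dev = f_prime - f_prime.weighted(weights).mean(dims)
t_dev = t_prime - t_prime.weighted(weights).mean(dims)
ft_index = (f_dev * t_dev).weighted(weights).mean(dims)     # one value per time step
```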

The FT index is found to be always positive and characterised by frequent bursts of intense activity (or peaks). Composite analysis, shown in Figure 1 for mean sea level pressure (a,d), temperature at 850hPa (b,e) and surface sensible heat flux (c,f), indicates that peaks of the FT index (panels a—c) correspond with intense weather activity in the spatial domain considered (dashed box in Figure 1) while a more settled weather pattern is observed to be typical when the FT index is weak (panels d—f).


Figure 3: Phase portraits for spatial-mean T (a) and cold sector area fraction (b). Shading in (a) represents the difference between phase tendency and the mean value of T, as reported next to the colour bar. Arrows highlight the direction of the circulation, kernel-averaged using the Gaussian kernel shown in the top-left corner of each panel.

We examine the dynamical relationship between the FT index and the area-mean baroclinicity, which is a measure of available potential energy in the spatial domain. To do that, we construct a phase space of FT index and baroclinicity and study the average circulation traced by the time series for the two dynamical variables. The resulting phase portrait is shown in Figure 2. For technical details on phase space analysis refer to Novak et al. (2017), while for more examples of its use see Marcheggiani and Ambaum (2020) or Yano et al. (2020). We observe that, on average, baroclinicity is strongly depleted during events of strong F’—T’ covariance and it recovers primarily when covariance is weak. This points to the idea that events of strong thermal coupling between the surface and the lower troposphere are on average associated with a reduction in baroclinicity, thus acting as a sink of energy in the evolution of storms and, more generally, storm tracks.

Upon investigation of the driving mechanisms that lead to a strong F’–T’ spatial covariance, we find that increases in the variances and in the correlation are equally important; this appears to be a more general feature of heat fluxes in the atmosphere, as more recent results indicate (and is the focus of the second part of my PhD).

In the case of surface heat fluxes, cold sector dynamics play a fundamental role in driving the increase in correlation: when cold air is advected over the ocean surface, flux variance amplifies in response to the stark temperature contrasts at the air–sea interface, as the ocean surface temperature field features a high degree of spatial variability linked to the presence of both the Gulf Stream on the large scale and oceanic eddies on the mesoscale (up to 100 km).

The growing relative importance of the cold sector in the intensification phase of the F’—T’ spatial covariance can also be revealed by looking at the phase portraits for air temperature and cold sector area fraction, which is shown in Figure 3. These phase portraits tell us how these fields vary at different points in the phase space of surface heat flux and air temperature spatial standard deviations (which correspond to the horizontal and vertical axes, respectively). Lower temperatures and larger cold sector area fraction characterise the increase in covariance, while the opposite trend is observed in the decaying stage.

Surface heat fluxes eventually trigger an increase in temperature variance, which within the atmospheric boundary layer follows the almost adiabatic vertical profile characteristic of the mixed layer (Stull, 1988).

Figure 4: Diagram of the effect of the atmospheric boundary layer height on modulating surface heat flux—temperature correlation.

Stronger surface heat fluxes are associated with a deeper boundary layer reaching higher levels into the troposphere: this could explain the observed increase in correlation, as the lower-tropospheric air temperatures become more strongly coupled with the surface, while a lower correlation with the surface ensues when the boundary layer is shallow and surface heat fluxes are weak. Figure 4 shows a simple diagram summarising the mechanisms described above.

In conclusion, we showed that surface heat fluxes locally can have a damping effect on the evolution of mid-latitude weather systems, as the covariation of surface heat flux and air temperature in the lower troposphere corresponds with a decrease in the available potential energy.

Results indicate that most of this thermodynamically active heat exchange is realised within the cold sector of weather systems, specifically as the atmospheric boundary layer deepens and exerts a deeper influence upon the tropospheric circulation.

References

  • Athanasiadis, P. J. and Ambaum, M. H. P.: Linear Contributions of Different Time Scales to Teleconnectivity, J. Climate, 22, 3720– 3728, 2009.
  • Chang, E. K., Lee, S., and Swanson, K. L.: Storm track dynamics, J. Climate, 15, 2163–2183, 2002.
  • Hoskins, B. J. and Valdes, P. J.: On the existence of storm-tracks, J. Atmos. Sci., 47, 1854–1864, 1990.
  • Lorenz, E. N.: Available potential energy and the maintenance of the general circulation, Tellus, 7, 157–167, 1955.
  • Marcheggiani, A. and Ambaum, M. H. P.: The role of heat-flux–temperature covariance in the evolution of weather systems, Weather and Climate Dynamics, 1, 701–713, 2020.
  • Novak, L., Ambaum, M. H. P., and Tailleux, R.: Marginal stability and predator–prey behaviour within storm tracks, Q. J. Roy. Meteorol. Soc., 143, 1421–1433, 2017.
  • Peixoto, J. P. and Oort, A. H.: Physics of climate, American Institute of Physics, New York, NY, USA, 1992.
  • Stull, R. B.: Mean boundary layer characteristics, In: An Introduction to Boundary Layer Meteorology, Springer, Dordrecht, Germany, 1–27, 1988.
  • Yano, J., Ambaum, M. H. P., Dacre, H., and Manzato, A.: A dynamical—system description of precipitation over the tropics and the midlatitudes, Tellus A: Dynamic Meteorology and Oceanography, 72, 1–17, 2020.

High-resolution Dispersion Modelling in the Convective Boundary Layer

Lewis Blunn – l.p.blunn@pgr.reading.ac.uk

In this blog I will first give an overview of the representation of pollution dispersion in regional air quality models (AQMs). I will then show that when pollution dispersion simulations in the convective boundary layer (CBL) are run at \mathcal{O}(100 m) horizontal grid length, interesting dynamics emerge that have significant implications for urban air quality. 

Modelling Pollution Dispersion 

AQMs are a critical tool in the management of urban air pollution. They can be used for short-term air quality (AQ) forecasts, and in making planning and policy decisions aimed at abating poor AQ. For accurate AQ prediction the representation of vertical dispersion in the urban boundary layer (BL) is key because it controls the transport of pollution away from the surface. 

Current regional scale Eulerian AQMs are typically run at \mathcal{O}(10 km) horizontal grid length (Baklanov et al., 2014). The UK Met Office’s regional AQM runs at 12 km horizontal grid length (Savage et al., 2013) and its forecasts are used by the Department for Environment Food and Rural Affairs (DEFRA) to provide a daily AQ index across the UK (today’s DEFRA forecast). At such horizontal grid lengths turbulence in the BL is sub-grid.  

Regional AQMs and numerical weather prediction (NWP) models typically parametrise vertical dispersion of pollution in the BL using K-theory and sometimes with an additional non-local component so that 

F = -K(z)\,\frac{\partial c}{\partial z} + N_l

where F is the flux of pollution, c is the pollution concentration, K(z) is a turbulent diffusion coefficient and z is the height from the ground. N_l is the non-local term which represents vertical turbulent mixing under convective conditions due to buoyant thermals (Lock et al., 2000; Siebesma et al., 2007).  
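To make the parametrised (Eulerian) picture concrete, here is a minimal single-column sketch of K-theory dispersion with a constant surface emission and the non-local term set to zero; the grid, time step and K profile are assumed, illustrative values rather than any operational AQM configuration.

```python
import numpy as np

# Minimal K-theory column sketch: dc/dt = d/dz( K(z) dc/dz ) with a constant
# surface emission flux and N_l = 0. All values are illustrative assumptions.

nz, dz, dt = 50, 20.0, 1.0                  # 50 levels of 20 m, 1 s time step
z = (np.arange(nz) + 0.5) * dz              # mid-level heights [m]
zi = 1000.0                                 # boundary layer depth [m] (assumed)
# Simple convective-like K profile, near-zero above the BL top
K = np.where(z < zi, 200.0 * (z / zi) * (1.0 - z / zi) ** 2, 0.01)

c = np.zeros(nz)                            # pollutant concentration [arbitrary units]
surface_flux = 1.0                          # emission flux into the lowest level

for _ in range(3600):                       # integrate for one hour
    k_int = 0.5 * (K[1:] + K[:-1])          # K on the interior level interfaces
    flux = -k_int * (c[1:] - c[:-1]) / dz   # upward diffusive flux on interfaces
    dcdt = np.zeros(nz)
    dcdt[0] = (surface_flux - flux[0]) / dz        # emission in, diffusion out
    dcdt[1:-1] = (flux[:-1] - flux[1:]) / dz       # flux convergence
    dcdt[-1] = flux[-1] / dz                       # zero-flux top boundary
    c += dcdt * dt

print(c[:5])                                # near-surface concentrations after 1 hour
```

Because the flux depends only on the local gradient, such a column always fills from the bottom up; it has no way to produce the "ballistic" behaviour discussed in the following paragraphs.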

K-theory (i.e. N_l=0) parametrisation of turbulent dispersion is consistent mathematically with Fickian diffusion of particles in a fluid. If K(z) is taken as constant and particles are released far from any boundaries (i.e. away from the ground and BL capping inversion), the mean square displacement of pollution particles increases proportional to the time since release. Interestingly, Albert Einstein showed that Brownian motion obeys Fickian diffusion. Therefore, pollution particles in K-theory dispersion parametrisations are analogous to memoryless particles undergoing a random walk. 

It is known however that at short timescales after emission pollution particles do have memory. In the CBL, far from undergoing a random trajectory, pollution particles released in the surface layer initially tend to follow the BL scale overturning eddies. They horizontally converge before being transported to near the top of the BL in updrafts. This results in large pollution concentrations in the upper BL and low concentrations near the surface at times on the order of one CBL eddy turnover period since release (Deardorff, 1972; Willis and Deardorff, 1981). This has important implications for ground level pollution concentration predicted by AQMs (as demonstrated later). 

Pollution dispersion can be thought of as having two different behaviours at short and long times after release. In the short time “ballistic” limit, particles travel at the velocity within the eddy they were released into, and the mean square displacement of pollution particles increases proportional to the time squared. At times greater than the order of one eddy turnover (i.e. the long time “diffusive” limit) dispersion is less efficient, since particles have lost memory of the initial conditions that they were released into and undergo random motion.  For further discussion of atmospheric diffusion and memory effects see this blog (link).
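The two regimes are easy to reproduce with a toy Lagrangian model; the sketch below (an Ornstein–Uhlenbeck velocity process with assumed parameter values, not any operational dispersion model) gives particles a velocity memory time tau, so the ensemble spread grows like t^2 at short times and like t once memory is lost.

```python
import numpy as np

# Toy Langevin (Ornstein-Uhlenbeck) sketch with assumed parameters: particle
# velocities keep memory over a timescale tau, giving "ballistic" spread
# (MSD ~ t^2) for t << tau and "diffusive" spread (MSD ~ t) for t >> tau.

rng = np.random.default_rng(0)
n_particles, n_steps = 5000, 4000
dt, tau, sigma_u = 1.0, 200.0, 1.0       # time step [s], memory [s], velocity scale [m/s]

v = rng.normal(0.0, sigma_u, n_particles)
x = np.zeros(n_particles)
msd = np.empty(n_steps)

for k in range(n_steps):
    # OU update keeps the velocity variance at sigma_u**2
    v += -(v / tau) * dt + sigma_u * np.sqrt(2.0 * dt / tau) * rng.normal(size=n_particles)
    x += v * dt
    msd[k] = np.mean(x ** 2)

t = dt * np.arange(1, n_steps + 1)
print(msd[9] / t[9] ** 2)     # ~ sigma_u**2 at short times (ballistic: MSD ~ t^2)
print(msd[-1] / t[-1])        # ~ 2 * sigma_u**2 * tau at long times (diffusive: MSD ~ t)
```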

In regional AQMs, the non-local parametrisation component does not capture the ballistic dynamics, and K-theory treats dispersion as being “diffusive”. This means that at CBL eddy turnover timescales it is possible that current AQMs have large errors in their predicted concentrations. However, with increases in computing power it is now possible to run NWP for research purposes at \mathcal{O}(100 m) horizontal grid length (e.g. Lean et al., 2019) and routinely at 300 m grid length (Boutle et al., 2016). At such grid lengths the dominant CBL eddies transporting pollution (and therefore the “ballistic” diffusion) become resolved and do not require parametrisation.

To investigate the differences in pollution dispersion and potential benefits that can be expected when AQMs move to \mathcal{O}(100 m) horizontal grid length, I have run NWP at horizontal grid lengths ranging from 1.5 km (where CBL dispersion is parametrised) to 55 m (where CBL dispersion is mostly resolved). The simulations are unique in that they are the first at such grid lengths to include a passive ground source of scalar representing pollution, in a domain large enough to let dispersion develop for tens of kilometres downstream. 

High-Resolution Modelling Results 

A schematic of the Met Office Unified Model nesting suite used to conduct the simulations is shown in Fig. 1. The UKV (1.5 km horizontal grid length) model was run first and used to pass boundary conditions to the 500 m model, and so on down to the 100 m and 55 m models. A puff release, homogeneous, ground source of passive scalar was included in all models and its horizontal extent covered the area of the 55 m (and 100 m) model domains. The puff releases were conducted on the hour, and at the end of each hour scalar concentration was set to zero. The case study date was 05/05/2016 with clear sky convective conditions.  

Figure 1: Schematic of the Unified Model nesting suite.

Puff Releases  

Figure 2 shows vertical cross-sections of puff released tracer in the UKV and 55 m models at 13:05, 13:20 and 13:55 UTC. At 13:05 UTC the UKV model scalar concentration is very large near the surface and approximately horizontally homogeneous. The 55 m model concentrations, however, are either much closer to the surface or elevated to great heights within the BL in narrow vertical regions. The heterogeneity in the 55 m model field is due to CBL turbulence being largely resolved at that grid length. Shortly after release, most scalar is transported predominantly horizontally rather than vertically, but at localised updrafts scalar is transported rapidly upwards.

Figure 2: Vertical cross-sections of puff released passive scalar. (a), (b) and (c) are from the UKV model at 13:05, 13:20 and 13:55 UTC respectively. (d), (e) and (f) are from the 55 m model at 13:05, 13:20 and 13:55 UTC respectively. The x-axis is from south (left) to north (right), which is approximately the direction of mean flow. The green line is the BL scheme diagnosed BL height.

By 13:20 UTC it can be seen that the 55 m model has more scalar in the upper BL than the lower BL, and the lowest concentrations within the BL are near the surface. The scalar in the UKV model, however, disperses more slowly from the surface. Concentrations remain unrealistically larger in the lower BL than the upper BL and are very horizontally homogeneous, since the “ballistic” type dispersion is not represented. By 13:55 UTC the concentration is approximately uniform (or “well mixed”) within the BL in both models and dispersion is tending to the “diffusive” limit.

It has thus been demonstrated that unless “ballistic” type dispersion is represented in AQMs, the evolution of the scalar concentration field will exhibit unphysical behaviour. In reality, pollution emissions are usually released continuously rather than in puffs. One could therefore ask the question: when pollution is emitted continuously, are the detailed dispersion dynamics important for urban air quality, or do the dynamics of particles released at different times cancel out on average?

Continuous Releases  

To address this question, I included a continuous release, homogeneous, ground source of passive scalar. It was centred on London and had dimensions 50 km by 50 km which is approximately the size of Greater London. Figure 3a shows a schematic of the source.  

The ratio of the 55 m model and UKV model zonally averaged surface concentration with downstream distance from the southern edge of the source is plotted in Fig. 3b. The largest difference in surface concentration between the UKV and 55 m model occurs 9 km downstream, with a ratio of 0.61. This is consistent with the distance calculated from the average horizontal velocity in the BL (\approx7 m s-1) and the time at which there was most scalar in the upper BL compared to the lower BL in the puff release simulations (\approx 20 min), since 7 m s-1 \times 20 min \approx 8 km. The scalar is lofted high into the BL soon after emission, causing reductions in surface concentrations downstream. Beyond 9 km downstream a larger proportion of the scalar in the BL has had time to become well mixed and the ratio increases.

Figure 3: (a) Schematic of the continuous release source of passive scalar. (b) Ratio of the 55 m model and UKV model zonally averaged surface concentration with downstream distance from the southern edge of the source at 13:00 UTC.

Summary  

By comparing the UKV and 55 m model surface concentrations, it has been demonstrated that “ballistic” type dispersion can influence city scale surface concentrations by up to approximately 40%. It is likely that, either by moving to \mathcal{O}(100 m) horizontal grid length or by developing turbulence parametrisations that represent “ballistic” type dispersion, substantial improvements in the predictive capability of AQMs can be made.

References 

  1. Baklanov, A. et al. (2014) Online coupled regional meteorology chemistry models in Europe: Current status and prospects https://doi.org/10.5194/acp-14-317-2014
  2. Boutle, I. A. et al. (2016) The London Model: Forecasting fog at 333 m resolution https://doi.org/10.1002/qj.2656
  3. Deardorff, J. (1972) Numerical Investigation of Neutral and Unstable Planetary Boundary Layers https://doi.org/10.1175/1520-0469(1972)029<0091:NIONAU>2.0.CO;2
  4. DEFRA – air quality forecast https://uk-air.defra.gov.uk/index.php/air-pollution/research/latest/air-pollution/daqi
  5. Lean, H. W. et al. (2019) The impact of spin-up and resolution on the representation of a clear convective boundary layer over London in order 100 m grid-length versions of the Met Office Unified Model https://doi.org/10.1002/qj.3519
  6. Lock, A. P. et al. (2000) A New Boundary Layer Mixing Scheme. Part I: Scheme Description and Single-Column Model Tests https://doi.org/10.1175/1520-0493(2000)128<3187:ANBLMS>2.0.CO;2
  7. Savage, N. H. et al. (2013) Air quality modelling using the Met Office Unified Model (AQUM OS24-26): model description and initial evaluation https://doi.org/10.5194/gmd-6-353-2013
  8. Siebesma, A. P. et al. (2007) A Combined Eddy-Diffusivity Mass-Flux Approach for the Convective Boundary Layer https://doi.org/10.1175/JAS3888.1
  9. Willis, G. and J. Deardorff (1981) A laboratory study of dispersion from a source in the middle of the convectively mixed layer https://doi.org/10.1016/0004-6981(81)90001-9

Methane’s Shortwave Radiative Forcing

Email: Rachael.Byrom@pgr.reading.ac.uk

Methane (CH4) is a potent greenhouse gas. Its ability to alter fluxes of longwave (thermal-infrared) radiation emitted and absorbed by the Earth’s surface and atmosphere has been well studied, and as a result methane’s thermal-infrared impact on the climate system has been quantified in detail. According to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5), methane has the second largest radiative forcing (0.48 W m-2) of the well-mixed greenhouse gases after carbon dioxide (CO2) (Myhre et al. 2013, see Figure 1). This means that, due to its change in atmospheric concentration since the pre-industrial era (1750 to 2011), methane has directly perturbed the tropopause net (incoming minus outgoing) stratospheric-temperature-adjusted radiative flux by 0.48 W m-2, causing the climate system to warm.
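
In schematic form, this stratospheric-temperature-adjusted forcing is the change in net downward flux at the tropopause between the 2011 and 1750 methane concentrations, evaluated after stratospheric temperatures have readjusted:

\[
\mathrm{RF} \;=\; \left(F^{\downarrow} - F^{\uparrow}\right)^{\text{tropopause}}_{\text{CH}_4\,(2011),\ \text{adj}} \;-\; \left(F^{\downarrow} - F^{\uparrow}\right)^{\text{tropopause}}_{\text{CH}_4\,(1750)} \;\approx\; 0.48\ \mathrm{W\,m^{-2}}.
\]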

Figure 1: Radiative forcing of the climate system between the years 1750 – 2011 for different forcing agents. Hatched bars represent estimates of stratospheric-temperature adjusted radiative forcing (RF), solid bars represent estimates of effective radiative forcing (ERF) with uncertainties (5 to 95% confidence range) given for RF (dotted lines) and ERF (solid lines). Taken from Chapter 8, IPCC AR5 Figure 8.15 (IPCC 2013).

However, an important effect is missing from the current IPCC AR5 estimate of methane’s climate impact: its absorption of solar radiation. As well as longwave radiation, methane alters fluxes of incoming solar shortwave radiation at wavelengths between 0.7 and 10 µm.

Until recently this shortwave effect had not been thoroughly quantified and as such was largely overlooked. My PhD work focuses on quantifying methane’s shortwave effect in detail and aims to build upon the significant initial findings of Etminan et al. (2016) and a more recent study by Collins et al. (2018).

Etminan et al. (2016) analysed methane’s absorption of solar near-infrared radiation (at wavelengths between 0.2 and 5 µm) and found that it exerts a direct, instantaneous, positive forcing on the climate system, largely due to the absorption of near-infrared radiation reflected and scattered upwards by clouds in the troposphere. Essentially, this process results in photons taking multiple passes through the troposphere, which in turn increases absorption by CH4. Figure 2 shows the net (downwards minus upwards) spectral variation of this forcing at the tropopause under all-sky (i.e. cloudy) conditions. Methane’s three key absorption bands across the near-infrared region, at 1.7, 2.3 and 3.3 µm, are clearly visible.

As Etminan et al. (2016) explain, following a perturbation in methane concentrations all of these bands decrease the downwards shortwave radiative flux at the tropopause, due to increased absorption in the stratosphere. However, the net sign of the forcing depends on whether this negative contribution outweighs the increased absorption by these bands in the troposphere (which constitutes a positive forcing). As Figure 2 shows, whilst the 3.3 µm band has a strongly negative net forcing due to the absorption of downwelling solar radiation in the stratosphere, both the 1.7 µm and 2.3 µm bands have a net positive forcing due to increased CH4 absorption in an all-sky troposphere. When summed across the entire spectral range, the positive forcing at 1.7 µm and 2.3 µm dominates over the negative forcing at 3.3 µm, resulting in a net positive forcing. Etminan et al. (2016) also found that the nature of this positive forcing is partly explained by methane’s spectral overlap with water vapour (H2O). The 3.3 µm band overlaps with a region of relatively strong H2O absorption, which reduces its ability to absorb shortwave radiation in the troposphere, where high concentrations of H2O are present. However, the 1.7 µm and 2.3 µm bands overlap much less with H2O, and so are able to absorb more shortwave radiation in the troposphere.

Figure 2: Upper: Spectral variation of near-infrared tropopause RF (global mean, all sky). Lower: Sum of absorption line strengths for CH4 and water vapour (H2O). Taken from Etminan et al. (2016).

Etminan et al. (2016) also found that the shortwave effect alters methane’s stratospheric-temperature-adjusted longwave radiative forcing (the forcing calculated after stratospheric temperatures have readjusted to radiative equilibrium, before the change in net radiative flux is evaluated at the tropopause; Myhre et al. 2013). Absorption of solar radiation warms the stratosphere and hence increases the emission of longwave radiation by methane downwards to the troposphere, which constitutes a positive tropopause longwave radiative forcing. Combining the direct, instantaneous shortwave forcing with its impact on the stratospheric-temperature-adjusted longwave forcing, Etminan et al. (2016) found that the inclusion of the shortwave effect enhances methane’s radiative forcing by a total of 15%. These results are significant and indicate the importance of methane’s shortwave absorption. However, Etminan et al. (2016) note several areas of uncertainty surrounding their estimate and highlight the need for a more detailed analysis of the subject.

My work aims to address these uncertainties by investigating the impact of factors such as updates to the HITRAN spectroscopic database (which provides key spectroscopic parameters for climate models to simulate the transmission of radiation through the atmosphere), the inclusion of the solar mid-infrared (7.7 µm) band in calculations of the shortwave effect, and potential sensitivities such as the vertical representation of CH4 concentrations throughout the atmosphere and the specification of land surface albedo. My work also extends Etminan et al. (2016) by investigating the shortwave effect at a global spatial resolution, whereas the latter employed a two-atmosphere approach (using tropical and extra-tropical profiles). To do this I use the model SOCRATES-RF (Checa-Garcia et al. 2018), which computes monthly-mean radiative forcings at a global 5° x 5° spatial resolution using a high-resolution 260-band shortwave spectrum (0.2 to 10 µm) and a standard 9-band longwave spectrum.
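
The overall workflow of such an offline forcing calculation can be sketched as below. This is a schematic only: the placeholder flux arrays stand in for two calls to the radiation code (with baseline and perturbed CH4), and the grid, values and helper names are assumptions for illustration, not the actual SOCRATES-RF interface.

```python
# Schematic of an offline forcing calculation on a 5 x 5 degree grid:
# run the radiation code twice (baseline and perturbed CH4), difference the
# net tropopause fluxes, then area-weight to get a global-mean forcing.
import numpy as np

nlat, nlon = 36, 72                          # 5 x 5 degree global grid
lat = np.linspace(-87.5, 87.5, nlat)         # grid-box centre latitudes

rng = np.random.default_rng(0)
# Placeholder net (down minus up) tropopause fluxes in W m-2; in practice
# these would come from the two radiative-transfer calls.
net_flux_baseline = rng.normal(240.0, 5.0, size=(nlat, nlon))
net_flux_perturbed = net_flux_baseline + rng.normal(0.02, 0.01, size=(nlat, nlon))

# Forcing is the change in net flux at the tropopause in each grid box
forcing = net_flux_perturbed - net_flux_baseline

# The global mean must be area-weighted, i.e. weighted by cos(latitude)
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(forcing)
global_mean_rf = np.average(forcing, weights=weights)
print(f"Global-mean tropopause forcing: {global_mean_rf:.3f} W m-2")
```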

Initial results calculated with SOCRATES-RF confirm that methane’s all-sky tropopause shortwave radiative forcing is positive and that the inclusion of shortwave bands increases the stratospheric-temperature-adjusted longwave radiative forcing. In total, my calculations estimate that the shortwave effect increases methane’s total radiative forcing by 10%. Whilst this estimate is lower than the 15% proposed by Etminan et al. (2016), it is important to point out that this SOCRATES-RF estimate is not a final figure and investigations into several key forcing sensitivities are currently underway. For example, methane’s shortwave forcing is highly sensitive to the vertical representation of concentrations throughout the atmosphere. Tests conducted with SOCRATES-RF reveal that when vertically-varying profiles of CH4 concentrations are perturbed, the shortwave forcing almost doubles in magnitude (from 0.014 W m-2 to 0.027 W m-2) compared with the same calculation conducted using constant vertical profiles of CH4 concentrations. Since observational studies show that concentrations of methane decrease with height above the tropopause (e.g. Koo et al. 2017), the use of realistic vertically-varying profiles in forcing calculations is essential. The vertically-varying CH4 profiles currently employed in SOCRATES-RF are highly idealised, varying with latitude but not with season. The realism of this representation therefore needs to be validated against observational datasets and, if necessary, updated.
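
To illustrate the distinction being tested, the sketch below contrasts two idealised ways of specifying a CH4 profile: constant with height versus well mixed below the tropopause and decreasing above it. The surface value, tropopause height and e-folding scale are assumptions for illustration, not the profiles actually used in SOCRATES-RF.

```python
# Illustrative only: constant versus vertically-varying CH4 profiles
# (hypothetical values, not the SOCRATES-RF profiles themselves).
import numpy as np

z = np.linspace(0.0, 50.0, 51)        # height (km)
surface_ch4 = 1.8e-6                  # mole fraction (~1800 ppb), assumed
z_trop = 12.0                         # tropopause height (km), assumed
scale_height = 15.0                   # e-folding scale above it (km), assumed

# Constant profile: the same mole fraction at every level
ch4_constant = np.full_like(z, surface_ch4)

# Vertically varying profile: well mixed below the tropopause,
# decaying exponentially above it
ch4_varying = np.where(
    z <= z_trop,
    surface_ch4,
    surface_ch4 * np.exp(-(z - z_trop) / scale_height),
)

# The varying profile places much less CH4 in the stratosphere
print(f"Stratospheric mean (constant): {ch4_constant[z > z_trop].mean():.2e}")
print(f"Stratospheric mean (varying):  {ch4_varying[z > z_trop].mean():.2e}")
```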

Another key sensitivity currently under investigation is the specification of land surface albedo, a potentially important factor controlling the amount of reflected shortwave radiation absorbed by methane. Since the radiative properties of surface features are highly wavelength-dependent, it is plausible that realistic, spectrally-varying land surface albedo values will be required to accurately simulate methane’s shortwave forcing. For example, vegetation and soils typically reflect much more strongly in the near-infrared than in the visible region of the solar spectrum, whilst snow surfaces reflect much more strongly in the visible (see Roesch et al. 2002). Currently, SOCRATES-RF uses globally-varying but spectrally-constant land surface albedo values derived from ERA-Interim reanalysis data.
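
As a rough illustration of why this matters, the sketch below compares a single broadband albedo with the near-infrared albedo that actually controls the reflected flux methane can absorb. The albedo values and the visible/near-infrared split of the surface solar flux are hypothetical, chosen only to make the point, and are not the values used in SOCRATES-RF or by Collins et al. (2018).

```python
# Illustrative only: a broadband albedo can misrepresent the near-infrared
# reflectance that matters for CH4 absorption (hypothetical values).
visible_fraction, nir_fraction = 0.5, 0.5   # assumed split of surface solar flux

surfaces = {
    "vegetation": {"visible": 0.10, "nir": 0.40},  # brighter in the near-IR
    "snow":       {"visible": 0.90, "nir": 0.60},  # brighter in the visible
}

for name, albedo in surfaces.items():
    broadband = (visible_fraction * albedo["visible"]
                 + nir_fraction * albedo["nir"])
    print(f"{name}: broadband albedo {broadband:.2f}, "
          f"near-IR albedo {albedo['nir']:.2f}")
```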

Figure 3: Left: Annual mean all-sky tropopause shortwave CH4 radiative forcing calculated by SOCRATES-RF (units W m-2). Right: Annual mean all-sky tropopause near-infrared CH4 radiative forcing from Collins et al. (2018)

Figure 3 compares the spatial distribution of methane’s annual-mean all-sky tropopause shortwave forcing as calculated by SOCRATES-RF and by Collins et al. (2018). Both calculations exhibit the same regions of maxima, for example over the Sahara, the Arabian Peninsula and the Tibetan Plateau. However, the poleward amplification shown by SOCRATES-RF is not evident in Collins et al. (2018). The current leading hypothesis for this difference is that the land surface albedo is specified differently in each calculation: Collins et al. (2018) employ spectrally-varying surface albedo values derived from satellite observations, which are arguably more realistic than the spectrally-constant values currently specified in SOCRATES-RF. The next step in my PhD is to further explore the interdependence between methane’s shortwave forcing and land surface albedo, and to work towards implementing spectrally-varying albedo values in SOCRATES-RF calculations. Along with the ongoing investigation into the vertical representation of CH4 concentrations, this should allow me to deliver a more definitive estimate of methane’s shortwave effect.

References:

Checa-Garcia, R., Hegglin, M. I., Kinnison, D., Plummer, D. A., and Shine, K. P. 2018: Historical tropospheric and stratospheric ozone radiative forcing using the CMIP6 database. Geophys. Res. Lett., 45, 3264–3273, https://doi.org/10.1002/2017GL076770

Collins, W. D. et al., 2018: Large regional shortwave forcing by anthropogenic methane informed by Jovian observations, Sci. Adv. 4, https://doi.org/10.1126/sciadv.aas9593

Etminan, M., G. Myhre, E. Highwood, and K. P. Shine, 2016: Radiative forcing of carbon dioxide, methane and nitrous oxide: a significant revision of methane radiative forcing, Geophys. Res. Lett., 43, https://doi.org/10.1002/2016GL071930

IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, 1535 pp

Myhre, G., et al. 2013: Anthropogenic and natural radiative forcing, in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by T. F. Stocker et al., pp. 659–740, Cambridge Univ. Press, Cambridge, U. K., and New York.

Roesch, A., M. Wild, R. Pinker, and A. Ohmura, 2002: Comparison of spectral surface albedos and their impact on the general circulation model simulated surface climate, J. Geophys. Res., 107, 13-1 – 13-18, https://doi.org/10.1029/2001JD000809