Relationships in errors between meteorological forecasts and air quality forecasts


Exposure to pollutants in the air we breathe may trigger respiratory problems. Pollutants such as ozone (O_{3}) and particulate matter (PM_{2.5}) – particles of about 1/20th of the width of a hair strand – can get into our lungs and cause inflammation, alter their function, or otherwise cause trouble for the cardiovascular system – especially in people with existing underlying respiratory conditions. Although high pollution episodes in the UK are infrequent, the public becomes aware of the associated problems during events such as red skies, in part caused by long-range transport of Saharan dust. Furthermore, the World Health Organisation (WHO) estimates that 85% of UK towns regularly exceed the safe annual PM_{2.5} limit. It is therefore important to forecast surface pollution concentrations accurately in order to enable the public to mitigate some of those adverse health risks.

Figure 1: Smog in London (December 1952). This 5-day event caused many deaths attributable to elevated concentrations of pollutants. The Clean Air Act of 1956 followed. Credit: TopFoto / The Image Works.

In general, air pollution can be difficult to forecast near the surface because of the multitude of factors which affect it. Incorrectly modelling chemical processes within the atmosphere, surface emissions or indeed the meteorology can lead to errors in predicting ground-level pollution concentrations. It is well accepted within the literature that weather forecasting is of decisive importance for air quality. Thus, my PhD project tries to link forecast errors in meteorological processes within the atmospheric boundary layer (BL) with forecast errors in pollutants such as O_{3} and NO_{2} (nitrogen dioxide) using the operational air quality forecasting model in the UK, the Air Quality in the Unified Model (AQUM). This model produces an hourly air quality forecast issued to the public by DEFRA in the form of a Daily Air Quality Index (DAQI) and is verified against surface-based observations from the Automatic Urban and Rural Network (AURN).

Figure 2: Automatic Urban and Rural Network (AURN) ground-based measuring sites for O_{3} and NO_{2}.

A three-month evaluation of hourly forecasts from AQUM shows a delay in the average increase of the morning O_{3} + NO_{2} (‘total oxidant’) concentrations when compared to AURN observations. We also know that BL depth is important for the mixing of pollutants – it acts as a sort of lid on top of the lower part of the troposphere. Since the noted lag in total oxidant increase in our model occurs exactly at the time of the morning BL development, we can form a testable hypothesis: that an inaccurate representation of BL processes – specifically, morning BL growth – leads to a delay in entrainment of O_{3}-rich air masses from the layer of air above it: the residual layer. It has been suggested in the literature that when the daytime convective mixed layer collapses upon sunset, the remaining pollutants are effectively trapped in the leftover (‘residual’) layer, and thus can act as a night-time reservoir of O_{3} above the stable or neutral night-time boundary layer (NBL).

Figure 3: Total oxidant (O_{3} + NO_{2}) average forecast (AQUM, red) and observations (AURN, black) diurnal cycle, averaged over JJA 2017 at 48 urban background sites. Shading is inter-quartile range.
Figure 4: Rate of change of the mean diurnal profile of the forecast (AQUM, red) and observations (AURN, black) of the total oxidant.
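The lag diagnostic underlying Figures 3 and 4 – a mean diurnal cycle and its hour-on-hour rate of change – can be sketched in a few lines. The arrays below are synthetic stand-ins for the AQUM and AURN hourly series (not real data), with a two-hour morning lag built into the "forecast":

```python
import numpy as np

# Synthetic hourly total-oxidant (O3 + NO2) series, shape (days, 24);
# real inputs would be AQUM forecasts and AURN observations in ug m-3.
rng = np.random.default_rng(0)
hours = np.arange(24)
base = 40 + 20 * np.clip(np.sin((hours - 6) * np.pi / 18), 0, None)
obs = base + rng.normal(0, 1.0, size=(92, 24))
fcst = np.roll(base, 2) + rng.normal(0, 1.0, size=(92, 24))  # built-in 2 h lag

# Mean diurnal cycle (average over days) and its hour-on-hour rate of change.
obs_cycle, fcst_cycle = obs.mean(axis=0), fcst.mean(axis=0)
d_obs, d_fcst = np.gradient(obs_cycle), np.gradient(fcst_cycle)

# A later peak in the forecast rate of change marks the morning lag.
print("hour of steepest rise (obs, forecast):",
      int(np.argmax(d_obs)), int(np.argmax(d_fcst)))
```

In practice the same averaging is applied across the 48 sites, and the offset between the two rate-of-change peaks measures the lag.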

To test the hypothesis, we conduct semi-idealised experiments: a month-long release of chemically inert tracers is simulated within the Numerical Atmospheric-dispersion Modelling Environment (NAME) using different sets of numerical weather prediction (NWP) outputs. This enables a process-based evaluation of how different meteorology affects tracers within the BL. Tracers are released within the lateral boundaries of a domain centred on the UK. The idea is to separate the effects of meteorology on the tracer concentrations from those of chemistry. In particular, we want to understand the role of entrainment of O_{3}-rich air masses from the residual layer down into the developing BL during the morning hours.

We selected around 50 urban AURN sites and compared hourly BL depths from June 2017 in the two sets of NWP output used for the tracer simulations: the UKV and UM Global (UMG) configurations of the Met Office Unified Model. Although the average diurnal profiles of BL depth were quite similar, the morning increase of BL depth lagged in the UMG configuration. This may be because the representation of surface sensible heat flux (SSHF) differs between the two NWP models: the UMG uses a single-tile scheme to represent urban areas, whereas the UKV uses a more realistic two-tile scheme (‘MORUSES’) which distinguishes between roof surfaces and street canyons. SSHF is a measure of energy exchange at the ground, where positive fluxes represent a loss of heat from the surface to the atmosphere. A more realistic representation of SSHF allows the UKV to better capture the storage and release of urban heat. This leads to faster development of the BL depth in the UKV than in the UMG, which in turn could mean more turbulent motion and mixing within the atmosphere.
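The morning lag in BL depth between the two configurations can be quantified with a simple threshold-crossing diagnostic. The logistic profiles below are invented for illustration and are not UKV or UMG output:

```python
import numpy as np

# Hypothetical mean diurnal cycles of boundary-layer depth (m) at urban
# sites for two NWP configurations; shapes and numbers are invented.
hours = np.arange(24)
ukv = 150 + 850 / (1 + np.exp(-(hours - 9.0)))     # faster morning growth
umg = 150 + 850 / (1 + np.exp(-(hours - 10.5)))    # delayed morning growth

def crossing_hour(depth, frac=0.5):
    """First hour at which BL depth exceeds `frac` of its diurnal maximum."""
    return int(np.argmax(depth >= frac * depth.max()))

lag = crossing_hour(umg) - crossing_hour(ukv)
print(f"UMG morning BL growth lags UKV by ~{lag} h")
```

The same crossing-time statistic, applied per site and per day, gives a distribution of morning lags rather than a single number.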

Assuming that the vertical gradient in pollutant concentrations is positive between the morning BL and the free troposphere, mixing air down from above should enhance pollutant concentrations nearer the surface. Our tracer results show that on days when synoptic conditions are dominated by high pressure, the diurnal cycle in forecast and observed surface pollutant concentrations can be adequately replicated by our simplified set-up. Differences in the diurnal cycle between the tracer simulations with the two meteorological set-ups show that the UKV simulation not only entrains more tracer from above the boundary layer than the UMG simulation, but its concentrations also increase on average 1 – 2 hours earlier in the morning. These results suggest that the model meteorology – in particular, the representation of BL processes – is indeed important to the entrainment of polluted air masses into the BL, which in turn has a significant influence on surface pollutant concentrations.

Within the past two decades, it has been recognised by the weather and air quality modelling communities that neither type of model can truly exist without the other. This post has discussed just one aspect of how meteorology influences the air quality forecast – there are, of course, many other parameters (e.g. wind speed, precipitation, relative humidity) which affect the forecast pollutant concentrations. We therefore also evaluated night-time errors in the wind speed and found that these errors are positively correlated with the total oxidant forecast errors. This means that when the wind speed forecast is overestimated, it is likely to affect the night-time and morning forecast of both O_{3} and NO_{2} in a significant way.
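The error-correlation diagnostic mentioned above amounts to a Pearson correlation between paired error series. The numbers below are synthetic, with a positive relationship deliberately built in; the real analysis pairs AQUM night-time errors with AURN observations site by site:

```python
import numpy as np

# Synthetic paired night-time forecast errors (forecast minus observed):
# wind speed (m s-1) and total oxidant (ug m-3).  Values are invented.
rng = np.random.default_rng(1)
wind_err = rng.normal(0.5, 1.0, 200)
ox_err = 4.0 * wind_err + rng.normal(0.0, 3.0, 200)  # built-in positive link

# Pearson correlation between the two error series.
r = np.corrcoef(wind_err, ox_err)[0, 1]
print(f"correlation between wind-speed and total-oxidant errors: r = {r:.2f}")
```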


Ambient Air Pollution: A global assessment of exposure and burden of disease. WHO, 2016.

Bohnenstengel S., Evans S., Clark P., Belcher S., 2011: Simulations of the London urban heat island. Quarterly Journal of the Royal Meteorological Society, 137(659), 1625–1640.

Cocks A., 1993: The Chemistry and Deposition of Nitrogen Species in the Troposphere. The Royal Society of Chemistry, Cambridge.

Savage N., Agnew P., Davis L., Ordonez C., Thorpe R., Johnson C., O’Connor F., Dalvi M., 2013: Air quality modelling using the Met Office Unified Model (AQUM OS24-26): model description and initial evaluation. Geoscientific Model Development, 6, 353–372.

Sun J., Mahrt L., Banta R., Pichugina Y., 2012: Turbulence Regimes and Turbulence Intermittency in the Stable Boundary Layer during CASES-99. Journal of the Atmospheric Sciences, 69(1), 338–351.

Zhang, 2008: Online-coupled meteorology and chemistry models: History, current status, and outlook. Atmospheric Chemistry and Physics, 8(11), 2895–2932.

A new, explicit thunderstorm electrification scheme for the Met Office Unified Model


Forecasting lightning is a difficult problem due to the complexity of the lightning process and how dependent the lightning forecast is on the accuracy of the convective forecast. In order to verify forecasts of lightning independently of the accuracy of the convective forecast, it can be helpful to introduce a lightning scheme that is more complex and physically representative than the simple lightning parameterisations often used in Numerical Weather Prediction (NWP).

The existing method of predicting lightning in the Met Office’s Unified Model (MetUM) uses upwards graupel flux and total ice water path, based on the method of McCaul et al. (2009). However, this method tends to overpredict the total number and coverage of lightning flashes, particularly in the UK.

I’ve implemented a physically based, explicit electrification scheme in the MetUM in order to try to improve the current lightning forecasts. The processes involved in the scheme are shown in the flowchart in Figure 1. The electrification scheme uses the Non-Inductive Charging (NIC) process to separate charge within thunderstorms (Mansell et al., 2005; Saunders and Peck, 1998). The NIC theory states that when graupel and ice crystals collide, some charge is transferred from one particle to the other. The sign and the magnitude of the charge transferred to the graupel particle depend on a number of parameters: the ice crystal diameter, the velocity of the collision, the liquid water content and the temperature at which the collision occurs. Once the charge has been generated on graupel and ice or snow particles, it can be moved around the model domain and can be transferred between hydrometeor species. Charge is removed from hydrometeor species and the domain when the hydrometeors precipitate to the surface, evaporate or sublimate. Charge is transferred between hydrometeor species proportionally to the mass that is transferred. Charge is held on graupel, rain and cloud ice (or aggregates and crystals if these are included separately).
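The charge bookkeeping in the last few sentences – charge moves with mass, and leaves the domain when the mass does – can be sketched as follows. This is a toy illustration of the proportionality rule, not the MetUM implementation; species names and magnitudes are invented:

```python
# Each species carries (mass, charge).  When mass moves between species
# (or leaves via precipitation / evaporation / sublimation), the charge
# follows in proportion to the mass transferred.

def transfer(state, src, dst, mass_moved):
    """Move `mass_moved` of `src` to `dst`, carrying charge pro rata.
    If dst is None, the mass and its charge leave the domain."""
    m, q = state[src]
    frac = mass_moved / m
    dq = q * frac
    state[src] = (m - mass_moved, q - dq)
    if dst is not None:
        md, qd = state[dst]
        state[dst] = (md + mass_moved, qd + dq)
    return state

# (mass kg m-3, charge C m-3) per species -- invented values
state = {"graupel": (2e-3, -1.5e-9), "ice": (1e-3, +1.5e-9), "rain": (0.0, 0.0)}
state = transfer(state, "graupel", "rain", 1e-3)  # melting: graupel -> rain
state = transfer(state, "rain", None, 5e-4)       # precipitation to surface
print(state)
```

Summing the charge over all species plus whatever has left the domain provides a simple conservation check for this kind of scheme.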

Figure 1: A flowchart showing the process and order of those processes involved within the new electrification scheme.

Once these charged hydrometeors are distributed through the cloud, their charges can be totalled to create a charge density distribution. From this distribution the electric field can be calculated, and from the electric field lightning flashes can be discharged. Lightning flashes are discharged based on two thresholds: the first is the initiation threshold, which governs where the initiation point of the lightning channel should be (Marshall et al., 1995); the second is a propagation threshold, which governs whether or not the lightning channel can move through a grid box (Barthe et al., 2012). Lightning channels are only allowed to propagate vertically within a grid column, to simplify the model structure (Fierro et al., 2013). Once the channel is created, charge is neutralised along it: charge is removed from hydrometeor species both in the channel and at the grid points immediately adjacent to it.

The updated charge density distribution is then used to recalculate the electric field and new flashes are discharged from any points that exceed the electric field threshold. This process keeps repeating until no new lightning flashes are discharged within the domain.
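The repeat-until-quiet loop just described can be sketched on a single model column. The field calculation, thresholds and neutralisation rule below are invented placeholders: the real scheme diagnoses the electric field from the 3-D charge density and removes charge along the channel and at adjacent grid points.

```python
import numpy as np

E_INIT, E_PROP = 150.0, 15.0   # initiation / propagation thresholds, invented

def electric_field(charge):
    # Placeholder proxy for the diagnosed electric-field profile.
    return 100.0 * np.cumsum(charge)

def discharge(charge, frac=0.3):
    """Trigger flashes until no grid point exceeds the initiation threshold."""
    flashes = 0
    while True:
        E = electric_field(charge)
        if np.abs(E).max() < E_INIT:
            return charge, flashes
        # The channel occupies the points where |E| exceeds the propagation
        # threshold; neutralise a fraction of the charge along it.
        in_channel = np.abs(E) >= E_PROP
        charge = np.where(in_channel, charge * (1.0 - frac), charge)
        flashes += 1

charge0 = np.array([0.0, -1.0, -2.0, 3.0, 1.0, -0.5])   # invented charges
charge_final, n_flashes = discharge(charge0)
print(n_flashes, charge_final.round(3))
```

Because each flash only removes a fraction of the channel charge, the loop recomputes the field after every flash, exactly as in the scheme's outer iteration.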

The plots in Figure 2 show the charge on graupel (a), cloud ice (b), rain (c) and the total charge (d) for a small single-cell thunderstorm in the south of the UK on the 31st August 2017. It can be seen in this figure that the charge is mainly positive on cloud ice and mainly negative on graupel. The cloud ice, being less dense, is lofted towards the top of the thunderstorm, while the denser graupel generally falls towards the bottom of the storm. This creates the charge structure seen in Fig. 2d, with two positive-negative dipoles. This charge structure allows for the development of strong electric fields between the positive and negative charge centres in each dipole. If the electric field between the charge centres reaches the order of hundreds of kV m^{-1}, the air can become electrically conductive, causing lightning.

Figure 2: The charge on hydrometeors in a small single-cell thunderstorm (a) shows the charge on graupel, (b) shows the charge on cloud ice, (c) shows the charge on rain and (d) shows total charge. In each plot, the outline indicated by the solid black line is the 5 dBZ reflectivity contour.

The electrification scheme was run within the operational configuration of the MetUM for a case study: a day with both organised and single-cell, fair-weather convection on the 31st August 2017. Observations of lightning flashes are taken from the Met Office’s ATDNet lightning location system. The total lightning accumulated over the entire day of the 31st August is shown in Figure 3. It can easily be seen that the existing method produces far too much lightning compared to the observations, while the new scheme is much closer to the observations.

It is an improvement, not only in the total lightning output, but also in the appearance of the lightning flash map. The scattered nature of the observations is captured by the new scheme, whereas the existing parameterisation appears to be largely producing lightning in neat, contoured paths. These paths show that the way that the existing parameterisation predicts lightning is not physically accurate and indicate the problem with the parameterisation, namely that it relies too heavily on the total ice water path. The new scheme suggests a possible improvement, in considering more explicitly the combination of graupel, liquid water and cloud ice that is vital for the production of charge and therefore lightning.

Figure 3: The total lightning flash accumulation for 31st August 2017 across the UK, (a) shows the output of the new electrification scheme, (b) shows the observed flashes, binned to match the model grid, and (c) shows the output of the existing MetUM parameterisation.

Barthe, C., Chong, M., Pinty, J.-P., and Escobar, J. (2012). CELLS v1.0: updated and parallelized version of an electrical scheme to simulate multiple electrified clouds and flashes over large domains. Geoscientific Model Development, (5), 167–184.

Fierro, A. O., Mansell, E. R., MacGorman, D. R., and Ziegler, C. L. (2013). The Implementation of an Explicit Charging and Discharge Lightning Scheme within the WRF-ARW Model: Benchmark Simulations of a Continental Squall Line, a Tropical Cyclone, and a Winter Storm. Monthly Weather Review, 141, 2390–2415.

Mansell, E. R., MacGorman, D. R., Ziegler, C. L., and Straka, J. M. (2005). Charge structure and lightning sensitivity in a simulated multicell thunderstorm. Journal of Geophysical Research, 110.

Marshall, T. C., McCarthy, M. P., and Rust, W. D. (1995). Electric field magnitudes and lightning initiation in thunderstorms. Journal of Geophysical Research, 100, 7097–7103.

McCaul, E. W., Goodman, S. J., LaCasse, K. M., and Cecil, D. J. (2009). Forecasting lightning threat using cloud-resolving model simulations. Weather and Forecasting, 24(3), 709–729.

Saunders, C. P. R. and Peck, S. L. (1998). Laboratory studies of the influence of the rime accretion rate on charge transfer during crystal / graupel collisions. Journal of Geophysical Research, 103, 949–13.

The (real) butterfly effect: the impact of resolving the mesoscale range


What exactly does the ‘butterfly effect’ mean? Many people would attribute the butterfly effect to the famous 3-dimensional non-linear model of Lorenz (1963), whose attractor looks like a butterfly when viewed from a particular angle. While that model serves as an important foundation of chaos theory (by establishing that 3 dimensions are not only necessary for chaos, as mandated by the Poincaré-Bendixson Theorem, but also sufficient), the term ‘butterfly effect’ was not coined until 1972 (Palmer et al. 2014), based on a presentation Lorenz gave on a more radical later work (Lorenz 1969) on the predictability barrier in multi-scale fluid systems. In this work, Lorenz demonstrated that under certain conditions, small-scale errors grow faster than large-scale errors, in such a way that the predictability horizon cannot be extended beyond an absolute limit by reducing the initial error (unless the initial error is exactly zero). Such limited predictability, or the butterfly effect as understood in this context, has now become a ‘canon in dynamical meteorology’ (Rotunno and Snyder 2008). Recent studies with advanced numerical weather prediction (NWP) models estimate this predictability horizon to be on the order of 2 to 3 weeks (Buizza and Leutbecher 2015; Judt 2018), in agreement with Lorenz’s original result.

The predictability properties of a fluid system primarily depend on the energy spectrum, whereas the nature of the dynamics per se only plays a secondary role (Rotunno and Snyder 2008). It is well-known that a slope shallower than (equal to or steeper than) -3 in the energy spectrum is associated with limited (unlimited) predictability (Lorenz 1969; Rotunno and Snyder 2008), which could be understood through analysing the characteristics of the energy spectrum of the error field. As shown in Figure 1, the error appears to grow uniformly across scales when predictability is indefinite, and appears to ‘cascade’ upscale when predictability is limited. In the latter case, the error spectra peak at the small scale and the growth rate is faster there.

Figure 1: Growth of error energy spectra (red, bottom to top) in the Lorenz (1969) model under the influence of a control spectrum (blue) of slope (left) -3 and (right) -\frac{5}{3}.

The Earth’s atmospheric energy spectrum consists of a -3 range in the synoptic scale and a -\frac{5}{3} range in the mesoscale (Nastrom and Gage 1985). Since the limited predictability of the atmosphere arises from mesoscale physical processes, it is of interest to understand how errors grow under this hybrid spectrum, and to what extent global NWP models, which are just beginning to resolve the mesoscale -\frac{5}{3} range, demonstrate the fast error growth proper to the limited predictability associated with this range.

We use the Lorenz (1969) model at two different resolutions: K_{max}=11, corresponding to a maximal wavenumber of 2^{11}=2048, and K_{max}=21. The former represents the approximate resolution of global NWP models (~ 20 km), and the latter represents a resolution about 1000 times finer so that the shallower mesoscale range is much better resolved. Figure 2 shows the growth of a small-scale, small-amplitude initial error under these model settings.

Figure 2: As in Figure 1, except that the control spectrum is a hybrid spectrum with a -3 range in the synoptic scale and a -\frac{5}{3} range in the mesoscale, truncating at (left) K_{max}=11 and (right) K_{max}=21. The colours red and blue are reversed compared to Figure 1.

In the K_{max}=11 case, where the -\frac{5}{3} range is only partially resolved, the error growth remains more or less up-magnitude, and the upscale cascade is not visible: the error is still strongly influenced by the synoptic-scale -3 range. Such behaviour largely agrees with the results of a recent study using a full-physics global NWP model (Judt 2018). In contrast, at the higher resolution K_{max}=21, the upscale propagation of error in the mesoscale is clearly visible. As the error spreads to the synoptic scale, its growth becomes more up-magnitude.

To understand the dependence of the error growth rate on scale, we use the parametric model of Žagar et al. (2017), fitting the error-versus-time curve for every wavenumber / scale to the equation E\left ( t \right )=A\tanh\left (  at+b\right )+B, so that the parameters A, B, a and b are functions of the wavenumber / scale. Among these parameters, a describes the rate of error growth: the larger a, the faster the growth. A dimensional argument suggests that a \sim (k^3 E(k))^{1/2}, so that a should be constant across a -3 range (E(k) \sim k^{-3}), and should grow by a factor of 10^{2/3} \approx 4.6 for every decade of wavenumbers in a -\frac{5}{3} range. These scalings are indeed observed in the model simulations, except that the sharp increase pertaining to the -\frac{5}{3} range only kicks in at K \sim 15 (1 to 2 km), much smaller in scale than the transition between the -3 and -\frac{5}{3} ranges at K \sim 7 (300 to 600 km). See Figure 3 for details.
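The Žagar et al. (2017) fit can be reproduced with a standard nonlinear least-squares routine. The "data" below are a synthetic error-growth curve at one wavenumber with known parameters, so this only demonstrates the fitting step, not our model output:

```python
import numpy as np
from scipy.optimize import curve_fit

# Parametric error-growth form of Zagar et al. (2017).
def error_growth(t, A, B, a, b):
    return A * np.tanh(a * t + b) + B

t = np.linspace(0.0, 20.0, 50)                        # time, arbitrary units
rng = np.random.default_rng(2)
y = error_growth(t, 5.0, 5.0, 0.4, -2.0) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(error_growth, t, y, p0=[4.0, 4.0, 0.3, -1.0])
A, B, a, b = popt
print(f"fitted growth-rate parameter a = {a:.2f}")    # ~0.4 by construction

# Dimensional argument a ~ (k**3 * E(k))**0.5: constant for E ~ k**-3, and
# growing by a factor 10**(2/3) per decade of k for E ~ k**(-5/3).
print(f"per-decade growth factor for a -5/3 range: {10 ** (2 / 3):.2f}")
```

Repeating the fit wavenumber by wavenumber yields a as a function of scale, which is what Figure 3 plots.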

Figure 3: The parameter a as a function of the scale K, for truncations (left) K_{max}=8,9,10,11 and (right) K_{max}=11,13,15,17,19,21.

This explains the absence of the upscale cascade in the K_{max}=11 simulation. As models go into very high resolution in the future, the strong predictability constraints proper to the mesoscale -\frac{5}{3} range will emerge, but only when it is sufficiently resolved. Our idealised study with the Lorenz model shows that this will happen only if K_{max} > 15. In other words, motions at 1 to 2 km have to be fully resolved in order for error growth in the small scales to be correctly represented. This would mean a grid resolution of ~ 250 m after accounting for the need for a dissipation range in a numerical model (Skamarock 2004).

While this seems to be a pessimistic statement, we have observed that the sensitivity of the error growth behaviour to the model resolution is itself sensitive to the initial error profile. The results presented above are for an initial error confined to a single small scale. When the initial error distribution is changed, the qualitative picture of error growth may not present such a contrast between the two resolutions. Thus, we highlight the need for further research to assess the potential gains of resolving more scales in the mesoscale, especially for the case of a realistic distribution of error that initiates the integrations of operational NWP models.

A manuscript on this work has been submitted and is currently under review.

This work is supported by a PhD scholarship awarded by the EPSRC Centre for Doctoral Training in the Mathematics of Planet Earth, with additional funding support from the ERC Advanced Grant ‘Understanding the Atmospheric Circulation Response to Climate Change’ and the Deutsche Forschungsgemeinschaft (DFG) Grant ‘Scaling Cascades in Complex Systems’.


Buizza, R. and Leutbecher, M. (2015). The forecast skill horizon. Quart. J. Roy. Meteor. Soc. 141, 3366—3382.

Judt, F. (2018). Insights into atmospheric predictability through global convection-permitting model simulations. J. Atmos. Sci. 75, 1477—1497.

Leung, T. Y., Leutbecher, M., Reich, S. and Shepherd, T. G. (2019). Impact of the mesoscale range on error growth and the limits to atmospheric predictability. Submitted.

Lorenz, E. N. (1963). Deterministic Nonperiodic Flow. J. Atmos. Sci. 20, 130—141.

Lorenz, E. N. (1969). The predictability of a flow which possesses many scales of motion. Tellus 21, 289—307.

Nastrom, G. D. and Gage, K. S. (1985). A climatology of atmospheric wavenumber spectra of wind and temperature observed by commercial aircraft. J. Atmos. Sci. 42, 950—960.

Palmer, T. N., Döring, A. and Seregin, G. (2014). The real butterfly effect. Nonlinearity 27, R123—R141.

Rotunno, R. and Snyder, C. (2008). A generalization of Lorenz’s model for the predictability of flows with many scales of motion. J. Atmos. Sci. 65, 1063—1076.

Skamarock, W. C. (2004). Evaluating mesoscale NWP models using kinetic energy spectra. Mon. Wea. Rev. 132, 3019—3032.

Žagar, N., Horvat, M., Zaplotnik, Ž. and Magnusson, L. (2017). Scale-dependent estimates of the growth of forecast uncertainties in a global prediction system. Tellus A 69:1, 1287492.

The impact of atmospheric model resolution on the Arctic


The Arctic region is rapidly changing, with surface temperatures warming at around twice the global average and sea ice extent rapidly declining, particularly in summer. These changes affect local ecosystems and people, as well as the rest of the global climate. The decline in sea ice has coincided with cold winters over the Northern Hemisphere mid-latitudes and an increase in other extreme weather events (Cohen et al., 2014). Many mechanisms have been suggested linking changes in sea ice to changes in the stratospheric jet, the mid-latitude jet and the storm tracks; however, this is an area of active research, with much ongoing debate.

Figure 1. Time-series of September sea ice extent from 20 CMIP5 models (colored lines), individual ensemble members are dotted lines and the individual model mean is solid. Multi-model ensemble mean from a subset of the models is shown in solid black with +/- 1 standard deviation in dotted black. The red line shows observations. From Stroeve et al. (2012)

It is therefore important that we are able to understand and predict the changes in the Arctic; however, there is still a lot of uncertainty. Stroeve et al. (2012) calculated time series of September sea ice extent for different CMIP5 models, shown in Figure 1. In general the models do a reasonable job of reproducing the recent trends in sea ice decline, although there is a large inter-model spread and an even larger spread in future projections. One area of model development is increasing the horizontal resolution – reducing the size of the grid cells used to solve the model equations.

The aim of my PhD is to investigate the impact that climate model resolution has on the representation of the Arctic climate. This will help us understand the benefits that we can get from increasing model resolution. The first part of the project investigated the impact of atmospheric resolution. We looked at three experiments (using HadGEM3-GC2), each at a different atmospheric resolution: 135 km (N96), 60 km (N216) and 25 km (N512).

Figure 2. Annual mean sea ice concentration for observations (HadISST) and the bias of each different experiment from the observations N96: low resolution, N216: medium resolution, N512: high resolution.

The annual mean sea ice concentration from observations and the biases of the three experiments are shown in Figure 2. The low resolution experiment does a good job of reproducing the sea ice extent seen in observations, with only small biases in the marginal sea ice regions. However, in the higher resolution experiments the sea ice concentration is much lower than observed, particularly in the Barents Sea (north of Norway). These changes in sea ice are consistent with warmer temperatures in the high resolution experiments compared to the low resolution.

To understand where these changes have come from we looked at the energy transported into the Arctic by the atmosphere and the ocean. We found an increase in the total energy being transported into the Arctic, which is consistent with the reduced sea ice and warmer temperatures. Interestingly, the extra energy is transported into the Arctic by the ocean (Figure 3), even though it is the atmospheric resolution that changes between the experiments. In the high resolution experiments the ocean energy transport into the Arctic, 0.15 petawatts (PW), is in better agreement with the observational estimate of 0.154 PW from Tsubouchi et al. (2018). This is in contrast to the worse representation of sea ice concentration in the high resolution experiments. (It is important to note that the model was tuned at the low resolution, and as little as possible was changed when running the high resolution experiments, which may contribute to the better sea ice concentration in the low resolution experiment.)

Location of ocean gateways into the Arctic. Red: Bering Strait, Green: Davis Strait, Blue: Fram Strait, Magenta: Barents Sea

Figure 3. Ocean energy transport for each resolution experiment through the four ocean gateways into the Arctic. The four gateways form a closed boundary into the Arctic.

We find that the ocean is very sensitive to the differences in surface winds between the high and low resolution experiments, and in different regions these differences arise from different processes. In the Davis Strait the effect of coastal tiling is important: at higher resolution a smaller area is covered by atmospheric grid cells that span both land and ocean. In a cell covering both land and ocean the model usually produces wind speeds that are too low over the ocean, so in the higher resolution experiment there are higher wind speeds over the ocean near the coast. Over the Fram Strait and the Barents Sea, in contrast, the differences in surface winds between the experiments arise from large-scale changes in the atmospheric circulation.


Cohen, J., Screen, J. A., Furtado, J. C., Barlow, M., Whittleston, D., Coumou, D., Francis, J., Dethloff, K., Entekhabi, D., Overland, J. & Jones, J., 2014: Recent Arctic amplification and extreme mid-latitude weather. Nature Geoscience, 7(9), 627–637.

Stroeve, J. C., Kattsov, V., Barrett, A., Serreze, M., Pavlova, T., Holland, M., & Meier, W. N., 2012: Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations. Geophysical Research Letters, 39(16), 1–7.

Tsubouchi, T., Bacon, S., Naveira Garabato, A. C., Aksenov, Y., Laxon, S. W., Fahrbach, E., Beszczynska-Möller, A., Hansen, E., Lee, C. M., Ingvaldsen, R. B., 2018: The Arctic Ocean Seasonal Cycles of Heat and Freshwater Fluxes: Observation-Based Inverse Estimates. Journal of Physical Oceanography, 48(9), 2029–2055.

How much energy is available in a moist atmosphere?


It is often useful to know how much energy is available to generate motion in the atmosphere, for example in storm tracks or tropical cyclones. To this end, Lorenz (1955) developed the theory of Available Potential Energy (APE), which defines the part of the potential energy in the atmosphere that could be converted into kinetic energy.

To calculate the APE of the atmosphere, we first find the minimum total potential energy that could be obtained by adiabatic motion (no heat exchange between parcels of air). The atmospheric setup that gives this minimum is called the reference state. This is illustrated in Figure 1: in the atmosphere on the left, the denser air will move horizontally into the less dense air, but in the reference state on the right, the atmosphere is stable and no motion would occur. No further kinetic energy is expected to be generated once we reach the reference state, and so the APE of the atmosphere is its total potential energy minus the total potential energy of the reference state.

Figure 1: Construction of the APE reference state for a 2D atmosphere. The purple shading indicates the density of the air; darker colours mean denser air. In the actual state, the density stratification is not completely horizontal, which leads to the air motion shown by the orange arrows. The reference state has a stable, horizontal density stratification, so the air will not move without some disturbance.

If we think about an atmosphere that only varies in the vertical direction, it is easy to find the reference state if the atmosphere is dry. We assume that the atmosphere consists of a number of air parcels, and then all we have to do is place the parcels in order of increasing potential temperature with height. This ensures that density decreases upwards, so we have a stable atmosphere.
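The dry construction can be illustrated with a toy column in which density stands in for (inverse) potential temperature: sorting so the densest parcels sit lowest is equivalent to sorting potential temperature to increase with height. The levels, densities and energy measure are all invented for this sketch:

```python
import numpy as np

g = 9.81
z = np.array([0.0, 1000.0, 2000.0, 3000.0])   # m, heights of the four slots
rho = np.array([1.10, 1.05, 1.12, 0.95])      # kg m-3: denser air aloft -> unstable

def column_pe(densities):
    """Potential energy per unit volume of each slot, summed over the column."""
    return float(np.sum(densities * g * z))

# Reference state: densest parcels lowest (potential temperature increasing
# with height) -- the minimum-PE arrangement reachable by reordering.
rho_ref = np.sort(rho)[::-1]

ape = column_pe(rho) - column_pe(rho_ref)
print(f"APE of the toy column: {ape:.1f} J m-3 (summed over slots)")
```

The APE comes out positive whenever the actual profile is not already sorted, and zero when it is – matching the definition of the reference state as the minimum-PE rearrangement.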

However, if we introduce water vapour into the atmosphere, the situation becomes more complicated. When water vapour condenses, latent heat is released, which increases the temperature of the air, decreasing its density. One moist air parcel can be denser than another at a certain height, but then less dense if they are lifted to a height where the first parcel condenses but the second one does not. The moist reference state therefore depends on the exact method used to sort the parcels by their density.

It is possible to find the rearrangement of the moist air parcels that gives the minimum possible total potential energy, using the Munkres (1957) sorting algorithm, but this takes a very long time for a large number of parcels. Many different sorting algorithms have therefore been developed that try to find an approximate moist reference state more quickly (the different types of algorithm are explained by Stansifer et al. (2017) and Harris and Tailleux (2018)). However, these sorting algorithms do not analyse whether the parcel movements they simulate could actually happen in the real atmosphere – for example, many work by lifting all parcels to a fixed level in the atmosphere, without considering whether the parcels could feasibly move there – and there has been little understanding of whether the reference states they find are accurate.
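Finding the minimum-energy rearrangement can be phrased as a linear assignment problem, which is exactly what the Munkres (Hungarian) algorithm solves. A toy sketch using SciPy's implementation, with an illustrative random cost matrix standing in for the real parcel energies:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: cost[i, j] = potential energy of parcel i if it
# were moved adiabatically to level j. In a real APE calculation this
# would come from lifting each moist parcel to each pressure level.
rng = np.random.default_rng(0)
n = 5
cost = rng.random((n, n))

# The Munkres algorithm finds the one-to-one assignment of parcels to
# levels that minimises the total potential energy. This is exact, but
# scales poorly with the number of parcels.
rows, cols = linear_sum_assignment(cost)
min_pe = cost[rows, cols].sum()
print(min_pe)
```

The cost of this exact approach is what motivates the faster approximate sorting algorithms discussed above.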

As part of my PhD, I have performed the first assessment of these sorting algorithms across a wide range of atmospheric data, using over 3000 soundings from both tropical island and mid-latitude continental locations (Harris and Tailleux, 2018). This showed that whilst some of the sorting algorithms can provide a good estimate of the minimum potential energy reference state, others are prone to computing a rearrangement that actually has a higher potential energy than the original atmosphere.

We also showed that a new algorithm, which does not rely on sorting procedures, can calculate APE with comparable accuracy to the sorting algorithms. This method finds a layer of near-surface buoyant parcels, and performs the rearrangement by lifting the layer upwards until it is no longer buoyant. The success of this method suggests that we do not need to rely on possibly unphysical sorting algorithms to calculate moist APE, but that we can move towards approaches that consider the physical processes generating motion in a moist atmosphere.


Harris, B. L. and R. Tailleux, 2018: Assessment of algorithms for computing moist available potential energy. Q. J. R. Meteorol. Soc., 144, 1501–1510.

Lorenz, E. N., 1955: Available potential energy and the maintenance of the general circulation. Tellus, 7, 157–167.

Munkres, J., 1957: Algorithms for the Assignment and Transportation Problems. J. Soc. Ind. Appl. Math., 5, 32–38.

Stansifer, E. M., P. A. O’Gorman, and J. I. Holt, 2017: Accurate computation of moist available potential energy with the Munkres algorithm. Q. J. R. Meteorol. Soc., 143, 288–292.

Combining multiple streams of environmental data into a soil moisture dataset


An accurate estimate of soil moisture plays a vital role in a number of scientific research areas. It is important for day-to-day numerical weather prediction, forecasting of extreme events such as floods and droughts, and assessing crop suitability and estimating crop yields, to mention a few. However, in-situ measurements of soil moisture are generally expensive to obtain, labour-intensive, and have sparse spatial coverage. To address this, satellite measurements and models are used as proxies for ground measurements. Satellite missions such as SMAP (Soil Moisture Active Passive) observe the soil moisture content of the top few centimetres of the Earth’s surface. Soil moisture estimates from models, on the other hand, are prone to errors in the representation of the physics or in the parameter values used.

Data assimilation is a method of combining a numerical model with observed data and their error statistics. In principle, the state estimate after data assimilation is expected to be better than either the standalone numerical model estimate of the state or the observations alone. There are a variety of data assimilation methods: variational, sequential, Monte Carlo, and combinations of them. The Joint UK Land Environment Simulator (JULES) is a community land surface model which simulates several land surface processes, such as the surface energy balance and the carbon cycle, and is used by the Met Office – the UK’s national weather service.

My PhD aims to improve the estimate of soil moisture from the JULES model using satellite data from SMAP and the Four-Dimensional Ensemble Variational (4DEnVar) data assimilation method, a combination of variational and ensemble data assimilation methods introduced by Liu et al. (2008) and implemented by Pinnington et al. (2019; under review). In addition to the satellite soil moisture data, ground-based soil moisture measurements from the Oklahoma Mesonet are also assimilated.
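To give a flavour of the ensemble-variational idea, here is an illustrative linear sketch (not the actual LaVEnDAR code, and with synthetic numbers throughout): the analysis increment is sought in the space spanned by ensemble perturbations, by minimising a cost function that balances closeness to the background against closeness to the observations.

```python
import numpy as np

# Synthetic setup: a small state, an ensemble of perturbations, and an
# (assumed) linear observation operator H. All values are illustrative.
rng = np.random.default_rng(1)
n_state, n_ens, n_obs = 4, 6, 3

xb = rng.random(n_state)                           # background soil moisture state
Xp = 0.1 * rng.standard_normal((n_state, n_ens))   # ensemble perturbations
H = rng.random((n_obs, n_state))                   # linear observation operator
R_inv = np.eye(n_obs) / 0.05**2                    # inverse obs-error covariance
y = H @ xb + 0.05 * rng.standard_normal(n_obs)     # synthetic observations

# Cost J(w) = 0.5 w.T w + 0.5 (d - Y w).T R^-1 (d - Y w),
# minimised analytically for this linear case.
Y = H @ Xp                                         # perturbations in obs space
d = y - H @ xb                                     # innovation
w = np.linalg.solve(np.eye(n_ens) + Y.T @ R_inv @ Y, Y.T @ R_inv @ d)
xa = xb + Xp @ w                                   # posterior (analysis) state

# The analysis fits the observations at least as well as the prior.
print(np.linalg.norm(y - H @ xa) <= np.linalg.norm(y - H @ xb))
```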

Figure 1: Top layer prior, background, posterior and SMAP satellite observed volumetric soil moisture for Antlers station in Oklahoma Mesonet, for the year 2017.
Figure 2: Distance of prior soil moisture and posterior soil moisture from the concurrent soil moisture SMAP observations explained in Figure 1.

The time series of soil moisture from the JULES model (prior), soil moisture obtained after assimilation (posterior) and observed soil moisture for the Antlers station in Mesonet are depicted in Figure 1. Figure 2 shows the distance of the prior and posterior soil moisture estimates from the assimilated observations. The smaller the distance the better, as the primary objective of data assimilation is to fit the model trajectory optimally to the observations and the background. From Figures 1 and 2 we can conclude that the posterior soil moisture estimates are closer to the observations than the prior. Looking at particular months, the prior soil moisture is closer to the observations than the posterior around January and October. This is because 4DEnVar considers all the observations in the window when calculating an optimal trajectory that fits the observations and the background, so it is not surprising to see the prior closer to the observations than the posterior in some places.
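The "distance" summarised in Figure 2 can be computed as, for example, a root-mean-square difference from the assimilated observations; a toy comparison with made-up numbers:

```python
import numpy as np

# Illustrative volumetric soil moisture values (m^3/m^3), not real data.
obs = np.array([0.26, 0.31, 0.29, 0.22, 0.27])        # SMAP observations
prior = np.array([0.20, 0.24, 0.25, 0.18, 0.21])      # JULES before assimilation
posterior = np.array([0.24, 0.29, 0.28, 0.21, 0.25])  # JULES after assimilation

def rmsd(model):
    """Root-mean-square difference between a model series and the observations."""
    return np.sqrt(np.mean((model - obs) ** 2))

print(rmsd(prior), rmsd(posterior))  # posterior should be closer to the obs
```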

The data assimilation experiments were repeated for different Mesonet sites with varying soil types, topography and climates, and with different soil moisture datasets. In all the experiments, we observed that the posterior soil moisture estimates are closer to the observations than the prior estimates. As verification, a soil moisture reanalysis was calculated for the year 2018 and compared to the observations. Figure 3 shows the result of assimilating SMAP soil moisture data into the JULES model and hindcasting for the following year.

Figure 3: Hindcast soil moisture for 2018, based on the posterior soil texture obtained from assimilating Mesonet soil moisture data for 2017.


Liu, C., Q. Xiao, and B. Wang, 2008: An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test. Mon. Weather Rev., 136 (9), 3363–3373.

Pinnington, E., T. Quaife, A. Lawless, K. Williams, T. Arkebauer, and D. Scoby, 2019: The Land Variational Ensemble Data Assimilation fRamework: LaVEnDAR. Geosci. Model Dev. Discuss.

Simulating measurements from the ISMAR radiometer using a new light scattering approximation


It is widely known that clouds pose a lot of difficulties for both weather and climate modelling, particularly when ice is present. The ice water content (IWC) of a cloud is defined as the mass of ice per unit volume of air. The integral of this quantity over a column is referred to as the ice water path (IWP) and is considered one of the essential climate variables by the World Meteorological Organisation. Currently there are large inconsistencies in the IWP retrieved from different satellites, and there is also a large spread in the amount produced by different climate models (Eliasson et al., 2011).
A major part of the problem is the lack of reliable global measurements of cloud ice. For this reason, the Ice Cloud Imager (ICI) will be launched in 2022. ICI will be the first instrument in space specifically designed to measure cloud ice, with channels ranging from 183 to 664 GHz. It is expected that the combination of frequencies available will allow for more accurate estimations of IWP and particle size. A radiometer called ISMAR has been developed by the UK Met Office and ESA as an airborne demonstrator for ICI, flying on the FAAM BAe-146 research aircraft shown in Fig. 1.

Figure 1: The Facility for Airborne Atmospheric Measurements (FAAM) aircraft which carries the ISMAR radiometer.

As radiation passes through cloud, it is scattered in all directions. Remote sensing instruments measure the scattered field in some way: either by detecting some of the scattered waves, or by detecting how much radiation has been removed from the incident field as a result of scattering. The retrieval of cloud ice properties therefore relies on accurate scattering models. A variety of numerical methods currently exist to simulate scattering by ice particles with complex geometries. In a very broad sense, these can be divided into two categories:
1. Methods that are accurate but computationally expensive
2. Methods that are computationally efficient but inaccurate

My PhD has involved developing a new approximation for aggregates which falls somewhere in between the two extremes. The method is called the Independent Monomer Approximation (IMA). So far, tests have shown that it performs well for small particle sizes, with particularly impressive results for aggregates of dendritic monomers.

Radiometers such as ICI and ISMAR convert measured radiation into brightness temperatures (Tb), i.e. the temperature of a theoretical blackbody that would emit an equivalent amount of radiation. Lower values of Tb correspond to more ice in the clouds, as a greater amount of radiation from the lower atmosphere is scattered on its way to the instrument’s detector (i.e. a brightness temperature “depression” is observed over thick ice cloud). Generally, the interpretation of measurements from remote-sensing instruments requires many assumptions to be made about the shapes and distributions of particles within the cloud. However, by comparing Tb at orthogonal horizontal (H) and vertical (V) polarisations, we can gain some information about the size, shape, and orientation of ice particles within the cloud. If large V-H polarimetric differences are measured, it is indicative of horizontally oriented particles, whereas random orientation produces less of a difference in signal. According to Gong and Wu (2017), neglecting the polarimetric signal could result in errors of up to 30% in IWP retrievals. Examples of Tb depressions and the corresponding V-H polarimetric differences can be seen in Fig. 2. In the work shown here, we explore this particular case further.
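For reference, converting a measured spectral radiance to a brightness temperature amounts to inverting the Planck function. A simplified sketch (real instruments also apply calibration and band-averaging; 243 GHz is used here purely as an example frequency):

```python
import numpy as np

h = 6.626e-34   # Planck constant (J s)
k = 1.381e-23   # Boltzmann constant (J/K)
c = 2.998e8     # speed of light (m/s)

def brightness_temperature(radiance, freq_hz):
    """Temperature (K) of a blackbody emitting the given spectral radiance
    (W m^-2 sr^-1 Hz^-1) at frequency freq_hz: the inverse Planck function."""
    return (h * freq_hz / k) / np.log(1.0 + 2.0 * h * freq_hz**3 / (c**2 * radiance))

# Round-trip check: Planck radiance of a 250 K blackbody at 243 GHz
# should map back to a brightness temperature of 250 K.
nu = 243e9
T = 250.0
planck = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))
print(brightness_temperature(planck, nu))  # ≈ 250.0 K
```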

Figure 2: (a) ISMAR measured brightness temperatures, showing a depression (decrease in Tb) caused by thick cloud; (b) Polarimetric V-H brightness temperature difference, with significant values reaching almost 10 K.

Using the ISMAR instrument, we can test scattering models that could be used within retrieval algorithms for ICI. We want to find out whether the IMA method is capable of reproducing realistic brightness temperature depressions, and whether it captures the polarimetric signal. To do this, we look at a case study from the NAWDEX (North Atlantic Waveguide and Downstream Impact Experiment) flight campaign. The observations from the ISMAR radiometer were collected on 14 October 2016 off the north-west coast of Scotland, over a frontal ice cloud. Three different aircraft took measurements from above the cloud during this case, which means that we have coincident data from ISMAR and two radar frequencies, 35 GHz and 95 GHz. This particular case saw large V-H polarimetric differences reaching almost 10 K, as seen in Fig. 2(b). We will look at the applicability of the IMA method to simulating the polarisation signal measured by ISMAR, using the Atmospheric Radiative Transfer Simulator (ARTS).

For this study, we need to construct a model of the atmosphere to be used in the radiative transfer simulations. The nice thing about this case is that the FAAM aircraft also flew through the cloud, meaning we have measurements from both in-situ and remote-sensing instruments. Consequently, we can design our model cloud using realistic assumptions. We try to match the atmospheric state at the time of the in-situ observations by deriving mass-size relationships specific to this case, and generating particles to follow the derived relationship for each layer. The particles were generated using the aggregation model of Westbrook et al. (2004).

Due to the depth of the cloud, it would not be possible to obtain an adequate representation of the atmospheric conditions using a single averaged layer. Hence, we modelled our atmosphere based on the aircraft profiles, using 7 different layers of ice with depths of approximately 1 km each. These layers are located between altitudes of 2 km and 9 km. Below 2 km, the Marshall-Palmer drop size distribution was used to represent rain, with an estimated rain rate of 1-2 mm/hr taken from the Met Office radar. The general structure of our model atmosphere can be seen in Fig. 3, along with some of the particles used in each layer. Note that this is a crude representation and the figure shows only a few examples; in the simulations we use between 46 and 62 different aggregate realisations in each layer.
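The Marshall-Palmer distribution used for the rain layer is a simple exponential in drop diameter, with a slope parameter tied to the rain rate. A minimal sketch using the standard Marshall-Palmer (1948) coefficients and the radar-estimated rain rates:

```python
import numpy as np

# Marshall-Palmer raindrop size distribution: N(D) = N0 * exp(-Lambda * D),
# with the classic intercept N0 and slope Lambda = 4.1 * R^-0.21 (R in mm/hr).
N0 = 8000.0                                # intercept (m^-3 mm^-1)

def marshall_palmer(D_mm, rain_rate_mm_hr):
    """Number concentration (m^-3 mm^-1) of drops with diameter D_mm."""
    lam = 4.1 * rain_rate_mm_hr**(-0.21)   # slope parameter (mm^-1)
    return N0 * np.exp(-lam * D_mm)

# Drop concentrations for the 1-2 mm/hr rain rates estimated from radar:
D = np.linspace(0.1, 4.0, 5)               # drop diameters (mm)
print(marshall_palmer(D, 1.0))
print(marshall_palmer(D, 2.0))             # heavier rain: relatively more large drops
```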

Figure 3: Examples of particles used in our model atmosphere. We represent the ice cloud using 3 layers of columnar aggregates and 4 layers of dendritic aggregates, and include a distribution of rain beneath the cloud.

To test our model atmosphere, we simulated the radar reflectivities at 35 GHz and 95 GHz using the particle models generated for this case. This allowed us to refine our model until sufficient accuracy was achieved. Then we used the IMA method to calculate the scattering quantities required by the ARTS radiative transfer model. These were implemented into ARTS in order to simulate the ISMAR polarisation observations.
Fig. 4 shows the simulated brightness temperatures using different layers of our modelled atmosphere, i.e. starting with the clear-sky case and gradually increasing the cloud amount. The simulations using the IMA scattering method in the ARTS model were compared to the measurements from ISMAR shown in Fig. 2. Looking at the solid lines in Fig. 4, it can be seen that the aggregates of columns and dendrites simulate the brightness temperature depression well, but do not reproduce the V-H polarization signal. Thus we decided to include some horizontally aligned single dendrites which were not included in our original atmospheric model. We chose these particles because they tend to have a greater polarization signal than aggregates, and there was evidence in the cloud particle imagery that they were present in the cloud during the time of interest. We placed these particles at the cloud base, without changing the ice water content of the model. The results from that experiment are shown by the diagonal crosses in Fig. 4. It is clear that adding single dendrites allows us to simulate a considerably larger polarimetric signal, closely matching the ISMAR measurements. Using only aggregates of columns and dendrites gives a V-H polarimetric difference of 1.8 K, whereas the inclusion of single dendrites increases this value to 8.4 K.

Figure 4: Simulated brightness temperatures using different layers of our model atmosphere. Along the x-axis we start with the clear-sky case, followed by the addition of rain. Then we add one layer of cloud at a time, starting from the top layer of columnar aggregates.

To conclude, we have used our new light scattering approximation (IMA) along with the ARTS radiative transfer model to simulate brightness temperature measurements from the ISMAR radiometer. Although the measured brightness temperature depressions can generally be reproduced using the IMA scattering method, the polarisation difference is very sensitive to the assumed particle shape for a given ice water path. Therefore, to obtain good retrievals from ICI, it is important to represent the cloud as accurately as possible. Utilising the polarisation information available from the instrument could provide a way to infer realistic particle shapes, thereby reducing the need to make unrealistic assumptions.


Eliasson, S., S. A. Buehler, M. Milz, P. Eriksson, and V. O. John, 2011: Assessing observed and modelled spatial distributions of ice water path using satellite data. Atmos. Chem. Phys., 11, 375-391.

Gong, J., and D. L. Wu, 2017: Microphysical properties of frozen particles inferred from Global Precipitation Measurement (GPM) Microwave Imager (GMI) polarimetric measurements. Atmos. Chem. Phys., 17, 2741-2757.

Westbrook, C. D., R. C. Ball, P. R. Field, and A. J. Heymsfield, 2004: A theory of growth by differential sedimentation with application to snowflake formation. Phys. Rev. E, 70, 021403.

Extending the predictability of flood hazard at the global scale


When I started my PhD, there were no global scale operational seasonal forecasts of river flow or flood hazard. Global overviews of upcoming flood events are key for organisations working at the global scale, from water resources management to humanitarian aid, and for regions where no other local or national forecasts are available. While GloFAS (the Global Flood Awareness System, run by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the European Commission Joint Research Centre (JRC) as part of the Copernicus Emergency Management Services) was producing operational, openly-available flood forecasts out to 30 days ahead, there was a need for more extended-range forecast information. Often, due to a lack of hydrological forecasts, seasonal rainfall forecasts are used as a proxy for flood hazard – however, the link between precipitation and floodiness is nonlinear, and recent research has shown that seasonal rainfall forecasts are not necessarily the best indicator of potential flood hazard. The aim of my PhD research was to look into ways in which we could provide earlier warning information, several weeks to months ahead, using hydrological analysis in addition to the meteorology.

Flooding in Trujillo, Peru, March 2017 (Photo: Presidencia Perú on Twitter)

Broadly speaking, there are two key ways in which to provide early warning information on seasonal timescales: (1) through statistical analysis based on large-scale climate variability and teleconnections, and (2) by producing dynamical seasonal forecasts using coupled ocean-atmosphere GCMs. Over the past 4.5 years, I worked on providing hydrologically-relevant seasonal forecast products using these two approaches, at the global scale. This blog post will give a quick overview of the two new forecast products we produced as part of this research!

Can we use El Niño to predict flood hazard?

ENSO (the El Niño Southern Oscillation) is known to influence river flow and flooding across much of the globe, and often, statistical historical probabilities of extreme precipitation during El Niño and La Niña (the extremes of ENSO climate variability) are used to provide information on likely flood impacts. Due to its global influence on weather and climate, we decided to assess whether ENSO can be used as a predictor of flood hazard at the global scale, by assessing the links between ENSO and river flow globally, and estimating historical probabilities for high and low river flow equivalent to those already used for meteorological variables.

With a lack of sufficient river flow observations across much of the globe, we needed to use a reanalysis dataset – but global reanalysis datasets for river flow are few and far between, and none extended beyond ~40 years (which includes a sample of ≤10 El Niños and ≤13 La Niñas). We ended up producing a 20th-century global river flow reconstruction, forcing the CaMa-Flood hydrological model with ECMWF’s ERA-20CM atmospheric reconstruction to produce a 10-member river flow dataset covering 1901-2010, which we called ERA-20CM-R.


Using this dataset, we calculated the percentage of past El Niño and La Niña events, during which the monthly mean river flow exceeded a high flow threshold (the 75th percentile of the 110-year climatology) or fell below a low flow threshold (the 25th percentile), for each month of an El Niño / La Niña. This percentage is then taken as the probability that high or low flow will be observed in future El Niño/La Niña events. Maps of these probabilities are shown above, for El Niño, and all maps for both El Niño and La Niña can be found here. When comparing to the same historical probabilities calculated for precipitation, it is evident that additional information can be gained from considering the hydrology. For example, the River Nile in northern Africa is likely to see low river flow, even though the surrounding area is likely to see more precipitation – because it is influenced more by changes in precipitation upstream. In places that are likely to see more precipitation but in the form of snow, there would be no influence on river flow or flood hazard during the time when more precipitation is expected. However, several months later, there may be no additional precipitation expected, but there may be increased flood hazard due to the melting of more snow than normal – so we’re able to see a lagged influence of ENSO on river flow in some regions.
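The probability calculation itself is straightforward; here is a sketch for a single grid cell and calendar month, using synthetic flow data in place of the ERA-20CM-R reconstruction:

```python
import numpy as np

# Synthetic monthly mean river flow for 110 years (stand-in for the
# 1901-2010 ERA-20CM-R reconstruction at one grid cell), and a made-up
# set of years flagged as El Niño events.
rng = np.random.default_rng(42)
flow = rng.gamma(2.0, 50.0, size=110)
el_nino_years = rng.choice(110, size=25, replace=False)

high = np.percentile(flow, 75)               # high-flow threshold (75th percentile)
low = np.percentile(flow, 25)                # low-flow threshold (25th percentile)

# Fraction of past El Niño events during which flow exceeded (fell below)
# the threshold: taken as the probability for future El Niño events.
p_high = np.mean(flow[el_nino_years] > high)
p_low = np.mean(flow[el_nino_years] < low)
print(p_high, p_low)
```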

While there are locations where these probabilities are high and can provide a useful forecast of hydrological extremes, across much of the globe, the probabilities are lower and much more uncertain (see here for more info on uncertainty in these forecasts) than might be useful for decision-making purposes.

Providing openly-available seasonal river flow forecasts, globally

For the next ‘chapter’ of my PhD, we looked into the feasibility of providing seasonal forecasts of river flow at the global scale. Providing global-scale flood forecasts in the medium-range has only become possible in recent years, and extended-range flood forecasting was highlighted as a grand challenge and likely future development in hydro-meteorological forecasting.

To do this, I worked with Ervin Zsoter at ECMWF, to drive the GloFAS hydrological model (Lisflood) with reforecasts from ECMWF’s latest seasonal forecasting system, SEAS5, to produce seasonal forecasts of river flow. We also forced Lisflood with the new ERA5 reanalysis, to produce an ERA5-R river flow reanalysis with which to initialise Lisflood, and to provide a climatology. The system set-up is shown in the flowchart below.


I also worked with colleagues at ECMWF to design forecast products for a GloFAS seasonal outlook, based on a combination of features from the GloFAS flood forecasts and the EFAS (European Flood Awareness System) seasonal outlook, and incorporating feedback from users of EFAS.

After ~1 year of working on getting the system set up and finalising the forecast products, including a four-month research placement at ECMWF, the first GloFAS-Seasonal forecast was released in November 2017, with the release of SEAS5. GloFAS-Seasonal is now running operationally at ECMWF, providing forecasts of high and low weekly-averaged river flow for the global river network, up to 4 months ahead, with 3 new forecast layers available through the GloFAS interface. These provide a forecast overview for 307 major river basins, a map of the forecast for the entire river network at the sub-basin scale, and ensemble hydrographs at thousands of locations across the globe (which change with each forecast depending on forecast probabilities). New forecasts are produced once per month, and released on the 10th of each month. You can find more information on each of the different forecast layers and the system set-up here, and you can access the (openly available) forecasts here. ERA5-R, ERA-20CM-R and the GloFAS-Seasonal reforecasts are also all freely available – just get in touch! GloFAS-Seasonal will continue to be developed by ECMWF and the JRC, and has already been updated to v2.0, including a calibrated version of the hydrological model.

Screenshot of the GloFAS seasonal outlook at

So, over the course of my PhD, we developed two new seasonal forecasts for hydrological extremes, at the global scale. You may be wondering whether they’re skilful, or in fact, which one provides the most useful forecasts! For information on the skill or ‘potential usefulness’ of GloFAS-Seasonal, head to our paper, and stay tuned for a paper coming soon (hopefully! [update: this paper has just been accepted and can be accessed online here]) on the ‘most useful approach for forecasting hydrological extremes during El Niño’, in which we compare the skill of the two forecasts at predicting observed high and low flow events during El Niño.


With thanks to my PhD supervisors & co-authors:

Hannah Cloke¹, Liz Stephens¹, Florian Pappenberger², Steve Woolnough¹, Ervin Zsoter², Peter Salamon³, Louise Arnal¹·², Christel Prudhomme², Davide Muraro³

¹University of Reading, ²ECMWF, ³European Commission Joint Research Centre

Modelling windstorm losses in a climate model

Extratropical cyclones cause vast amounts of damage across Europe throughout the winter seasons. The damage from these cyclones mainly comes from the associated severe winds. The most intense cyclones have gusts of over 200 kilometres per hour, resulting in substantial damage to property and forestry; for example, the Great Storm of 1987 uprooted approximately 15 million trees in one night. The average loss from these storms is over $2 billion per year (Schwierz et al. 2010), second only to Atlantic hurricanes globally in terms of insured losses from natural hazards. However, the most severe cyclones, such as Lothar (26/12/1999) and Kyrill (18/1/2007), can cause losses in excess of $10 billion (Munich Re, 2016). One property of extratropical cyclones is their tendency to cluster (to arrive in groups – see the example in Figure 1), and in such cases these impacts can be greatly increased. For example, Windstorm Lothar was followed just one day later by Windstorm Martin, and the two storms combined caused losses of over $15 billion. The large-scale atmospheric dynamics associated with clustering events have been discussed in a previous blog post and in the scientific literature (Pinto et al., 2014; Priestley et al. 2017).

Figure 1. Composite visible satellite image from 11 February 2014 of 4 extratropical cyclones over the North Atlantic (circled) (NASA).

A large part of my PhD has involved investigating exactly how important the clustering of cyclones is for losses across Europe during the winter. In order to do this, I have used 918 years of high-resolution coupled climate model data from HiGEM (Shaffrey et al., 2017), which provides a very large number of winter seasons and cyclone events for analysis.

In order to understand how clustering affects losses, I first need to know how much loss/damage is associated with each individual cyclone. This is done using a measure called the Storm Severity Index (SSI – Leckebusch et al., 2008), a proxy for losses based on the 10-metre wind field of the cyclone events. Firstly, the SSI scales the wind speed at each location by the 98th percentile of the wind speed climatology at that location. This scaling ensures that only the most severe winds at any one point are considered, as what counts as ‘damaging’ differs from place to place. The exceedance above the 98th percentile is then raised to the power of 3, because wind damage is a highly non-linear function of wind speed. Finally, we apply a population density weighting to our calculations. This weighting is required because a hypothetical gust of 40 m/s across London will cause considerably more damage than the same gust across far northern Scandinavia, and population density is a good approximation for the density of insured property. An example of the SSI calculated for Windstorm Lothar is shown in Figure 2.
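Following the description above, the SSI for a single storm footprint can be sketched as follows (with synthetic fields standing in for the model wind field, the wind climatology and the population data):

```python
import numpy as np

# Synthetic stand-ins for one storm's footprint on a small grid.
rng = np.random.default_rng(7)
shape = (20, 30)
wind = rng.gamma(9.0, 2.0, size=shape)   # storm 10-m wind field (m/s)
v98 = np.full(shape, 20.0)               # local 98th-percentile climatology (m/s)
pop_density = rng.random(shape)          # population-density weighting

# Only winds exceeding the local 98th percentile contribute, and the
# exceedance is cubed to reflect the non-linearity of wind damage.
exceedance = np.maximum(wind / v98 - 1.0, 0.0)
ssi = np.sum(pop_density * exceedance**3)
print(ssi)
```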


Figure 2. (a) Wind footprint of Windstorm Lothar (25-27/12/1999) – 10 metre wind speed in coloured contours (m/s). Black line is the track of Lothar with points every 6 hours (black dots). (b) The SSI field of Windstorm Lothar. All data from ERA-Interim.


From Figure 2b you can see that most of the damage from Windstorm Lothar was concentrated across central/northern France and southern Germany, because the winds there were the most extreme relative to the local climatology. Even though the winds are strongest over the North Atlantic Ocean, the lack of insured property, and the much higher climatological winter mean wind speed there, mean that we do not observe losses/damage from Windstorm Lothar in these locations.

Figure 3. The average SSI for 918 years of HiGEM data.


I can apply the SSI to all of the individual cyclone events in HiGEM and therefore construct a climatology of where windstorm losses occur. Figure 3 shows the average loss across all 918 years of HiGEM. You can see that the losses are concentrated in a band extending eastwards from the southern UK towards Poland. This mainly covers Great Britain, Belgium, the Netherlands, France, Germany, and Denmark.

This blog post introduces my methodology of calculating and investigating the losses associated with the winter season extratropical cyclones. Work in Priestley et al. (2018) uses this methodology to investigate the role of clustering on winter windstorm losses.

This work has been funded by the SCENARIO NERC DTP and also co-sponsored by Aon Benfield.





Leckebusch, G. C., Renggli, D., and Ulbrich, U. 2008. Development and application of an objective storm severity measure for the Northeast Atlantic region. Meteorologische Zeitschrift.

Munich Re. 2016. Loss events in Europe 1980 – 2015. 10 costliest winter storms ordered by overall losses.

Pinto, J. G., Gómara, I., Masato, G., Dacre, H. F., Woollings, T., and Caballero, R. 2014. Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe. Journal of Geophysical Research: Atmospheres.

Priestley, M. D. K., Dacre, H. F., Shaffrey, L. C., Hodges, K. I., and Pinto, J. G. 2018. The role of European windstorm clustering for extreme seasonal losses as determined from a high resolution climate model. Nat. Hazards Earth Syst. Sci. Discuss., in review.

Priestley, M. D. K., Pinto, J. G., Dacre, H. F., and Shaffrey, L. C. 2017. Rossby wave breaking, the upper level jet, and serial clustering of extratropical cyclones in western Europe. Geophysical Research Letters.

Schwierz, C., Köllner-Heck, P., Zenklusen Mutter, E. et al. 2010. Modelling European winter wind storm losses in current and future climate. Climatic Change.

Shaffrey, L. C., Hodson, D., Robson, J., Stevens, D., Hawkins, E., Polo, I., Stevens, I., Sutton, R. T., Lister, G., Iwi, A., et al. 2017. Decadal predictions with the HiGEM high resolution global coupled climate model: description and basic evaluation. Climate Dynamics.

Baroclinic and Barotropic Annular Modes of Variability


Modes of variability are climatological features that have global effects on regional climate and weather. They are identified through their spatial structures and associated time series (so-called EOF/PC analysis, which finds the patterns explaining the largest fraction of the variability in a given atmospheric field). Examples of modes of variability include the El Niño Southern Oscillation, the Madden-Julian Oscillation, the North Atlantic Oscillation, the annular modes, etc. The latter are named after the “annulus” (a region bounded by two concentric circles) as they occur in the Earth’s midlatitudes (a band of atmosphere bounded by the polar and tropical regions, Fig. 1), and are the most important modes of midlatitude variability, generally representing 20-30% of the variability in a field.
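EOF/PC analysis is essentially an eigen-decomposition of the anomaly covariance; a minimal sketch via SVD on a synthetic (time × space) anomaly matrix (real analyses would also weight by grid-cell area):

```python
import numpy as np

# Synthetic (time x space) data matrix standing in for an atmospheric field.
rng = np.random.default_rng(0)
n_time, n_space = 200, 50
X = rng.standard_normal((n_time, n_space))
X -= X.mean(axis=0)                  # remove the time mean -> anomalies

# SVD of the anomaly matrix gives the EOFs (spatial patterns), the PCs
# (their time series), and the variance explained by each mode.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eofs = Vt                            # rows are spatial patterns (EOFs)
pcs = U * s                          # columns are the associated time series (PCs)
explained = s**2 / np.sum(s**2)      # fraction of total variance per mode

print(explained[0])                  # variance fraction of the leading mode
```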

Figure 1: The Southern Hemisphere midlatitudes (between the red concentric circles) form an annulus, the region where the annular modes have their largest impacts.

Two types of annular mode are known: baroclinic, based on eddy kinetic energy (EKE), a proxy for eddy activity and an indicator of storm-track intensity; and barotropic, based on the zonal-mean zonal wind, representing north–south shifts of the jet stream (Fig. 2). The barotropic modes are usually referred to as the Southern (SAM, or Antarctic Oscillation) or Northern (NAM, or Arctic Oscillation) Annular Mode, depending on the hemisphere. They have a largely barotropic (vertically uniform) structure and influence temperature variations, sea-ice distribution, and storm paths in both hemispheres on timescales of about 10 days. The baroclinic mode is referred to as the BAM; it exhibits a strong vertical structure associated with strong vertical wind shear (baroclinicity), and its impacts are yet to be fully determined (e.g. Thompson and Barnes 2014, Marshall et al. 2017). These two modes of variability are linked to the key processes of midlatitude tropospheric dynamics involved in the growth (baroclinic processes) and decay (barotropic processes) of midlatitude storms: the growth stage is conventionally associated with an increase in EKE, and the decay stage with a decrease in EKE.

Figure 2: Barotropic annular mode (right), based on zonal wind (contours), associated with eddy momentum flux (shading); Baroclinic annular mode (left), based on eddy kinetic energy (contours), associated with eddy heat flux (shading). Source: Thompson and Woodworth (2014).

However, recent observational studies (e.g. Thompson and Woodworth 2014) have suggested a decoupling of the baroclinic and barotropic components of atmospheric variability in the Southern Hemisphere (i.e. no correlation between the BAM and the SAM), and a simpler formulation of the EKE budget that depends only on the eddy heat flux and the BAM (Thompson et al. 2017). Using cross-spectrum analysis, we empirically tested the validity of this suggested relationship between EKE and eddy heat flux at different timescales (Boljka et al. 2018). Two regimes are identified in Fig. 3: 1) one where the relationship between EKE and eddy heat flux holds well (periods longer than 10 days; the intermediate timescale); and 2) one where it breaks down (periods shorter than 10 days; the synoptic timescale). For the relationship to hold (by construction), the imaginary part of the cross-spectrum must follow the angular-frequency line and the real part must be constant; this is only true at the intermediate timescales. Hence, the decoupling of baroclinic and barotropic components found in Thompson and Woodworth (2014) applies only at intermediate timescales. This is consistent with our theoretical model (Boljka and Shepherd 2018), which predicts decoupling under synoptic temporal and spatial averaging. At synoptic timescales, additional processes such as barotropic momentum fluxes (closely related to latitudinal shifts of the jet stream) contribute to the variability in EKE, consistent with the dynamics of storms on timescales shorter than 10 days (e.g. Simmons and Hoskins 1978). This is further discussed in Boljka et al. (2018).
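The logic of the cross-spectrum test can be illustrated with a toy example (this is not the calculation in Boljka et al. 2018; the budget coefficients and time series here are invented). If a simplified EKE budget dE/dt = F − εE held exactly, with F the eddy heat flux and ε a damping rate, then in frequency space F̂ = (iω + ε)Ê, so the cross-spectrum between E and F, normalised by the power spectrum of E, has imaginary part equal to the angular frequency ω and real part equal to the constant ε:

```python
import numpy as np

n, dt, eps = 4096, 1.0, 0.1   # sample count, time step, damping (invented)
rng = np.random.default_rng(1)

# Build a periodic red-noise "EKE" series directly in frequency space
freq = np.fft.rfftfreq(n, dt)
w = 2 * np.pi * freq                       # angular frequency
spec = (rng.standard_normal(w.size) + 1j * rng.standard_normal(w.size))
spec /= 1.0 + (w / 0.05)**2                # redden the spectrum
spec[0] = 0.0                              # no mean
E = np.fft.irfft(spec, n)

# Define the "heat flux" so the toy budget dE/dt = F - eps*E holds
# exactly (spectral derivative keeps the series periodic)
Ef = np.fft.rfft(E)
dEdt = np.fft.irfft(1j * w * Ef, n)
F = dEdt + eps * E

# Cross-spectrum between E and F, normalised by the power spectrum of E
Ff = np.fft.rfft(F)
cross = np.conj(Ef) * Ff
power_E = np.abs(Ef)**2

# Skip the zero and Nyquist bins, where the real FFT is degenerate
imag_norm = cross.imag[1:-1] / power_E[1:-1]   # should equal w
real_norm = cross.real[1:-1] / power_E[1:-1]   # should equal eps
```

In Fig. 3 the observed imaginary part tracks the angular-frequency line only at periods longer than about 10 days, which is how the breakdown of the simplified budget at synoptic timescales is diagnosed.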

Figure 3: Imaginary (black solid line) and real (grey solid line) parts of the cross-spectrum between EKE and eddy heat flux. The black dashed line shows the angular frequency (if the tested relationship holds, the imaginary part of the cross-spectrum follows this line); the red line separates the two frequency regimes discussed in the text. Source: Boljka et al. (2018).


Boljka, L., and T. G. Shepherd, 2018: A multiscale asymptotic theory of extratropical wave, mean-flow interaction. J. Atmos. Sci., in press.

Boljka, L., T. G. Shepherd, and M. Blackburn, 2018: On the coupling between barotropic and baroclinic modes of extratropical atmospheric variability. J. Atmos. Sci., in review.

Marshall, G. J., D. W. J. Thompson, and M. R. van den Broeke, 2017: The signature of Southern Hemisphere atmospheric circulation patterns in Antarctic precipitation. Geophys. Res. Lett., 44, 11,580–11,589.

Simmons, A. J., and B. J. Hoskins, 1978: The life cycles of some nonlinear baroclinic waves. J. Atmos. Sci., 35, 414–432.

Thompson, D. W. J., and E. A. Barnes, 2014: Periodic variability in the large-scale Southern Hemisphere atmospheric circulation. Science, 343, 641–645.

Thompson, D. W. J., B. R. Crow, and E. A. Barnes, 2017: Intraseasonal periodicity in the Southern Hemisphere circulation on regional spatial scales. J. Atmos. Sci., 74, 865–877.

Thompson, D. W. J., and J. D. Woodworth, 2014: Barotropic and baroclinic annular variability in the Southern Hemisphere. J. Atmos. Sci., 71, 1480–1493.