## The (real) butterfly effect: the impact of resolving the mesoscale range

What exactly does the ‘butterfly effect’ mean? Many people would attribute the butterfly effect to the famous three-dimensional non-linear model of Lorenz (1963), whose attractor looks like a butterfly when viewed from a particular angle. While that model serves as an important foundation of chaos theory (by establishing that three dimensions are not only necessary for chaos, as mandated by the Poincaré–Bendixson theorem, but also sufficient), the term ‘butterfly effect’ was not coined until 1972 (Palmer et al. 2014), based on a scientific presentation that Lorenz gave on a more radical, more recent work (Lorenz 1969) on the predictability barrier in multi-scale fluid systems. In that work, Lorenz demonstrated that under certain conditions, small-scale errors grow faster than large-scale errors, in such a way that the predictability horizon cannot be extended beyond an absolute limit by reducing the initial error (unless the initial error is exactly zero). Such limited predictability, or the butterfly effect as understood in this context, has now become a ‘canon in dynamical meteorology’ (Rotunno and Snyder 2008). Recent studies with advanced numerical weather prediction (NWP) models estimate this predictability horizon to be on the order of 2 to 3 weeks (Buizza and Leutbecher 2015; Judt 2018), in agreement with Lorenz’s original result.

The predictability properties of a fluid system depend primarily on its energy spectrum, with the nature of the dynamics per se playing only a secondary role (Rotunno and Snyder 2008). It is well known that a spectral slope shallower than -3 is associated with limited predictability, whereas a slope equal to or steeper than -3 is associated with unlimited predictability (Lorenz 1969; Rotunno and Snyder 2008). This can be understood by analysing the characteristics of the energy spectrum of the error field. As shown in Figure 1, the error grows roughly uniformly across scales when predictability is unlimited, and ‘cascades’ upscale when predictability is limited. In the latter case, the error spectra peak at the small scales, where the growth rate is fastest.

The Earth’s atmospheric energy spectrum consists of a -3 range in the synoptic scale and a $-\frac{5}{3}$ range in the mesoscale (Nastrom and Gage 1985). Since the limited predictability of the atmosphere arises from mesoscale physical processes, it is of interest to understand how errors grow under this hybrid spectrum, and to what extent global NWP models, which are only beginning to resolve the mesoscale $-\frac{5}{3}$ range, exhibit the fast error growth characteristic of the limited predictability associated with this range.

We use the Lorenz (1969) model at two different resolutions: $K_{max}=11$, corresponding to a maximum wavenumber of $2^{11}=2048$, and $K_{max}=21$. The former represents the approximate resolution of current global NWP models (~20 km), and the latter a resolution about 1000 times finer, so that the shallower mesoscale range is much better resolved. Figure 2 shows the growth of a small-scale, small-amplitude initial error under these two model settings.
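The correspondence between $K_{max}$ and physical scale can be checked with a quick back-of-envelope calculation. This sketch assumes, purely for illustration, that wavenumber $2^K$ corresponds to dividing a 40,000 km great circle into $2^K$ wavelengths:

```python
# Hypothetical helper for converting the Lorenz-model wavenumber index K to
# an approximate horizontal scale, assuming wavenumber 2**K fits around a
# 40,000 km great circle (an illustrative assumption, not from the paper).
EARTH_CIRCUMFERENCE_KM = 40_000

def scale_km(K: int) -> float:
    """Approximate horizontal scale (km) of wavenumber 2**K."""
    return EARTH_CIRCUMFERENCE_KM / 2**K

print(scale_km(11))  # ~19.5 km, comparable to current global NWP grids
print(scale_km(21))  # ~0.02 km, i.e. about 1000 times finer
```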

In the $K_{max}=11$ case, where the $-\frac{5}{3}$ range is only partially resolved, the error growth remains more or less up-magnitude and no upscale cascade is visible: the error is still strongly influenced by the synoptic-scale -3 range. This behaviour largely agrees with the results of a recent study using a full-physics global NWP model (Judt 2018). In contrast, at the higher resolution $K_{max}=21$, the upscale propagation of the error through the mesoscale is clearly visible. As the error spreads to the synoptic scale, its growth becomes more up-magnitude.

To understand how the error growth rate depends on scale, we use the parametric model of Žagar et al. (2017), fitting the error-versus-time curve at every wavenumber / scale to the equation $E\left ( t \right )=A\tanh\left ( at+b\right )+B$, so that the parameters $A, B, a$ and $b$ are functions of the wavenumber / scale. Among these, $a$ describes the rate of error growth: the larger $a$ is, the quicker the error grows. A dimensional argument suggests that $a \sim (k^3 E(k))^{1/2}$, so that $a$ should be constant across a $-3$ range $(E(k) \sim k^{-3})$, and should grow $10^{2/3} \approx 4.6$-fold for every decade of wavenumbers in a $-\frac{5}{3}$ range. These scalings are indeed observed in the model simulations, except that the sharp increase pertaining to the $-\frac{5}{3}$ range only kicks in at $K \sim 15$ (1 to 2 km), a much smaller scale than the transition between the $-3$ and $-\frac{5}{3}$ ranges at $K \sim 7$ (300 to 600 km). See Figure 3 for details.
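As a concrete illustration, this kind of fit can be done per wavenumber with a standard nonlinear least-squares routine. The curve below is synthetic, and the ‘true’ parameter values are assumptions for the demo, not results from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the Zagar et al. (2017) parametric fit, E(t) = A*tanh(a*t+b) + B,
# applied to a synthetic error-growth curve (illustrative data only).
def parametric_growth(t, A, a, b, B):
    return A * np.tanh(a * t + b) + B

t = np.linspace(0, 10, 200)              # time, arbitrary units
true_params = (1.0, 0.8, -2.0, 1.0)      # assumed "truth" for this demo
E = parametric_growth(t, *true_params)
E_noisy = E + 0.01 * np.random.default_rng(0).normal(size=t.size)

# Fit A, a, b, B for one wavenumber; repeating per wavenumber yields a(k).
popt, _ = curve_fit(parametric_growth, t, E_noisy, p0=(1.0, 0.5, -1.0, 1.0))
A_fit, a_fit, b_fit, B_fit = popt
print(f"fitted growth-rate parameter a = {a_fit:.3f}")
```

Repeating the fit for each wavenumber gives the scale dependence of $a$, which can then be compared against the $(k^3 E(k))^{1/2}$ scaling.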

This explains the absence of an upscale cascade in the $K_{max}=11$ simulation. As models move to very high resolution in the future, the strong predictability constraints proper to the mesoscale $-\frac{5}{3}$ range will emerge, but only once that range is sufficiently resolved. Our idealised study with the Lorenz model shows that this happens only when $K_{max} > 15$. In other words, motions at 1 to 2 km have to be fully resolved for error growth at the small scales to be correctly represented. This corresponds to a grid resolution of ~250 m after accounting for the need for a dissipation range in a numerical model (Skamarock 2004).

While this may seem a pessimistic conclusion, we have observed that the sensitivity of the error growth behaviour to model resolution is itself sensitive to the initial error profile. The results presented above are for an initial error confined to a single small scale. When the initial error distribution is changed, the qualitative picture of error growth may not show such a contrast between the two resolutions. We therefore highlight the need for further research to assess the potential gains of resolving more of the mesoscale, especially for a realistic distribution of the errors that initiate the integrations of operational NWP models.

A manuscript on this work has been submitted and is currently under review.

This work is supported by a PhD scholarship awarded by the EPSRC Centre for Doctoral Training in the Mathematics of Planet Earth, with additional funding support from the ERC Advanced Grant ‘Understanding the Atmospheric Circulation Response to Climate Change’ and the Deutsche Forschungsgemeinschaft (DFG) Grant ‘Scaling Cascades in Complex Systems’.

References

Buizza, R. and Leutbecher, M. (2015). The forecast skill horizon. Quart. J. Roy. Meteor. Soc. 141, 3366–3382. https://doi.org/10.1002/qj.2619

Judt, F. (2018). Insights into atmospheric predictability through global convection-permitting model simulations. J. Atmos. Sci. 75, 1477–1497. https://doi.org/10.1175/JAS-D-17-0343.1

Leung, T. Y., Leutbecher, M., Reich, S. and Shepherd, T. G. (2019). Impact of the mesoscale range on error growth and the limits to atmospheric predictability. Submitted.

Lorenz, E. N. (1963). Deterministic Nonperiodic Flow. J. Atmos. Sci. 20, 130–141. https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2

Lorenz, E. N. (1969). The predictability of a flow which possesses many scales of motion. Tellus 21, 289–307. https://doi.org/10.3402/tellusa.v21i3.10086

Nastrom, G. D. and Gage, K. S. (1985). A climatology of atmospheric wavenumber spectra of wind and temperature observed by commercial aircraft. J. Atmos. Sci. 42, 950–960. https://doi.org/10.1175/1520-0469(1985)042<0950:ACOAWS>2.0.CO;2

Palmer, T. N., Döring, A. and Seregin, G. (2014). The real butterfly effect. Nonlinearity 27, R123–R141. https://doi.org/10.1088/0951-7715/27/9/R123

Rotunno, R. and Snyder, C. (2008). A generalization of Lorenz’s model for the predictability of flows with many scales of motion. J. Atmos. Sci. 65, 1063–1076. https://doi.org/10.1175/2007JAS2449.1

Skamarock, W. C. (2004). Evaluating mesoscale NWP models using kinetic energy spectra. Mon. Wea. Rev. 132, 3019–3032. https://doi.org/10.1175/MWR2830.1

Žagar, N., Horvat, M., Zaplotnik, Ž. and Magnusson, L. (2017). Scale-dependent estimates of the growth of forecast uncertainties in a global prediction system. Tellus A 69:1, 1287492. https://doi.org/10.1080/16000870.2017.1287492

## The impact of atmospheric model resolution on the Arctic

The Arctic region is changing rapidly, with surface temperatures warming at around twice the global average and sea ice extent declining rapidly, particularly in summer. These changes affect local ecosystems and people as well as the rest of the global climate. The decline in sea ice has coincided with cold winters over the Northern Hemisphere mid-latitudes and an increase in other extreme weather events (Cohen et al., 2014). Many mechanisms have been suggested linking changes in sea ice to changes in the stratospheric jet, the midlatitude jet and the storm tracks; however, this is an area of active research, with much ongoing debate.

It is therefore important that we are able to understand and predict changes in the Arctic; however, there is still considerable uncertainty. Stroeve et al. (2012) calculated time series of September sea ice extent for different CMIP5 models, shown in Figure 1. In general the models do a reasonable job of reproducing recent trends in sea ice decline, although there is a large inter-model spread and an even larger spread in future projections. One area of model development is increasing the horizontal resolution, i.e. reducing the size of the grid cells used to calculate the model equations.

The aim of my PhD is to investigate the impact that climate model resolution has on the representation of the Arctic climate. This will help us understand the benefits to be gained from increasing model resolution. The first part of the project investigated the impact of atmospheric resolution. We looked at three experiments (using HadGEM3-GC2), each at a different atmospheric resolution: 135 km (N96), 60 km (N216) and 25 km (N512).

The annual mean sea ice concentration for observations and the biases of the three experiments are shown in Figure 2. The low resolution experiment does a good job of reproducing the sea ice extent seen in observations, with only small biases in the marginal sea ice regions. However, in the higher resolution experiments the sea ice concentration is much lower than observed, particularly in the Barents Sea (north of Norway). These changes in sea ice are consistent with the warmer temperatures in the high resolution experiments compared to the low resolution.

To understand where these changes come from, we looked at the energy transported into the Arctic by the atmosphere and by the ocean. We found an increase in the total energy transported into the Arctic, consistent with the reduced sea ice and warmer temperatures. Interestingly, the extra energy is transported into the Arctic by the ocean (Figure 3), even though it is the atmospheric resolution that changes between the experiments. In the high resolution experiments the ocean energy transport into the Arctic, 0.15 petawatts (PW), is in better agreement with the observational estimate of 0.154 PW from Tsubouchi et al. (2018). This contrasts with the poorer representation of sea ice concentration in the high resolution experiments. (It is important to note that the model was tuned at the low resolution, and as little as possible was changed when running the high resolution experiments, which may contribute to the better sea ice concentration in the low resolution experiment.)

We find that the ocean is very sensitive to differences in the surface winds between the high and low resolution experiments. The wind differences arise from different processes in different regions. In the Davis Strait the effect of coastal tiling is important: at higher resolution a smaller area is covered by atmospheric grid cells that span both land and ocean. In a cell covering both land and ocean the model usually produces wind speeds that are too low over the ocean, so in the higher resolution experiment there are higher wind speeds over the ocean near the coast. Over the Fram Strait and the Barents Sea, by contrast, we find that large-scale atmospheric circulation changes account for the differences in surface winds between the experiments.

References

Cohen, J., Screen, J. A., Furtado, J. C., Barlow, M., Whittleston, D., Coumou, D., Francis, J., Dethloff, K., Entekhabi, D., Overland, J. & Jones, J. 2014: Recent Arctic amplification and extreme mid-latitude weather. Nature Geoscience, 7(9), 627–637, http://dx.doi.org/10.1038/ngeo2234

Stroeve, J. C., Kattsov, V., Barrett, A., Serreze, M., Pavlova, T., Holland, M., & Meier, W. N., 2012: Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations. Geophysical Research Letters, 39(16), 1–7, https://doi.org/10.1029/2012GL052676

Tsubouchi, T., Bacon, S., Naveira Garabato, A. C., Aksenov, Y., Laxon, S. W., Fahrbach, E., Beszczynska-Möller, A., Hansen, E., Lee, C.M., Ingvaldsen, R. B. 2018: The Arctic Ocean Seasonal Cycles of Heat and Freshwater Fluxes: Observation-Based Inverse Estimates. Journal of Physical Oceanography, 48(9), 2029–2055, http://journals.ametsoc.org/doi/10.1175/JPO-D-17-0239.1

## How much energy is available in a moist atmosphere?

It is often useful to know how much energy is available to generate motion in the atmosphere, for example in storm tracks or tropical cyclones. To this end, Lorenz (1955) developed the theory of Available Potential Energy (APE), which defines the part of the potential energy in the atmosphere that could be converted into kinetic energy.

To calculate the APE of the atmosphere, we first find the minimum total potential energy that could be obtained by adiabatic motion (no heat exchange between parcels of air). The atmospheric setup that gives this minimum is called the reference state. This is illustrated in Figure 1: in the atmosphere on the left, the denser air will move horizontally into the less dense air, but in the reference state on the right, the atmosphere is stable and no motion would occur. No further kinetic energy is expected to be generated once we reach the reference state, and so the APE of the atmosphere is its total potential energy minus the total potential energy of the reference state.

If we think about an atmosphere that only varies in the vertical direction, it is easy to find the reference state if the atmosphere is dry. We assume that the atmosphere consists of a number of air parcels, and then all we have to do is place the parcels in order of increasing potential temperature with height. This ensures that density decreases upwards, so we have a stable atmosphere.
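For a dry column, the sorting step described above is literally just a sort. A minimal sketch, with made-up parcel values (real calculations would also track parcel masses and pressures):

```python
import numpy as np

# Dry reference state: rearrange parcels so that potential temperature
# increases with height, which guarantees a statically stable column.
# The parcel values below are illustrative only.
theta = np.array([300.0, 295.0, 310.0, 290.0, 305.0])  # parcel potential temps (K)

reference_state = np.sort(theta)   # bottom-to-top ordering of the column
print(reference_state)             # theta now increases monotonically upwards
```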

However, if we introduce water vapour into the atmosphere, the situation becomes more complicated. When water vapour condenses, latent heat is released, which increases the temperature of the air, decreasing its density. One moist air parcel can be denser than another at a certain height, but then less dense if they are lifted to a height where the first parcel condenses but the second one does not. The moist reference state therefore depends on the exact method used to sort the parcels by their density.

It is possible to find the rearrangement of the moist air parcels that gives the minimum possible total potential energy using the Munkres (1957) sorting algorithm, but this takes a very long time for a large number of parcels. Many different sorting algorithms have therefore been developed that try to find an approximate moist reference state more quickly (the different types of algorithm are explained by Stansifer et al. (2017) and Harris and Tailleux (2018)). However, these sorting algorithms do not try to analyse whether the parcel movements they simulate could actually happen in the real atmosphere (for example, many work by lifting all parcels to a fixed level in the atmosphere, without considering whether the parcels could feasibly move there), and there has been little understanding of whether the reference states they find are accurate.
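The exact-but-slow approach can be sketched as a linear assignment problem: precompute the potential energy of every parcel at every height level, then find the parcel-to-level assignment minimising the total. SciPy's `linear_sum_assignment` solves the same assignment problem as the Munkres algorithm; the cost matrix below is random and purely illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # solves the Munkres-type assignment problem

# Illustrative sketch: pe_cost[i, j] is the (precomputed) potential energy of
# parcel i if placed at height level j. A real calculation would fill this
# from parcel densities and level pressures; here it is just random numbers.
rng = np.random.default_rng(42)
n_parcels = 5
pe_cost = rng.uniform(0.0, 1.0, size=(n_parcels, n_parcels))

rows, levels = linear_sum_assignment(pe_cost)   # optimal rearrangement
min_total_pe = pe_cost[rows, levels].sum()
print(f"minimum total potential energy: {min_total_pe:.3f}")
```

The solver is roughly cubic in the number of parcels, which is why this exact approach becomes impractical for large columns and approximate sorting algorithms are attractive.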

As part of my PhD, I have performed the first assessment of these sorting algorithms across a wide range of atmospheric data, using over 3000 soundings from both tropical island and mid-latitude continental locations (Harris and Tailleux, 2018). This showed that whilst some of the sorting algorithms can provide a good estimate of the minimum potential energy reference state, others are prone to computing a rearrangement that actually has a higher potential energy than the original atmosphere.

We also showed that a new algorithm, which does not rely on sorting procedures, can calculate APE with comparable accuracy to the sorting algorithms. This method finds a layer of near-surface buoyant parcels, and performs the rearrangement by lifting the layer upwards until it is no longer buoyant. The success of this method suggests that we do not need to rely on possibly unphysical sorting algorithms to calculate moist APE, but that we can move towards approaches that consider the physical processes generating motion in a moist atmosphere.

References

Harris, B. L. and R. Tailleux, 2018: Assessment of algorithms for computing moist available potential energy. Q. J. R. Meteorol. Soc., 144, 1501–1510, https://doi.org/10.1002/qj.3297

Lorenz, E. N., 1955: Available potential energy and the maintenance of the general circulation. Tellus, 7, 157–167, https://doi.org/10.3402/tellusa.v7i2.8796

Munkres, J., 1957: Algorithms for the Assignment and Transportation Problems. J. Soc. Ind. Appl. Math., 5, 32–38, https://doi.org/10.1137/0105003

Stansifer, E. M., P. A. O’Gorman, and J. I. Holt, 2017: Accurate computation of moist available potential energy with the Munkres algorithm. Q. J. R. Meteorol. Soc., 143, 288–292, https://doi.org/10.1002/qj.2921

## Combining multiple streams of environmental data into a soil moisture dataset

An accurate estimate of soil moisture plays a vital role in a number of scientific research areas. It is important for day-to-day numerical weather prediction, forecasting of extreme events such as floods and droughts, and assessing crop suitability and estimating crop yields in a particular region, to mention a few. However, in-situ measurements of soil moisture are generally expensive to obtain, labour intensive and have sparse spatial coverage. To address this, satellite measurements and models are used as proxies for ground measurements. Satellite missions such as SMAP (Soil Moisture Active Passive) observe the soil moisture content of the top few centimetres of the Earth's surface. Soil moisture estimates from models, on the other hand, are prone to errors in how the physics is represented or in the parameter values used.

Data assimilation is a method of combining numerical models with observations and their error statistics. In principle, the state estimate after data assimilation is expected to be better than either the standalone model estimate of the state or the observations alone. There is a variety of data assimilation methods: variational, sequential and Monte Carlo methods, and combinations of these. The Joint UK Land Environment Simulator (JULES) is a community land surface model which simulates several land surface processes, such as the surface energy balance and the carbon cycle, and is used by the Met Office, the UK's national weather service.

My PhD aims to improve the estimate of soil moisture from the JULES model using satellite data from SMAP and the Four-Dimensional Ensemble Variational (4DEnVar) data assimilation method, introduced by Liu et al. (2008) and implemented by Pinnington et al. (2019; under review), which combines variational and ensemble data assimilation. In addition to the satellite soil moisture data, ground measurements of soil moisture from the Oklahoma Mesonet are also assimilated.
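To give a flavour of the ensemble-variational idea, here is a heavily simplified toy with a single observation time and a linear observation operator. This is a generic EnVar sketch with made-up numbers, not the LaVEnDAR implementation; the real 4DEnVar minimises over a whole window of observations:

```python
import numpy as np

# Toy EnVar analysis step: the increment is sought in the subspace spanned by
# ensemble perturbations, minimising
#   J(w) = 0.5 w'w + 0.5 (d - Yp w)' R^-1 (d - Yp w).
# All values below are illustrative.
rng = np.random.default_rng(1)
n, m = 10, 20                                        # state size, ensemble size
xb = rng.normal(size=n)                              # background state
Xens = xb[:, None] + 0.1 * rng.normal(size=(n, m))   # ensemble about xb
Xp = (Xens - Xens.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)

H = np.eye(3, n)                 # observe the first 3 state elements
R = 0.05**2 * np.eye(3)          # observation-error covariance
y = H @ xb + 0.02                # synthetic observations

d = y - H @ xb                   # innovation
Yp = H @ Xp                      # ensemble perturbations in observation space

# Analytic minimiser of J(w) in ensemble space:
A = np.eye(m) + Yp.T @ np.linalg.solve(R, Yp)
w = np.linalg.solve(A, Yp.T @ np.linalg.solve(R, d))
xa = xb + Xp @ w                 # posterior (analysis) state
print("analysis increment norm:", np.linalg.norm(xa - xb))
```

By construction the analysis fits the observations at least as well as the background, which is the behaviour seen in Figure 2.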

The time series of soil moisture from the JULES model (prior), soil moisture obtained after assimilation (posterior) and observed soil moisture for the Antlers station of the Mesonet are depicted in Figure 1. Figure 2 shows the distance of the prior and posterior soil moisture estimates from the assimilated observations. The smaller this distance, the better, since the primary objective of data assimilation is to fit the model trajectory optimally to the observations and the background. From Figures 1 and 2 we can conclude that the posterior soil moisture estimates are closer to the observations than the prior. Looking at particular months, the prior is closer to the observations than the posterior around January and October. This is because 4DEnVar considers all the observations when calculating a single optimal trajectory that fits both observations and background; hence, it is not surprising to see the prior closer to the observations than the posterior in some places.

The data assimilation experiments were repeated for different Mesonet sites with varying soil types, topography and climates, and with different soil moisture datasets. In all the experiments, we observed that the posterior soil moisture estimates are closer to the observations than the prior estimates. As a verification, a soil moisture reanalysis was calculated for the year 2018 and compared to the observations: Figure 3 shows SMAP soil moisture data assimilated into the JULES model and hindcast for the following year.

References

Liu, C., Q. Xiao, and B. Wang, 2008: An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test. Mon. Weather Rev., 136 (9), 3363–3373, https://doi.org/10.1175/2008MWR2312.1

Pinnington, E., T. Quaife, A. Lawless, K. Williams, T. Arkebauer, and D. Scoby, 2019: The Land Variational Ensemble Data Assimilation fRamework: LaVEnDAR. Geosci. Model Dev. Discuss., https://doi.org/10.5194/gmd-2019-60

## Simulating measurements from the ISMAR radiometer using a new light scattering approximation

It is widely known that clouds pose a lot of difficulties for both weather and climate modelling, particularly when ice is present. The ice water content (IWC) of a cloud is defined as the mass of ice per unit volume of air. The integral of this quantity over a column is referred to as the ice water path (IWP) and is considered one of the essential climate variables by the World Meteorological Organisation. Currently there are large inconsistencies in the IWP retrieved from different satellites, and there is also a large spread in the amount produced by different climate models (Eliasson et al., 2011).

A major part of the problem is the lack of reliable global measurements of cloud ice. For this reason, the Ice Cloud Imager (ICI) will be launched in 2022. ICI will be the first instrument in space specifically designed to measure cloud ice, with channels ranging from 183 to 664 GHz. It is expected that the combination of frequencies available will allow for more accurate estimations of IWP and particle size. A radiometer called ISMAR has been developed by the UK Met Office and ESA as an airborne demonstrator for ICI, flying on the FAAM BAe-146 research aircraft shown in Fig. 1.

As radiation passes through cloud, it is scattered in all directions. Remote sensing instruments measure the scattered field in some way: either by detecting some of the scattered waves, or by detecting how much radiation has been removed from the incident field as a result of scattering. The retrieval of cloud ice properties therefore relies on accurate scattering models. A variety of numerical methods currently exist to simulate scattering by ice particles with complex geometries. In a very broad sense, these can be divided into two categories:

1. Methods that are accurate but computationally expensive
2. Methods that are computationally efficient but inaccurate

My PhD has involved developing a new approximation for aggregates which falls somewhere in between the two extremes. The method is called the Independent Monomer Approximation (IMA). So far, tests have shown that it performs well for small particle sizes, with particularly impressive results for aggregates of dendritic monomers.

Radiometers such as ICI and ISMAR convert measured radiation into brightness temperatures (Tb), i.e. the temperature of a theoretical blackbody that would emit an equivalent amount of radiation. Lower values of Tb correspond to more ice in the clouds, as a greater amount of radiation from the lower atmosphere is scattered on its way to the instrument’s detector (i.e. a brightness temperature “depression” is observed over thick ice cloud). Generally, the interpretation of measurements from remote-sensing instruments requires many assumptions to be made about the shapes and distributions of particles within the cloud. However, by comparing Tb at orthogonal horizontal (H) and vertical (V) polarisations, we can gain some information about the size, shape, and orientation of ice particles within the cloud. If large V-H polarimetric differences are measured, it is indicative of horizontally oriented particles, whereas random orientation produces less of a difference in signal. According to Gong and Wu (2017), neglecting the polarimetric signal could result in errors of up to 30% in IWP retrievals. Examples of Tb depressions and the corresponding V-H polarimetric differences can be seen in Fig. 2. In the work shown here, we explore this particular case further.
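The blackbody relation behind brightness temperature can be made concrete with the Planck function and its inverse. This is a standard-physics sketch, with the 664 GHz channel used only as an example frequency:

```python
import numpy as np

h = 6.626e-34    # Planck constant (J s)
kB = 1.381e-23   # Boltzmann constant (J/K)
c = 2.998e8      # speed of light (m/s)

def planck_radiance(T, nu):
    """Spectral radiance (W m^-2 sr^-1 Hz^-1) of a blackbody at temperature T and frequency nu."""
    return 2 * h * nu**3 / c**2 / (np.exp(h * nu / (kB * T)) - 1)

def brightness_temperature(I, nu):
    """Invert the Planck function: temperature of a blackbody emitting radiance I at nu."""
    return h * nu / kB / np.log(1 + 2 * h * nu**3 / (c**2 * I))

nu = 664e9                              # highest ICI/ISMAR channel frequency (Hz)
I = planck_radiance(250.0, nu)          # radiance from a 250 K scene
print(brightness_temperature(I, nu))    # round-trips back to 250 K
```

More scattering by cloud ice means less upwelling radiance reaches the detector, so the inferred Tb drops, producing the depressions seen in Fig. 2(a).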

Figure 2: (a) ISMAR measured brightness temperatures, showing a depression (decrease in Tb) caused by thick cloud; (b) Polarimetric V-H brightness temperature difference, with significant values reaching almost 10 K.

Using the ISMAR instrument, we can test scattering models that could be used within retrieval algorithms for ICI. We want to find out whether the IMA method is capable of reproducing realistic brightness temperature depressions, and whether it captures the polarimetric signal. To do this, we look at a case study from the NAWDEX (North Atlantic Waveguide and Downstream Impact Experiment) flight campaign. The observations from the ISMAR radiometer were collected on 14 October 2016 off the north-west coast of Scotland, over a frontal ice cloud. Three different aircraft took measurements from above the cloud during this case, which means we have coincident data from ISMAR and two radar frequencies, 35 GHz and 95 GHz. This particular case saw large V-H polarimetric differences reaching almost 10 K, as seen in Fig. 2(b). We will look at the applicability of the IMA method to simulating the polarisation signal measured by ISMAR, using the Atmospheric Radiative Transfer Simulator (ARTS).

For this study, we need to construct a model of the atmosphere to be used in the radiative transfer simulations. The nice thing about this case is that the FAAM aircraft also flew through the cloud, meaning we have measurements from both in-situ and remote-sensing instruments. Consequently, we can design our model cloud using realistic assumptions. We try to match the atmospheric state at the time of the in-situ observations by deriving mass-size relationships specific to this case, and generating particles that follow the derived relationship for each layer. The particles were generated using the aggregation model of Westbrook et al. (2004).

Due to the depth of the cloud, it would not be possible to obtain an adequate representation of the atmospheric conditions using a single averaged layer. Hence, we modelled our atmosphere based on the aircraft profiles, using seven layers of ice with depths of approximately 1 km each, located between altitudes of 2 km and 9 km. Below 2 km, the Marshall-Palmer drop size distribution was used to represent rain, with an estimated rain rate of 1–2 mm/hr taken from the Met Office radar. The general structure of our model atmosphere can be seen in Fig. 3, along with some of the particles used in each layer. Note that this is a crude representation and the figure shows only a few examples; in the simulations we use between 46 and 62 different aggregate realisations in each layer.
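The rain layer can be sketched with the classic Marshall-Palmer (1948) exponential distribution, $N(D) = N_0 e^{-\Lambda D}$, using the standard published constants ($N_0 = 8000\ \mathrm{m^{-3}\,mm^{-1}}$, $\Lambda = 4.1 R^{-0.21}\ \mathrm{mm^{-1}}$ for rain rate $R$ in mm/hr); whether these exact constants were used in the study is an assumption:

```python
import numpy as np

# Marshall-Palmer raindrop size distribution: N(D) = N0 * exp(-Lambda * D),
# with the classic constants N0 = 8000 m^-3 mm^-1 and Lambda = 4.1 * R**-0.21 mm^-1.
N0 = 8000.0   # intercept parameter (m^-3 mm^-1)

def marshall_palmer(D_mm, rain_rate):
    """Number concentration N(D) in m^-3 mm^-1 for drop diameter D (mm) and rain rate R (mm/hr)."""
    lam = 4.1 * rain_rate**-0.21   # slope parameter (mm^-1)
    return N0 * np.exp(-lam * D_mm)

D = np.linspace(0.1, 4.0, 40)      # drop diameters (mm)
for R in (1.0, 2.0):               # the 1-2 mm/hr range estimated from the radar
    print(R, marshall_palmer(D, R)[:3])
```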

To test our model atmosphere, we simulated the radar reflectivities at 35 GHz and 95 GHz using the particle models generated for this case. This allowed us to refine our model until sufficient accuracy was achieved. We then used the IMA method to calculate the scattering quantities required by the ARTS radiative transfer model, which were implemented in ARTS in order to simulate the ISMAR polarisation observations.

Fig. 4 shows the simulated brightness temperatures using different layers of our modelled atmosphere, i.e. starting with the clear-sky case and gradually increasing the cloud amount. The simulations using the IMA scattering method in the ARTS model were compared to the measurements from ISMAR shown in Fig. 2. Looking at the solid lines in Fig. 4, it can be seen that the aggregates of columns and dendrites simulate the brightness temperature depression well, but do not reproduce the V-H polarisation signal. We therefore decided to include some horizontally aligned single dendrites, which were not part of our original atmospheric model. We chose these particles because they tend to have a greater polarisation signal than aggregates, and there was evidence in the cloud particle imagery that they were present in the cloud during the time of interest. We placed these particles at the cloud base, without changing the ice water content of the model. The results from that experiment are shown by the diagonal crosses in Fig. 4. It is clear that adding single dendrites allows us to simulate a considerably larger polarimetric signal, closely matching the ISMAR measurements. Using only aggregates of columns and dendrites gives a V-H polarimetric difference of 1.8 K, whereas the inclusion of single dendrites increases this value to 8.4 K.

To conclude, we have used our new light scattering approximation (IMA) along with the ARTS radiative transfer model to simulate brightness temperature measurements from the ISMAR radiometer. Although the measured brightness temperature depressions can generally be reproduced using the IMA scattering method, the polarisation difference is very sensitive to the assumed particle shape for a given ice water path. Therefore, to obtain good retrievals from ICI, it is important to represent the cloud as accurately as possible. Utilising the polarisation information available from the instrument could provide a way to infer realistic particle shapes, thereby reducing the need to make unrealistic assumptions.

References

Eliasson, S., S. A. Buehler, M. Milz, P. Eriksson, and V. O. John, 2011: Assessing observed and modelled spatial distributions of ice water path using satellite data. Atmos. Chem. Phys., 11, 375-391.

Gong, J., and D. L. Wu, 2017: Microphysical properties of frozen particles inferred from Global Precipitation Measurement (GPM) Microwave Imager (GMI) polarimetric measurements. Atmos. Chem. Phys., 17, 2741-2757.

Westbrook, C. D., R. C. Ball, P. R. Field, and A. J. Heymsfield, 2004: A theory of growth by differential sedimentation with application to snowflake formation. Phys. Rev. E, 70, 021403.

## Extending the predictability of flood hazard at the global scale

When I started my PhD, there were no global scale operational seasonal forecasts of river flow or flood hazard. Global overviews of upcoming flood events are key for organisations working at the global scale, from water resources management to humanitarian aid, and for regions where no other local or national forecasts are available. While GloFAS (the Global Flood Awareness System, run by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the European Commission Joint Research Centre (JRC) as part of the Copernicus Emergency Management Service) was producing operational, openly available flood forecasts out to 30 days ahead, there was a need for more extended-range forecast information. Often, due to a lack of hydrological forecasts, seasonal rainfall forecasts are used as a proxy for flood hazard. However, the link between precipitation and floodiness is nonlinear, and recent research has shown that seasonal rainfall forecasts are not necessarily the best indicator of potential flood hazard. The aim of my PhD research was to look into ways in which we could provide earlier warning information, several weeks to months ahead, using hydrological analysis in addition to the meteorology.

Broadly speaking, there are two key ways in which to provide early warning information on seasonal timescales: (1) through statistical analysis based on large-scale climate variability and teleconnections, and (2) by producing dynamical seasonal forecasts using coupled ocean-atmosphere GCMs. Over the past 4.5 years, I worked on providing hydrologically-relevant seasonal forecast products using these two approaches, at the global scale. This blog post will give a quick overview of the two new forecast products we produced as part of this research!

### Can we use El Niño to predict flood hazard?

ENSO (the El Niño Southern Oscillation) is known to influence river flow and flooding across much of the globe, and statistical historical probabilities of extreme precipitation during El Niño and La Niña (the extremes of ENSO climate variability) are often used to provide information on likely flood impacts. Given this global influence on weather and climate, we decided to assess whether ENSO could be used as a predictor of flood hazard at the global scale, by assessing the links between ENSO and river flow globally, and estimating historical probabilities for high and low river flow equivalent to those already used for meteorological variables.

Given the lack of sufficient river flow observations across much of the globe, we needed to use a reanalysis dataset – but global reanalysis datasets for river flow are few and far between, and none extended beyond ~40 years (a sample of ≤10 El Niños and ≤13 La Niñas). We ended up producing a 20th Century global river flow reconstruction, by forcing the CaMa-Flood hydrological model with ECMWF’s ERA-20CM atmospheric reconstruction, to produce a 10-member river flow dataset covering 1901-2010, which we called ERA-20CM-R.

Using this dataset, we calculated the percentage of past El Niño and La Niña events during which the monthly mean river flow exceeded a high flow threshold (the 75th percentile of the 110-year climatology) or fell below a low flow threshold (the 25th percentile), for each month of an El Niño / La Niña. This percentage was then taken as the probability that high or low flow will be observed during future El Niño / La Niña events. Maps of these probabilities are shown above for El Niño, and all maps for both El Niño and La Niña can be found here.

When comparing with the same historical probabilities calculated for precipitation, it is evident that additional information can be gained by considering the hydrology. For example, the River Nile in northern Africa is likely to see low river flow even though the surrounding area is likely to see more precipitation, because the river is influenced more by changes in precipitation upstream. Similarly, in places where the extra precipitation is likely to fall as snow, there would be no influence on river flow or flood hazard at the time the precipitation falls. Several months later, however, there may be increased flood hazard due to the melting of more snow than normal, even with no additional precipitation expected – so we’re able to see a lagged influence of ENSO on river flow in some regions.
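The event-conditioned probability calculation described above can be illustrated with a short sketch (synthetic data only; the function and variable names are my own, and for simplicity it pools all event months together rather than treating each month of an El Niño separately, as done in the study):

```python
import numpy as np

def conditional_flow_probabilities(flow, event_mask, hi_q=75, lo_q=25):
    """Probability of high/low monthly mean river flow during ENSO event months.

    flow       : 1-D array of monthly mean river flow at one location
    event_mask : boolean array, True for months inside El Nino (or La Nina) events
    """
    hi_thresh = np.percentile(flow, hi_q)  # high-flow threshold: 75th percentile of climatology
    lo_thresh = np.percentile(flow, lo_q)  # low-flow threshold: 25th percentile
    event_flow = flow[event_mask]
    # Fraction of event months beyond each threshold = historical probability
    p_high = float(np.mean(event_flow > hi_thresh))
    p_low = float(np.mean(event_flow < lo_thresh))
    return p_high, p_low

# Synthetic 110-year monthly series with arbitrary placeholder 'event' months
# (real event months would come from an ENSO index such as Nino3.4)
rng = np.random.default_rng(0)
flow = rng.gamma(shape=2.0, scale=50.0, size=110 * 12)
event_mask = np.zeros(flow.size, dtype=bool)
event_mask[::7] = True
p_high, p_low = conditional_flow_probabilities(flow, event_mask)
```

If the event months were no different from climatology, both probabilities would sit near 0.25 by construction; values well above that indicate a usable ENSO signal at that location.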

While there are locations where these probabilities are high and can provide a useful forecast of hydrological extremes, across much of the globe, the probabilities are lower and much more uncertain (see here for more info on uncertainty in these forecasts) than might be useful for decision-making purposes.

### Providing openly-available seasonal river flow forecasts, globally

For the next ‘chapter’ of my PhD, we looked into the feasibility of providing seasonal forecasts of river flow at the global scale. Providing global-scale flood forecasts in the medium-range has only become possible in recent years, and extended-range flood forecasting was highlighted as a grand challenge and likely future development in hydro-meteorological forecasting.

To do this, I worked with Ervin Zsoter at ECMWF to drive the GloFAS hydrological model (Lisflood) with reforecasts from ECMWF’s latest seasonal forecasting system, SEAS5, producing seasonal forecasts of river flow. We also forced Lisflood with the new ERA5 reanalysis to produce a river flow reanalysis, ERA5-R, used to initialise the forecasts and to provide a climatology. The system set-up is shown in the flowchart below.

I also worked with colleagues at ECMWF to design forecast products for a GloFAS seasonal outlook, based on a combination of features from the GloFAS flood forecasts and the seasonal outlook of EFAS (the European Flood Awareness System), incorporating feedback from EFAS users.

After ~1 year of work getting the system set up and finalising the forecast products, including a four-month research placement at ECMWF, the first GloFAS-Seasonal forecast was released in November 2017, alongside the release of SEAS5. GloFAS-Seasonal is now running operationally at ECMWF, providing forecasts of high and low weekly-averaged river flow for the global river network, up to 4 months ahead, with 3 new forecast layers available through the GloFAS interface. These provide a forecast overview for 307 major river basins, a map of the forecast for the entire river network at the sub-basin scale, and ensemble hydrographs at thousands of locations across the globe (locations which change with each forecast depending on the forecast probabilities). New forecasts are produced once per month and released on the 10th of each month. You can find more information on each of the different forecast layers and the system set-up here, and you can access the (openly available) forecasts here. ERA5-R, ERA-20CM-R and the GloFAS-Seasonal reforecasts are also all freely available – just get in touch! GloFAS-Seasonal will continue to be developed by ECMWF and the JRC, and has already been updated to v2.0, which includes a calibrated version of the hydrological model.

So, over the course of my PhD, we developed two new seasonal forecasts for hydrological extremes, at the global scale. You may be wondering whether they’re skilful, or in fact, which one provides the most useful forecasts! For information on the skill or ‘potential usefulness’ of GloFAS-Seasonal, head to our paper, and stay tuned for a paper coming soon (hopefully! [update: this paper has just been accepted and can be accessed online here]) on the ‘most useful approach for forecasting hydrological extremes during El Niño’, in which we compare the skill of the two forecasts at predicting observed high and low flow events during El Niño.

With thanks to my PhD supervisors & co-authors:

Hannah Cloke¹, Liz Stephens¹, Florian Pappenberger², Steve Woolnough¹, Ervin Zsoter², Peter Salamon³, Louise Arnal¹,², Christel Prudhomme², Davide Muraro³

¹University of Reading, ²ECMWF, ³European Commission Joint Research Centre

## Modelling windstorm losses in a climate model

Extratropical cyclones cause vast amounts of damage across Europe throughout the winter season. The damage from these cyclones mainly comes from the associated severe winds. The most intense cyclones have gusts of over 200 kilometres per hour, resulting in substantial damage to property and forestry; for example, the Great Storm of 1987 uprooted approximately 15 million trees in one night. The average loss from these storms is over $2 billion per year (Schwierz et al. 2010), second only to Atlantic hurricanes globally in terms of insured losses from natural hazards. However, the most severe cyclones, such as Lothar (26/12/1999) and Kyrill (18/1/2007), can cause losses in excess of $10 billion (Munich Re, 2016). One property of extratropical cyclones is their tendency to cluster (to arrive in groups – see example in Figure 1), and in such cases these impacts can be greatly increased. For example, Windstorm Lothar was followed just one day later by Windstorm Martin, and the two storms combined caused losses of over $15 billion. The large-scale atmospheric dynamics associated with clustering events have been discussed in a previous blog post and also in the scientific literature (Pinto et al., 2014; Priestley et al. 2017).

A large part of my PhD has involved investigating exactly how important the clustering of cyclones is for losses across Europe during the winter. In order to do this, I have used 918 years of high-resolution coupled climate model data from HiGEM (Shaffrey et al., 2017), which provides a huge number of winter seasons and cyclone events for analysis.

In order to understand how clustering affects losses, I first of all need to know how much loss/damage is associated with each individual cyclone. This is done using a measure called the Storm Severity Index (SSI – Leckebusch et al., 2008), a proxy for losses based on the 10-metre wind field of the cyclone events. The SSI works as follows. Firstly, it scales the wind speed at each location by the 98th percentile of the wind speed climatology at that location. This scaling ensures that only the most severe winds at any one point are considered, as different locations have different thresholds for what counts as ‘damaging’. The exceedance above the 98th percentile is then raised to the power of 3, because damage is a highly non-linear function of wind speed. Finally, we apply a population density weighting, because a hypothetical gust of 40 m/s across London will cause considerably more damage than the same gust across far northern Scandinavia, and population density is a good approximation for the density of insured property. An example of the SSI calculated for Windstorm Lothar is shown in Figure 2.
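The three SSI steps above can be sketched in a few lines (a schematic illustration with made-up numbers; the function name and array layout are my own, not the code used in the study):

```python
import numpy as np

def storm_severity_index(wind, wind_clim_p98, pop_density):
    """Schematic SSI: population-weighted cubed exceedance of the local 98th percentile.

    wind          : 2-D array, 10-metre wind footprint of one storm
    wind_clim_p98 : 2-D array, 98th percentile of the local wind climatology
    pop_density   : 2-D array, population density weighting
    """
    # Step 1: scale by the local climatology; only winds exceeding the
    # local 98th percentile contribute to the index
    exceedance = np.maximum(wind / wind_clim_p98 - 1.0, 0.0)
    # Step 2: cube the exceedance (damage is highly non-linear in wind speed)
    # Step 3: weight by population density as a proxy for insured property
    return float(np.sum(pop_density * exceedance ** 3))

# Made-up 2x2 footprint: only the two grid points above the local
# 98th percentile (40 m/s everywhere here) contribute
wind = np.array([[30.0, 45.0], [20.0, 50.0]])
clim = np.full((2, 2), 40.0)
pop = np.ones((2, 2))
ssi = storm_severity_index(wind, clim, pop)  # 0.125**3 + 0.25**3 ≈ 0.0176
```

Applying such a function to every cyclone footprint then allows per-storm losses to be compared and accumulated over a season.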

From Figure 2b you can see that most of the damage from Windstorm Lothar was concentrated across central/northern France and southern Germany. This is because the winds there were most extreme relative to the local climatology. Even though the winds are highest across the North Atlantic Ocean, the lack of insured property, and a much higher climatological winter mean wind speed, mean that we do not observe losses/damage from Windstorm Lothar in these locations.

I can apply the SSI to all of the individual cyclone events in HiGEM and thereby construct a climatology of where windstorm losses occur. Figure 3 shows the average loss across all 918 years of HiGEM. You can see that the losses are concentrated in a band extending eastwards from the southern UK towards Poland, mainly covering Great Britain, Belgium, the Netherlands, France, Germany, and Denmark.

This blog post introduces my methodology of calculating and investigating the losses associated with the winter season extratropical cyclones. Work in Priestley et al. (2018) uses this methodology to investigate the role of clustering on winter windstorm losses.

This work has been funded by the SCENARIO NERC DTP and also co-sponsored by Aon Benfield.

References

Leckebusch, G. C., Renggli, D., and Ulbrich, U. 2008. Development and application of an objective storm severity measure for the Northeast Atlantic region. Meteorologische Zeitschrift. https://doi.org/10.1127/0941-2948/2008/0323.

Munich Re. 2016. Loss events in Europe 1980 – 2015. 10 costliest winter storms ordered by overall losses. https://www.munichre.com/touch/naturalhazards/en/natcatservice/significant-natural-catastrophes/index.html

Pinto, J. G., Gómara, I., Masato, G., Dacre, H. F., Woollings, T., and Caballero, R. 2014. Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe. Journal of Geophysical Research: Atmospheres. https://doi.org/10.1002/2014JD022305.

Priestley, M. D. K., Dacre, H. F., Shaffrey, L. C., Hodges, K. I., and Pinto, J. G. 2018. The role of European windstorm clustering for extreme seasonal losses as determined from a high resolution climate model, Nat. Hazards Earth Syst. Sci. Discuss., https://doi.org/10.5194/nhess-2018-165, in review.

Priestley, M. D. K., Pinto, J. G., Dacre, H. F., and Shaffrey, L. C. 2017. Rossby wave breaking, the upper level jet, and serial clustering of extratropical cyclones in western Europe. Geophysical Research Letters. https://doi.org/10.1002/2016GL071277.

Schwierz, C., Köllner-Heck, P., Zenklusen Mutter, E. et al. 2010. Modelling European winter wind storm losses in current and future climate. Climatic Change. https://doi.org/10.1007/s10584-009-9712-1.

Shaffrey, L. C., Hodson, D., Robson, J., Stevens, D., Hawkins, E., Polo, I., Stevens, I., Sutton, R. T., Lister, G., Iwi, A., et al. 2017. Decadal predictions with the HiGEM high resolution global coupled climate model: description and basic evaluation, Climate Dynamics, https://doi.org/10.1007/s00382-016-3075-x.