The impact of atmospheric model resolution on the Arctic

Email: sally.woodhouse@pgr.reading.ac.uk

The Arctic region is rapidly changing, with surface temperatures warming at around twice the global average and sea ice extent rapidly declining, particularly in the summer. These changes affect local ecosystems and people as well as the rest of the global climate. The decline in sea ice has coincided with cold winters over the Northern Hemisphere mid-latitudes and an increase in other extreme weather events (Cohen et al., 2014). There are many suggested mechanisms linking changes in the sea ice to changes in the stratospheric jet, mid-latitude jet and storm tracks; however, this is an area of active research, with much ongoing debate.

Figure 1. Time series of September sea ice extent from 20 CMIP5 models (coloured lines): individual ensemble members are dotted and each model's ensemble mean is solid. The multi-model ensemble mean from a subset of the models is shown in solid black, with +/- 1 standard deviation in dotted black. The red line shows observations. From Stroeve et al. (2012).

It is therefore important that we are able to understand and predict changes in the Arctic; however, there is still a lot of uncertainty. Stroeve et al. (2012) calculated time series of September sea ice extent for different CMIP5 models, shown in Figure 1. In general the models do a reasonable job of reproducing the recent trends in sea ice decline, although there is a large inter-model spread and an even larger spread in future projections. One area of model development is increasing the horizontal resolution – that is, reducing the size of the grid cells used to calculate the model equations.

The aim of my PhD is to investigate the impact that climate model resolution has on the representation of the Arctic climate. This will help us understand the benefits that we can get from increasing model resolution. The first part of the project was investigating the impact of atmospheric resolution. We looked at three experiments (using HadGEM3-GC2), each at a different atmospheric resolution: 135 km (N96), 60 km (N216) and 25 km (N512).

Figure 2. Annual mean sea ice concentration for observations (HadISST) and the bias of each experiment relative to the observations. N96: low resolution, N216: medium resolution, N512: high resolution.

The annual mean sea ice concentration for observations and the biases of the three experiments are shown in Figure 2. The low resolution experiment does a good job of reproducing the sea ice extent seen in observations, with only small biases in the marginal sea ice regions. However, in the higher resolution experiments we find that the sea ice concentration is much lower than observed, particularly in the Barents Sea (north of Norway). These changes in sea ice are consistent with warmer temperatures in the high resolution experiments compared to the low resolution.

To understand where these changes come from, we looked at the energy transported into the Arctic by the atmosphere and by the ocean. We found that there is an increase in the total energy being transported into the Arctic, which is consistent with the reduced sea ice and warmer temperatures. Interestingly, the extra energy is being transported into the Arctic by the ocean (Figure 3), even though it is the atmospheric resolution that changes between the experiments. In the high resolution experiments the ocean energy transport into the Arctic, 0.15 petawatts (PW), is in better agreement with the observational estimate of 0.154 PW from Tsubouchi et al. (2018). This is in contrast to the worse representation of sea ice concentration in the high resolution experiments. (It is important to note that the model was tuned at the low resolution, and as little as possible was changed when running the high resolution experiments, which may contribute to the better sea ice concentration in the low resolution experiment.)
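
For readers curious how an ocean energy transport like the 0.15 PW quoted above is diagnosed, the sketch below integrates the temperature flux over a gateway cross-section. The variable names and the constant reference temperature are illustrative assumptions rather than the exact HadGEM3-GC2 diagnostics; for a closed boundary with no net mass flux (as in Figure 3), the result is insensitive to the reference temperature.

```python
import numpy as np

RHO0 = 1027.0    # reference sea water density (kg m-3), assumed constant
CP = 3985.0      # specific heat capacity of sea water (J kg-1 K-1)
THETA_REF = 0.0  # reference temperature (deg C)

def gateway_heat_transport(v, theta, dx, dz):
    """Ocean heat transport (W) through one vertical section.

    v     : velocity normal to the section (m s-1), shape (nz, nx)
    theta : potential temperature (deg C), shape (nz, nx)
    dx    : along-section cell widths (m), shape (nx,)
    dz    : cell thicknesses (m), shape (nz,)
    """
    area = np.outer(dz, dx)                     # cell areas (m2)
    flux = RHO0 * CP * v * (theta - THETA_REF)  # heat flux per unit area (W m-2)
    return np.sum(flux * area)

# Summing the transports through the Bering, Davis and Fram Straits and the
# Barents Sea (a closed boundary around the Arctic) gives the total ocean
# energy transport into the Arctic.
```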

Location of ocean gateways into the Arctic. Red: Bering Strait, Green: Davis Strait, Blue: Fram Strait, Magenta: Barents Sea
Figure 3. Ocean energy transport for each resolution experiment through the four ocean gateways into the Arctic. The four gateways form a closed boundary into the Arctic.

We find that the ocean is very sensitive to the differences in the surface winds between the high and low resolution experiments. In different regions the wind differences arise from different processes. In the Davis Strait the effect of coastal tiling is important: at higher resolution a smaller area is covered by atmospheric grid cells that contain both land and ocean. In a cell covering both land and ocean the model tends to produce wind speeds that are too low over the ocean, so in the higher resolution experiment we find higher wind speeds over the ocean near the coast. Over the Fram Strait and the Barents Sea, in contrast, the differences in surface winds between the experiments come from large-scale atmospheric circulation changes.

References

Cohen, J., Screen, J. A., Furtado, J. C., Barlow, M., Whittleston, D., Coumou, D., Francis, J., Dethloff, K., Entekhabi, D., Overland, J. & Jones, J. 2014: Recent Arctic amplification and extreme mid-latitude weather. Nature Geoscience, 7(9), 627–637, http://dx.doi.org/10.1038/ngeo2234

Stroeve, J. C., Kattsov, V., Barrett, A., Serreze, M., Pavlova, T., Holland, M., & Meier, W. N., 2012: Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations. Geophysical Research Letters, 39(16), 1–7, https://doi.org/10.1029/2012GL052676

Tsubouchi, T., Bacon, S., Naveira Garabato, A. C., Aksenov, Y., Laxon, S. W., Fahrbach, E., Beszczynska-Möller, A., Hansen, E., Lee, C. M., Ingvaldsen, R. B. 2018: The Arctic Ocean Seasonal Cycles of Heat and Freshwater Fluxes: Observation-Based Inverse Estimates. Journal of Physical Oceanography, 48(9), 2029–2055, https://doi.org/10.1175/JPO-D-17-0239.1

How much energy is available in a moist atmosphere?

Email: b.l.harris@pgr.reading.ac.uk

It is often useful to know how much energy is available to generate motion in the atmosphere, for example in storm tracks or tropical cyclones. To this end, Lorenz (1955) developed the theory of Available Potential Energy (APE), which defines the part of the potential energy in the atmosphere that could be converted into kinetic energy.

To calculate the APE of the atmosphere, we first find the minimum total potential energy that could be obtained by adiabatic motion (no heat exchange between parcels of air). The atmospheric setup that gives this minimum is called the reference state. This is illustrated in Figure 1: in the atmosphere on the left, the denser air will move horizontally into the less dense air, but in the reference state on the right, the atmosphere is stable and no motion would occur. No further kinetic energy is expected to be generated once we reach the reference state, and so the APE of the atmosphere is its total potential energy minus the total potential energy of the reference state.

Figure 1: Construction of the APE reference state for a 2D atmosphere. The purple shading indicates the density of the air; darker colours mean denser air. In the actual state, the density stratification is not completely horizontal, which leads to the air motion shown by the orange arrows. The reference state has a stable, horizontal density stratification, so the air will not move without some disturbance.

If we think about an atmosphere that only varies in the vertical direction, it is easy to find the reference state if the atmosphere is dry. We assume that the atmosphere consists of a number of air parcels, and then all we have to do is place the parcels in order of increasing potential temperature with height. This ensures that density decreases upwards, so we have a stable atmosphere.
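
For a single dry column of equal-mass parcels, this construction really is just a sort. Below is a minimal sketch (an illustration under simple assumptions, not the code used in the papers cited here): it relies on the fact that the total potential plus internal energy of a hydrostatic column equals its column-integrated enthalpy, so the APE is the enthalpy of the actual state minus that of the sorted reference state.

```python
import numpy as np

CP = 1004.0    # specific heat of dry air at constant pressure (J kg-1 K-1)
P0 = 1.0e5     # reference pressure (Pa)
KAPPA = 0.286  # R/cp for dry air
G = 9.81       # gravitational acceleration (m s-2)

def column_enthalpy(theta, p, dp):
    """Enthalpy per unit area of equal-mass dry parcels of pressure thickness dp."""
    T = theta * (p / P0) ** KAPPA      # temperature of each parcel at its pressure
    return np.sum(CP * T * dp / G)

def dry_ape(theta, p, dp):
    """APE of a dry column: actual enthalpy minus that of the sorted reference state.

    theta : potential temperature of each parcel (K)
    p     : pressure of each parcel in the actual state (Pa)
    dp    : pressure thickness of each (equal-mass) parcel (Pa)
    """
    # Reference state: same pressure slots, but theta increasing with height
    # (decreasing with pressure), so the column is statically stable.
    theta_ref = np.sort(theta)[::-1]   # largest theta first ...
    p_ref = np.sort(p)                 # ... paired with the lowest pressure
    actual = column_enthalpy(theta, p, dp)
    reference = column_enthalpy(theta_ref, p_ref, dp)
    return actual - reference          # >= 0 by construction
```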

However, if we introduce water vapour into the atmosphere, the situation becomes more complicated. When water vapour condenses, latent heat is released, which increases the temperature of the air, decreasing its density. One moist air parcel can be denser than another at a certain height, but then less dense if they are lifted to a height where the first parcel condenses but the second one does not. The moist reference state therefore depends on the exact method used to sort the parcels by their density.

It is possible to find the rearrangement of the moist air parcels that gives the minimum possible total potential energy, using the Munkres (1957) sorting algorithm, but this takes a very long time for a large number of parcels. Many different sorting algorithms have therefore been developed that try to find an approximate moist reference state more quickly (the different types of algorithm are explained by Stansifer et al. (2017) and Harris and Tailleux (2018)). However, these sorting algorithms do not try to analyse whether the parcel movements they simulate could actually happen in the real atmosphere – for example, many work by lifting all parcels to a fixed level in the atmosphere, without considering whether the parcels could feasibly move there – and there has been little understanding of whether the reference states they find are accurate.
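
A minimal sketch of the exact (Munkres) approach described above: build a cost matrix whose (i, j) entry is the enthalpy parcel i would contribute if placed at pressure level j, then solve the resulting assignment problem. Here moist_enthalpy is a hypothetical placeholder for whichever moist thermodynamics is used (it must account for any condensation when a parcel is moved adiabatically to level j); scipy's linear_sum_assignment solves the same assignment problem as the Munkres algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minimum_pe_rearrangement(parcels, p_levels, moist_enthalpy):
    """Find the parcel-to-level assignment that minimises total enthalpy.

    parcels        : list of parcel states (e.g. conserved moist variables)
    p_levels       : target pressure levels (Pa), one slot per parcel
    moist_enthalpy : function(parcel, p) -> enthalpy contribution (J m-2) of
                     that parcel moved adiabatically to pressure p
                     (placeholder -- supply your own moist thermodynamics)
    """
    n = len(parcels)
    cost = np.empty((n, n))
    for i, parcel in enumerate(parcels):
        for j, p in enumerate(p_levels):
            cost[i, j] = moist_enthalpy(parcel, p)
    rows, cols = linear_sum_assignment(cost)   # exact, but O(n^3) in cost
    reference_energy = cost[rows, cols].sum()
    return cols, reference_energy              # level assigned to each parcel
```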

As part of my PhD, I have performed the first assessment of these sorting algorithms across a wide range of atmospheric data, using over 3000 soundings from both tropical island and mid-latitude continental locations (Harris and Tailleux, 2018). This showed that whilst some of the sorting algorithms can provide a good estimate of the minimum potential energy reference state, others are prone to computing a rearrangement that actually has a higher potential energy than the original atmosphere.

We also showed that a new algorithm, which does not rely on sorting procedures, can calculate APE with comparable accuracy to the sorting algorithms. This method finds a layer of near-surface buoyant parcels, and performs the rearrangement by lifting the layer upwards until it is no longer buoyant. The success of this method suggests that we do not need to rely on possibly unphysical sorting algorithms to calculate moist APE, but that we can move towards approaches that consider the physical processes generating motion in a moist atmosphere.

References

Harris, B. L. and R. Tailleux, 2018: Assessment of algorithms for computing moist available potential energy. Q. J. R. Meteorol. Soc., 144, 1501–1510, https://doi.org/10.1002/qj.3297

Lorenz, E. N., 1955: Available potential energy and the maintenance of the general circulation. Tellus, 7, 157–167, https://doi.org/10.3402/tellusa.v7i2.8796

Munkres, J., 1957: Algorithms for the Assignment and Transportation Problems. J. Soc. Ind. Appl. Math., 5, 32–38, https://doi.org/10.1137/0105003

Stansifer, E. M., P. A. O’Gorman, and J. I. Holt, 2017: Accurate computation of moist available potential energy with the Munkres algorithm. Q. J. R. Meteorol. Soc., 143, 288–292, https://doi.org/10.1002/qj.2921

Combining multiple streams of environmental data into a soil moisture dataset

Email: a.a.ejigu@pgr.reading.ac.uk

An accurate estimate of soil moisture plays a vital role in a number of scientific research areas. It is important for day-to-day numerical weather prediction, forecasting of extreme events such as floods and droughts, and assessing crop suitability and crop yield for a particular region, to mention a few. However, in-situ measurements of soil moisture are generally expensive to obtain, labour intensive and have sparse spatial coverage. To address this, satellite measurements and models are used as a proxy for ground measurements. Satellite missions such as SMAP (Soil Moisture Active Passive) observe the soil moisture content of the top few centimetres of the earth's surface. Soil moisture estimates from models, on the other hand, are prone to errors arising from the model's representation of the physics or from the parameter values used.

Data assimilation is a method of combining a numerical model with observations, taking account of their error statistics. In principle, the state estimate after data assimilation is expected to be better than either the standalone numerical model estimate of the state or the observations alone. There are a variety of data assimilation methods: variational, sequential and Monte Carlo methods, as well as combinations of these. The Joint UK Land Environment Simulator (JULES) is a community land surface model which simulates several land surface processes, such as the surface energy balance and the carbon cycle, and is used by the Met Office – the UK's national weather service.

My PhD aims to improve the estimate of soil moisture from the JULES model using satellite data from SMAP and the Four-Dimensional Ensemble Variational (4DEnVar) data assimilation method – a combination of variational and ensemble data assimilation – introduced by Liu et al. (2008) and implemented by Pinnington et al. (2019; under review). In addition to the satellite soil moisture data, ground measurements of soil moisture from the Oklahoma Mesonet are also assimilated.
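
To give a flavour of the ensemble-variational machinery that 4DEnVar builds on, here is a minimal single-time sketch in which the analysis increment is a linear combination of ensemble perturbations, with the weights found by minimising the usual quadratic cost function. It assumes a linear observation operator and ignores the time dimension and localisation, so it is an illustration of the idea rather than the LaVEnDAR/JULES implementation.

```python
import numpy as np

def envar_analysis(xb, X_pert, y, H, R):
    """Single-step ensemble-variational analysis in ensemble space.

    xb     : background state, shape (n,)
    X_pert : ensemble perturbations (members minus mean) / sqrt(m-1), shape (n, m)
    y      : observations, shape (p,)
    H      : linear observation operator, shape (p, n)
    R      : observation error covariance, shape (p, p)
    """
    d = y - H @ xb                  # innovation (obs minus background)
    Y = H @ X_pert                  # ensemble perturbations in observation space
    Rinv = np.linalg.inv(R)
    # Minimise J(w) = 0.5 w.w + 0.5 (d - Y w)^T R^-1 (d - Y w)
    A = np.eye(X_pert.shape[1]) + Y.T @ Rinv @ Y
    w = np.linalg.solve(A, Y.T @ Rinv @ d)
    return xb + X_pert @ w          # analysis (posterior) state
```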

Figure 1: Top layer prior, background, posterior and SMAP satellite observed volumetric soil moisture for Antlers station in Oklahoma Mesonet, for the year 2017.
Figure 2: Distance of the prior and posterior soil moisture from the concurrent SMAP soil moisture observations shown in Figure 1.

The time series of soil moisture from the JULES model (prior), the soil moisture obtained after assimilation (posterior) and the observed soil moisture for the Antlers station in the Mesonet are shown in Figure 1. Figure 2 shows the distance of the prior and posterior soil moisture estimates from the assimilated observations. The smaller this distance, the better, since the primary objective of data assimilation is to fit the model trajectory optimally to the observations and the background. From Figures 1 and 2 we can conclude that the posterior soil moisture estimates are closer to the observations than the prior. Looking at particular months, the prior soil moisture is closer to the observations than the posterior around January and October. This is because 4DEnVar considers all the observations when calculating an optimal trajectory that fits both observations and background, so it is not surprising to see the prior closer to the observations than the posterior in some places.

The data assimilation experiments were repeated for different Mesonet sites with varying soil type, topography and climate, and with different soil moisture datasets. In all the experiments, the posterior soil moisture estimates were closer to the observations than the prior estimates. As verification, a soil moisture hindcast was calculated for the year 2018 and compared to the observations. Figure 3 shows SMAP soil moisture data assimilated into the JULES model and hindcast for the following year.

Figure 3: Hindcast soil moisture for 2018, based on the posterior soil texture obtained from assimilating Mesonet soil moisture data for 2017.

References

Liu, C., Q. Xiao, and B. Wang, 2008: An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test. Mon. Weather Rev., 136 (9), 3363–3373., https://doi.org/10.1175/2008MWR2312.1

Pinnington, E., T. Quaife, A. Lawless, K. Williams, T. Arkebauer, and D. Scoby, 2019: The Land Variational Ensemble Data Assimilation fRamework: LaVEnDAR. Geosci. Model Dev. Discuss., https://doi.org/10.5194/gmd-2019-60

Simulating measurements from the ISMAR radiometer using a new light scattering approximation

Email: karina.mccusker@pgr.reading.ac.uk

It is widely known that clouds pose a lot of difficulties for both weather and climate modelling, particularly when ice is present. The ice water content (IWC) of a cloud is defined as the mass of ice per unit volume of air. The integral of this quantity over a column is referred to as the ice water path (IWP) and is considered one of the essential climate variables by the World Meteorological Organisation. Currently there are large inconsistencies in the IWP retrieved from different satellites, and there is also a large spread in the amount produced by different climate models (Eliasson et al., 2011).
A major part of the problem is the lack of reliable global measurements of cloud ice. For this reason, the Ice Cloud Imager (ICI) will be launched in 2022. ICI will be the first instrument in space specifically designed to measure cloud ice, with channels ranging from 183 to 664 GHz. It is expected that the combination of frequencies available will allow for more accurate estimations of IWP and particle size. A radiometer called ISMAR has been developed by the UK Met Office and ESA as an airborne demonstrator for ICI, flying on the FAAM BAe-146 research aircraft shown in Fig. 1.

Figure 1: The Facility for Airborne Atmospheric Measurements (FAAM) aircraft which carries the ISMAR radiometer.

As radiation passes through cloud, it is scattered in all directions. Remote sensing instruments measure the scattered field in some way; either by detecting some of the scattered waves, or by detecting how much radiation has been removed from the incident field as a result of scattering. The retrieval of cloud ice properties therefore relies on accurate scattering models. A variety of numerical methods currently exist to simulate scattering by ice particles with complex geometries. In a very broad sense, these can be divided into two categories:
1. Methods that are accurate but computationally expensive;
2. Methods that are computationally efficient but inaccurate.

My PhD has involved developing a new approximation for aggregates which falls somewhere in between the two extremes. The method is called the Independent Monomer Approximation (IMA). So far, tests have shown that it performs well for small particle sizes, with particularly impressive results for aggregates of dendritic monomers.

Radiometers such as ICI and ISMAR convert measured radiation into brightness temperatures (Tb), i.e. the temperature of a theoretical blackbody that would emit an equivalent amount of radiation. Lower values of Tb correspond to more ice in the clouds, as a greater amount of radiation from the lower atmosphere is scattered on its way to the instrument’s detector (i.e. a brightness temperature “depression” is observed over thick ice cloud). Generally, the interpretation of measurements from remote-sensing instruments requires many assumptions to be made about the shapes and distributions of particles within the cloud. However, by comparing Tb at orthogonal horizontal (H) and vertical (V) polarisations, we can gain some information about the size, shape, and orientation of ice particles within the cloud. If large V-H polarimetric differences are measured, it is indicative of horizontally oriented particles, whereas random orientation produces less of a difference in signal. According to Gong and Wu (2017), neglecting the polarimetric signal could result in errors of up to 30% in IWP retrievals. Examples of Tb depressions and the corresponding V-H polarimetric differences can be seen in Fig. 2. In the work shown here, we explore this particular case further.
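
For reference, converting a measured spectral radiance into a brightness temperature is simply an inversion of the Planck function at the channel frequency. The sketch below is an idealised monochromatic illustration; a real radiometer channel integrates over a bandpass, and the 243 GHz value in the comment is just an example of an ISMAR-like dual-polarised channel, not a statement about the instrument's exact passbands.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
KB = 1.381e-23  # Boltzmann constant (J K-1)
C = 2.998e8     # speed of light (m s-1)

def brightness_temperature(radiance, freq_hz):
    """Invert the Planck function: spectral radiance (W m-2 sr-1 Hz-1) -> Tb (K)."""
    return (H * freq_hz / KB) / np.log(1.0 + 2.0 * H * freq_hz**3 / (C**2 * radiance))

# Example: V-H polarimetric difference for a 243 GHz channel,
# dTb = brightness_temperature(I_v, 243e9) - brightness_temperature(I_h, 243e9)
```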

Figure 2: (a) ISMAR measured brightness temperatures, showing a depression (decrease in Tb) caused by thick cloud; (b) Polarimetric V-H brightness temperature difference, with significant values reaching almost 10 K.

Using the ISMAR instrument, we can test scattering models that could be used within retrieval algorithms for ICI. We want to find out whether the IMA method is capable of reproducing realistic brightness temperature depressions, and whether it captures the polarimetric signal. To do this, we look at a case study from the NAWDEX (North Atlantic Waveguide and Downstream Impact Experiment) flying campaign. The observations from the ISMAR radiometer were collected on 14 October 2016 off the north-west coast of Scotland, over a frontal ice cloud. Three different aircraft took measurements from above the cloud during this case, which means that we have coincident data from ISMAR and from radars at two frequencies, 35 GHz and 95 GHz. This particular case saw large V-H polarimetric differences reaching almost 10 K, as seen in Fig. 2(b). We will look at the applicability of the IMA method to simulating the polarisation signal measured by ISMAR, using the Atmospheric Radiative Transfer Simulator (ARTS).

For this study, we need to construct a model of the atmosphere to be used in the radiative transfer simulations. The nice thing about this case is that the FAAM aircraft also flew through the cloud, meaning we have measurements from both in-situ and remote-sensing instruments. Consequently, we can design our model cloud using realistic assumptions. We try to match the atmospheric state at the time of the in-situ observations by deriving mass-size relationships specific to this case, and generating particles that follow the derived relationship for each layer. The particles were generated using the aggregation model of Westbrook et al. (2004).

Due to the depth of the cloud, it would not be possible to obtain an adequate representation of the atmospheric conditions using a single averaged layer. Hence, we modelled our atmosphere based on the aircraft profiles, using 7 different layers of ice with depths of approximately 1 km each. These layers are located between altitudes of 2 km and 9 km. Below 2 km, the Marshall-Palmer drop size distribution was used to represent rain, with an estimated rain rate of 1-2 mm/hr taken from the Met Office radar. The general structure of our model atmosphere can be seen in Fig. 3, along with some of the particles used in each layer. Note that this is a crude representation and the figure shows only a few examples; in the simulations we use between 46 and 62 different aggregate realisations in each layer.
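
As a quick illustration of what the Marshall-Palmer assumption below 2 km implies, the sketch below evaluates the classic exponential drop size distribution for the radar-estimated rain rate (parameter values in the code comments). This is a generic illustration of the distribution itself, not the ARTS rain set-up used in the study.

```python
import numpy as np

def marshall_palmer(D_mm, rain_rate_mm_per_hr):
    """Drop number concentration N(D) in m-3 mm-1 for diameter D (mm).

    Classic Marshall-Palmer parameters: intercept N0 = 8000 m-3 mm-1 and
    slope = 4.1 * R**-0.21 mm-1, where R is the rain rate in mm/hr.
    """
    N0 = 8000.0
    slope = 4.1 * rain_rate_mm_per_hr ** (-0.21)
    return N0 * np.exp(-slope * D_mm)

# Drop spectrum for the ~1.5 mm/hr rain rate estimated from the Met Office radar
D = np.linspace(0.1, 5.0, 50)   # drop diameters (mm)
n_of_D = marshall_palmer(D, 1.5)
```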

Figure 3: Examples of particles used in our model atmosphere. We represent the ice cloud using 3 layers of columnar aggregates and 4 layers of dendritic aggregates, and include a distribution of rain beneath the cloud.

To test our model atmosphere, we simulated the radar reflectivities at 35 GHz and 95 GHz using the particle models generated for this case. This allowed us to refine our model until sufficient accuracy was achieved. Then we used the IMA method to calculate the scattering quantities required by the ARTS radiative transfer model. These were implemented into ARTS in order to simulate the ISMAR polarisation observations.
Fig. 4 shows the simulated brightness temperatures using different layers of our modelled atmosphere, i.e. starting with the clear-sky case and gradually increasing the cloud amount. The simulations using the IMA scattering method in the ARTS model were compared to the measurements from ISMAR shown in Fig. 2. Looking at the solid lines in Fig. 4, it can be seen that the aggregates of columns and dendrites simulate the brightness temperature depression well, but do not reproduce the V-H polarisation signal. We therefore decided to include some horizontally aligned single dendrites, which were not part of our original atmospheric model. We chose these particles because they tend to have a greater polarisation signal than aggregates, and there was evidence in the cloud particle imagery that they were present in the cloud during the time of interest. We placed these particles at the cloud base, without changing the ice water content of the model. The results from that experiment are shown by the diagonal crosses in Fig. 4. It is clear that adding single dendrites allows us to simulate a considerably larger polarimetric signal, closely matching the ISMAR measurements. Using only aggregates of columns and dendrites gives a V-H polarimetric difference of 1.8 K, whereas the inclusion of single dendrites increases this value to 8.4 K.

Figure 4: Simulated brightness temperatures using different layers of our model atmosphere. Along the x-axis we start with the clear-sky case, followed by the addition of rain. Then we add one layer of cloud at a time, starting from the top layer of columnar aggregates.

To conclude, we have used our new light scattering approximation (IMA) along with the ARTS radiative transfer model to simulate brightness temperature measurements from the ISMAR radiometer. Although the measured brightness temperature depressions can generally be reproduced using the IMA scattering method, the polarisation difference is very sensitive to the assumed particle shape for a given ice water path. Therefore, to obtain good retrievals from ICI, it is important to represent the cloud as accurately as possible. Utilising the polarisation information available from the instrument could provide a way to infer realistic particle shapes, thereby reducing the need to make unrealistic assumptions.

References

Eliasson, S., S. A. Buehler, M. Milz, P. Eriksson, and V. O. John, 2011: Assessing observed and modelled spatial distributions of ice water path using satellite data. Atmos. Chem. Phys., 11, 375-391.

Gong, J., and D. L. Wu, 2017: Microphysical properties of frozen particles inferred from Global Precipitation Measurement (GPM) Microwave Imager (GMI) polarimetric measurements. Atmos. Chem. Phys., 17, 2741-2757.

Westbrook, C. D., R. C. Ball, P. R. Field, and A. J. Heymsfield, 2004: A theory of growth by differential sedimentation with application to snowflake formation. Phys. Rev. E, 70, 021403.

Extending the predictability of flood hazard at the global scale

Email: rebecca.emerton@reading.ac.uk

When I started my PhD, there were no global scale operational seasonal forecasts of river flow or flood hazard. Global overviews of upcoming flood events are key for organisations working at the global scale, from water resources management to humanitarian aid, and for regions where no other local or national forecasts are available. While GloFAS (the Global Flood Awareness System, run by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the European Commission Joint Research Centre (JRC) as part of the Copernicus Emergency Management Services) was producing operational, openly-available flood forecasts out to 30 days ahead, there was a need for more extended-range forecast information. Often, due to a lack of hydrological forecasts, seasonal rainfall forecasts are used as a proxy for flood hazard – however, the link between precipitation and floodiness is nonlinear, and recent research has shown that seasonal rainfall forecasts are not necessarily the best indicator of potential flood hazard. The aim of my PhD research was to look into ways in which we could provide earlier warning information, several weeks to months ahead, using hydrological analysis in addition to the meteorology.

Flooding in Trujillo, Peru, March 2017 (Photo: Presidencia Perú on Twitter)

Broadly speaking, there are two key ways in which to provide early warning information on seasonal timescales: (1) through statistical analysis based on large-scale climate variability and teleconnections, and (2) by producing dynamical seasonal forecasts using coupled ocean-atmosphere GCMs. Over the past 4.5 years, I worked on providing hydrologically-relevant seasonal forecast products using these two approaches, at the global scale. This blog post will give a quick overview of the two new forecast products we produced as part of this research!

Can we use El Niño to predict flood hazard?

ENSO (the El Niño-Southern Oscillation) is known to influence river flow and flooding across much of the globe, and often statistical historical probabilities of extreme precipitation during El Niño and La Niña (the extremes of ENSO climate variability) are used to provide information on likely flood impacts. Because of its global influence on weather and climate, we decided to assess whether ENSO could be used as a predictor of flood hazard at the global scale, by assessing the links between ENSO and river flow globally and estimating historical probabilities for high and low river flow equivalent to those already used for meteorological variables.

With a lack of sufficient river flow observations across much of the globe, we needed to use a reanalysis dataset – but global reanalysis datasets for river flow are few and far between, and none extended beyond ~40 years (a sample of ≤10 El Niños and ≤13 La Niñas). We ended up producing a 20th-century global river flow reconstruction by forcing the CaMa-Flood hydrological model with ECMWF's ERA-20CM atmospheric reconstruction, giving a 10-member river flow dataset covering 1901-2010, which we called ERA-20CM-R.


Using this dataset, we calculated the percentage of past El Niño and La Niña events during which the monthly mean river flow exceeded a high flow threshold (the 75th percentile of the 110-year climatology) or fell below a low flow threshold (the 25th percentile), for each month of an El Niño / La Niña. This percentage is then taken as the probability that high or low flow will be observed in future El Niño/La Niña events. Maps of these probabilities are shown above, for El Niño, and all maps for both El Niño and La Niña can be found here. When comparing to the same historical probabilities calculated for precipitation, it is evident that additional information can be gained from considering the hydrology. For example, the River Nile in northern Africa is likely to see low river flow even though the surrounding area is likely to see more precipitation, because the river is influenced more by changes in precipitation upstream. In places that are likely to see more precipitation but in the form of snow, there would be no influence on river flow or flood hazard during the time when more precipitation is expected. However, several months later, there may be no additional precipitation expected, but there may be increased flood hazard due to the melting of more snow than normal – so we are able to see a lagged influence of ENSO on river flow in some regions.
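
A minimal sketch of how such a conditional probability can be computed from a monthly river flow reconstruction like ERA-20CM-R (the function and array names are illustrative assumptions, not the actual analysis code):

```python
import numpy as np

def high_flow_probability(flow, enso_event_years, all_years, month):
    """Percentage of ENSO events with monthly mean flow above the 75th percentile.

    flow             : monthly mean river flow at one location, shape (n_years, 12)
    enso_event_years : list of years classed as El Nino (or La Nina) events
    all_years        : list of the years covered by the reconstruction (e.g. 1901-2010)
    month            : calendar month of the event to evaluate (1-12)
    """
    clim = flow[:, month - 1]                        # climatology for that month
    threshold = np.percentile(clim, 75)              # high-flow threshold
    idx = [all_years.index(y) for y in enso_event_years]
    exceed = clim[idx] > threshold
    return 100.0 * exceed.mean()                     # taken as P(high flow | event)
```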

While there are locations where these probabilities are high and can provide a useful forecast of hydrological extremes, across much of the globe, the probabilities are lower and much more uncertain (see here for more info on uncertainty in these forecasts) than might be useful for decision-making purposes.

Providing openly-available seasonal river flow forecasts, globally

For the next ‘chapter’ of my PhD, we looked into the feasibility of providing seasonal forecasts of river flow at the global scale. Providing global-scale flood forecasts in the medium-range has only become possible in recent years, and extended-range flood forecasting was highlighted as a grand challenge and likely future development in hydro-meteorological forecasting.

To do this, I worked with Ervin Zsoter at ECMWF, to drive the GloFAS hydrological model (Lisflood) with reforecasts from ECMWF’s latest seasonal forecasting system, SEAS5, to produce seasonal forecasts of river flow. We also forced Lisflood with the new ERA5 reanalysis, to produce an ERA5-R river flow reanalysis with which to initialise Lisflood, and to provide a climatology. The system set-up is shown in the flowchart below.


I also worked with colleagues at ECMWF to design forecast products for a GloFAS seasonal outlook, based on a combination of features from the GloFAS flood forecasts, and the EFAS (the European Flood Awareness System) seasonal outlook, and incorporating feedback from users of EFAS.

After ~1 year of working on setting the system up and finalising the forecast products, including a four-month research placement at ECMWF, the first GloFAS-Seasonal forecast was released in November 2017, with the release of SEAS5. GloFAS-Seasonal is now running operationally at ECMWF, providing forecasts of high and low weekly-averaged river flow for the global river network, up to 4 months ahead, with 3 new forecast layers available through the GloFAS interface. These provide a forecast overview for 307 major river basins, a map of the forecast for the entire river network at the sub-basin scale, and ensemble hydrographs at thousands of locations across the globe (which change with each forecast depending on forecast probabilities). New forecasts are produced once per month and released on the 10th of each month. You can find more information on each of the different forecast layers and the system set-up here, and you can access the (openly available) forecasts here. ERA5-R, ERA-20CM-R and the GloFAS-Seasonal reforecasts are also all freely available – just get in touch! GloFAS-Seasonal will continue to be developed by ECMWF and the JRC, and has already been updated to v2.0, including a calibrated version of the hydrological model.

Screenshot of the GloFAS seasonal outlook at www.globalfloods.eu

So, over the course of my PhD, we developed two new seasonal forecasts for hydrological extremes, at the global scale. You may be wondering whether they’re skilful, or in fact, which one provides the most useful forecasts! For information on the skill or ‘potential usefulness’ of GloFAS-Seasonal, head to our paper, and stay tuned for a paper coming soon (hopefully! [update: this paper has just been accepted and can be accessed online here]) on the ‘most useful approach for forecasting hydrological extremes during El Niño’, in which we compare the skill of the two forecasts at predicting observed high and low flow events during El Niño.

 

With thanks to my PhD supervisors & co-authors:

Hannah Cloke1, Liz Stephens1, Florian Pappenberger2, Steve Woolnough1, Ervin Zsoter2, Peter Salamon3, Louise Arnal1,2, Christel Prudhomme2, Davide Muraro3

1University of Reading, 2ECMWF, 3European Commission Joint Research Centre

Modelling windstorm losses in a climate model

Extratropical cyclones cause vast amounts of damage across Europe throughout the winter seasons. The damage from these cyclones mainly comes from the associated severe winds. The most intense cyclones have gusts of over 200 kilometres per hour, resulting in substantial damage to property and forestry; for example, the Great Storm of 1987 uprooted approximately 15 million trees in one night. The average loss from these storms is over $2 billion per year (Schwierz et al. 2010), second only to Atlantic hurricanes globally in terms of insured losses from natural hazards. However, the most severe cyclones, such as Lothar (26/12/1999) and Kyrill (18/1/2007), can cause losses in excess of $10 billion (Munich Re, 2016). One property of extratropical cyclones is that they have a tendency to cluster (to arrive in groups – see the example in Figure 1), and in such cases these impacts can be greatly increased. For example, Windstorm Lothar was followed just one day later by Windstorm Martin, and the two storms combined caused losses of over $15 billion. The large-scale atmospheric dynamics associated with clustering events have been discussed in a previous blog post and also in the scientific literature (Pinto et al., 2014; Priestley et al. 2017).

Figure 1. Composite visible satellite image from 11 February 2014 of 4 extratropical cyclones over the North Atlantic (circled) (NASA).

A large part of my PhD has involved investigating exactly how important the clustering of cyclones is for losses across Europe during the winter. In order to do this, I have used 918 years of high resolution coupled climate model data from HiGEM (Shaffrey et al., 2017), which provides a huge number of winter seasons and cyclone events for analysis.

In order to understand how clustering affects losses, I first of all need to know how much loss/damage is associated with each individual cyclone. This is done using a measure called the Storm Severity Index (SSI – Leckebusch et al., 2008), which is a proxy for losses based on the 10-metre wind field of the cyclone events. The SSI is a good proxy for windstorm loss. Firstly, it scales the wind speed at any particular location by the 98th percentile of the wind speed climatology in that location. This scaling ensures that only the most severe winds at any one point are considered, as different locations have different thresholds for what would be classed as 'damaging'. This exceedance above the 98th percentile is then raised to the power of 3, because wind damage is a highly non-linear function of wind speed. Finally, we apply a population density weighting to our calculations. This weighting is required because a hypothetical gust of 40 m/s across London will cause considerably more damage than the same gust across far northern Scandinavia, and population density is a good approximation for the density of insured property. An example of the SSI calculated for Windstorm Lothar is shown in Figure 2.
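
Putting these three ingredients together, a minimal sketch of the SSI for a single storm footprint might look like the following (grid shapes and variable names are illustrative; see Leckebusch et al. (2008) for the full definition):

```python
import numpy as np

def storm_severity_index(wind, wind_98, population):
    """Population-weighted Storm Severity Index for one storm footprint.

    wind       : 10-m wind speed footprint of the storm (m s-1), shape (ny, nx)
    wind_98    : local 98th-percentile wind climatology (m s-1), shape (ny, nx)
    population : population density weighting, shape (ny, nx)
    """
    # Only winds exceeding the local 98th percentile contribute
    exceedance = np.maximum(wind / wind_98 - 1.0, 0.0)
    # Cube the exceedance (damage is highly non-linear in wind speed)
    # and weight by population density as a proxy for insured property
    return np.sum(population * exceedance ** 3)
```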

 

Figure 2. (a) Wind footprint of Windstorm Lothar (25-27/12/1999) – 10 metre wind speed in coloured contours (m/s). Black line is the track of Lothar with points every 6 hours (black dots). (b) The SSI field of Windstorm Lothar. All data from ERA-Interim.

 

From Figure 2b you can see that most of the damage from Windstorm Lothar was concentrated across central/northern France and also across southern Germany. This is because the winds here were most extreme relative to the local climatology. Even though the winds are highest over the North Atlantic Ocean, the lack of insured property, and a much higher climatological winter mean wind speed, mean that we do not observe losses/damage from Windstorm Lothar in these locations.

Figure 3. The average SSI for 918 years of HiGEM data.

 

I can apply the SSI to all of the individual cyclone events in HiGEM and therefore construct a climatology of where windstorm losses occur. Figure 3 shows the average loss across all 918 years of HiGEM. You can see that the losses are concentrated in a band extending eastwards from the southern UK towards Poland. This mainly covers Great Britain, Belgium, the Netherlands, France, Germany, and Denmark.

This blog post introduces my methodology of calculating and investigating the losses associated with the winter season extratropical cyclones. Work in Priestley et al. (2018) uses this methodology to investigate the role of clustering on winter windstorm losses.

This work has been funded by the SCENARIO NERC DTP and also co-sponsored by Aon Benfield.

 

Email: m.d.k.priestley@pgr.reading.ac.uk

 

References

Leckebusch, G. C., Renggli, D., and Ulbrich, U. 2008. Development and application of an objective storm severity measure for the Northeast Atlantic region. Meteorologische Zeitschrift. https://doi.org/10.1127/0941-2948/2008/0323.

Munich Re. 2016. Loss events in Europe 1980 – 2015. 10 costliest winter storms ordered by overall losses. https://www.munichre.com/touch/naturalhazards/en/natcatservice/significant-natural-catastrophes/index.html

Pinto, J. G., Gómara, I., Masato, G., Dacre, H. F., Woollings, T., and Caballero, R. 2014. Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe. Journal of Geophysical Research: Atmospheres. https://doi.org/10.1002/2014JD022305.

Priestley, M. D. K., Dacre, H. F., Shaffrey, L. C., Hodges, K. I., and Pinto, J. G. 2018. The role of European windstorm clustering for extreme seasonal losses as determined from a high resolution climate model, Nat. Hazards Earth Syst. Sci. Discuss., https://doi.org/10.5194/nhess-2018-165, in review.

Priestley, M. D. K., Pinto, J. G., Dacre, H. F., and Shaffrey, L. C. 2017. Rossby wave breaking, the upper level jet, and serial clustering of extratropical cyclones in western Europe. Geophysical Research Letters. https://doi.org/10.1002/2016GL071277.

Schwierz, C., Köllner-Heck, P., Zenklusen Mutter, E. et al. 2010. Modelling European winter wind storm losses in current and future climate. Climatic Change. https://doi.org/10.1007/s10584-009-9712-1.

Shaffrey, L. C., Hodson, D., Robson, J., Stevens, D., Hawkins, E., Polo, I., Stevens, I., Sutton, R. T., Lister, G., Iwi, A., et al. 2017. Decadal predictions with the HiGEM high resolution global coupled climate model: description and basic evaluation, Climate Dynamics, https://doi.org/10.1007/s00382-016-3075-x.

Baroclinic and Barotropic Annular Modes of Variability

Email: l.boljka@pgr.reading.ac.uk

Modes of variability are climatological features that have global effects on regional climate and weather. They are identified through their spatial structures and the associated time series (so-called EOF/PC analysis, which finds the patterns explaining the largest variance of a given atmospheric field). Examples of modes of variability include the El Niño Southern Oscillation, the Madden-Julian Oscillation, the North Atlantic Oscillation, the annular modes, etc. The latter are named after the "annulus" (a region bounded by two concentric circles) as they occur in the Earth's midlatitudes (a band of atmosphere bounded by the polar and tropical regions, Fig. 1), and are the most important modes of midlatitude variability, generally representing 20-30% of the variability in a field.
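
For readers unfamiliar with EOF/PC analysis, here is a minimal sketch of extracting the leading pattern and its time series via a singular value decomposition (latitude weighting and other practical details are omitted for brevity):

```python
import numpy as np

def leading_eof(field):
    """Leading EOF and PC of a 2-D array of shape (time, space).

    Returns the spatial pattern of largest variance, its time series,
    and the fraction of total variance it explains.
    """
    anom = field - field.mean(axis=0)           # remove the time mean
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    eof1 = Vt[0]                                 # leading spatial pattern
    pc1 = U[:, 0] * s[0]                         # associated time series (PC)
    explained = s[0] ** 2 / np.sum(s ** 2)       # fraction of variance explained
    return eof1, pc1, explained
```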

Figure 1: Southern Hemisphere midlatitudes (red concentric circles) as an annulus, the region where annular modes have the largest impacts. Source.

There are two types of annular modes: baroclinic (based on eddy kinetic energy, a proxy for eddy activity and an indicator of storm-track intensity) and barotropic (based on zonal mean zonal wind, representing north-south shifts of the jet stream) (Fig. 2). The latter are usually referred to as the Southern (SAM, or Antarctic Oscillation) or Northern (NAM, or Arctic Oscillation) Annular Mode, depending on the hemisphere; they generally have a quasi-barotropic (vertically uniform) structure and impact temperature variations, sea-ice distribution, and storm paths in both hemispheres on timescales of about 10 days. The former is referred to as the BAM (baroclinic annular mode); it exhibits a strong vertical structure associated with strong vertical wind shear (baroclinicity), and its impacts are yet to be determined (e.g. Thompson and Barnes 2014, Marshall et al. 2017). These two modes of variability are linked to the key processes of midlatitude tropospheric dynamics involved in the growth (baroclinic processes) and decay (barotropic processes) of midlatitude storms. The growth stage of midlatitude storms is conventionally associated with an increase in eddy kinetic energy (EKE) and the decay stage with a decrease in EKE.

Figure 2: Barotropic annular mode (right), based on zonal wind (contours), associated with eddy momentum flux (shading); Baroclinic annular mode (left), based on eddy kinetic energy (contours), associated with eddy heat flux (shading). Source: Thompson and Woodworth (2014).

However, recent observational studies (e.g. Thompson and Woodworth 2014) have suggested a decoupling of the baroclinic and barotropic components of atmospheric variability in the Southern Hemisphere (i.e. no correlation between the BAM and SAM) and a simpler formulation of the EKE budget that depends only on eddy heat fluxes and the BAM (Thompson et al. 2017). Using cross-spectrum analysis, we empirically test the validity of the suggested relationship between EKE and heat flux at different timescales (Boljka et al. 2018). Two different regimes are identified in Fig. 3: 1) a regime where the EKE-eddy heat flux relationship holds well (periods longer than 10 days; intermediate timescales); and 2) a regime where this relationship breaks down (periods shorter than 10 days; synoptic timescales). For the relationship to hold (by construction), the imaginary part of the cross-spectrum must follow the angular frequency line and the real part must be constant. This is only true at the intermediate timescales. Hence, the suggested decoupling of baroclinic and barotropic components found in Thompson and Woodworth (2014) only works at intermediate timescales. This is consistent with our theoretical model (Boljka and Shepherd 2018), which predicts decoupling under synoptic temporal and spatial averaging. At synoptic timescales, processes such as barotropic momentum fluxes (closely related to latitudinal shifts of the jet stream) contribute to the variability in EKE. This is consistent with the dynamics of storms that occur on timescales shorter than 10 days (e.g. Simmons and Hoskins 1978). This is further discussed in Boljka et al. (2018).
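
The cross-spectrum diagnostic itself is straightforward to compute; here is a minimal sketch using scipy's csd on daily-mean EKE and eddy heat flux time series (the segment length and sampling interval are illustrative choices, not those of the paper):

```python
import numpy as np
from scipy.signal import csd

def eke_heatflux_cross_spectrum(eke, heat_flux, dt_days=1.0):
    """Cross-spectrum between EKE and eddy heat flux time series."""
    fs = 1.0 / dt_days                      # sampling frequency (cycles per day)
    freq, Pxy = csd(eke, heat_flux, fs=fs, nperseg=256)
    omega = 2.0 * np.pi * freq              # angular frequency (rad per day)
    # Compare the imaginary part against the angular-frequency line and check
    # whether the real part is roughly constant, as in Fig. 3 of the text.
    return omega, Pxy.real, Pxy.imag
```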

EKE_hflux_cross_spectrum_blog
Figure 3: Imaginary (black solid line) and real (grey solid line) parts of the cross-spectrum between EKE and eddy heat flux. The black dashed line shows the angular frequency (if the tested relationship holds, the imaginary part of the cross-spectrum follows this line); the red line distinguishes between the two frequency regimes discussed in the text. Source: Boljka et al. (2018).

References

Boljka, L., and T. G. Shepherd, 2018: A multiscale asymptotic theory of extratropical wave, mean-flow interaction. J. Atmos. Sci., in press.

Boljka, L., T. G. Shepherd, and M. Blackburn, 2018: On the coupling between barotropic and baroclinic modes of extratropical atmospheric variability. J. Atmos. Sci., in review.

Marshall, G. J., D. W. J. Thompson, and M. R. van den Broeke, 2017: The signature of Southern Hemisphere atmospheric circulation patterns in Antarctic precipitation. Geophys. Res. Lett., 44, 11,580–11,589.

Simmons, A. J., and B. J. Hoskins, 1978: The life cycles of some nonlinear baroclinic waves. J. Atmos. Sci., 35, 414–432.

Thompson, D. W. J., and E. A. Barnes, 2014: Periodic variability in the large-scale Southern Hemisphere atmospheric circulation. Science, 343, 641–645.

Thompson, D. W. J., B. R. Crow, and E. A. Barnes, 2017: Intraseasonal periodicity in the Southern Hemisphere circulation on regional spatial scales. J. Atmos. Sci., 74, 865–877.

Thompson, D. W. J., and J. D. Woodworth, 2014: Barotropic and baroclinic annular variability in the Southern Hemisphere. J. Atmos. Sci., 71, 1480–1493.