APPLICATE General Assembly and Early Career Science event


From 28th January to 1st February I attended the APPLICATE (Advanced Prediction in Polar regions and beyond: modelling, observing system design and LInkages associated with a Changing Arctic climaTE (bold choice)) General Assembly and Early Career Science event at ECMWF in Reading. APPLICATE is one of the EU Horizon 2020 projects, with the aim of improving weather and climate prediction in the polar regions. The Arctic is a region of rapid change, with decreases in sea ice extent (Stroeve et al., 2012) and changes to ecosystems (Post et al., 2009). These changes are leading to increased interest in the Arctic for business opportunities such as the opening of shipping routes (Aksenov et al., 2017). There is also much ongoing work on the link between changes in the Arctic and mid-latitude weather (Cohen et al., 2014); however, there is still much uncertainty. These changes could have large impacts on human life, so a concerted scientific effort is needed to develop our understanding of Arctic processes and how they link to the mid-latitudes. This is the gap that APPLICATE aims to fill.

The overarching goal of APPLICATE is to develop enhanced predictive capacity for weather and climate in the Arctic and beyond, and to determine the influence of Arctic climate change on Northern Hemisphere mid-latitudes, for the benefit of policy makers, businesses and society.

APPLICATE Goals & Objectives

Attending the General Assembly was a great opportunity to get an insight into how large scientific projects work. The project is made up of different work packages, each with a different focus. Within these work packages there is a set of specific tasks and deliverables spread throughout the project. At the GA there were a number of breakout sessions where the progress of the working groups was discussed. It was interesting to see how these discussions worked and how issues, such as the delay to the CMIP6 experiments, are handled. The General Assembly also allows the different work packages to communicate with each other to plan ahead, and for results to be shared.

An overview of APPLICATE’s management structure, taken from: https://applicate.eu/about-the-project/project-structure-and-governance

One of the big questions APPLICATE is trying to address is the link between Arctic sea ice and the Northern Hemisphere mid-latitudes. Many of the presentations covered different aspects of this, such as how including Arctic observations in forecasts affects their skill over Eurasia. There were also initial results from some of the Polar Amplification Model Intercomparison Project (PAMIP) experiments, a project that APPLICATE has helped coordinate.

Attendees of the Early Career Science event co-organised with APECS

At the end of the week there was the Early Career Science Event, which consisted of a number of talks on softer skills. One of the most interesting activities was based around engaging with stakeholders. To try to understand the different needs of a variety of stakeholders in the Arctic (from local communities to shipping companies), we had to lobby for different policies on their behalf. This was also a great chance to meet other early career scientists working in the field and get to know each other a bit more.

What a difference a day makes, heavy snow getting the ECMWF’s ducks in the polar spirit.

Email: sally.woodhouse@pgr.reading.ac.uk

References

Aksenov, Y. et al., 2017. On the future navigability of Arctic sea routes: High-resolution projections of the Arctic Ocean and sea ice. Marine Policy, 75, pp.300–317.

Cohen, J. et al., 2014. Recent Arctic amplification and extreme mid-latitude weather. Nature Geoscience, 7(9), pp.627–637.

Post, E. et al., 2009. Ecological Dynamics Across the Arctic Associated with Recent Climate Change. Science, 325, pp.1355–1358.

Stroeve, J.C. et al., 2012. Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations. Geophysical Research Letters, 39(16), pp.1–7.

Evaluating aerosol forecasts in London

Email: e.l.warren@pgr.reading.ac.uk

Aerosols in urban areas can greatly impact visibility, radiation budgets and our health (Chen et al., 2015). Aerosols are the liquid and solid particles suspended in the air that, alongside noxious gases like nitrogen dioxide, make up the pollution in cities that we often hear about on the news – breaking safety limits in cities across the globe, from London to Beijing. Air quality researchers try to monitor and predict aerosols to inform local councils, so they can plan and reduce local emissions.

Figure 1: Smog over London (Evening Standard, 2016).

Recently, large numbers of LiDARs (Light Detection and Ranging) have been deployed across Europe and elsewhere – in part to observe aerosols. They effectively shoot beams of light into the atmosphere, which reflect off atmospheric constituents like aerosols. From each beam, many measurements of reflectance are taken very quickly over time – and as light travels further with more time, an entire profile of reflectance can be constructed. Because the light is attenuated as it travels through the atmosphere, the reflected light is commonly called attenuated backscatter (β). In urban areas, measurements away from the surface like these are sorely needed (Barlow, 2014), so these instruments could be extremely useful.

When it comes to predicting aerosols, numerical weather prediction (NWP) models are increasingly being considered as an option. However, the models themselves are very computationally expensive to run, so they tend to have only a simple representation of aerosol. For example, for explicitly resolved aerosol, the Met Office UKV model (1.5 km grid length) just has a dry mass of aerosol [kg kg-1] (Clark et al., 2008). That’s all. It gets transported around by the model dynamics, but any other aerosol characteristics, from size to number, need to be parameterised from the mass to limit computational costs. But how do we know whether the model’s aerosol estimates are actually correct? A direct comparison between NWP aerosol and β is not possible because, fundamentally, they are different variables – so to bridge the gap, a forward operator is needed.
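
The profile construction works by time-of-flight: the pulse travels out and back at the speed of light, so each return-time sample maps to a height (a "range gate"). A minimal sketch (the function name and sampling here are illustrative, not any instrument's actual processing):

```python
# Convert LiDAR return times to the heights the light reflected from.
C = 3.0e8  # approximate speed of light [m s-1]

def range_gates(sample_times):
    """Return time t corresponds to range c * t / 2 [m]:
    the factor of 2 accounts for the out-and-back path."""
    return [C * t / 2.0 for t in sample_times]
```

So a sample recorded one microsecond after the pulse left corresponds to a reflection from roughly 150 m up.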

In my PhD I helped develop such a forward operator (aerFO; Warren et al., 2018). It’s a model that takes aerosol mass (and relative humidity) from NWP model output and estimates the attenuated backscatter that would result (βm). βm can then be compared directly with the observed attenuated backscatter (βo) to evaluate the NWP aerosol output (e.g. to see whether the aerosol is too high or too low). The aerFO was also made to be computationally cheap and flexible, so if you had more information than just the mass, the aerFO would be able to use it!
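
To make the chain from model output to βm concrete, here is a toy sketch of such a forward operator. It is not the real aerFO: the hygroscopic growth law, mass-extinction efficiency and lidar ratio below are placeholder assumptions, chosen only to show the structure (swell the aerosol with relative humidity, convert mass to extinction, extinction to backscatter, then apply two-way attenuation).

```python
import numpy as np

def forward_operator_sketch(m, rh, z, lidar_ratio=40.0, mass_ext=3.0):
    """Toy forward operator: aerosol dry mass -> attenuated backscatter.

    m  : aerosol dry mass mixing ratio profile [kg kg-1]
    rh : relative humidity profile (0-1)
    z  : height of each model level [m]
    All coefficients are illustrative, not the aerFO's.
    """
    rh = np.minimum(rh, 0.95)                # cap RH below saturation
    growth = (1.0 - rh) ** -0.5              # simple hygroscopic swelling factor
    alpha = mass_ext * m * growth            # extinction coefficient [m-1]
    beta = alpha / lidar_ratio               # backscatter via assumed lidar ratio
    tau = np.cumsum(alpha * np.gradient(z))  # optical depth up to each level
    return beta * np.exp(-2.0 * tau)         # two-way (Beer-Lambert) attenuation
```

A real operator additionally has to parameterise the aerosol size distribution and number concentration from the mass, which is exactly where the UKV's single-mass representation forces assumptions.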

Among the aerFO’s several uses (Warren et al., 2018, n.d.) was the evaluation of NWP model output. Figure 2 shows the aerFO in action, with a comparison between βm and observed attenuated backscatter (βo) measured at 905 nm from a ceilometer (a type of LiDAR) on 14th April 2015 at Marylebone Road in London. βm was far too high in the morning on this day. We found that the original scheme the UKV used to parameterise the urban surface effects in London was leading to a persistent cold bias in the morning. The cold bias led to a relative humidity that was too high, so the aerFO condensed more water than necessary onto the aerosol particles, causing them to swell up too much; bigger particles mean a bigger βm, hence the overestimation. Not only was the relative humidity too high, the boundary layer in the NWP model was also developing too late in the day. Normally, when the surface warms up enough, convection starts, which acts to mix aerosol up in the boundary layer and dilute it near the surface. However, the cold bias delayed this boundary layer development, so the aerosol concentration near the surface remained high for too long. More mass led the aerFO to parameterise larger sizes and total numbers of particles, and so to overestimate βm. This cold bias effect appeared across several cases using the old scheme but was notably smaller for cases using a newer urban surface scheme called MORUSES (Met Office – Reading Urban Surface Exchange Scheme). One of the main aims of MORUSES was to improve the representation of energy transfer in urban areas, and at least to us it seemed to be doing a better job!

Figure 2: Vertical profiles of attenuated backscatter [m−1 sr−1] (log scale) that are (a, g) observed (βo), with estimated mixing layer height (red crosses; Kotthaus and Grimmond, 2018), and (b, h) forward modelled (βm) using the aerFO. (c, i) Attenuated backscatter difference (βm – βo) calculated using the hourly βm vertical profile and the vertical profile of βo nearest in time; (d, j) aerosol mass mixing ratio (m) [μg kg−1]; (e, k) relative humidity (RH) [%]; and (f, l) air temperature (T) [°C] at MR on 14th April 2015.

References

Barlow, J.F., 2014. Progress in observing and modelling the urban boundary layer. Urban Clim. 10, 216–240. https://doi.org/10.1016/j.uclim.2014.03.011

Chen, C.H., Chan, C.C., Chen, B.Y., Cheng, T.J., Leon Guo, Y., 2015. Effects of particulate air pollution and ozone on lung function in non-asthmatic children. Environ. Res. 137, 40–48. https://doi.org/10.1016/j.envres.2014.11.021

Clark, P.A., Harcourt, S.A., Macpherson, B., Mathison, C.T., Cusack, S., Naylor, M., 2008. Prediction of visibility and aerosol within the operational Met Office Unified Model. I: Model formulation and variational assimilation. Q. J. R. Meteorol. Soc. 134, 1801–1816. https://doi.org/10.1002/qj.318

Warren, E., Charlton-Perez, C., Kotthaus, S., Lean, H., Ballard, S., Hopkin, E., Grimmond, S., 2018. Evaluation of forward-modelled attenuated backscatter using an urban ceilometer network in London under clear-sky conditions. Atmos. Environ. 191, 532–547. https://doi.org/10.1016/j.atmosenv.2018.04.045

Warren, E., Charlton-Perez, C., Kotthaus, S., Marenco, F., Ryder, C., Johnson, B., Lean, H., Ballard, S., Grimmond, S., n.d. Observed aerosol characteristics to improve forward-modelled attenuated backscatter. Atmos. Environ. Submitted


Quantifying the skill of convection-permitting ensemble forecasts for the sea-breeze occurrence

Email: carlo.cafaro@pgr.reading.ac.uk

On the afternoon of 16th August 2004, the village of Boscastle on the north coast of Cornwall was severely damaged by flooding (Golding et al., 2005). This is one example of high-impact hazardous weather associated with small meso- and convective-scale weather phenomena, the prediction of which can be uncertain even a few hours ahead (Lorenz, 1969; Hohenegger and Schar, 2007). Taking advantage of increased computer power (e.g. https://www.metoffice.gov.uk/research/technology/supercomputer), many operational and research forecasting centres have been motivated to introduce convection-permitting ensemble prediction systems (CP-EPSs), in order to give timely warnings of severe weather.

However, despite being an exciting new forecasting technology, CP-EPSs place a heavy burden on the computational resources of forecasting centres. They are usually run on limited areas with initial and boundary conditions provided by global lower resolution ensembles (LR-EPS). They also produce large amounts of data which needs to be rapidly digested and utilized by operational forecasters. Assessing whether the convective-scale ensemble is likely to provide useful additional information is key to successful real-time utilisation of this data. Similarly, knowing where equivalent information can be gained (even if partially) from LR-EPS using statistical/dynamical post-processing both extends lead time (due to faster production time) and also potentially provides information in regions where no convective-scale ensemble is available.

There have been many studies on the verification of CP-EPSs (Klasa et al., 2018, Hagelin et al., 2017, Barrett et al., 2016, Beck et al., 2016, amongst others), but none of them has dealt with quantifying the skill gained by CP-EPSs in comparison with LR-EPSs, when fully exploited, for specific weather phenomena and over a long enough evaluation period.

In my PhD, I have focused on the sea-breeze phenomenon for different reasons:

  1. Sea breezes have an impact on air quality by advecting pollutants, on heat stress by providing relief on hot days, and on convection by providing a trigger, especially when interacting with other mesoscale flows (see for example figure 1, or figures 6 and 7 in Golding et al., 2005).
  2. Sea breezes occur on small spatio-temporal scales which are properly resolved at convection-permitting resolutions, but their occurrence is still influenced by synoptic-scale conditions, which are resolved by the global LR-EPS.

Blog_Figure1
Figure 1: MODIS visible of the southeast of Italy on 6th June 2018, 1020 UTC. This shows thunderstorms occurring in the middle of the peninsula, probably triggered by sea-breezes.
Source: worldview.earthdata.nasa.gov

Therefore this study aims to investigate whether the sea breeze is predictable by only knowing a few predictors or whether the better representation of fine-scale structures (e.g. orography, topography) by the CP-EPS implies a better sea-breeze prediction.

In order to estimate probabilistic forecasts from both models, two different methods were applied. A novel tracking algorithm for the identification of the sea-breeze front, in the domain represented in figure 2, was applied to the CP-EPS data. A Bayesian model was instead used to estimate the probability of a sea breeze conditioned on two LR-EPS predictors and trained on CP-EPS data. More details can be found in Cafaro et al. (2018).

Cafaro_Fig2
Figure 2: A map showing the orography over the south UK domain. Orography data are from MOGREPS-UK. The solid box encloses the sub-domain used in this study with red dots indicating the location of synoptic weather stations. Source: Cafaro et al. (2018)

The results of the probabilistic verification are shown in figure 3. Reliability (REL) and resolution (RES) terms have been computed by decomposing the Brier score (BS) and the Ignorance (IGN) score. Finally, score differences (BSD and IG) have been computed to quantify any gain in skill by the CP-EPS. Figure 3 shows that the CP-EPS forecast is significantly more skilful than the Bayesian forecast. Nevertheless, the Bayesian forecast has more resolution than a climatological forecast (figure 3e,f), which has no resolution by construction.
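
For reference, the reliability and resolution terms come from the standard decomposition BS = REL − RES + UNC (small reliability is good, large resolution is good). A minimal sketch with simple probability binning, not the code used in the study:

```python
import numpy as np

def brier_decomposition(p, o, n_bins=10):
    """Decompose the Brier score: BS = REL - RES + UNC.

    p : forecast probabilities in [0, 1]
    o : binary outcomes (1 = sea breeze observed)
    Exact when all forecasts within a bin are identical.
    """
    p, o = np.asarray(p, float), np.asarray(o, float)
    n, obar = len(p), o.mean()
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    rel = res = 0.0
    for k in range(n_bins):
        idx = bins == k
        if not idx.any():
            continue
        pk, ok = p[idx].mean(), o[idx].mean()
        rel += idx.sum() * (pk - ok) ** 2    # reliability: calibration error
        res += idx.sum() * (ok - obar) ** 2  # resolution: departure from climatology
    unc = obar * (1.0 - obar)                # uncertainty of the climatology
    return np.mean((p - o) ** 2), rel / n, res / n, unc
```

A climatological forecast (p equal to the observed base rate everywhere) has zero resolution by construction, which is the benchmark the Bayesian forecast beats in figure 3e,f.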

Cafaro_Fig11
Figure 3: (a)-(d) Reliability and resolution terms calculated for both forecasts (green for the CP-EPS forecast and blue for the LR-EPS). (e) and (f) represent the Brier score difference (BSD) and Information gain (IG) respectively. Error bars represent the 95% confidence interval. Positive values of BSD and IG indicate that the CP-EPS forecast is more skilful. Source: Cafaro et al. (2018)

This study shows the additional skill provided by the Met Office convection-permitting ensemble forecast for sea-breeze prediction. The ability of CP-EPSs to resolve mesoscale dynamical features is thus shown to be important: two large-scale predictors relevant for the sea breeze are, on their own, not sufficient for a skilful prediction.

Both methodologies can, in principle, be applied to other locations in the world, and it is thus hoped they could be used operationally.

References:

Barrett, A. I., Gray, S. L., Kirshbaum, D. J., Roberts, N. M., Schultz, D. M., and Fairman J. G. (2016). The utility of convection-permitting ensembles for the prediction of stationary convective bands. Monthly Weather Review, 144(3):1093–1114, doi: 10.1175/MWR-D-15-0148.1

Beck, J., Bouttier, F., Wiegand, L., Gebhardt, C., Eagle, C., and Roberts, N. (2016). Development and verification of two convection-allowing multi-model ensembles over Western Europe. Quarterly Journal of the Royal Meteorological Society, 142(700):2808–2826, doi: 10.1002/qj.2870

Cafaro C., Frame T. H. A., Methven J., Roberts N. and Broecker J. (2018), The added value of convection-permitting ensemble forecasts of sea breeze compared to a Bayesian forecast driven by the global ensemble, Quarterly Journal of the Royal Meteorological Society., under review.

Golding, B. , Clark, P. and May, B. (2005), The Boscastle flood: Meteorological analysis of the conditions leading to flooding on 16 August 2004. Weather, 60: 230-235, doi: 10.1256/wea.71.05

Hagelin, S., Son, J., Swinbank, R., McCabe, A., Roberts, N., and Tennant, W. (2017). The Met Office convective-scale ensemble, MOGREPS-UK. Quarterly Journal of the Royal Meteorological Society, 143(708):2846–2861, doi: 10.1002/qj.3135

Hohenegger, C. and Schar, C. (2007). Atmospheric predictability at synoptic versus cloud-resolving scales. Bulletin of the American Meteorological Society, 88(11):1783–1794, doi: 10.1175/BAMS-88-11-1783

Klasa, C., Arpagaus, M., Walser, A., and Wernli, H. (2018). An evaluation of the convection-permitting ensemble cosmo-e for three contrasting precipitation events in Switzerland. Quarterly Journal of the Royal Meteorological Society, 144(712):744–764, doi: 10.1002/qj.3245

Lorenz, E. N. (1969). Predictability of a flow which possesses many scales of motion. Tellus, 21:289–307, doi: 10.1111/j.2153-3490.1969.tb00444.x

Atmospheric blocking: why is it so hard to predict?

Atmospheric blocks are nearly stationary large-scale flow features that effectively block the prevailing westerly winds and redirect mobile cyclones. They are typically characterised by a synoptic-scale, quasi-stationary high pressure system in the midlatitudes that can remain over a region for several weeks. Blocking events can cause extreme weather: heat waves in summer and cold spells in winter, and the impacts associated with these events can escalate due to a block’s persistence. Because of this, it is important that we can forecast blocking accurately. However, atmospheric blocking has been shown to be the cause of some of the poorest forecasts in recent years. Looking at all occasions when the ECMWF model experienced a period of very low forecast skill, Rodwell et al. (2013) found that the average flow pattern for which these forecasts verified was an easily-distinguishable atmospheric blocking pattern (Figure 1). But why are blocks so hard to forecast?

Fig_1_blogjacob
Figure 1:  Average verifying 500 hPa geopotential height (Z500) field for occasions when the ECMWF model experienced very low skill. From Rodwell et al. (2013).

There are several reasons why forecasting blocking is a challenge. Firstly, there is no universally accepted definition of what constitutes a block. Several different flow configurations that could be referred to as blocks are shown in Figure 2. The variety in flow patterns used to define blocking brings with it a variety of mechanisms that are dynamically important for blocks developing in a forecast (Woollings et al. 2018), so many phenomena must be well represented in a model for it to forecast all blocking events accurately. Secondly, there is no complete dynamical theory for block onset and maintenance – we do not know whether a process key for blocking dynamics is missing from the equation set solved by numerical weather prediction models and is contributing to the forecast error. Finally, many of the known mechanisms associated with block onset and maintenance are also known sources of model uncertainty. For example, diabatic processes within extratropical cyclones have been shown to contribute substantially to blocking events (Pfahl et al. 2015), and their parameterisation has been shown to affect medium-range forecasts of ridge-building events (Martínez-Alvarado et al. 2015).
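
To make the definitional ambiguity concrete, here is a sketch of one widely used instantaneous index, in the spirit of Tibaldi and Molteni (1990): a longitude counts as blocked when the meridional Z500 gradient reverses near 60°N while remaining strongly negative further poleward. The grid handling here is simplified, and operational versions also scan a range of latitude offsets Δ and require persistence in time.

```python
import numpy as np

def is_blocked(z500, lats, delta=0.0):
    """Tibaldi-Molteni style blocking test at a single longitude.

    z500 : Z500 profile [m] on ascending latitudes `lats` [deg N].
    """
    phi_n, phi_0, phi_s = 80.0 + delta, 60.0 + delta, 40.0 + delta
    z = lambda phi: np.interp(phi, lats, z500)
    ghgs = (z(phi_0) - z(phi_s)) / (phi_0 - phi_s)  # southern gradient [m/deg]
    ghgn = (z(phi_n) - z(phi_0)) / (phi_n - phi_0)  # northern gradient [m/deg]
    # Reversed (easterly) flow to the south, strong westerlies to the north
    return bool(ghgs > 0.0 and ghgn < -10.0)
```

Counting the fraction of days a longitude satisfies such a test is how blocking frequencies like those in Figure 3 are typically built up.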

Fig_2_blogjacob
Figure 2: Different flow patterns, shown using Z500 (contours), that have been defined as blocks. From Woollings et al. (2018).

We do, however, know some ways to improve the representation of blocking: increase the horizontal resolution of the model (Schiemann et al. 2017); improve the parameterisation of subgrid physical processes (Jung et al. 2010); remove underlying model biases (Scaife et al. 2010); and, in my PhD, we found that improvements to a model’s dynamical core (the part of the model used to solve the governing equations) can also improve the medium-range forecast of blocking. In Figure 3, the frequency of blocking that occurred during two northern hemisphere winters is shown for the ERA-Interim reanalysis and three operational weather forecast centres (the ECMWF, the Met Office (UKMO) and the Korea Meteorological Administration (KMA)). Both KMA and UKMO use the Met Office Unified Model – however, before the winter of 2014/15 the UKMO updated the model to use a new dynamical core whilst KMA continued to use the original. This means that for the 2013/14 winter the UKMO and KMA forecasts are from the same model with the same dynamical core, whilst for the 2014/15 winter they are from the same model but with different dynamical cores. The clear improvement in the UKMO forecast in 2014/15 can hence be attributed to the new dynamical core. For a full analysis of this improvement see Martínez-Alvarado et al. (2018).

Fig_3_blogjacob
Figure 3: The frequency of blocking during winter in the northern hemisphere in ERA-Interim (grey shading) and in seven-day forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF), the Met Office (UKMO) and the Korea Meteorological Administration (KMA). Box plots show the spread in the ensemble forecast from each centre.

In the remainder of my PhD I aim to investigate the link between errors in forecasts of blocking with the representation of upstream cyclones. I am particularly interested to see if the parameterisation of diabatic processes (a known source of model uncertainty) could be causing the downstream error in Rossby wave amplification and blocking.

Email: j.maddison@pgr.reading.ac.uk.

References:

Rodwell, M. J., and Coauthors, 2013: Characteristics of occasional poor medium-range weather forecasts for Europe. Bulletin of the American Meteorological Society, 94 (9), 1393–1405.

Woollings, T., and Coauthors, 2018: Blocking and its response to climate change. Current Climate Change Reports, 4 (3), 287–300.

Pfahl, S., C. Schwierz, M. Croci-Maspoli, C. Grams, and H. Wernli, 2015: Importance of latent heat release in ascending air streams for atmospheric blocking. Nature Geoscience, 8 (8), 610–614.

Martínez-Alvarado, O., E. Madonna, S. Gray, and H. Joos, 2015: A route to systematic error in forecasts of Rossby waves. Quart. J. Roy. Meteor. Soc., 142, 196–210.

Martínez-Alvarado, O., and R. Plant, 2014: Parametrized diabatic processes in numerical simulations of an extratropical cyclone. Quart. J. Roy. Meteor. Soc., 140 (682), 1742–1755.

Scaife, A. A., T. Woollings, J. Knight, G. Martin, and T. Hinton, 2010: Atmospheric blocking and mean biases in climate models. Journal of Climate, 23 (23), 6143–6152.

Schiemann, R., and Coauthors, 2017: The resolution sensitivity of northern hemisphere blocking in four 25-km atmospheric global circulation models. Journal of Climate, 30 (1), 337–358.

Jung, T., and Coauthors, 2010: The ECMWF model climate: Recent progress through improved physical parametrizations. Quart. J. Roy. Meteor. Soc., 136 (650), 1145–1160.

Communicating uncertainties associated with anthropogenic climate change

Email: j.f.talib@pgr.reading.ac.uk

This week Prof. Ed Hawkins from the Department of Meteorology and NCAS-Climate gave a University of Reading public lecture discussing the science of climate change. A plethora of research was presented, all highlighting that humans are changing our climate. As scientists we can study the greenhouse effect in laboratories, observe increasing temperatures across the majority of the planet, or simulate the impact of human actions on the Earth’s climate using climate models.

simulating_temperature_rise
Figure 1. Global-mean surface temperature in observations (solid black line), and climate model simulations with (red shading) and without (blue shading) human actions. Shown during Prof. Ed Hawkins’ University of Reading Public Lecture.

Fig. 1, presented in Ed Hawkins’ lecture, shows the global-mean temperature rise associated with human activities. Two sets of climate simulations were performed to produce this plot. The first set, shown in blue, are simulations driven solely by natural forcings, i.e. variations in radiation from the sun and volcanic eruptions. The second, shown in red, are simulations which include both natural forcings and the forcing associated with greenhouse gas emissions from human activities. The shading indicates the spread amongst climate models, whilst the observed global-mean temperature is shown by the solid black line. From this plot it is evident that the climate models attribute the rising temperatures over the 20th and 21st centuries to human activity. Climate simulations without greenhouse gas emissions from human activity show a much smaller rise, if any, in global-mean temperature.

However, whilst there is much agreement amongst climate scientists and climate models that our planet is warming due to human activity, understanding the local impacts of anthropogenic climate change comes with its own uncertainties.

For example, my PhD research aims to understand what controls the location and intensity of the Intertropical Convergence Zone (ITCZ), a discontinuous, zonal precipitation band in the tropics that migrates meridionally over the seasonal cycle (see Fig. 2). The ITCZ is associated with wet and dry seasons over Africa, the development of the South Asian Monsoon and the life cycle of tropical cyclones. However, our climate models currently struggle to simulate characteristics of the ITCZ. This, alongside other issues, results in climate models differing in the response of tropical precipitation to anthropogenic climate change.

animation
Figure 2. Animation showing the seasonal cycle of the observed monthly-mean precipitation rates between 1979-2014.

Figure 3 is a plot taken from a report by the Intergovernmental Panel on Climate Change (Climate Change 2013: The Physical Science Basis). Both maps show the projected change, from climate model simulations, in Northern Hemisphere winter precipitation between the years 2016 to 2035 (left) and 2081 to 2100 (right), relative to 1986 to 2005, under a scenario where minimal action is taken to limit greenhouse gas emissions (RCP8.5). Whilst the projected changes in precipitation are an interesting topic in their own right, I’d like to draw your attention to the lines and dots annotated on each map. The lines indicate where the majority of climate models agree on a small change; the map on the left shows that most climate models agree on small changes in precipitation over the majority of the globe over the next two decades. Dots, meanwhile, indicate where climate models agree on a substantial change in Northern Hemisphere winter precipitation. The map on the right shows that across the tropics there are substantial areas where models disagree on changes in precipitation due to anthropogenic climate change. Over the majority of Africa, South America and the Maritime Continent, models disagree on the future of precipitation under climate change.
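
The agreement logic behind those lines and dots can be sketched roughly as follows. The thresholds here are illustrative placeholders: the actual IPCC criteria measure each model's change against its own internal variability, which this toy version does not do.

```python
import numpy as np

def agreement_masks(change, small_thresh=0.1, agree_frac=0.9):
    """Illustrative IPCC-style model-agreement maps.

    change : array (n_models, nlat, nlon) of projected precipitation changes.
    Returns boolean maps: (models agree the change is small,
                           models agree on a substantial change of one sign).
    """
    n_models = change.shape[0]
    small = np.abs(change) < small_thresh
    agree_small = small.mean(axis=0) >= agree_frac         # the 'lines'
    same_sign = np.abs(np.sign(change).sum(axis=0)) >= agree_frac * n_models
    agree_large = (~small).mean(axis=0) >= agree_frac
    return agree_small, agree_large & same_sign            # the latter: 'dots'
```

Grid cells that fall in neither mask are exactly the regions of disagreement discussed above, where models cannot settle on the future of precipitation.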

IPCC_plot
Figure 3. Changes in Northern Hemisphere Winter Precipitation between 2016 to 2035 (left) and 2081 to 2100 (right) relative to 1986 to 2005 under a scenario with minimal reduction in anthropogenic greenhouse gas emission. Taken from IPCC – Climate Change 2013: The Physical Science Basis.

How should scientists present these uncertainties?

I must confess that I am nowhere near an expert in communicating uncertainties, however I hope some of my thoughts will encourage a discussion amongst scientists and users of climate data. Here are some of the ideas I’ve picked up on during my PhD and thoughts associated with them:

  • Climate model average – Take the average amongst climate model simulations. With this method though you take the risk of smoothing out large positive and negative trends. The climate model average is also not a “true” projection of changes due to anthropogenic climate change.
  • Every climate model outcome – Show the range of climate model projections to the user. Here you face the risk of presenting the user with too much climate data. The user may also trust certain model outputs which suit their own agenda.
  • Storylines – This idea was first shown to me in a paper by Zappa and Shepherd (2017). You present a series of storylines, each highlighting the key processes associated with variability in the regional weather pattern of interest. Each change in the set of processes leads to a different climate model projection. However, once again, the user of the climate model data has to reach their own conclusion on which projection to take action on.
  • Probabilities with climate projections – Typically with short- and medium-range weather forecasts, probabilities are used to support the user. These probabilities are generated by re-running the simulations, each with different initial conditions or a slight change in model physics, to see what percentage of simulations agree on the model output. With climate model simulations, however, it is more difficult to associate probabilities with projections. How do you generate the probabilities? Climate models share similar methods of representing the physics of our atmosphere, and you don’t want the probabilities to be inflated simply because many similarly-built models agree. You could base the probabilities on how well each climate model simulates the past, but just because a model simulates the past correctly doesn’t mean it will correctly simulate the forcing in the future.

There is much more that can be said about communicating uncertainty among climate model projections – a challenge which will continue for several decades. As climate scientists we can sometimes fall into the trap of concentrating on the uncertainties. We need to keep presenting the work that we are confident about, to ensure that the right action is taken to mitigate anthropogenic climate change.

Modelling windstorm losses in a climate model

Extratropical cyclones cause vast amounts of damage across Europe throughout the winter seasons. The damage from these cyclones mainly comes from their associated severe winds. The most intense cyclones have gusts of over 200 kilometres per hour, resulting in substantial damage to property and forestry; for example, the Great Storm of 1987 uprooted approximately 15 million trees in one night. The average loss from these storms is over $2 billion per year (Schwierz et al. 2010), second only globally to Atlantic hurricanes in terms of insured losses from natural hazards. However, the most severe cyclones, such as Lothar (26/12/1999) and Kyrill (18/1/2007), can cause losses in excess of $10 billion (Munich Re, 2016). One property of extratropical cyclones is their tendency to cluster (to arrive in groups – see the example in Figure 1), and in such cases these impacts can be greatly increased. For example, Windstorm Lothar was followed just one day later by Windstorm Martin, and the two storms combined caused losses of over $15 billion. The large-scale atmospheric dynamics associated with clustering events have been discussed in a previous blog post and also in the scientific literature (Pinto et al., 2014; Priestley et al. 2017).

Picture1
Figure 1. Composite visible satellite image from 11 February 2014 of 4 extratropical cyclones over the North Atlantic (circled) (NASA).

A large part of my PhD has involved investigating exactly how important the clustering of cyclones is for losses across Europe during the winter. To do this, I have used 918 years of high-resolution coupled climate model data from HiGEM (Shaffrey et al., 2017), which provides a very large number of winter seasons and cyclone events for analysis.

To understand how clustering affects losses, I first need to know how much loss/damage is associated with each individual cyclone. This is done using the Storm Severity Index (SSI – Leckebusch et al., 2008), a proxy for losses based on the 10-metre wind field of each cyclone event, which works as follows. First, the wind speed at each location is scaled by the 98th percentile of the wind speed climatology at that location. This scaling ensures that only the most severe winds at any one point are considered, as what counts as 'damaging' differs from place to place. The exceedance above the 98th percentile is then cubed, because damage is a highly non-linear function of wind speed. Finally, a population density weighting is applied. This weighting is required because a hypothetical gust of 40 m/s across London will cause considerably more damage than the same gust across far northern Scandinavia, and population density is a good approximation for the density of insured property. An example of the SSI calculated for Windstorm Lothar is shown in Figure 2.
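The three steps above can be written down as a minimal sketch of the SSI; the 2×2 grid of wind speeds, thresholds and population densities here are toy numbers, not the HiGEM or ERA-Interim fields used in the actual analysis.

```python
import numpy as np

def storm_severity_index(wind, v98, pop_density):
    """Storm Severity Index in the spirit of Leckebusch et al. (2008):
    cube of the exceedance of the local 98th-percentile wind speed,
    weighted by population density and summed over all grid points.
    All inputs are (nlat, nlon) arrays."""
    # Only winds above the local 98th percentile contribute.
    exceedance = np.maximum(wind / v98 - 1.0, 0.0)
    return float(np.sum(pop_density * exceedance**3))

# Toy 2x2 domain: wind speed (m/s), local 98th percentile (m/s),
# and a stand-in for population density.
wind = np.array([[30.0, 15.0], [25.0, 10.0]])
v98  = np.array([[20.0, 20.0], [20.0, 20.0]])
pop  = np.array([[1000.0, 10.0], [500.0, 10.0]])
print(storm_severity_index(wind, v98, pop))  # → 132.8125
```

Note how the second column contributes nothing: its winds never exceed the local 98th percentile, mirroring the point that strong winds alone do not imply damage.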


Figure 2. (a) Wind footprint of Windstorm Lothar (25-27/12/1999) – 10 metre wind speed in coloured contours (m/s). Black line is the track of Lothar with points every 6 hours (black dots). (b) The SSI field of Windstorm Lothar. All data from ERA-Interim.


From Figure 2b you can see that most of the damage from Windstorm Lothar was concentrated across central/northern France and southern Germany. This is because the winds there were most extreme relative to the local climatology. Even though the winds are strongest over the North Atlantic Ocean, the lack of insured property, and a much higher climatological winter-mean wind speed, mean that we do not observe losses/damage from Windstorm Lothar in these locations.

Figure 3. The average SSI for 918 years of HiGEM data.


I can apply the SSI to all of the individual cyclone events in HiGEM and thereby construct a climatology of where windstorm losses occur. Figure 3 shows the average loss across all 918 years of HiGEM. The losses are concentrated in a band extending eastward from the southern UK towards Poland, mainly covering Great Britain, Belgium, the Netherlands, France, Germany, and Denmark.

This blog post introduces my methodology for calculating and investigating the losses associated with winter-season extratropical cyclones. Priestley et al. (2018) uses this methodology to investigate the role of clustering in winter windstorm losses.

This work has been funded by the SCENARIO NERC DTP and also co-sponsored by Aon Benfield.


Email: m.d.k.priestley@pgr.reading.ac.uk


References

Leckebusch, G. C., Renggli, D., and Ulbrich, U. 2008. Development and application of an objective storm severity measure for the Northeast Atlantic region. Meteorologische Zeitschrift. https://doi.org/10.1127/0941-2948/2008/0323.

Munich Re. 2016. Loss events in Europe 1980 – 2015. 10 costliest winter storms ordered by overall losses. https://www.munichre.com/touch/naturalhazards/en/natcatservice/significant-natural-catastrophes/index.html

Pinto, J. G., Gómara, I., Masato, G., Dacre, H. F., Woollings, T., and Caballero, R. 2014. Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe. Journal of Geophysical Research: Atmospheres. https://doi.org/10.1002/2014JD022305.

Priestley, M. D. K., Dacre, H. F., Shaffrey, L. C., Hodges, K. I., and Pinto, J. G. 2018. The role of European windstorm clustering for extreme seasonal losses as determined from a high resolution climate model, Nat. Hazards Earth Syst. Sci. Discuss., https://doi.org/10.5194/nhess-2018-165, in review.

Priestley, M. D. K., Pinto, J. G., Dacre, H. F., and Shaffrey, L. C. 2017. Rossby wave breaking, the upper level jet, and serial clustering of extratropical cyclones in western Europe. Geophysical Research Letters. https://doi.org/10.1002/2016GL071277.

Schwierz, C., Köllner-Heck, P., Zenklusen Mutter, E. et al. 2010. Modelling European winter wind storm losses in current and future climate. Climatic Change. https://doi.org/10.1007/s10584-009-9712-1.

Shaffrey, L. C., Hodson, D., Robson, J., Stevens, D., Hawkins, E., Polo, I., Stevens, I., Sutton, R. T., Lister, G., Iwi, A., et al. 2017. Decadal predictions with the HiGEM high resolution global coupled climate model: description and basic evaluation, Climate Dynamics, https://doi.org/10.1007/s00382-016-3075-x.

The Role of the Cloud Radiative Effect in the Sensitivity of the Intertropical Convergence Zone to Convective Mixing

Email: j.f.talib@pgr.reading.ac.uk

Talib, J., S.J. Woolnough, N.P. Klingaman, and C.E. Holloway, 2018: The Role of the Cloud Radiative Effect in the Sensitivity of the Intertropical Convergence Zone to Convective Mixing. J. Climate, 31, 6821–6838, https://doi.org/10.1175/JCLI-D-17-0794.1

Rainfall in the tropics is commonly associated with the Intertropical Convergence Zone (ITCZ), a discontinuous line of convergence collocated with the ascending branch of the Hadley circulation, where strong moist convection leads to high rainfall. What controls the location and intensity of the ITCZ remains a fundamental question in climate science.

Figure 1: Annual-mean, zonal-mean tropical precipitation (mm day-1) from Global Precipitation Climatology Project (GPCP, observations, solid black line) and CMIP5 (current coupled models) output. Dashed line indicates CMIP5 ensemble mean.

In current and previous generations of climate models, the ITCZ is too intense in the Southern Hemisphere, resulting in two annual-mean, zonal-mean tropical precipitation maxima, one in each hemisphere (Figure 1). Even if we take the same atmospheric models and couple them to a world with only an ocean surface (an aquaplanet) with prescribed sea surface temperatures (SSTs), different models simulate different ITCZs (Blackburn et al., 2013).

Within a climate model, parameterisations are used to represent processes that are too small-scale or complex to be explicitly resolved. Parameterisation schemes are used to simulate a variety of processes, including boundary-layer processes, radiative fluxes and atmospheric chemistry. However, my work, along with a plethora of other studies, shows that the representation of the ITCZ is sensitive to the convective parameterisation scheme (Figure 2a), which simulates the life cycle of clouds within a model grid box.

We show this sensitivity by altering the convective mixing rate in prescribed-SST aquaplanet simulations. The convective mixing rate determines how strongly a convective parcel mixes with the environmental air: the greater the mixing rate, the more quickly a parcel becomes similar to its environment, for fixed parcel properties.
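The effect of the mixing rate can be illustrated with a simple entraining-parcel toy model, d(phi)/dz = -eps (phi - phi_env), where eps plays the role of the mixing rate; this is only a sketch, not the parameterisation used in the simulations, and all values are invented.

```python
import numpy as np

def entraining_parcel(phi0, phi_env, eps, dz=100.0):
    """Integrate d(phi)/dz = -eps * (phi - phi_env) upward on levels
    spaced dz metres apart, for a parcel property phi (e.g. potential
    temperature) and entrainment rate eps (per metre)."""
    phi = np.empty_like(phi_env, dtype=float)
    phi[0] = phi0
    for k in range(1, phi_env.size):
        phi[k] = phi[k - 1] - eps * dz * (phi[k - 1] - phi_env[k - 1])
    return phi

phi_env = np.full(40, 300.0)                      # uniform environment
weak   = entraining_parcel(310.0, phi_env, eps=1e-4)
strong = entraining_parcel(310.0, phi_env, eps=1e-3)

# With stronger mixing, the parcel loses its identity much sooner.
print(abs(weak[-1] - 300.0), abs(strong[-1] - 300.0))
```

The strongly entraining parcel converges to the environmental value within a few kilometres, which is why higher mixing rates make deep convection harder to sustain.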

Figure 2: Zonal-mean, time-mean (a) precipitation rates (mm day-1) and (b) AEI (W m-2) in simulations where the convective mixing rate is varied.

In our study, the structure of the simulated ITCZ is sensitive to the convective mixing rate. Low convective mixing rates simulate a double ITCZ (two precipitation maxima, orange and red lines in Figure 2a), and high convective mixing rates simulate a single ITCZ (blue and black lines).

We then relate these ITCZ structures to the atmospheric energy input (AEI). The AEI is the amount of energy left in the atmosphere after accounting for the top-of-atmosphere and surface energy budgets. We conclude, similarly to Bischoff and Schneider (2016), that when the AEI is positive (negative) at the equator, a single (double) ITCZ is simulated (Figure 2b). When the AEI is negative at the equator, energy must be transported towards the equator to maintain equilibrium; from a mean-circulation perspective, this takes place in a double-ITCZ scenario (Figure 3). A positive AEI at the equator is associated with poleward energy transport and a single ITCZ.
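The AEI bookkeeping can be made explicit: it is the net downward energy flux at the top of the atmosphere minus the net downward energy flux at the surface. The flux values below are illustrative equatorial numbers chosen for the example, not output from our simulations.

```python
def atmospheric_energy_input(net_sw_toa, olr, net_sw_sfc, net_lw_sfc,
                             sensible, latent):
    """AEI (W m-2): net downward flux at the top of the atmosphere minus
    net downward flux at the surface. sensible and latent are the upward
    (surface-to-atmosphere) turbulent heat fluxes."""
    net_toa = net_sw_toa - olr                              # TOA budget
    net_sfc = net_sw_sfc + net_lw_sfc - sensible - latent   # surface budget
    return net_toa - net_sfc

# Illustrative equatorial values (W m-2):
aei = atmospheric_energy_input(net_sw_toa=320.0, olr=250.0,
                               net_sw_sfc=200.0, net_lw_sfc=-50.0,
                               sensible=10.0, latent=120.0)
print(aei)  # → 50.0: positive AEI at the equator, favouring a single ITCZ
```

In this toy example the atmospheric column gains 50 W m-2, which by the argument above must be exported poleward, consistent with the single-ITCZ regime.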

Figure 3: Schematic of a single (left) and double (right) ITCZ. Blue arrows denote energy transport. In a single-ITCZ scenario more energy is transported in the upper branches of the Hadley circulation, resulting in a net poleward energy transport. In a double-ITCZ scenario, more energy is transported equatorward than poleward at low latitudes, leading to a net equatorward energy transport.

In our paper, we use this association between the AEI and the ITCZ to hypothesize that without the cloud radiative effect (CRE), the atmospheric heating due to cloud-radiation interactions, a double ITCZ will be simulated. We also hypothesize that prescribing the CRE will reduce the sensitivity of the ITCZ to convective mixing, as the simulated AEI changes are predominantly due to CRE changes.

In the rest of the paper we perform simulations with the CRE removed or prescribed to explore further the role of the CRE in the sensitivity of the ITCZ. We conclude that removing the CRE makes a double ITCZ more favourable, and that in both sets of simulations the ITCZ is less sensitive to convective mixing. The remaining sensitivity is associated with changes in the latent heat flux.

My future work following this publication explores the role of coupling in the sensitivity of the ITCZ to the convective parameterisation scheme. Prescribing the SSTs implies an arbitrary ocean heat transport, whereas in the real world the ocean heat transport is sensitive to the atmospheric circulation. Does this interplay between the ocean heat transport and the atmospheric circulation affect the sensitivity of the ITCZ to convective mixing?

Thanks to my funders, SCENARIO NERC DTP, and supervisors for their support for this project.

References:

Blackburn, M. et al., (2013). The Aqua-planet Experiment (APE): Control SST simulation. J. Meteo. Soc. Japan. Ser. II, 91, 17–56.

Bischoff, T. and Schneider, T. (2016). The Equatorial Energy Balance, ITCZ Position, and Double-ITCZ Bifurcations. J. Climate., 29(8), 2997–3013, and Corrigendum, 29(19), 7167–7167.