Understanding the Processes Involved in Electrifying Convective Clouds

All clouds within our atmosphere are charged to some extent (Nicoll & Harrison, 2016), owing to the build-up of charge at the cloud edges as charge travels from the top of the atmosphere towards the surface. In contrast, nearly all convective clouds are actively charged, through charge separation mechanisms that exchange charge between different-sized hydrometeors within the cloud (Saunders, 1992). If an actively charged cloud can separate enough charge, electrical breakdown can occur in the atmosphere and lightning can be initiated.

As lightning is a substantial hazard to both life and infrastructure, understanding the processes that cause a cloud to become charged is crucial for forecasting it. Even though most convective clouds within the UK are charged, most will never produce lightning. This puts us in a unique position of observing clouds that can and cannot produce lightning, providing a contrast of the cloud processes involved.

Figure 1: A conceptual image of a cloud showing the main processes that are thought to be involved in the electrification of a convective cloud. Based on current literature. See MacGorman & Rust (1998) for an overview.

Currently, lightning must be observed before a thunderstorm can be identified, leaving zero lead time. As the first lightning strike can often be the most powerful (especially in UK winter thunderstorms), improving our understanding of the processes that charge a cloud is important for providing thunderstorm lead time.

Many processes are involved in charging a cloud; those thought to make the greatest contribution are summarised in Figure 1. The separation of charge through collisions in the ice phase of the cloud is thought to be the dominant electrification mechanism. The most effective charge separation occurs between growing ice hydrometeors of different sizes (Emersic & Saunders, 2010). As the ice grows, an outer shell of supercooled liquid water forms which carries negative charge. Collisions between different-sized hydrometeors cause a net exchange of mass and charge, creating positively and negatively charged ice.

The liquid phase is crucial for maintaining a high moisture content in the ice phase of the cloud, with moisture carried upwards by the cloud's updraught. Once the electric field within the cloud is strong enough, liquid drops can become polarised, which also helps to separate charge through collisions between different-sized liquid hydrometeors in the later stages of convective cloud development.

Once the charge has been separated, the different polarities must be moved to different regions of the cloud to enhance the electric field. The most well-established mechanism is gravitational separation (Mason & Dash, 2000). Under the assumption that smaller hydrometeors usually carry negative charge and larger hydrometeors carry positive charge, there is a distinct separation by hydrometeor size. After the formation of an updraught, the smaller hydrometeors can be lifted higher into the cloud, overcoming gravity. Under the right conditions, the larger hydrometeors will be too heavy, and gravity will force them to remain lower in the cloud.

Figure 2: The Electrostatic Biral Thunderstorm Detector (BTD-300) (a), Electrostatic JCI 131 Field Mill (b), and 35 GHz “Copernicus” Radar installed at Chilbolton Observatory, UK. The Field Mill and BTD-300 were installed on 06/10/16 and 16/11/16 respectively.

To understand if any of the processes discussed in Figure 1 are evident within real-world convective clouds, a field campaign was set up at Chilbolton Observatory (CO) to measure the properties of these clouds. Two electrical instruments were installed at CO: the Biral Thunderstorm Detector (BTD-300, Figure 2a) and the electrostatic Field Mill (FM, Figure 2b), which measure the displacement current (jD) and the potential gradient (PG) respectively. The 35 GHz "Copernicus" dopplerised radar was used to measure the properties of the cloud. Over the two-year field campaign, 653 convective clouds were identified over CO, with 524 clouds (80.2%) found to be charged and 129 clouds (19.8%) found to be uncharged.

To understand the importance of hydrometeor size for charging a cloud, the 95th percentile of the radar reflectivity (Z) was used. Z depends strongly on hydrometeor diameter (to the sixth power) and linearly on number concentration. Figure 3 shows a boxplot of all 653 clouds, classified by the cloud charge as measured at the surface. For each cloud, the liquid and ice regions were separated (purple and blue boxplots respectively) to highlight the dominant region for charge separation.
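This sixth-power dependence means a few large hydrometeors can dominate the reflectivity. As a rough illustration (the drop sizes and number concentrations below are invented values, not campaign data):

```python
import numpy as np

# Hypothetical drop size distribution: diameters (mm) and
# number concentrations (m^-3) -- illustrative values only.
diameters = np.array([0.5, 1.0, 2.0, 4.0])
concentrations = np.array([1000.0, 300.0, 50.0, 5.0])

# Radar reflectivity factor: z = sum(N_i * D_i^6), in mm^6 m^-3.
# The five largest drops contribute far more than the thousand smallest.
z = np.sum(concentrations * diameters**6)

# Reflectivity is usually reported on a logarithmic scale, in dBZ.
dbz = 10.0 * np.log10(z)

print(f"z = {z:.0f} mm^6 m^-3, Z = {dbz:.1f} dBZ")
```

Doubling the diameter of the largest drops would multiply their contribution by 64, which is why the 95th percentile of Z is a useful proxy for the presence of large hydrometeors.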

Figure 3: A boxplot of the 95th percentile reflectivity for the 653 identified clouds (black box), classified into no-charge, small-charge and large-charge groups. Each cloud was also decoupled into the liquid (purple box) and ice (blue box) phases. The boxplot shows the mean (purple bar), median (red bar) and upper and lower quartiles (upper and lower limits of the black box).

The reflectivity increased substantially with cloud charge in all regions of the cloud, especially the liquid phase. This suggests that the size of the hydrometeors in both the ice and liquid phases is indeed important for charging a cloud.

In the remainder of my PhD, the relative importance of each process discussed in Figure 1 will be addressed, to try to decouple each process. In addition to these surface-based observations, ten radiosonde flights will be made (from the Reading University Atmospheric Observatory) inside convectively charged clouds. Measurements of charge, optical thickness, the amount of supercooled liquid water and turbulence will be used to increase the robustness of the results presented here.

Email: james.gilmore@pgr.reading.ac.uk

References

Bouniol, D., Illingworth, A. J. & Hogan, R. J., 2003. Deriving turbulent kinetic energy dissipation rate within clouds using ground-based 94 GHz radar. Seattle, 31st International Conference on Radar Meteorology.

Emersic, C. & Saunders, C. P., 2010. Further laboratory investigations into the Relative Diffusional Growth Rate theory of thunderstorm electrification. Atmos. Res., Volume 98, pp. 327-340.

MacGorman, D. R. & Rust, D. W., 1998. The Electrical Nature of Storms. 1st ed. New York: Oxford University Press.

Mason, B. L. & Dash, J. G., 2000. Charge and mass transfer in ice–ice collisions: Experimental observations of a mechanism in thunderstorm electrification. J. Geophys. Res., Volume 105, pp. 10185-10192.

Nicoll, K. A. & Harrison, R. G., 2016. Stratiform cloud electrification: comparison of theory with multiple in-cloud measurements. Q.J.R. Meteorol. Soc., Volume 142, pp. 2679-2691.

Renzo, M. D. & Urzay, J., 2018. Aerodynamic generation of electric fields in turbulence laden with charged inertial particles. Nat. Comms., 9(1676).

Saunders, C. P. R., 1992. A Review of Thunderstorm Electrification Processes. J. App. Meteo., Volume 32, pp. 642-655.

A New Aviation Turbulence Forecasting Technique

Anyone who has ever been on a plane has probably experienced turbulence at some point. Most of the time it is unlikely to cause injury, but during severe turbulence unsecured objects (including people) can be thrown around the cabin, costing the airline industry millions of dollars every year in compensation (Sharman and Lane, 2016). Recent research has also indicated that the frequency of clear-air turbulence will increase in the future with climate change. Forecasting turbulence is one of the best ways to reduce the number of injuries, by giving pilots and flight planners ample warning so they can turn on the seat-belt sign or avoid the turbulent region altogether. The current method of creating a turbulence forecast is a single 'deterministic' forecast: one forecast model, with one forecast output. This shows the region where turbulence is suspected, but because the forecast is not perfect, it would be better to show how certain we are that there is turbulence in that region.

To do this, a probabilistic forecast can be created using an ensemble (a collection of forecast model outputs with slightly different model physics or initial conditions). A probabilistic forecast essentially shows model confidence, and therefore how likely it is that there will be turbulence in a given region. For example, if 10 out of 10 ensemble members predict turbulence in the same location, the pilots can be confident in taking action (such as avoiding the region altogether). However, if only 1 out of 10 members predicts turbulence, the pilot may choose to turn on the seat-belt sign, because there is still a chance of turbulence but not enough to warrant spending time and fuel flying around the region. A probabilistic forecast not only provides more information about the certainty of the forecast, it also increases the chances of capturing turbulence that a single model might miss.
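In its simplest form, the probability at a grid point is just the fraction of ensemble members exceeding a turbulence threshold. A minimal sketch, where the member values and the threshold are invented for illustration:

```python
import numpy as np

# Hypothetical turbulence-index forecasts from a 10-member ensemble
# at a single grid point (illustrative values only).
members = np.array([0.2, 0.7, 0.9, 0.4, 0.8, 0.6, 0.1, 0.75, 0.85, 0.3])

# Assumed index value above which a member is counted as
# "predicting turbulence" -- purely illustrative.
threshold = 0.5

# Probabilistic forecast: fraction of members predicting turbulence.
probability = np.mean(members > threshold)
print(f"P(turbulence) = {probability:.0%}")
```

With 6 of the 10 members above the threshold, the forecast probability here is 60% -- enough, say, to turn on the seat-belt sign but perhaps not to reroute.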

Gill and Buchanan (2014) showed that this ensemble forecast method does improve forecast skill. In my project we have taken this one step further and created a multi-model ensemble, combining two different ensembles, each with its own strengths and weaknesses (Storer et al., 2018). We combine the Met Office Global and Regional Ensemble Prediction System (MOGREPS-G) with the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS).

Figure 1: A moderate-or-greater turbulence event plotted over the possible sources of turbulence: orography (top left), convection from satellite data (top right; colour shading indicates deep convection), and shear-turbulence probability forecasts from MOGREPS-G (bottom left) and the ECMWF EPS (bottom right). Both the MOGREPS-G and ECMWF EPS ensembles forecast the shear turbulence event. The circles indicate turbulence observations, with grey indicating no turbulence, orange light turbulence and red moderate-or-greater turbulence. The convective classification can be found in Francis and Batstone (2013).

There are three main sources of turbulence. The first is mountain-wave turbulence, in which gravity waves produced by mountains ultimately lead to turbulence. The second is convectively-induced turbulence, which includes in-cloud turbulence and gravity waves produced by deep convection. The third is shear-induced turbulence, which is the one we are trying to forecast in this example. Figure 1 shows orography, and thus potential mountain-wave turbulence (top left); convection, and thus convectively-induced turbulence (top right); the MOGREPS-G ensemble forecast of shear turbulence (bottom left); and the ECMWF ensemble forecast of shear turbulence (bottom right). The red circle indicates a 'moderate or greater' turbulence event. Because it is over the North Atlantic it is not a mountain-wave event, and there is no convection nearby, yet both ensemble forecasts correctly predict the location of the shear-induced turbulence. This shows there is high confidence in the forecast, and action (such as putting the seat-belt sign on) can be taken.

Figure 2: Value plot with a log scale x-axis of the global turbulence with the 98 convective turbulence cases removed showing the forecast skill of the MOGREPS-G (dot-dash), ECMWF (dot), combined multi-model ensemble (dash) and the maximum value using every threshold of the combined multi-model ensemble (solid). The data used has a forecast lead time between +24 hours and +33 hours between May 2016 and April 2017.

To understand the usefulness of the forecast, Figure 2 shows a relative economic value plot. It shows the value of the forecast for a given cost/loss ratio (which varies depending on the end user). The multi-model ensemble is more valuable than either of the single-model ensembles for all cost/loss ratios, showing that every end user would benefit from this forecast. Although our results do show an improvement in forecast skill, it is not statistically significant. However, by combining ensemble forecasts we gain consistency and operational resilience (i.e., we can still produce a forecast if one ensemble is unavailable), so the approach is still worth implementing in the future.
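Relative economic value compares the expense a user saves by acting on the forecast against what a perfect forecast would save, for a user with a given cost/loss ratio. A hedged sketch of one common formulation (the hit rate, false-alarm rate and base rate below are invented, not the values behind Figure 2):

```python
def relative_economic_value(hit_rate, false_alarm_rate, base_rate, cost_loss):
    """Relative economic value of a probabilistic forecast.

    One standard cost/loss formulation: returns 1 for a perfect
    forecast and 0 for a forecast no more useful than acting on
    climatology alone. `base_rate` is the climatological frequency
    of the event; `cost_loss` is the user's C/L ratio.
    """
    a, s = cost_loss, base_rate
    numerator = (min(a, s) - false_alarm_rate * a * (1 - s)
                 + hit_rate * s * (1 - a) - s)
    denominator = min(a, s) - s * a
    return numerator / denominator

# Illustrative numbers only: 80% hit rate, 10% false-alarm rate,
# for an event with a 5% climatological frequency.
for cl in (0.02, 0.05, 0.2):
    v = relative_economic_value(0.8, 0.1, 0.05, cl)
    print(f"C/L = {cl:.2f}: value = {v:.2f}")
```

Sweeping `cl` across (0, 1) traces out a curve like those in Figure 2; a multi-model ensemble is preferable when its curve sits above the single-model curves at every cost/loss ratio.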

Email: luke.storer@pgr.reading.ac.uk

References

Gill PG, Buchanan P. 2014. An ensemble based turbulence forecasting system. Meteorol. Appl. 21(1): 12–19.

Sharman R, Lane T. 2016. Aviation Turbulence: Processes, Detection, Prediction. Springer.

Storer, L.N., Gill, P.G. and Williams, P.D., 2018. Multi-Model Ensemble Predictions of Aviation Turbulence. Meteorol. Appl., (Accepted for publication).

SPARC (Stratosphere-troposphere Processes And their Role in Climate) General Assembly 2018

I was very fortunate to recently attend the SPARC 6th General Assembly 2018 conference in Kyoto, Japan (1-5 October) – the former imperial capital – where I had the opportunity to give a poster presentation of my research and network with fellow scientists of all ages and nationalities. SPARC is one of five core projects of the World Climate Research Programme (WCRP), with a focus on coordinated, cutting-edge research into the interactions of chemical and physical processes with Earth's climate, at an international level. The main themes of the conference included: chemistry-climate interactions; subseasonal to decadal climate prediction; atmospheric dynamics and their role in climate; the importance of tropical processes; advances in observation and reanalysis datasets; and, importantly, societal engagement with climate-related atmospheric research.

Attendees of the SPARC 6th General Assembly 2018 in Kyoto, Japan (1-5 October 2018)

Despite the best efforts of Typhoon Trami to disrupt proceedings, the conference went ahead largely as planned with only minor revisions to the schedule. An icebreaker on the Sunday afternoon provided an opportunity to meet a few others who had braved the deteriorating weather over snacks and refreshments. The conference opening ceremony got underway at lunchtime the next day with a traditional Japanese taiko performance (a musical display involving drums and percussion instruments), followed by a talk from Neil Harris (the co-chair of SPARC). He discussed some of the challenges the General Assembly aimed to address over the week, including the provision of information for governments and society to act on climate change, and how we as scientists can assist them in taking action. He emphasised the need for a holistic approach to both atmospheric dynamics and predictability.

Each day contained up to three oral presentation sessions, usually commencing with keynote talks from some of the leading scientists in the field, followed by poster sessions similarly organised by theme. The conference was noteworthy for its absence of parallel sessions and its strong focus on posters, with over 400 presented during the week! For the early career researchers (ECRs) amongst us, there were prizes for the best-received posters in the form of a generous sum of money, courtesy of Google's Project Loon – a mission to increase internet connectivity in remote regions and developing countries using a network of balloons in the stratosphere. The awards were presented during two ECR poster award ceremonies during the week, with the winners determined by a panel of assigned judges at each poster session. A dedicated entertainment and networking session was also organised for us ECRs on the Monday evening. Hosted by several senior scientists, who shared their expertise, the event proved extremely popular.

The Wednesday offered a short window of opportunity for sightseeing around Kyoto in the afternoon before the scheduled conference dinner (followed by dancing) was held in the evening at a local hotel venue. A wide range of Japanese, Chinese and Western buffet food was served, in addition to a variety of Japanese beers, wines and whiskeys. The event was ideal in facilitating networking between different research themes and offered me the chance to hear people’s experiences ranging from their current PhD studies to managing collaborations as leaders of large international working groups.

The conference drew to a close late Friday afternoon and culminated in a roundtable discussion of the future of SPARC initiated by members of the audience. The session helped to clarify aims and working objectives for the future, not only over the next few years but also in decades to come. As a PhD student with hopefully a long career ahead of me, this proved highly stimulating and the thought of actively contributing to achieve these targets in the years to come is a very exciting prospect! I am very grateful for the opportunity to have attended this excellent international meeting and visit Japan, all of which would not have been possible without funding support from my industrial CASE partner, the Rutherford Appleton Laboratory (RAL).

Email: r.s.williams@pgr.reading.ac.uk

 

Atmospheric blocking: why is it so hard to predict?

Atmospheric blocks are nearly stationary large-scale flow features that effectively block the prevailing westerly winds and redirect mobile cyclones. They are typically characterised by a synoptic-scale, quasi-stationary high pressure system in the midlatitudes that can remain over a region for several weeks. Blocking events can cause extreme weather: heat waves in summer and cold spells in winter, and the impacts associated with these events can escalate due to a block’s persistence. Because of this, it is important that we can forecast blocking accurately. However, atmospheric blocking has been shown to be the cause of some of the poorest forecasts in recent years. Looking at all occasions when the ECMWF model experienced a period of very low forecast skill, Rodwell et al. (2013) found that the average flow pattern for which these forecasts verified was an easily-distinguishable atmospheric blocking pattern (Figure 1). But why are blocks so hard to forecast?

Figure 1:  Average verifying 500 hPa geopotential height (Z500) field for occasions when the ECMWF model experienced very low skill. From Rodwell et al. (2013).

There are several reasons why forecasting blocking is a challenge. Firstly, there is no universally accepted definition of what constitutes a block. Several different flow configurations that could be referred to as blocks are shown in Figure 2. The variety in flow patterns used to define blocking brings with it a variety of mechanisms that are dynamically important for blocks developing in a forecast (Woollings et al. 2018), so many phenomena must be well represented in a model for it to forecast all blocking events accurately. Secondly, there is no complete dynamical theory for block onset and maintenance – we do not know whether a process key to blocking dynamics is missing from the equation set solved by numerical weather prediction models and is contributing to the forecast error. Finally, many of the known mechanisms associated with block onset and maintenance are also known sources of model uncertainty. For example, diabatic processes within extratropical cyclones have been shown to contribute substantially to blocking events (Pfahl et al. 2015), and their parameterisation has been shown to affect medium-range forecasts of ridge-building events (Martínez-Alvarado et al. 2015).
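To make the definitional problem concrete, one widely used family of indices tests for a reversal of the meridional Z500 gradient, in the style of the Tibaldi-Molteni index. The sketch below is a deliberate simplification (instantaneous, one longitude at a time, with assumed thresholds and no persistence or longitudinal-extent criteria), not the detection method used in the figures:

```python
import numpy as np

def is_blocked(z500, lats, lon_idx, deltas=(-4.0, 0.0, 4.0)):
    """Single-longitude, single-time blocking test (Tibaldi-Molteni style).

    z500    : 2-D array of 500 hPa geopotential height (lat x lon), in m
    lats    : 1-D array of latitudes in degrees north
    lon_idx : index of the longitude column to test
    """
    def z_at(lat):
        # Height at the grid latitude closest to the requested latitude.
        return z500[np.argmin(np.abs(lats - lat)), lon_idx]

    for d in deltas:
        phi_n, phi_0, phi_s = 80.0 + d, 60.0 + d, 40.0 + d
        ghgs = (z_at(phi_0) - z_at(phi_s)) / (phi_0 - phi_s)  # southern gradient
        ghgn = (z_at(phi_n) - z_at(phi_0)) / (phi_n - phi_0)  # northern gradient
        # Blocked: reversed (easterly) gradient south of the high and
        # strong westerlies to its north.
        if ghgs > 0.0 and ghgn < -10.0:
            return True
    return False
```

A blocking frequency like that shown in Figure 3 would come from applying such a test at every longitude and time step, with persistence criteria added; other definitions (e.g. anomaly-based ones) can disagree on which days count as blocked.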

Figure 2: Different flow patterns, shown using Z500 (contours), that have been defined as blocks. From Woollings et al. (2018).

We do, however, know some ways to improve the representation of blocking: increase the horizontal resolution of the model (Schiemann et al. 2017); improve the parameterisation of subgrid physical processes (Jung et al. 2010); remove underlying model biases (Scaife et al. 2010); and, in my PhD, we found that improvements to a model's dynamical core (the part of the model used to solve the governing equations) can also improve the medium-range forecast of blocking. In Figure 3, the frequency of blocking during two northern hemisphere winters is shown for the ERA-Interim reanalysis and three operational weather forecast centres (ECMWF, the Met Office (UKMO) and the Korea Meteorological Administration (KMA)). Both KMA and UKMO use the Met Office Unified Model; however, before the winter of 2014/15 the UKMO updated the model to use a new dynamical core whilst KMA continued to use the original. This means that for the 2013/14 winter the UKMO and KMA forecasts are from the same model with the same dynamical core, whilst for the 2014/15 winter they are from the same model but with different dynamical cores. The clear improvement in the UKMO forecast in 2014/15 can hence be attributed to the new dynamical core. For a full analysis of this improvement see Martínez-Alvarado et al. (2018).

Figure 3: The frequency of blocking during winter in the northern hemisphere in ERA-Interim (grey shading) and in seven-day forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF), the Met Office (UKMO) and the Korea Meteorological Administration (KMA). Box plots show the spread in the ensemble forecast from each centre.

In the remainder of my PhD I aim to investigate the link between errors in forecasts of blocking with the representation of upstream cyclones. I am particularly interested to see if the parameterisation of diabatic processes (a known source of model uncertainty) could be causing the downstream error in Rossby wave amplification and blocking.

Email: j.maddison@pgr.reading.ac.uk.

References:

Rodwell, M. J., and Coauthors, 2013: Characteristics of occasional poor medium-range weather forecasts for Europe. Bulletin of the American Meteorological Society, 94 (9), 1393–1405.

Woollings, T., and Coauthors, 2018: Blocking and its response to climate change. Current Climate Change Reports, 4 (3), 287–300.

Pfahl, S., C. Schwierz, M. Croci-Maspoli, C. Grams, and H. Wernli, 2015: Importance of latent heat release in ascending air streams for atmospheric blocking. Nature Geoscience, 8 (8), 610–614.

Martínez-Alvarado, O., E. Madonna, S. Gray, and H. Joos, 2015: A route to systematic error in forecasts of Rossby waves. Quart. J. Roy. Meteor. Soc., 142, 196–210.

Martínez-Alvarado, O., and R. Plant, 2014: Parametrized diabatic processes in numerical simulations of an extratropical cyclone. Quart. J. Roy. Meteor. Soc., 140 (682), 1742–1755.

Scaife, A. A., T. Woollings, J. Knight, G. Martin, and T. Hinton, 2010: Atmospheric blocking and mean biases in climate models. Journal of Climate, 23 (23), 6143–6152.

Schiemann, R., and Coauthors, 2017: The resolution sensitivity of northern hemisphere blocking in four 25-km atmospheric global circulation models. Journal of Climate, 30 (1), 337–358.

Jung, T., and Coauthors, 2010: The ECMWF model climate: Recent progress through improved physical parametrizations. Quart. J. Roy. Meteor. Soc., 136 (650), 1145–1160.

Communicating uncertainties associated with anthropogenic climate change

Email: j.f.talib@pgr.reading.ac.uk

This week Prof. Ed Hawkins from the Department of Meteorology and NCAS-Climate gave a University of Reading public lecture discussing the science of climate change. A plethora of research was presented, all highlighting that humans are changing our climate. As scientists we can study the greenhouse effect in the laboratory, observe increasing temperatures across the majority of the planet, or simulate the impact of human actions on the Earth's climate using climate models.

Figure 1. Global-mean surface temperature in observations (solid black line), and climate model simulations with (red shading) and without (blue shading) human actions. Shown during Prof. Ed Hawkins’ University of Reading Public Lecture.

Fig. 1, presented in Ed Hawkins’ lecture, shows the global mean temperature rise associated with human activities. Two sets of climate simulations have been performed to produce this plot. The first set, shown in blue, are simulations controlled solely by natural forcings, i.e. variations in radiation from the sun and volcanic eruptions. The second, shown in red, are simulations which include both natural forcing and forcing associated with greenhouse gas emissions from human activities. The shading indicates the spread amongst climate models, whilst the observed global-mean temperature is shown by the solid black line. From this plot it is evident that all climate models attribute the rising temperatures over the 20th and 21st century to human activity. Climate simulations without greenhouse gas emissions from human activity indicate a much smaller rise, if any, in global-mean temperature.

However, whilst there is much agreement amongst climate scientists and climate models that our planet is warming due to human activity, understanding the local impact of anthropogenic climate change comes with its own uncertainties.

For example, my PhD research aims to understand what controls the location and intensity of the Intertropical Convergence Zone (ITCZ). The ITCZ is a discontinuous, zonal precipitation band in the tropics that migrates meridionally over the seasonal cycle (see Fig. 2). It is associated with wet and dry seasons over Africa, the development of the South Asian Monsoon and the life cycle of tropical cyclones. However, our current climate models struggle to simulate characteristics of the ITCZ. This, alongside other issues, means climate models differ in the response of tropical precipitation to anthropogenic climate change.

Figure 2. Animation showing the seasonal cycle of the observed monthly-mean precipitation rates between 1979-2014.

Figure 3 is taken from a report written by the Intergovernmental Panel on Climate Change (Climate Change 2013: The Physical Science Basis). Both maps show the projected change, from climate model simulations, in Northern Hemisphere winter precipitation for 2016-2035 (left) and 2081-2100 (right), relative to 1986-2005, under a scenario in which minimal action is taken to limit greenhouse gas emissions (RCP8.5). Whilst the projected changes in precipitation are an interesting topic in their own right, I'd like to draw your attention to the lines and dots annotated on each map. The lines indicate where the majority of climate models agree on a small change. The map on the left shows that most climate models agree on small changes in precipitation over the majority of the globe over the next two decades. Dots, meanwhile, indicate where climate models agree on a substantial change in Northern Hemisphere winter precipitation. The map on the right shows that across the tropics there are substantial areas where models disagree on changes in precipitation due to anthropogenic climate change. Over the majority of Africa, South America and the Maritime Continent, models disagree on the future of precipitation under climate change.

Figure 3. Changes in Northern Hemisphere Winter Precipitation between 2016 to 2035 (left) and 2081 to 2100 (right) relative to 1986 to 2005 under a scenario with minimal reduction in anthropogenic greenhouse gas emission. Taken from IPCC – Climate Change 2013: The Physical Science Basis.

How should scientists present these uncertainties?

I must confess that I am nowhere near an expert in communicating uncertainties, however I hope some of my thoughts will encourage a discussion amongst scientists and users of climate data. Here are some of the ideas I’ve picked up on during my PhD and thoughts associated with them:

  • Climate model average – Take the average of the climate model simulations. With this method, though, you risk smoothing out large positive and negative trends. The climate model average is also not a "true" projection of changes due to anthropogenic climate change.
  • Every climate model outcome – Show the full range of climate model projections to the user. Here you face the risk of presenting the user with too much climate data. The user may also trust certain model outputs that suit their own agenda.
  • Storylines – This idea was first shown to me in a paper by Zappa and Shepherd (2017). You present a series of storylines, each highlighting the key processes associated with variability in the regional weather pattern of interest. Each change in the set of processes leads to a different climate model projection. However, once again, the user of the climate data has to reach their own conclusion on which projection to act on.
  • Probabilities with climate projections – For short- and medium-range weather forecasts, probabilities are typically used to support the user. These probabilities are generated by re-running the simulations, each with different initial conditions or a slight change in model physics, to see what percentage of simulations agree on the outcome. With climate model simulations, however, it is more difficult to attach probabilities to projections. How do you generate them? Climate models share similarities in how they represent the physics of our atmosphere, and you don't want the probabilities attached to each projection to be skewed by that similarity in model set-up. You could base the probabilities on how well each climate model simulates the past, but just because a model simulates the past correctly doesn't mean it will correctly simulate the forced response in the future.
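The first two options above can be sketched in a few lines. The projection values below are invented for illustration, not taken from any model intercomparison:

```python
import numpy as np

# Hypothetical end-of-century regional precipitation changes (%)
# projected by ten climate models -- invented numbers only.
projections = np.array([-12.0, -8.0, -3.0, -1.0, 0.5, 2.0, 4.0, 6.0, 9.0, 15.0])

# Option 1: the multi-model average. Note how a modest mean can hide
# large opposing trends among individual models.
mean_change = projections.mean()

# Option 2: show every outcome -- here summarised by the full range
# and the fraction of models agreeing on the sign of the change.
low, high = projections.min(), projections.max()
agree_on_sign = max((projections > 0).mean(), (projections < 0).mean())

print(f"multi-model mean: {mean_change:+.1f}%")
print(f"full range: {low:+.1f}% to {high:+.1f}%")
print(f"fraction agreeing on sign: {agree_on_sign:.0%}")
```

In this invented example the mean change is small and positive, yet individual models span -12% to +15% and only 60% agree on the sign -- exactly the kind of disagreement the stippling and lines in Figure 3 are designed to communicate.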

There is much more that could be said about communicating uncertainty in climate model projections – a challenge that will continue for several decades. As climate scientists we can sometimes fall into the trap of concentrating on the uncertainties. We need to keep presenting the work that we are confident about, to ensure that the right action is taken to mitigate anthropogenic climate change.

International Conferences on Subseasonal to Decadal Prediction

I was recently fortunate enough to attend the International Conferences on Subseasonal to Decadal Prediction in Boulder, Colorado. This was a week-long event organised by the World Climate Research Programme (WCRP) and was a joint meeting with two conferences taking place simultaneously: the Second International Conference on Subseasonal to Seasonal Prediction (S2S) and the Second International Conference on Seasonal to Decadal Prediction (S2D). There were also joint sessions addressing common issues surrounding prediction on these timescales.

Weather and climate variations on subseasonal to seasonal (from around 2 weeks to a season) to decadal timescales can have enormous social, economic, and environmental impacts, making skillful predictions on these timescales a valuable tool for policymakers. As a result, there is an increasingly large interest within the scientific and operational forecasting communities in developing forecasts to improve our ability to predict severe weather events. On S2S timescales, these include high-impact meteorological events such as tropical cyclones, floods, droughts, and heat and cold waves. On S2D timescales, while the focus broadly remains on similar events (such as precipitation and surface temperatures), deciphering the roles of internal and externally-forced variability in forecasts also becomes important.

IMG_6994.HEIC
Attendees of the International Conferences on Subseasonal to Decadal Prediction

The conferences were attended by nearly 350 people, of which 92 were Early Career Scientists (either current PhD students or those who completed their PhD within the last 5-7 years), from 38 different countries. There were both oral and poster presentations on a wide variety of topics, including mechanisms of S2S and S2D predictability (e.g. the stratosphere and tropical-extratropical teleconnections) and current modelling issues in S2S and S2D prediction. I was fortunate to be able to give an oral presentation about some of my recently published work, in which we examine how well the ECMWF seasonal forecast model represents a teleconnection mechanism linking Indian monsoon precipitation to weather and climate variations across the Northern Hemisphere. After my talk I spoke to several other people who are working on similar topics, which was very beneficial and gave me ideas for analysis that I could carry out as part of my own research.

One of the best things about attending an international conference is the networking opportunities that it presents, both with people you already know and with potential future collaborators from other institutions. This conference was no exception, and as well as lunch and coffee breaks there was an Early Career Scientists evening meal. This gave me a chance to meet scientists from all over the world who are at a similar stage of their career to myself.

IMG_5205
The view from the NCAR Mesa Lab

Boulder is located at the foot of the Rocky Mountains, so after the conference I took the opportunity to do some hiking on a few of the many trails that lead out from the city. I also took a trip up to NCAR’s Mesa Lab, which is located up the hillside away from the city and has spectacular views across Boulder and the high plains of Colorado, as well as a visitor centre with meteorological exhibits. It was a great experience to attend this conference and I am very grateful to NERC and the SummerTIME project for funding my travel and accommodation.

Email: j.beverley@pgr.reading.ac.uk

Modelling windstorm losses in a climate model

Extratropical cyclones cause vast amounts of damage across Europe throughout the winter seasons. The damage from these cyclones comes mainly from the associated severe winds. The most intense cyclones have gusts of over 200 kilometres per hour, resulting in substantial damage to property and forestry; for example, the Great Storm of 1987 uprooted approximately 15 million trees in one night. The average loss from these storms is over $2 billion per year (Schwierz et al., 2010), second globally only to Atlantic hurricanes in terms of insured losses from natural hazards. However, the most severe cyclones, such as Lothar (26/12/1999) and Kyrill (18/1/2007), can cause losses in excess of $10 billion (Munich Re, 2016). One property of extratropical cyclones is their tendency to cluster (to arrive in groups – see the example in Figure 1), and in such cases these impacts can be greatly increased. For example, Windstorm Lothar was followed just one day later by Windstorm Martin, and the two storms combined caused losses of over $15 billion. The large-scale atmospheric dynamics associated with clustering events have been discussed in a previous blog post and also in the scientific literature (Pinto et al., 2014; Priestley et al., 2017).

Picture1
Figure 1. Composite visible satellite image from 11 February 2014 of 4 extratropical cyclones over the North Atlantic (circled) (NASA).

A large part of my PhD has involved investigating exactly how important the clustering of cyclones is for losses across Europe during the winter. To do this, I have used 918 years of high-resolution coupled climate model data from HiGEM (Shaffrey et al., 2017), which provides a huge number of winter seasons and cyclone events for analysis.

In order to understand how clustering affects losses, I first need to know how much loss/damage is associated with each individual cyclone. This is done using a measure called the Storm Severity Index (SSI – Leckebusch et al., 2008), a proxy for losses based on the 10-metre wind field of each cyclone event. The SSI is a good proxy for windstorm loss. Firstly, it scales the wind speed at each location by the 98th percentile of the wind speed climatology at that location. This scaling ensures that only the most severe winds at any one point are considered, as different locations have different thresholds for what counts as ‘damaging’. The exceedance above the 98th percentile is then raised to the power of 3, because wind damage is a highly non-linear function of wind speed. Finally, we apply a population density weighting to our calculations. This weighting is required because a hypothetical gust of 40 m/s across London will cause considerably more damage than the same gust across far northern Scandinavia, and population density is a good approximation for the density of insured property. An example of the SSI calculated for Windstorm Lothar is shown in Figure 2.
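The three steps above (scale by the local 98th percentile, cube the exceedance, weight by population density) can be sketched as follows. This is a minimal illustration of the SSI as described here, not the exact implementation used in the study; the function name, grids, and toy values are assumptions.

```python
import numpy as np

def storm_severity_index(wind, v98, pop_density):
    """Loss proxy following the description of the SSI (Leckebusch et al., 2008).

    wind        : 2-D array, the storm's 10-m wind footprint (m/s)
    v98         : 2-D array, local 98th percentile of the wind climatology (m/s)
    pop_density : 2-D array, weighting approximating the density of insured property
    """
    # Step 1: only winds exceeding the local 98th percentile contribute.
    exceedance = np.maximum(wind / v98 - 1.0, 0.0)
    # Step 2: cube the exceedance, as wind damage is highly non-linear.
    # Step 3: weight by population density, then sum over the domain.
    return float((pop_density * exceedance ** 3).sum())

# Toy 2x2 domain: only the first cell exceeds its local 98th percentile.
wind = np.array([[30.0, 10.0], [15.0, 20.0]])
v98  = np.array([[20.0, 20.0], [20.0, 20.0]])
pop  = np.ones((2, 2))
print(storm_severity_index(wind, v98, pop))  # (30/20 - 1)^3 = 0.125
```

The cubing means a gust 50% above the local 98th percentile contributes over three times as much as one 30% above it, which is why the SSI field in Figure 2b is so sharply concentrated.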

figure_2_blog_2018_new
Figure 2. (a) Wind footprint of Windstorm Lothar (25-27/12/1999) – 10 metre wind speed in coloured contours (m/s). Black line is the track of Lothar with points every 6 hours (black dots). (b) The SSI field of Windstorm Lothar. All data from ERA-Interim.

From Figure 2b you can see that most of the damage from Windstorm Lothar was concentrated across central/northern France and southern Germany, because this is where the winds were most extreme relative to the local climatology. Even though the winds are highest across the North Atlantic Ocean, the lack of insured property, and a much higher climatological winter-mean wind speed, mean that we do not observe losses/damage from Windstorm Lothar in these locations.

figure_3_blog_2018_new
Figure 3. The average SSI for 918 years of HiGEM data.

I can apply the SSI to all of the individual cyclone events in HiGEM and thereby construct a climatology of where windstorm losses occur. Figure 3 shows the average loss across all 918 years of HiGEM. The losses are concentrated in a band extending eastward from the southern UK towards Poland, mainly covering Great Britain, Belgium, the Netherlands, France, Germany, and Denmark.

This blog post introduces my methodology for calculating and investigating the losses associated with winter-season extratropical cyclones. Work in Priestley et al. (2018) uses this methodology to investigate the role of clustering in winter windstorm losses.

This work has been funded by the SCENARIO NERC DTP and also co-sponsored by Aon Benfield.

Email: m.d.k.priestley@pgr.reading.ac.uk

References

Leckebusch, G. C., Renggli, D., and Ulbrich, U. 2008. Development and application of an objective storm severity measure for the Northeast Atlantic region. Meteorologische Zeitschrift. https://doi.org/10.1127/0941-2948/2008/0323.

Munich Re. 2016. Loss events in Europe 1980 – 2015. 10 costliest winter storms ordered by overall losses. https://www.munichre.com/touch/naturalhazards/en/natcatservice/significant-natural-catastrophes/index.html

Pinto, J. G., Gómara, I., Masato, G., Dacre, H. F., Woollings, T., and Caballero, R. 2014. Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe. Journal of Geophysical Research: Atmospheres. https://doi.org/10.1002/2014JD022305.

Priestley, M. D. K., Dacre, H. F., Shaffrey, L. C., Hodges, K. I., and Pinto, J. G. 2018. The role of European windstorm clustering for extreme seasonal losses as determined from a high resolution climate model, Nat. Hazards Earth Syst. Sci. Discuss., https://doi.org/10.5194/nhess-2018-165, in review.

Priestley, M. D. K., Pinto, J. G., Dacre, H. F., and Shaffrey, L. C. 2017. Rossby wave breaking, the upper level jet, and serial clustering of extratropical cyclones in western Europe. Geophysical Research Letters. https://doi.org/10.1002/2016GL071277.

Schwierz, C., Köllner-Heck, P., Zenklusen Mutter, E. et al. 2010. Modelling European winter wind storm losses in current and future climate. Climatic Change. https://doi.org/10.1007/s10584-009-9712-1.

Shaffrey, L. C., Hodson, D., Robson, J., Stevens, D., Hawkins, E., Polo, I., Stevens, I., Sutton, R. T., Lister, G., Iwi, A., et al. 2017. Decadal predictions with the HiGEM high resolution global coupled climate model: description and basic evaluation, Climate Dynamics, https://doi.org/10.1007/s00382-016-3075-x.