Quantifying Arctic Storm Risk in a Changing Climate

Alec Vessey (Final Year PhD Student) – alexandervessey@pgr.reading.ac.uk 
Supervisors: Kevin Hodges (UoR), Len Shaffrey (UoR), Jonny Day (ECMWF), John Wardman (AXA XL)
 

Arctic sea ice extent has reduced dramatically since satellite monitoring began in 1979 – at a rate of 60,000 km² per year (see Figure 1a). This is equivalent to losing an area of ice the size of London every 10 days. This dramatic reduction in sea ice extent has been caused by rising global temperatures, a result of anthropogenic climate change. The Arctic is the region of Earth that has warmed most in recent decades, owing to the positive feedbacks that drive Arctic Amplification. Global temperatures are expected to continue to increase through the 21st century, further reducing Arctic sea ice extent. 

Consequently, the Arctic Ocean has become increasingly open and navigable for ships (see Figures 1b and 1c). The Arctic Ocean offers shorter distances between ports in Europe and North America and ports in Asia than the more traditional mid-latitude routes through the Suez and Panama Canals. There are two main shipping routes in the Arctic: the Northern Sea Route (along the coastline of Eurasia) and the Northwest Passage (through the Canadian Archipelago) (see Figure 2). For example, the distance between the ports of Rotterdam and Tokyo can be reduced by 4,300 nautical miles if ships travel through the Arctic (total distance: 7,000 nautical miles) rather than using the mid-latitude route through the Suez Canal (total distance: 11,300 nautical miles). Travelling through the Arctic could therefore increase profits for shipping companies: shorter journeys require less fuel and leave more time for additional shipping contracts to be pursued. The number of ships in the Arctic is expected to increase rapidly in the near future as infrastructure is developed and sea ice extent reduces further.  

Figure 1. Reductions in Arctic sea ice extent from 1979 to 2020. a) Annual Arctic sea ice extent between 1979 and 2020. b) Spatial distribution of Arctic sea ice in September 1980. c) Spatial distribution of Arctic sea ice in September 2012 (the lowest sea ice extent on record). Source: National Snow and Ice Data Center.
Figure 2. A map of the two main shipping routes through the Arctic. The Northwest Passage connects North America with the Bering Strait (and onto Asia), and the Northern Sea Route connects Europe with the Bering Strait (and onto Asia). Source: BBC (2016).

However, as human activity in the Arctic increases, the vulnerability of valuable assets and the risk to life due to exposure to hazardous weather conditions also increases.  Hazardous weather conditions often occur during the passage of storms.  Storms cause high surface wind speeds and high ocean waves. Arctic storms have also been shown to lead to enhanced break up of sea ice, resulting in additional hazards when ice drifts towards shipping lanes. Furthermore, the Arctic environment is extremely cold, with search and rescue and other support infrastructure poorly established. Thus, the Arctic is a very challenging environment for human activity. 

Over the last century, the risks of mid-latitude storms and hurricanes have been a focal point of research in the scientific community, due to their damaging impacts in densely populated areas. The population of the Arctic has only just started to increase. It was only in 2008 that sea ice had retreated far enough for both of the Arctic shipping lanes to be open simultaneously (European Space Agency, 2008). Arctic storms are less well understood than these other hazards, mainly because they have not been a primary focus of research. Reductions in sea ice extent and increasing human activity mean that it is imperative to further our understanding of Arctic storms. 

This is what my PhD project is all about – quantifying the risk of Arctic storms in a changing climate. My project has four main questions, which aim to fill the research gaps surrounding Arctic storm risk: 

  1. What are the present-day characteristics (frequency, spatial distribution, intensity) of Arctic storms, and what is the associated uncertainty when using different datasets and storm tracking algorithms? 
  2. What are the structure and development of Arctic storms, and how do these differ from those of mid-latitude storms? 
  3. How might Arctic storms change in a future climate in response to climate change? 
  4. Can the risk of Arctic storms impacting shipping activities be quantified by combining storm track data and ship track data? 

Results of my first research question are summarised in a recent paper (Vessey et al. 2020: https://link.springer.com/article/10.1007/s00382-020-05142-4). I previously wrote a blog post on The Social Metwork summarising this paper, which can be found at https://socialmetwork.blog/2020/02/21/arctic-storms-in-multiple-global-reanalysis-datasets/. This showed that there is a seasonality to Arctic storms: most winter (DJF) Arctic storms occur in the Greenland, Norwegian and Barents Seas region, whereas summer (JJA) Arctic storms generally occur over the coastline of Eurasia and the high Arctic Ocean. Despite the dramatic reductions in Arctic sea ice over the past few decades (see Figure 1), there is no trend in Arctic storm frequency. In the paper, the uncertainty in the present-climate characteristics of Arctic storms is assessed by using multiple reanalysis datasets and tracking methods. A reanalysis dataset is our best approximation of past atmospheric conditions, combining past observations with state-of-the-art numerical weather prediction models. 

The deadline for my PhD project is the 30th of June 2021, so I am currently in the very busy period of writing up my thesis. Hopefully there won't be too many hiccups over the next few months, and perhaps I will be able to write up some of my research chapters as papers.  

References: 

BBC, 2016, Arctic Ocean shipping routes ‘to open for months’. https://www.bbc.com/news/science-environment-37286750. Accessed 18 March 2021. 

European Space Agency, 2008: Arctic sea ice annual freeze-up underway. https://www.esa.int/Applications/Observing_the_Earth/Space_for_our_climate/Arctic_sea_ice_annual_freeze_nobr_-up_nobr_underway. Accessed 18 March 2021. 

National Snow & Ice Data Centre, (2021), Sea Ice Index. https://nsidc.org/data/seaice_index. Accessed 18 March 2021. 

Vessey, A. F., K. I. Hodges, L. C. Shaffrey and J. J. Day, 2020: An inter-comparison of Arctic synoptic scale storms between four global reanalysis datasets. Climate Dynamics, 54 (5), 2777-2795. 

Dialogue Concerning the Obvious and Obscurity of Scientific Programming in Python: A Lost Script

Disclaimer: The characters and events depicted in this blog post are entirely fictitious. Any similarities to names, incidents or source code are entirely coincidental.

Antonio (Professor): Welcome to the new Masters module and PhD training course, MTMX101: Introduction to your worst fears of scientific programming in Python. In this course, you will be put to the test–to spot and resolve the most irritating ‘bugs’ that will itch, sorry, gl-itch your software and intended data analysis.

Berenice (A PhD student): Before we start, would you have any tips on how to do well for this module?

Antonio: Always read the code documentation. Otherwise, there’s no reason for code developers to host their documentation on sites like rtfd.org (Read the Docs).

Cecilio (An MSc student): But… didn’t we already do an introductory course on scientific computing last term? Why this compulsory module?

Antonio: The usual expectation is for you to have completed last term's introductory computing module, but you may also find that this course completely changes what you used to consider to be your “best practices”… In other words, it's a bad thing that you took that module last term, but also a good thing that you took that module last term. There may be moments when you think “Why wasn't I taught that?” I guess you'll all understand soon enough! As for the logistics of the course, you will be assessed in the form of quiz questions such as the following:

Example #0: The Deviations in Standard Deviation

Will the following print statements produce the same numeric values?

import numpy as np
import pandas as pd

p = pd.Series(np.array([1,1,2,2,3,3,4,4]))
n = np.array([1,1,2,2,3,3,4,4])

print(p.std())
print(n.std())
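
For readers puzzling over this at home (an aside, not part of Antonio's quiz): the discrepancy comes from the default degrees of freedom. pandas' Series.std() uses ddof=1 (the sample standard deviation), while numpy's ndarray.std() uses ddof=0 (the population standard deviation), so the two prints above differ unless ddof is matched. Using p and n from the quiz code:

print(p.std(ddof=0))  # now matches n.std()
print(n.std(ddof=1))  # now matches p.std()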

Without further ado, let’s start with an easy example to whet your appetite.

Example #1: The Sum of All Fears

Antonio: As we all know, numpy is an important tool in many calculations and analyses of meteorological data. Summing and averaging are common operations. Let’s import numpy as np and consider the following line of code. Can anyone tell me what it does?

>>> hard_sum = np.sum(np.arange(1,100001))

Cecilio: Easy! This was taught in the introductory course… Doesn’t this line of code sum all integers from 1 to 100 000?

Antonio: Good. Without using your calculators, what is the expected value of hard_sum?

Berenice: Wouldn’t it just be 5 000 050 000?

Antonio: Right! Just as quick as Gauss. Let’s now try it on Python. Tell me what you get.

Cecilio: Why am I getting this?

>>> hard_sum = np.sum(np.arange(1,100001))
>>> print(hard_sum)
705082704

Berenice: But I’m getting the right answer instead with the same code! Would it be because I’m using a Mac computer and my MSc course mate is using a Windows system?

>>> hard_sum = np.sum(np.arange(1,100001))
>>> print(hard_sum)
5000050000

Antonio: Well, did any of you get a RuntimeError, ValueError or warning from Python, despite the bug? No? Welcome to MTMX101!

Cecilio: This is worrying! What’s going on? Nobody seemed to have told me about this.

Berenice: I recall learning something about the computer’s representation of real numbers in one of my other modules. Would this be the problem?

Antonio: Yes, I like your thinking! But that still doesn’t explain why you both got different values in Python. Any deductions…? At this point, I would usually set this as a homework assignment, but since it’s your first MTMX101 lecture, here is the explanation in Section 2.2.5 of your notes.

If we consider the case of a 4-bit unsigned integer, and start counting from 0, the maximum number we can possibly represent is 15. Adding 1 to 15 would lead to 0 being represented, as shown in Figure 1. This is called integer overflow, just like how an old car’s analogue 4-digit odometer “resets” to zero after recording 9999 km. As for the problem of running the same code and getting different results on a Windows and a Mac machine, a numpy integer array on Windows defaults to 32-bit integers, whereas it defaults to 64-bit on Mac/Linux, and, as expected, the 64-bit integer has more bits and can thus represent our expected value of 5000050000. So, how do we mitigate this problem when writing future code? Simply specify the dtype argument and force Python to use 64-bit integers when needed.

>>> hard_sum = np.sum(np.arange(1,100001), dtype=np.int64)
>>> print(type(hard_sum))
<class 'numpy.int64'>
>>> print(hard_sum)
5000050000

As to why we got the spurious value of 705082704 from using 32-bit integers, I will leave it to you to understand it from the second edition of my book, Python Puzzlers!

Figure 1: Illustration of overflow in a 4-bit unsigned integer
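
If you want to reproduce the 32-bit behaviour regardless of operating system (a small aside, not from the lecture), forcing a 32-bit accumulator should show the same wrap-around: 5000050000 exceeds the int32 maximum of 2**31 - 1, so the sum wraps modulo 2**32.

>>> hard_sum_32 = np.sum(np.arange(1, 100001), dtype=np.int32)
>>> print(hard_sum_32)
705082704
>>> print(5000050000 % 2**32)
705082704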

Example #2: An Important Pointer for Copying Things

Antonio: On to another simple example, numpy arrays! Consider the following two-dimensional array of temperatures in degree Celsius.

>>> t_degrees_original = np.array([[2,1,0,1], [-1,0,-1,-1], [-3,-5,-2,-3], [-5,-7,-6,-7]])

Antonio: Let’s say we only want the first three rows of data, and in this selection would like to set all values on the zeroth row to zero, while retaining the values in the original array. Any ideas how we could do that?

Cecilio: Hey! I learnt this last term, we do array slicing.

>>> t_degrees_slice = t_degrees_original[0:3,:]
>>> t_degrees_slice[0,:] = 0

Antonio: I did say to retain the values in the original array…

>>> print(t_degrees_original)
[[ 0  0  0  0]
[-1  0 -1 -1]
[-3 -5 -2 -3]
[-5 -7 -6 -7]]

Cecilio: Oh oops.

Berenice: Let me suggest a better solution.

>>> t_degrees_original = np.array([[2,1,0,1], [-1,0,-1,-1], [-3,-5,-2,-3], [-5,-7,-6,-7]])
>>> t_degrees_slice = t_degrees_original[[0,1,2],:]
>>> t_degrees_slice[0,:] = 0
>>> print(t_degrees_original)
[[ 2  1  0  1]
[-1  0 -1 -1]
[-3 -5 -2 -3]
[-5 -7 -6 -7]]

Antonio: Well done!

Cecilio: What? I thought the array indices 0:3 and 0,1,2 would give you the same slice of the numpy array.

Antonio: Let’s clarify this. The former method of using 0:3 is standard indexing and only copies the information on where the original array is stored (i.e. a “view” of the original array, or “shallow copy”), while the latter 0,1,2 is fancy indexing and actually makes a new separate array with the corresponding values from the original array (i.e. a “deep copy”). This is illustrated in Figure 2, which shows variables and their respective pointers for both shallow and deep copying. As you now understand, numpy is really not as easy as pie…

Figure 2: Simplified diagram showing differences in variable pointers and computer memory for shallow and deep copying

Cecilio: That was… deep.

Berenice: Is there a better way to deep copy numpy arrays rather than having to type in each index like I did e.g. [0,1,2]?

Antonio: There is definitely a better way! If we replace the first line of your code with the line below, you should be able to do a deep copy of the original array. Editing the copied array will not affect the original array.

>>> t_degrees_slice = np.copy(t_degrees_original[0:3,:])

I would suggest this method of np.copy to be your one – and preferably only one – obvious way to do a deep copy of a numpy array, since it’s the most intuitive and human-readable! But remember, deep copy only if you have to, since deep copying a whole array of values takes computation time and space! It’s now time for a 5-minute break.

Cecilio: More like time for me to eat some humble (num)py.
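
One way to check whether you are holding a view or a copy (an aside using np.shares_memory, which is not mentioned in the lecture) is to ask numpy whether the two arrays share any memory. Continuing from the arrays defined above:

>>> view_slice = t_degrees_original[0:3, :]          # basic slicing: a view
>>> fancy_slice = t_degrees_original[[0, 1, 2], :]   # fancy indexing: a new array
>>> np.shares_memory(t_degrees_original, view_slice)
True
>>> np.shares_memory(t_degrees_original, fancy_slice)
False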


Antonio: Hope you all had a nice break. We now start with a fun pop quiz!

Consider the following Python code:

short_a = "galileo galilei"
short_b = "galileo galilei"
long_a = "galileo galilei " + "linceo"
long_b = "galileo galilei " + "linceo"
 
print(short_a == short_b)
print(short_a is short_b)
print(long_a == long_b)
print(long_a is long_b)

Which is the correct sequence of booleans that will be printed out?
1. True, True, True, True
2. True, False, True, False
3. True, True, True, False

Antonio: In fact, they are all correct answers. It depends on whether you are running Python 3.6.0 or Python 3.8.5, and whether you ran the code in a script or in the console! Although there is much more to learn about “string interning”, the quick lesson here is to always compare the value of strings using the double equals sign (==) instead of using is.
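
As a small illustration of why is can disagree with == (an aside, not part of the quiz): is compares object identity, which you can inspect with id(), whereas == compares the characters themselves.

long_a = "galileo galilei " + "linceo"
long_b = "galileo galilei " + "linceo"
print(long_a == long_b)        # True: the two strings contain the same characters
print(id(long_a), id(long_b))  # the ids may differ: two distinct string objects
print(long_a is long_b)        # False whenever the ids differ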

Example #3: Array manipulation – A Sleight of Hand?

Antonio: Let’s say you are asked to calculate the centred difference of some quantity (e.g. temperature) in one dimension, \frac{\partial T}{\partial x}, with grid points uniformly separated by a \Delta x of 1 metre. What is some code that we could use to do this?

Berenice: I remember this from one of the modelling courses. We could use a for loop to calculate most elements of \frac{\partial T}{\partial x} \approx \frac{T_{i+1} - T_{i-1}}{2\Delta x} then deal with the boundary conditions. The code may look something like this:

delta_x = 1.0
temp_x = np.random.rand(1000)
dtemp_dx = np.empty_like(temp_x)
for i in range(1, len(temp_x)-1):
    dtemp_dx[i] = (temp_x[i+1] - temp_x[i-1]) / (2*delta_x)

# Boundary conditions
dtemp_dx[0] = dtemp_dx[1]
dtemp_dx[-1] = dtemp_dx[-2]

Antonio: Right! How about we replace your for loop with this line?

dtemp_dx[1:-1] = (temp_x[2:] - temp_x[0:-2]) / (2*delta_x)

Cecilio: Don’t they just do the same thing?

Antonio: Yes, but would you like to have a guess which one might be the “better” way?

Berenice: In last term’s modules, we were only taught the method I proposed just now. I would have thought both methods were equally good.

Antonio: On my computer, running your version of code 10000 times takes 6.5 seconds, whereas running my version 10000 times takes 0.1 seconds.

Cecilio: That was… fast!

Berenice: Only if my research code could run with that kind of speed…

Antonio: And that is what we call vectorisation, the act of taking advantage of numpy’s optimised loop implementation instead of having to write your own!

Cecilio: I wish we knew all this earlier on! Can you tell us more?

Antonio: Glad your interest is piqued! Anyway, that’s all the time we have today. For this week’s homework, please familiarise yourself so you know how to

import this

module or import that package. In the next few lectures, we will look at more bewildering behaviours such as the “Mesmerising Mutation” and the “Out of Scope” problem that is, nonetheless, in the scope of this module. As we move to more advanced aspects of this course, we may even come across the “Dynamic Duck” and the “Mischievous Monkey”. Bye for now!
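
For the curious, Antonio's timing comparison can be reproduced with the timeit module (a sketch only; the exact numbers will differ from machine to machine):

import timeit
import numpy as np

delta_x = 1.0
temp_x = np.random.rand(1000)
dtemp_dx = np.empty_like(temp_x)

def loop_version():
    # Explicit Python loop over the interior points
    for i in range(1, len(temp_x) - 1):
        dtemp_dx[i] = (temp_x[i+1] - temp_x[i-1]) / (2*delta_x)

def vectorised_version():
    # The same calculation expressed as whole-array (vectorised) operations
    dtemp_dx[1:-1] = (temp_x[2:] - temp_x[0:-2]) / (2*delta_x)

print(timeit.timeit(loop_version, number=10000))        # slower: Python-level loop
print(timeit.timeit(vectorised_version, number=10000))  # faster: numpy's compiled loop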

Do local or non-local sources of moisture contribute to extratropical cyclone precipitation?

Sophie Cuckow – s.cuckow@pgr.reading.ac.uk 

Introduction 

Transient corridors of strong horizontal water vapour transport, called atmospheric rivers, have been linked to flooding over Europe and the US (Ralph et al., 2004; Lavers et al., 2012; Corringham et al., 2019). Despite this, the relationship between atmospheric rivers and the precipitation associated with extratropical cyclones is debated in the literature. It is often thought that atmospheric rivers feed moisture from the tropics directly to the cyclone, where it rises to form precipitation (Ralph et al., 2004; Neiman et al., 2008). However, this would only be the case if the cyclone propagation velocity is slower than the vapour transport, which might not occur when a cyclone is developing. Thus the question arises: where does the moisture that produces the precipitation come from? The tropics via atmospheric rivers, or from another location via a different mechanism? Understanding which moisture sources contribute to extratropical precipitation would help to improve forecasts and mitigate the risk of damage from flooding events. 

Case Study – Storm Bronagh

To investigate different moisture sources, we examined our case study, storm Bronagh, in an Earth-relative and a system-relative framework. Storm Bronagh tracked over the UK during the 20th and 21st September 2018 and brought over 50 mm of rainfall in 24 hours to parts of Wales and England, leading to flooding in mid-Wales and Sheffield (Met Office). The Earth-relative framework allows us to investigate whether the storm has an associated atmospheric river. The cyclone-relative framework allows us to investigate airstreams called conveyor belts, which move faster than the cyclone propagation velocity. To transition to this framework, we calculated the propagation velocity of the cyclone using the tracks produced by the tracking algorithm of Hodges (1995). We then subtracted this velocity from the Earth-relative wind fields (from the European Centre for Medium-Range Weather Forecasts Reanalysis 5, ERA5) to give the cyclone-relative wind fields (Carlson, 1980).

Figure 1: The 300 K isentropic surface on 21st September 2018 00:00 UTC, showing the centre of the storm (red cross), the isobars (white contours) and masked areas depicting where the surface intersects the ground. Left-hand side: the Earth-relative moisture flux (streamlines) and the magnitude of the Earth-relative moisture flux (filled contours). Right-hand side: the system-relative moisture flux (streamlines) and the magnitude of the system-relative moisture flux (filled contours).

The Earth relative and system relative moisture flux (q\overline{U}) on the 300K isentropic surface for storm Bronagh on 21st September 2018 00:00UTC are shown in figure 1. In the Earth relative framework on the left-hand side of this figure, there is an atmospheric river approaching from the West as shown by the blue arrow. This suggests that the source of moisture for this storm was the tropics. However, the cyclone relative framework suggests there is in fact a local source of moisture. This can be seen on the right-hand side of figure 1 where three important airstreams can be seen: the warm conveyor belt (red), the dry intrusion (blue) and the feeder airstream (green). 

The warm conveyor belt is responsible for most of the cloud and precipitation associated with the cyclone. As shown in figure 1, it ascends ahead of the cold front and turns cyclonically to form the upper part of the cloud head, resulting in the iconic comma shape. Also shown in figure 1 is the dry intrusion which descends from behind the cold front into the centre of the cyclone. As this is a dry airflow, it creates a cloud free area between the cold frontal cloud band and the cloud head.

The feeder airstream is a low-level moist airflow that supplies moisture to the base of the warm conveyor belt where it rises. This can be seen in figure 1 where an airstream approaches from the East and splits into two branches, one of which joins the base of the warm conveyor belt. Therefore, in the cyclone relative framework, the moisture originates in the environment ahead of the cyclone rather than the tropics. Furthermore, the other branch of the feeder airstream indicates that the atmospheric river is a result of the moisture being left behind by the cyclone as it propagates. This supports the findings of Dacre et al. (2019) where the feeder airstream was identified by examining 200 of the most intense winter storms over 20 years. 

Therefore the question arises: which cyclones have a local moisture source? Is it just the intense cyclones, or do weaker ones have one too? In order to answer these questions, a diagnostic that identifies the feeder airstream has been developed, thus determining whether there is a local or non-local source of moisture.  

Identification Diagnostic

As seen in Figure 1, the feeder airstream is associated with a saddle point where it splits into two branches. Therefore, the basis of the feeder airstream's identification is a saddle point in the system-relative moisture flux on an isentropic surface. Utilising theory from non-linear dynamics, the flow around a fixed point can be characterised. Taylor expanding the field around a fixed point and taking the Jacobian matrix J of the linearised flow leads to the quadratic characteristic equation \lambda^{2} - \mathrm{Tr}(J)\lambda + \det(J) = 0, which involves the trace and determinant of the field. Plotting the trace against the determinant shows how the flow around each fixed point can be characterised (Drazin (1992)). This is shown in Figure 2, where positive values of the determinant characterise spiral sources and sinks, whereas negative values of the determinant characterise a saddle point. The determinant is calculated from the spatial gradients of the field around the fixed point. Therefore, the feeder airstream can be identified by a minimum in the system-relative moisture flux field coinciding with an area of negative determinant of that field, as in the sketch after Figure 2. Applying this theory to the case study, the feeder airstream for storm Bronagh was successfully identified for 21st September 2018 at 00:00 UTC.

Figure 2: Poincare diagram based on Hundley (2012). This diagram describes how the flow around a fixed point in field A can be characterised using the determinant and trace of the field.
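
A minimal sketch of how such a saddle-point criterion might be computed on a gridded system-relative moisture flux field is given below (the variable names and grid handling are illustrative, not the actual diagnostic code):

import numpy as np

def saddle_point_mask(qu, qv, dx, dy):
    # qu, qv: components of the system-relative moisture flux on an isentropic
    # surface, on a regular grid (axis 0 = y, axis 1 = x) with spacings dy, dx.
    dqu_dy, dqu_dx = np.gradient(qu, dy, dx)
    dqv_dy, dqv_dx = np.gradient(qv, dy, dx)
    # Determinant of the Jacobian of the 2-D flux field; negative values
    # are the signature of a saddle point.
    det = dqu_dx * dqv_dy - dqu_dy * dqv_dx
    return det < 0

In practice this mask would be combined with a search for local minima of the flux magnitude, so that only saddle-like fixed points of the system-relative moisture flux are retained.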

Conclusion and Future Work

In conclusion, the moisture source for the precipitation associated with storm Bronagh on 21st September 2018 00:00 UTC is the environment ahead of the cyclone rather than the tropics. This moisture is transported to the base of the warm conveyor belt via one branch of a low-level moist airflow called the feeder airstream. The second branch forms the atmospheric river, which is a result of moisture being left behind by the cyclone as it propagates. To determine the source of moisture associated with Bronagh in an objective manner, an identification diagnostic has been successfully developed using the determinant of the system-relative moisture flux field on an isentropic surface.

In order to develop the identification diagnostic further, it will be adapted to identify the feeder airstream at different stages of storm Bronagh's evolution. This would verify whether the diagnostic successfully identifies the feeder airstream and will give us more insight into the relative sources of moisture as the storm evolves. Future work will involve applying the identification diagnostic to a climatology of cyclones with varying intensities, genesis locations and durations, so that we can ascertain the dependence of the moisture sources on these parameters.

References

Carlson, T. N. (1980), ‘Airflow Through Midlatitude Cyclones and the Comma Cloud Pattern’, Monthly Weather Review

Corringham, T. W., Martin Ralph, F., Gershunov, A., Cayan, D. R. & Talbot, C. A. (2019), ‘Atmospheric rivers drive flood damages in the western United States’, Science Advances, 5(12). https://doi.org/10.1126/sciadv.aax4631

Dacre, H. F., Martínez-Alvarado, O. & Mbengue, C. O. (2019), ‘Linking Atmospheric Rivers and Warm Conveyor Belt Airflows’, Journal of Hydrometeorology

Hundley, D. R. (2012), ‘Poincare Diagram: Classification of phase portraits in the (detA, TrA)-plane’, Whitman College, WA. http://people.whitman.edu/~hundledr/courses/M244F12/M244/PoincareDiagram.jpg

Drazin, P. G. (1992), Nonlinear Systems, Cambridge Texts in Applied Mathematics, Cambridge University Press

Hodges, K. I. (1995), ‘Feature Tracking on the Unit Sphere’, Monthly Weather Review 123(12)

Lavers, D. A., Villarini, G., Allan, R. P., Wood, E. F. & Wade, A. J. (2012), ‘The detection of atmospheric rivers in atmospheric reanalyses and their links to British winter floods and the large-scale climatic circulation’, Journal of Geophysical Research Atmospheres

Neiman, P. J., Ralph, F. M., Wick, G. A., Lundquist, J. D. & Dettinger, M. D. (2008), ‘Meteorological Characteristics and Overland Precipitation Impacts of Atmospheric Rivers Affecting the West Coast of North America Based on Eight Years of SSM/I Satellite Observations’, Journal of Hydrometeorology

Met Office – “Strong Winds and Heavy Rain from Storms Ali and Bronagh” https://www.metoffice.gov.uk/binaries/content/assets/metofficegovuk/pdf/weather/learn-about/uk-past-events/interesting/2018/strong-winds-and-heavy-rain-from-storms-ali-and-bronagh—met-office.pdf

Ralph, F. M., Neiman, P. J. & Wick, G. A. (2004), ‘Satellite and CALJET Aircraft Observations of Atmospheric Rivers over the Eastern North Pacific Ocean during the Winter of 1997/98’, Monthly Weather Review

High-resolution Dispersion Modelling in the Convective Boundary Layer

Lewis Blunn – l.p.blunn@pgr.reading.ac.uk

In this blog I will first give an overview of the representation of pollution dispersion in regional air quality models (AQMs). I will then show that when pollution dispersion simulations in the convective boundary layer (CBL) are run at \mathcal{O}(100 m) horizontal grid length, interesting dynamics emerge that have significant implications for urban air quality. 

Modelling Pollution Dispersion 

AQMs are a critical tool in the management of urban air pollution. They can be used for short-term air quality (AQ) forecasts, and in making planning and policy decisions aimed at abating poor AQ. For accurate AQ prediction the representation of vertical dispersion in the urban boundary layer (BL) is key because it controls the transport of pollution away from the surface. 

Current regional scale Eulerian AQMs are typically run at \mathcal{O}(10 km) horizontal grid length (Baklanov et al., 2014). The UK Met Office’s regional AQM runs at 12 km horizontal grid length (Savage et al., 2013) and its forecasts are used by the Department for Environment Food and Rural Affairs (DEFRA) to provide a daily AQ index across the UK (today’s DEFRA forecast). At such horizontal grid lengths turbulence in the BL is sub-grid.  

Regional AQMs and numerical weather prediction (NWP) models typically parametrise vertical dispersion of pollution in the BL using K-theory and sometimes with an additional non-local component so that 

F=-K_z \frac{\partial{c}}{\partial{z}} +N_l 

where F is the vertical flux of pollution, c is the pollution concentration, K_z is a turbulent diffusion coefficient and z is the height above the ground. N_l is the non-local term, which represents vertical turbulent mixing under convective conditions due to buoyant thermals (Lock et al., 2000; Siebesma et al., 2007).  

K-theory (i.e. N_l=0) parametrisation of turbulent dispersion is mathematically consistent with Fickian diffusion of particles in a fluid. If K_z is taken as constant and particles are released far from any boundaries (i.e. away from the ground and the BL capping inversion), the mean square displacement of pollution particles increases in proportion to the time since release. Interestingly, Albert Einstein showed that Brownian motion obeys Fickian diffusion. Therefore, pollution particles in K-theory dispersion parametrisations are analogous to memoryless particles undergoing a random walk. 
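
As a toy illustration of the flux-gradient form above (a sketch of my own, not the AQM code), a single explicit update of a pollution column with the non-local term set to zero might look like this:

import numpy as np

nz, dz, dt = 50, 20.0, 5.0     # number of levels, grid spacing (m), time step (s)
K = 10.0 * np.ones(nz + 1)     # eddy diffusivity K_z at level interfaces (m2 s-1), constant for simplicity
c = np.zeros(nz)
c[0] = 1.0                     # pollution initially confined to the lowest level

# Downgradient flux F = -K_z dc/dz at the interior interfaces (zero flux at the boundaries)
F = np.zeros(nz + 1)
F[1:-1] = -K[1:-1] * (c[1:] - c[:-1]) / dz

# One explicit step of dc/dt = -dF/dz
c -= dt * (F[1:] - F[:-1]) / dz

Repeating this update spreads the surface pollution upwards gradually and with no memory of the release, which is exactly the random-walk behaviour described above.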

It is known however that at short timescales after emission pollution particles do have memory. In the CBL, far from undergoing a random trajectory, pollution particles released in the surface layer initially tend to follow the BL scale overturning eddies. They horizontally converge before being transported to near the top of the BL in updrafts. This results in large pollution concentrations in the upper BL and low concentrations near the surface at times on the order of one CBL eddy turnover period since release (Deardorff, 1972; Willis and Deardorff, 1981). This has important implications for ground level pollution concentration predicted by AQMs (as demonstrated later). 

Pollution dispersion can be thought of as having two different behaviours at short and long times after release. In the short-time “ballistic” limit, particles travel at the velocity of the eddy they were released into, and the mean square displacement of pollution particles increases in proportion to the time squared. At times greater than the order of one eddy turnover (i.e. the long-time “diffusive” limit), dispersion is less efficient, since particles have lost memory of the initial conditions into which they were released and undergo random motion. For further discussion of atmospheric diffusion and memory effects see this blog.
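
The two regimes can be seen in a simple random-flight toy model (again a sketch of my own, not the Met Office model), in which each particle keeps its vertical velocity for roughly one eddy turnover time before losing memory of it:

import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 5000, 200
dt, tau, sigma_w = 1.0, 20.0, 1.0          # time step, eddy turnover (velocity decorrelation) time, velocity scale

z = np.zeros(n_particles)                  # particle displacements
w = rng.normal(0.0, sigma_w, n_particles)  # initial velocities
msd = np.zeros(n_steps)                    # mean square displacement at each step

for n in range(n_steps):
    # Each particle loses memory of its velocity with probability dt/tau per step
    redraw = rng.random(n_particles) < dt / tau
    w[redraw] = rng.normal(0.0, sigma_w, redraw.sum())
    z += w * dt
    msd[n] = np.mean(z**2)

# For times much shorter than tau, msd grows like t**2 ("ballistic");
# for times much longer than tau, msd grows like t ("diffusive").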

In regional AQMs, the non-local parametrisation component does not capture the ballistic dynamics and K-theory treats dispersion as being “diffusive”. This means that at CBL eddy turnover timescales it is possible that current AQMs have large errors in their predicted concentrations. However, with increases in computing power it is now possible to run NWP for research purposes at \mathcal{O}(100 m) horizontal grid length (e.g. Lean et al., 2019) and routinely at 300 m grid length (Boutle et. al., 2016). At such grid lengths the dominant CBL eddies transporting pollution (and therefore the “ballistic” diffusion) becomes resolved and does not require parametrisation. 

To investigate the differences in pollution dispersion and potential benefits that can be expected when AQMs move to \mathcal{O}(100 m) horizontal grid length, I have run NWP at horizontal grid lengths ranging from 1.5 km (where CBL dispersion is parametrised) to 55 m (where CBL dispersion is mostly resolved). The simulations are unique in that they are the first at such grid lengths to include a passive ground source of scalar representing pollution, in a domain large enough to let dispersion develop for tens of kilometres downstream. 

High-Resolution Modelling Results 

A schematic of the Met Office Unified Model nesting suite used to conduct the simulations is shown in Fig. 1. The UKV (1.5 km horizontal grid length) model was run first and used to pass boundary conditions to the 500 m model, and so on down to the 100 m and 55 m models. A homogeneous, puff-release ground source of passive scalar was included in all models, and its horizontal extent covered the area of the 55 m (and 100 m) model domains. The puff releases were conducted on the hour, and at the end of each hour the scalar concentration was reset to zero. The case study date was 05/05/2016, with clear-sky convective conditions.  

Figure 1: Schematic of the Unified Model nesting suite.

Puff Releases  

Figure 2 shows vertical cross-sections of puff-released tracer in the UKV and 55 m models at 13:05, 13:20 and 13:55 UTC. At 13:05 UTC the UKV model scalar concentration is very large near the surface and approximately horizontally homogeneous. The 55 m model concentrations, however, are either much closer to the surface or elevated to great heights within the BL in narrow vertical regions. The heterogeneity in the 55 m model field arises because CBL turbulence is largely resolved in the 55 m model. Shortly after release, most scalar is transported predominantly horizontally rather than vertically, but at localised updrafts scalar is transported rapidly upwards. 

Figure 2: Vertical cross-sections of puff-released passive scalar. (a), (b) and (c) are from the UKV model at 13:05, 13:20 and 13:55 UTC respectively. (d), (e) and (f) are from the 55 m model at 13:05, 13:20 and 13:55 UTC respectively. The x-axis runs from south (left) to north (right), which is approximately the direction of the mean flow. The green line is the BL height diagnosed by the BL scheme.

By 13:20 UTC it can be seen that the 55 m model has more scalar in the upper BL than the lower BL, and the lowest concentrations within the BL are near the surface. However, the scalar in the UKV model disperses more slowly from the surface. Concentrations remain unrealistically larger in the lower BL than the upper BL and are very horizontally homogeneous, since the “ballistic” type dispersion is not represented. By 13:55 UTC the concentration is approximately uniform (or “well mixed”) within the BL in both models, and dispersion is tending to the “diffusive” limit. 

It has thus been demonstrated that unless “ballistic” type dispersion is represented in AQMs, the evolution of the scalar concentration field will exhibit unphysical behaviour. In reality, pollution emissions are usually released continuously rather than as puffs. One could therefore ask the question: when pollution is emitted continuously, are the detailed dispersion dynamics important for urban air quality, or do the dynamics of particles released at different times cancel out on average?  

Continuous Releases  

To address this question, I included a continuous-release, homogeneous ground source of passive scalar. It was centred on London and had dimensions of 50 km by 50 km, which is approximately the size of Greater London. Figure 3a shows a schematic of the source.  

The ratio of the 55 m model to UKV model zonally averaged surface concentration with downstream distance from the southern edge of the source is plotted in Fig. 3b. The largest difference in surface concentration between the UKV and 55 m models occurs 9 km downstream, with a ratio of 0.61. This is consistent with the distance calculated from the average horizontal velocity in the BL (\approx 7 m s-1) and the time at which there was most scalar in the upper BL compared to the lower BL in the puff release simulations (\approx 20 min). The scalar is lofted high into the BL soon after emission, causing reductions in surface concentrations downstream. Beyond 9 km downstream, a larger proportion of the scalar in the BL has had time to become well mixed and the ratio increases.  

Figure 3: (a) Schematic of the continuous-release source of passive scalar. (b) Ratio of the 55 m model to UKV model zonally averaged surface concentration with downstream distance from the southern edge of the source at 13:00 UTC.

Summary  

By comparing the UKV and 55 m model surface concentrations, it has been demonstrated that “ballistic” type dispersion can influence city-scale surface concentrations by up to approximately 40%. It is likely that, either by moving to \mathcal{O}(100 m) horizontal grid length or by developing turbulence parametrisations that represent “ballistic” type dispersion, substantial improvements in the predictive capability of AQMs can be made. 

References 

  1. Baklanov, A. et al. (2014) Online coupled regional meteorology chemistry models in Europe: Current status and prospects https://doi.org/10.5194/acp-14-317-2014 
  2. Boutle, I. A. et al. (2016) The London Model: Forecasting fog at 333 m resolution https://doi.org/10.1002/qj.2656 
  3. Deardorff, J. (1972) Numerical Investigation of Neutral and Unstable Planetary Boundary Layers https://doi.org/10.1175/1520-0469(1972)029<0091:NIONAU>2.0.CO;2 
  4. DEFRA – air quality forecast https://uk-air.defra.gov.uk/index.php/air-pollution/research/latest/air-pollution/daqi 
  5. Lean, H. W. et al. (2019) The impact of spin-up and resolution on the representation of a clear convective boundary layer over London in order 100 m grid-length versions of the Met Office Unified Model https://doi.org/10.1002/qj.3519 
  6. Lock, A. P. et al. (2000) A New Boundary Layer Mixing Scheme. Part I: Scheme Description and Single-Column Model Tests https://doi.org/10.1175/1520-0493(2000)128<3187:ANBLMS>2.0.CO;2 
  7. Savage, N. H. et al. (2013) Air quality modelling using the Met Office Unified Model (AQUM OS24-26): model description and initial evaluation https://doi.org/10.5194/gmd-6-353-2013 
  8. Siebesma, A. P. et al. (2007) A Combined Eddy-Diffusivity Mass-Flux Approach for the Convective Boundary Layer https://doi.org/10.1175/JAS3888.1 
  9. Willis, G. and J. Deardorff (1981) A laboratory study of dispersion from a source in the middle of the convectively mixed layer https://doi.org/10.1016/0004-6981(81)90001-9 

Weather Variability and its Energy Impacts

James Fallon & Brian Lo – j.fallon@pgr.reading.ac.uk, brian.lo@pgr.reading.ac.uk 

One in five people still do not have access to modern electricity supplies, and almost half the global population rely on burning wood, charcoal or animal waste for cooking and eating (Energy Progress Report). Having a reliable and affordable source of energy is crucial to human wellbeing: including healthcare, education, cooking, transport and heating. 

Our worldwide transition to renewable energy faces the combined challenge of connecting neglected regions and vulnerable communities to reliable power supplies, and decarbonising all energy. An assessment of how to support the world's 7 billion humans to live a high quality of life within planetary boundaries calculated that resource provisioning across sectors, including energy, must be restructured to enable basic needs to be met at a much lower level of resource use [O'Neill et al. 2018]. 

Adriaan Hilbers recently wrote for the Social Metwork about the renewable energy transition (Why renewables are difficult), and the challenges and solutions for modern electricity grids under increased weather exposure. (Make sure to read that first, as it provides important background for the problems associated with meso- to synoptic-scale variability that we won't cover here!)  

In this blog post, we highlight the role of climate and weather variability in understanding the risks future electricity networks face. 

Climate & weather variability 

Figure 1 – Stommel diagram of the Earth’s atmosphere 

A Stommel diagram [Stommel, 1963] is used to categorise climate and weather events of different temporal and spatial scales. Logarithmic axes describe time period and size; contours (coloured areas) depict the spectral intensity of variation in sea level. It allows us to identify a variety of dynamical features in the oceans that span many orders of magnitude in spatial and temporal scale. Figure 1 is a Stommel diagram adapted to describe the variability of our atmosphere.  

Microscale – the smallest scales, describing features generally of the order of 2 km or smaller 
Mesoscale – describes atmospheric phenomena with horizontal scales ranging from a few to several hundred kilometres 
Synoptic scale – the largest scale used to describe meteorological phenomena, typically high hundreds to 1000 km or more 

Micro Impacts on Energy 

Microscale weather processes include more predictable phenomena such as heat and moisture flux events, and unpredictable turbulence events. These generally occur at scales much smaller than the grid scale represented in numerical weather prediction models, and instead are represented through parametrisation. The most important microscale weather impacts are for isolated power grids (for example a community reliant on solar power and batteries, off-grid). Microscale weather events can also make reliable supply difficult for grids reliant on a few geographically concentrated renewable energy supplies. 

Extended Range Weather Impacts on Energy 

Across the Stommel diagram, above the synoptic scale are seasonal and intraseasonal cycles, decadal and climate variations. 

Subseasonal-to-seasonal (S2S) forecasts are an exciting development for decision-makers across a diverse range of sectors – including agriculture, hydrology and the humanitarian sector [White et al. 2017]. In the energy sector, skilful subseasonal energy forecasts are now production-ready (S2S4E DST). Using S2S forecasts can help energy users anticipate electricity demand peaks and troughs, levels of renewable production, and their combined impacts several weeks in advance. Such forecasts will have an increasingly important role as more countries reach higher renewable energy penetration (increasing their electricity grids' weather exposure). 

Decadal Weather Cycle and Climate Impacts on Energy 

Energy system planners and operators are increasingly trying to address risks posed by climate variability, climate change, and climate uncertainty.  

Figure 2 was constructed from the record of Central England temperatures spanning the years of 1659 to 1995 and highlights the modes of variability in our atmosphere on the order of 5 to 50 years. Even without the role of climate change, constraining the boundary conditions of our weather and climate is no small task. The presence of meteorologically impactful climate variability at many different frequencies increases the workload for energy modellers, requiring many decades of climate data in order to understand the true system boundaries. 

Figure 2  – Power spectra of central England from mid 17th century, explaining variability with physical phenomena [Ghil and Lucarini 2020]

When making models of regional, national or continental energy networks, it is now increasingly common for energy modellers to consider several decades of climate data, instead of sampling a small selection of years. Figure 2 shows the different frequencies of climate variability – relying on only a few years of data cannot explore the extent of this variability. However, significant challenges remain in sampling long-term variability and change in models [Hilbers et al. 2019], and it is the role of weather and climate scientists to communicate the importance of addressing this. 

Important contributions to uncertainty in energy system planning don’t just come from weather and climate. Variability in future energy systems will depend on technological, socioeconomic and political outcomes. Predictions of which future technologies and approaches will be most sustainable and economical are not always clear cut and easy to anticipate. A virtual workshop hosted by Reading’s energy-met group last summer [Bloomfield et al. 2020] facilitated discussions between energy and climate researchers. The workshop identified the need to better understand how contributions of all these different uncertainties propagate through complex modelling chains. 

An Energy-Meteorologist’s Journey through Time and Space 

Research is underway into tackling the uncertainties and improving understanding of energy risks and impacts across the spectrum of spatial and temporal scales. But understanding energy systems and planning successfully for the future require decision-making that involves a broad (and perhaps not fully identified) set of technological and other factors, as well as weather and climate impacts. It is not enough to consider any one of these alone! It is vital that experts across different fields collaborate on working towards what will be best for our future energy grids. 

Tracking SDG7 – The Energy Progress Report https://trackingsdg7.esmap.org 

Why renewables are difficult – Adriaan Hilbers Social Metwork 2021 https://socialmetwork.blog/2021/01/15/why-renewables-are-difficult

O’Neill, D.W., Fanning, A.L., Lamb, W.F. et al. A good life for all within planetary boundaries https://doi.org/10.1038/s41893-018-0021-4 

Stommel, H., 1963. Varieties of oceanographic experience. Science, 139(3555), pp.572-576. https://www.jstor.org/stable/1709894

White et al (2017) Potential applications of subseasonal‐to‐seasonal (S2S) predictions https://doi.org/10.1002/met.1654 

M Ghil, V Lucarini (2020) The physics of climate variability and climate change https://doi.org/10.1103/RevModPhys.92.035002

AP Hilbers, DJ Brayshaw, A Gandy (2019) Importance subsampling: improving power system planning under climate-based uncertainty https://doi.org/10.1016/j.apenergy.2019.04.110 

Bloomfield, H. et al. (2020) The importance of weather and climate to energy systems: a workshop on next generation challenges in energy-climate modelling https://doi.org/10.1175/BAMS-D-20-0256.1 

The role of climate change in the 2003 European and 2010 Russian heatwaves using nudged storylines

Linda van Garderen – linda.vangarderen@hzg.de

During the summer of 2003, Europe experienced two heatwaves with, until then, unprecedented temperatures. The 2003 summer temperature record was shattered in 2010 by the Russian heatwave, which broke even paleoclimate records. The question remained whether climate change influenced these two events. Many contribution studies based on the likelihood of the dynamical situation have been published, providing important input to answering this question. However, the position of low- and high-pressure systems and other dynamical aspects of climate change are noisy and uncertain. The storyline method attributes the thermodynamic aspects of climate change (e.g. temperature), which are visible in observations and far more certain. 

Storylines 

All of us regularly think in terms of what if and if only. It is the human way of working out hypothetical outcomes in case we had made a different choice. This helps us think in future scenarios, trying to figure out which choice will lead to which consequence. It is a tool to reduce risk by finding the future scenario that seems the best or safest outcome. In the storyline method, we use this exact mind-set. What if there was no climate change, would this heatwave be the same? What if the world was 2°C warmer, what would this heatwave have looked like then? With the help of an atmospheric model we can calculate what a heatwave would have been like in a world without climate change, or with increased climate change. 

In our study, we have two storylines: 1) the world as we know it, including a changing climate, which we call the ‘factual’ storyline, and 2) a world that could have been without climate change, which we call the ‘counterfactual’ storyline. We simulate the dynamical aspects of the weather extreme to be exactly the same in both storylines using a spectral nudging technique, and compare the differences in temperature. To put it more precisely, the horizontal wind flow is made up of vorticity (circular movement) and divergence (spreading out or closing in). We nudge (or push) these two variables in the upper atmosphere so that, at large scales, they are the same in the factual and counterfactual simulations. 
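
In schematic form (my notation here, not necessarily that of the paper), such nudging can be written as a Newtonian relaxation added to the model tendency of each nudged variable X_k, here the large-scale spectral coefficients of vorticity and divergence in the upper atmosphere:

\frac{\partial X_k}{\partial t} = F(X_k) - \frac{X_k - X_k^{\mathrm{ref}}}{\tau}

where F is the model's own tendency, X_k^{\mathrm{ref}} is the reference (reanalysis) value of that coefficient, and \tau is a relaxation timescale; the shorter \tau is, the more tightly the large-scale dynamics are held to the reference in both storylines.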

Figure 1. What if we had another world where climate change did not happen? Would the heatwave have been different? Thinking in counterfactual worlds where we made (or will make) different decisions is a common way of thinking to estimate risk. Now we apply this idea in atmospheric modelling.  

European 2003 and Russian 2010 heatwaves 

Both the European heatwave in 2003 and the Russian heatwave in 2010 were extremes with unprecedented high temperatures over long periods of time. In addition, there had been little rain since spring in either case, which reduced the cooling effect of moist soil to nearly nothing. In our analysis we averaged the near-surface temperatures in both storylines and compared their output to each other as well as to the local climatology. Figure 2 shows the results of that averaging for the European heatwave in panel a and the Russian heatwave in panel b. We focus on the orange boxes, where the blue lines (factual storyline) and the red lines (counterfactual storyline) exceed the 5th-95th percentile climatology (green band). This means that during those days the atmosphere near the surface was uncommonly hot (thus a heatwave). The most important result in this graph is that the blue and red lines are separate from each other in the orange boxes. This means that the average temperature in the world with climate change (blue, factual) is higher than in the world without climate change (red, counterfactual).  

“Even though there would have been a heatwave with or without climate change, climate change has made the heat more extreme” 

Figure 2. Daily mean temperature at 2 meters height for (a) European summer 2003 and (b) Russian summer 2010. The orange boxes are the heatwaves, where the temperatures of the factual (blue) and counterfactual (red) are above the green band of 5th – 95th percentile climatology temperatures.  

The difference between these temperatures is not the same everywhere; it strongly depends on where you are in Europe or Russia. Let me explain what I mean with the help of Figure 3, which maps the difference between factual and counterfactual temperatures (right panels). In both Europe and Russia, we see local regions with temperature differences of almost 0°C, and regions where the differences are almost 2.5°C (for Europe) or even 4°C (for Russia). A person living south of Moscow would therefore not have experienced 33°C but 29°C in a world without climate change. It is easy to imagine that such a temperature difference changes the impacts a heatwave has on, for example, public health and agriculture.  

Figure 3. Upper left: average temperature at 2 m height and geopotential height at 500 hPa over Europe for 1st-15th August 2003. Lower left: same as upper left but over Russia for 1st-15th August 2010. Upper right: factual minus counterfactual average temperature at 2 m height over Europe for 1st-15th August 2003. Lower right: same as upper right but over Russia for 1st-15th August 2010. Stippling indicates robust results (all factuals are > 0.1°C warmer than all counterfactuals) 

 “The 2003 European and 2010 Russian heatwaves could locally have been 2.5°C – 4°C cooler in a world without climate change” 

We can conclude, therefore, that with the help of our nudged storyline method we can study the climate signal in extreme events with greater certainty. 

If you are interested in the elaborate explanation of the method and analysis of the two case studies, please take a look at our paper: 

van Garderen, L., Feser, F., and Shepherd, T. G.: A methodology for attributing the role of climate change in extreme events: a global spectrally nudged storyline, Nat. Hazards Earth Syst. Sci., 21, 171–186, https://doi.org/10.5194/nhess-21-171-2021 , 2021. 

If you have questions or remarks, please contact Linda van Garderen at linda.vangarderen@hzg.de

Main challenges for extreme heat risk communication

Chloe Brimicombe – c.r.brimicombe@pgr.reading.ac.uk, @ChloBrim

For my PhD, I research heatwaves and heat stress, with a focus on the African continent. Here I show what the main challenges are for communicating heatwave impacts inspired by a presentation given by Roop Singh of the Red Cross Climate Center at Understanding Risk Forum 2020.  

There is no universal definition of heatwaves 

Having no agreed definition of a heatwave (also known as extreme heat events) is a huge challenge in communicating risk. However, there is a guideline definition by the World Meteorological Organisation and for the UK an agreed definition as of 2019. In simple terms a heatwave is: 

“A period of above average temperatures of 3 or more days in a region’s warm season (i.e. all year in the tropics and in the summer season elsewhere)”  

We then have heat stress which is an impact of heatwaves, and is the killer aspect of heat. Heat stress is: 

“Build-up of body heat as a result of exertion or external environment”(McGregor, 2018) 

Attention Deficit 

Heatwaves receive low attention in comparison to other natural hazards, e.g. flooding. One of the easiest ways to appreciate this attention deficit is through Google search trends. If we compare ‘heat wave’ to ‘flood’, both designated as disaster search types, you can see that a larger proportion of searches over time are for ‘flood’ in comparison to ‘heat wave’.  

Figure 1: Showing ‘Heat waves’ (blue)  vs ‘Flood’ (red) Disaster Search Types interest over time taken from: https://trends.google.com/trends/explore?date=all&q=%2Fm%2F01qw8g,%2Fm%2F0dbtv 

On average, flood has 28% search interest, which is over 10 times the interest for heat wave. And this is despite heatwaves being named the deadliest hydro-meteorological hazard of 2015-2019 by the World Meteorological Organization. Attention is important: if someone can remember an event and its impacts easily, they associate this with the likelihood of it happening. This is known as the availability bias, and it plays a key role in risk perception. 

Lack of Research and Funding 

One impact of the attention deficit on extreme heat risk is that there is not ample research and funding on the topic – it’s very patchy. Let’s consider a keyword search of academic papers for ‘heatwave*’ and ‘flood*’ in Scopus, an academic database.  

Figure 2: Number of ‘heatwave*’ vs number of ‘flood*’ academic papers from Scopus. 

Research on floods is over 100 times greater in quantity than research on heatwaves. This mirrors what we found for Google searches and the attention deficit, and reveals a research bias amongst these hydro-meteorological hazards. It is also mirrored by what my research finds for the UK: much more research on floods in comparison to heatwaves (https://doi.org/10.1016/j.envsci.2020.10.021). Our paper is the first for the UK to assess the barriers, causes and solutions for providing adequate research and policy for heatwaves. The motivation behind the paper came from an assignment I did during my masters focusing on UK heatwave policy, where I began to realise how little we in the UK are prepared for these events, which links up nicely with my PhD. For more information you can see my article and press release on the same topic. 

Heat is an invisible risk 

Figure 3: Meme that sums up not perceiving heat as a risk, in comparison, to storms and flooding.

Heatwaves are not something we can touch, and like climate change they are not ‘lickable’ or visible. This makes it incredibly difficult for us to perceive them as a risk. And this is compounded by the attention deficit: in the UK most people see heatwaves as a ‘BBQ summer’ or an opportunity to go wild swimming or go to the beach.  

And that’s really nice, but someone’s granny could be experiencing hospitalising heat stress in a top-floor flat as a result of overheating, which could result in her death. Or, for example, signal failures on your railway line as a result of heat could prevent you from getting into work, meaning you lose out on pay. I even know someone who was airlifted from the Lake District in their youth as a result of heat stress.  

A quote from a BBC One programme on wild weather in 2020 sums up overheating in homes nicely:

“It is illegal to leave your dog in a car to overheat in these temperatures in the UK, why is it legal for people to overheat in homes at these temperatures?”

For Africa, the perception amongst many is that ‘Africa is hot’, so heatwaves are not a risk because people are ‘used to exposure’ to high temperatures. First, not all of Africa is always hot; that is in the same realm of thinking as the lyrics of the 1984 Band Aid single. Second, there is not a lot of evidence, with many global papers missing out Africa due to a lack of data. But there is research on heatwaves, and we have evidence that they do raise death rates in Africa (research mostly for the western Sahel, for example Burkina Faso), amongst other impacts including decreased crop yields.  

What’s the solution? 

Talk about heatwaves and their impacts. This sounds really simple, but I’ve noticed a tendency among a proportion of climate scientists to talk about record-breaking temperatures and never mention land heatwaves (for example, the Royal Institution Christmas Lectures 2020). Some even make a wild leap from temperature straight to flooding, which is just painful for me as a heatwave researcher. 

Figure 4: A schematic of heatwaves researchers and other climate scientists talking about climate change. 

So let’s start by talking about heatwaves, heat stress and their impacts.  

Air-sea heat fluxes at the oceanic mesoscale: the impact of the ratio of ocean-to-atmosphere grid resolution

Sophia Moreton – s.moreton@pgr.reading.ac.uk

Sea surface temperature (SST) anomalies are vital for regulating the Earth’s weather and climate. The generation and reduction of these SST anomalies are largely determined by air-sea heat fluxes, particularly the turbulent (latent and sensible) heat fluxes.

The turbulent heat flux feedback (THFF) is a critical parameter which measures the change in the net air-sea turbulent heat flux in response to a 1 K change in SST. This feedback is well characterised at large scales, i.e. over whole ocean basins. However, a quantification of this feedback at much smaller spatial scales (10-100 km), over individual mesoscale ocean eddies, has so far been absent.
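Written out (using my own shorthand Q_turb for the net turbulent, i.e. latent plus sensible, heat flux; this notation is for illustration and may differ from the paper’s), the definition is simply:

\text{THFF} = \alpha = \frac{\Delta Q_{\mathrm{turb}}}{\Delta \mathrm{SST}} = \frac{\Delta \left( Q_{\mathrm{lat}} + Q_{\mathrm{sens}} \right)}{\Delta \mathrm{SST}} \quad \left[ \mathrm{W\,m^{-2}\,K^{-1}} \right]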

Why do we care about air-sea feedbacks at the oceanic mesoscale?

Both heat and momentum air-sea exchanges at the mesoscale impact the local and large-scale atmosphere (e.g. shifting storm tracks) and alter the strength of western boundary currents and the large-scale ocean gyre circulation. However, research into this field to date is hindered by the lack of high spatial resolution in observational data at the air-sea interface.

Therefore our study uses three high-resolution configurations of the UK Met Office coupled climate model (HadGEM3-GC3). We provide the first global estimate of the turbulent heat flux feedback (α) over individually tracked and composite-averaged coherent mesoscale eddies, which ranges between 35 and 45 W m-2 K-1 depending on eddy amplitude.

Estimates of the turbulent heat flux feedback (THFF) are split depending on whether the feedback is calculated using SST on the ocean grid (α0) or after regridding SST to the atmosphere grid (αA). An example of αA using regridded SST anomalies (SSTA) is given in Fig. 1 for large-amplitude eddies in the highest ocean-atmosphere resolution available (a 25 km atmosphere coupled to a 1/12° ocean, labelled ‘N512-12’).

Figure 1: A scatter plot of the relationship (THFF, αA) between regridded SST (SSTA) and THF anomalies. αA is the gradient of the linear regression line (black) +/- the 95% confidence interval (shown by the text). The data is from eddy snapshots averaged over 1 year, denoted by ‘< >’. Only large-amplitude eddies in the N512-12 configuration (25km atmosphere – 1/12° ocean) are plotted.
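To illustrate how a feedback like αA can be estimated as a regression slope, here is a small hypothetical sketch in Python (synthetic SST and heat flux anomalies standing in for the eddy composites; this is not the analysis code behind Fig. 1):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Synthetic eddy-composite data (placeholders, not the HadGEM3-GC3 output):
# SST anomalies (K) for a set of eddies, and turbulent heat flux anomalies
# (W m-2) that damp them with an assumed "true" feedback of ~40 W m-2 K-1.
sst_anom = rng.normal(0.0, 0.5, size=500)                     # <SSTA>, K
thf_anom = 40.0 * sst_anom + rng.normal(0.0, 5.0, size=500)   # <THFA>, W m-2

# The feedback is the slope of THF anomalies regressed onto SST anomalies,
# i.e. the change in turbulent heat flux per 1 K change in SST.
fit = linregress(sst_anom, thf_anom)
print(f"alpha = {fit.slope:.1f} +/- {1.96 * fit.stderr:.1f} W m-2 K-1")
```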

Why is the feedback so sensitive to the ratio of grid resolution?

In high-resolution coupled climate models, the atmospheric resolution is typically coarser than that of the ocean component, although, to date, a quantification of what the ocean-to-atmosphere ratio of grid resolution should be has been absent.

We show that increasing the ratio of atmosphere-to-ocean grid resolution in coupled climate models can lead to a large underestimation of the turbulent heat flux feedback over mesoscale eddies, by as much as 75% for a 6:1 resolution ratio (circled in Fig. 2, from a 60 km atmosphere coupled to a 1/12° ocean). The underestimation of the feedback is consistent across all eddy amplitudes (A) and all three model configurations shown (Fig. 2); it suggests that SST anomalies within these eddies are not damped enough by air-sea heat fluxes, and consequently remain too large.

The underestimation stems from the calculation of the air-sea heat fluxes in the HadGEM3-GC3.1 model on the coarser atmospheric grid, instead of the finer ocean grid. Many other climate models do the same. At present, for the long spin-ups needed for climate simulations, it is unrealistic to expect the atmospheric resolution to match the very fine (10km) ocean resolution in coupled climate models, i.e. to create a one-to-one grid ratio. Therefore, to minimise this underestimation in the feedback at mesoscales, we advise air-sea heat fluxes should be computed on the finer oceanic grid.

Figure 2: Estimates of the turbulent heat flux feedback (THFF) across different eddy amplitudes (A) for α0 (lighter colours) and αA (darker colours, using regridded SST) for three model configurations: N512-12, N216-12 and N216-025. The ocean and atmosphere resolutions are added in red for each. Increasing the ratio of grid resolution, underestimates the THFF (as α0 differs from αA). The horizontal bars indicate the width of the eddy amplitude bins, and the vertical error bars indicate 95% confidence intervals.

Correctly simulating the air-sea heat flux feedback over mesoscale eddies is fundamental to realistically representing their interaction with the local and large-scale atmosphere and their feedback on the ocean, and so to improving our predictions of the Earth’s climate.

For a full analysis of the results, including a decomposition of the turbulent heat flux feedback, the reader is referred to Moreton et al., 2021, Air-Sea Turbulent Heat Flux Feedback over Mesoscale Eddies, GRL (in review).

Manuscript available: https://doi.org/10.1002/essoar.10505981.1

This work lays the foundation for my current work, evaluating how mesoscale air-sea heat fluxes feed back on and alter the strength of the large-scale ocean gyre circulation, using the MIT general circulation model (MITgcm).

This work is funded by a NERC CASE studentship with the Met Office, UK.

Forecasting space weather using “similar day” approach

Carl Haines – carl.haines@pgr.reading.ac.uk

Space weather is a natural threat that requires good-quality forecasting with as much lead time as possible. In this post I outline the simple and understandable analogue ensemble (AnEn) or “similar day” approach to forecasting. I focus mainly on exploring the method itself and, although this work forecasts space weather through a time series of ground-level observations, AnEn can be applied to many prediction tasks, particularly time series with strong auto-correlation. AnEn has previously been used to predict wind speed [1], temperature [1] and solar wind [2]. The code for AnEn is available at https://github.com/Carl-Haines/AnalogueEnsemble should you wish to try out the method for your own application. 

The idea behind AnEn is to take a set of recent observations, look back in a historic dataset for analogous periods, then take what happened following those analogous periods as the forecast. If multiple analogous periods are used, then an ensemble of forecasts can be created giving a distribution of possible outcomes with probabilistic information. 
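To make the idea concrete, here is a minimal sketch of the analogue-ensemble idea in Python. This is my own illustration with placeholder data and a simple weighted squared-distance match, not the code from the GitHub repository linked above:

```python
import numpy as np

def analogue_ensemble(history, recent_obs, n_analogues=10, lead=24, weights=None):
    """Minimal analogue-ensemble sketch (hypothetical, for illustration only).

    history    : 1-D array of past observations (e.g. an hourly geomagnetic index)
    recent_obs : the most recent window of observations to match against
    Returns an array of shape (n_analogues, lead), one forecast per analogue.
    """
    history = np.asarray(history, dtype=float)
    recent_obs = np.asarray(recent_obs, dtype=float)
    m = len(recent_obs)
    if weights is None:
        # Give the most recent observations the most weight.
        weights = np.linspace(0.1, 1.0, m)

    # Weighted squared distance between the recent window and every historic
    # window that still leaves `lead` points afterwards to act as the forecast.
    n_windows = len(history) - m - lead
    dists = np.array([
        np.sum(weights * (history[i:i + m] - recent_obs) ** 2)
        for i in range(n_windows)
    ])

    # Keep the closest analogues and lift what followed each one as a forecast.
    best = np.argsort(dists)[:n_analogues]
    return np.array([history[i + m:i + m + lead] for i in best])

# Example: forecast the next 24 points of a synthetic auto-correlated series.
rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=5000))        # placeholder "observations"
ensemble = analogue_ensemble(series[:-24], series[-48:-24])
median_forecast = np.median(ensemble, axis=0)
```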

Figure 1 – An example of AnEn applied to a space weather event with forecast time t0. The black line shows the observations, the grey line shows the ensemble members, the red line shows the median of the ensemble and the yellow and green lines are reference forecasts. 

Figure 1 is an example of a forecast made using the AnEn method, where the forecast is made at t0. The 24 hours of observations (black) prior to t0 are matched to similar periods in the historic dataset (grey). Here I have chosen to give the most recent observations the most weight, as they hold the most relevant information. The grey analogue lines then continue past t0, forming the forecast. Combined, these form an ensemble, and their median is shown in red. The forecast can be taken as the median (or any percentile) of the ensemble, or a probability of an event occurring can be given by counting how many of the ensemble members do/don’t experience the event.  

Figure 1 also shows two reference forecasts, namely 27-day recurrence and climatology, as benchmarks to beat. 27-day recurrence uses the observation from 27 days ago as the forecast for today. This is reasonable because the Sun rotates once every 27 days as seen from Earth, so, broadly speaking, the same part of the Sun faces Earth, and emits the relevant solar wind, every 27 days. 

To quantify how well AnEn works as a forecast, I ran it over the entire dataset by repeatedly changing the forecast time t0, and applied two metrics, mean absolute error (MAE) and skill, to the median of the ensemble members. The MAE is the mean of the absolute differences between the forecast (taken as the ensemble median) and what was actually observed, averaged over all forecasts, giving one value for each lead time. Figure 2 shows the MAE for the AnEn median and the reference forecasts. AnEn has the smallest (best) MAE at short lead times and outperforms the reference forecasts at all lead times up to a week. 
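As a rough illustration of how the MAE per lead time can be computed (hypothetical array names and shapes; a sketch rather than the analysis code used here):

```python
import numpy as np

def mae_per_lead_time(forecast_median, observed):
    """MAE at each lead time.

    forecast_median, observed : arrays of shape (n_forecasts, n_lead_times),
    e.g. the AnEn ensemble median and the matching observations for every
    forecast start time t0. Returns one MAE value per lead time.
    """
    return np.mean(np.abs(forecast_median - observed), axis=0)
```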

Figure 2 – The mean absolute error of the AnEn median and reference forecasts.

An error metric such as MAE cannot take into account that certain conditions, such as storm times, are inherently more difficult to forecast. For this we can use a skill metric, defined by  

\text{Skill} = 1 - \frac{\text{Forecast error}}{\text{Reference error}}

where in this case we use climatology as the reference forecast. Skill can take any value between −∞ and 1: a perfect forecast receives a value of 1, a forecast no better than the reference receives a value of 0, and a negative skill signifies that the forecast is worse than the reference forecast. 
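In code, the skill score is a one-liner (again just a sketch; the error arrays are assumed to be, for example, the MAE per lead time computed above for AnEn and for climatology):

```python
def skill(forecast_error, reference_error):
    """Skill relative to a reference forecast (here climatology).

    1 means a perfect forecast, 0 means no better than the reference,
    and negative values mean worse than the reference.
    """
    return 1.0 - forecast_error / reference_error
```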

Figure 3 shows the skill of AnEn and 27-day recurrence with respect to climatology. We see that AnEn is most skilful for short lead times and outperforms 27-day recurrence for all lead times considered.  

Figure 3 – The skill of the AnEn median and 27-day recurrence with respect to climatology.

In summary, the analogue ensemble forecast method matches current conditions with historical events and lifts the previously seen timeseries as the prediction. AnEn seems to perform well for this application and outperforms the reference forecasts of climatology and 27-day recurrence. The code for AnEn is available at https://github.com/Carl-Haines/AnalogueEnsemble

The work presented here forms part of a paper that is under review in the journal Space Weather. 

Here, AnEn has been applied to a dataset from the space weather domain. If you would like to find out more about space weather, take a look at these previous blog posts from Shannon Jones (https://socialmetwork.blog/2018/04/13/the-solar-stormwatch-citizen-science-project/) and me (https://socialmetwork.blog/2019/11/15/the-variation-of-geomagnetic-storm-duration-with-intensity/). 

[1] Delle Monache, L., Eckel, F. A., Rife, D. L., Nagarajan, B., & Searight, K. (2013). Probabilistic Weather Prediction with an Analog Ensemble. doi: 10.1175/mwr-d-12-00281.1 

[2] Owens, M. J., Riley, P., & Horbury, T. S. (2017). Probabilistic Solar Wind and Geomagnetic Forecasting Using an Analogue Ensemble or “Similar Day” Approach. doi: 10.1007/s11207-017-1090-7 

Why renewables are difficult

Adriaan Hilbers – PhD researcher at Imperial and Reading a.hilbers17@imperial.ac.uk

Adapted from a 2018 blog post: see the original here

Renewable energy represents one of the most promising solutions to climate change since it emits no greenhouse gases. However, it poses some difficulties for power systems. Source: U. Leone

The public has been aware of the importance of reducing carbon emissions since around the 1980s. Furthermore, renewable technologies such as solar and wind have been around for decades. Given this, it’s surprising that most countries still generate the majority of their electricity from carbon-emitting fossil fuels. Why, after decades of both the problem and a possible solution being known, haven’t renewables taken off? This article describes why renewables are “difficult”, and how the world can keep the lights on into the future in a cheap, secure and sustainable way. 

Until recently, the primary reason was economics. It was impossible to build wind turbines and solar panels cheaply enough to compete with fossil fuel technologies, which have become highly cost-effective after more than 100 years of use. Governments were not willing to spend billions subsidising renewables when electricity could be generated cheaply by other means. Recently, however, improved manufacturing methods, economies of scale and increased competition have sent prices plummeting. The price of solar panels has decreased by a factor of 200 in the last 45 years, and wind farms (even offshore) are now cost-effective without subsidy.  

So, is it just a matter of time before fossil fuel electricity disappears? Why are societies still so hesitant to go 100% renewable? To understand why, one needs a quick introduction to power systems: the industries, infrastructures and markets based around electricity. 

At their core, power systems are supply-and-demand problems. Industries and consumers use electricity provided by generators. One key feature distinguishes power systems from other economic markets: there are very limited means of storing electricity at large scale (with the notable exception of hydropower, discussed below). For this reason, supply must match demand on a second-by-second basis. 

A still from Drax Electric Insights, where electricity demand and generation levels can be browsed through, both in real time and historically. Source: Drax Electric Insights

(As an aside, in the UK there is a fantastic website called Drax Electric Insights, where the total UK electricity demand, and exactly which sources it is being generated from, can be browsed in real time as well as historically. Looking through it for a few minutes will give a better feel for how power systems work than any blog post can.) 

Before renewables, most electricity came from fossil fuel plants. Fuel (mostly coal or gas) was burnt at different rates, and the level of electricity supply was directly adjusted to meet demand. This isn’t always easy; for example, the UK’s system operator had to deal with a massive demand spike just after the royal wedding, as millions turned on their kettles at the same time.  

A famous graph showing total UK electricity demand during the 1990 World Cup semi-final against Germany, with spikes at times that viewers turned on their kettles en masse. System operators had to rapidly adjust supply to ensure the lights stayed on. Source: National Grid

With renewables, the single biggest difficulty is that their production levels can’t be controlled. It’s not always windy or sunny, and times of high renewable output do not always align with times of high demand. How does one ensure the lights stay on on a cloudy day or when the wind tails off? 

In most countries, this is not yet a problem since renewable capacity is still small and there’s ample conventional backup capacity. Renewables produce whatever electricity they can, and the rest is picked up by the conventional plants.  

A problem occurs when countries start generating most of their electricity from renewables, as this drastically changes the economics of power markets. In a nutshell, building renewable capacity displaces fossil fuel generation, but not generation capacity: all power plants must be kept open for the rare days when there isn’t any wind or sun. Keeping these plants open but using them infrequently is very expensive, and closing them is impossible unless you accept a significant risk of blackouts on calm, cloudy days. It’s a perilous choice between higher electricity prices and reduced security of supply, and this trade-off defines the difficulties of renewable electricity systems. 

Thankfully, there are a few ways that societies can generate most of their electricity from renewables while keeping prices low and supply secure. They fall broadly into two categories. 

The first is electricity storage. With grid-scale storage, excess electricity produced on windy or sunny days can be stored and used when renewable output is low. Besides adding to supply security, this would improve the economics, since storage owners buy electricity when the price is low and sell it when the price is high, evening out price jumps and allowing a smaller number of conventional plants to run more often. Almost all grid-scale storage currently in existence is hydropower, which countries like Norway use to generate almost all of their electricity, but it requires mountainous terrain and access to water. The reason other grid-scale storage is rare is economics: most storage technology (e.g. battery) prices still have to drop significantly before they can be deployed at large scale. 

Hydropower provides an economical option to store electricity, but requires mountainous terrain. Source: skeeze

A second solution is interconnecting different countries and allowing them to share electricity. When it is wind-free in London, it usually is in Scotland as well, but it may be windy in Germany or Spain. Transporting electricity around could help alleviate supply insecurity. Many countries are doing just this; the UK, for example, currently has interconnections with France, the Netherlands, Belgium and Ireland, and more are in the pipeline. These may eventually form part of the European Supergrid, where electricity can be transported across Europe to balance out regional peaks and troughs in renewable supply. 

The prospect of combining hydropower and interconnection between countries is tempting, since it means countries with lots of wind but little storage capacity, like Germany or Denmark, could “use Norway as a battery”: exporting their excess wind power to Norway in windy periods allows Norwegian dams to accumulate water, and in calm spells hydropower generation is increased and electricity exported back the other way. Making this work will require significant increases in Norwegian hydropower infrastructure, interconnection lines and international cooperation. 

The batteries in electric cars can be used for grid management provided that owners agree to this. Source: Marilyn Murphy

Another creative solution to the storage problem is to use the batteries in electric cars. Electric car uptake will lead to demand spikes when people return from work and plug them in. An electric car owner could be offered cheaper electricity in exchange for her car’s battery not being charged (smart charging), or even being drawn down (known as vehicle-to-grid), during demand spikes, and recharged when demand is lower. Such approaches are currently being trialled in the UK. 

Current power systems are not yet ready to supply the majority of their electricity from renewables. However, the immediacy of the climate change danger means business-as-usual is not an option, and a total energy revolution is required. Presently, the most realistic solution is the use of renewables (see a separate blog post on nuclear power here). Nobody knows exactly how the power system of the future will look, but everyone agrees it will be very different. 

A still from an online tutorial on power system models, showing generation from different sources.

Want to know more? For a similar discussion on the merits of nuclear power, see this blog post. To get a feel for how a power system works, see this page. It allows users, in the cloud (no downloads or installs necessary), to create their own power system for the United Kingdom and see how electricity is generated from renewable and conventional sources. 
