Representing the organization of convection in climate models

Email: m.muetzelfeldt@pgr.reading.ac.uk

Current generation climate models are typically run with horizontal resolutions of 25–50 km. This means that the models cannot explicitly represent atmospheric phenomena smaller than these scales. A useful analogy is the resolution of a camera: in a low-resolution, blocky image you cannot make out the finer details. In the case of climate models, the unresolved phenomena might still be important for what happens at the larger, resolved scales. This is true for convective clouds – clouds such as cumulus and cumulonimbus that are formed from differences in density, caused by latent heat release, between the clouds and the environmental air. Convective clouds are typically hundreds to thousands of metres across, and so are much smaller than an individual grid-column of a climate model.

Convective clouds are produced by instability in the atmosphere. Air that rises ends up warmer, and so less dense, than the air surrounding it, due to the release of latent heat as water vapour condenses. The heating the clouds produce acts to reduce this instability, leading to a more stable atmosphere. To ensure that this stabilizing effect is included in climate model simulations, convective clouds are represented through what is called a convection parametrization scheme – the stabilization is boiled down to a small number of parameters that model how the clouds act to reduce the instability in a given grid-column. The parametrization scheme then models the action of the clouds in a grid-column by heating the atmosphere higher up, which reduces the instability.

Convection parametrization schemes work by making a series of assumptions about the convective clouds in each grid-column. These include the assumption that there will be many individual convective clouds in grid-columns where convection is active (Fig. 1), and that these clouds will only interact through stabilizing a shared environment. However, in nature, many forms of convective organization are observed, which are not currently represented by convection parametrization schemes.

Figure 1: From Arakawa and Schubert, 1974. Cloud field with many clouds in it – each interacting with each other only by modifying a shared environment.

In my PhD, I am interested in how vertical wind shear can cause the organization of convective cloud fields. Wind shear occurs when the wind is stronger at one height than another. When there is wind shear in the lower part of the atmosphere – the boundary layer – it can organize individual clouds into much larger cloud groups. An example of this is squall lines, which are often seen over the tropics and in mid-latitudes over the USA and China. Squall lines are a type of Mesoscale Convective System (MCS), and MCSs account for a large part of the total precipitation over the tropics – between 50 and 80%. Including their effects in a climate model can therefore have an impact on the distribution of precipitation over the tropics, which is one area where there are substantial discrepancies between climate models and observations.

The goal of my PhD is to work out how to represent shear-induced organization of cloud fields in a climate model’s convection parametrization scheme. The approach I am taking is as follows. First, I need to know where in the climate model the organization of convection is likely to be active. To do this, I have developed a method for examining all of the wind profiles produced by the climate model over the tropics, and grouping them into a set of 10 wind profiles that are probably associated with the organization of convection. The link between organization and each grid-column is made by checking that the atmospheric conditions have enough instability to produce convective clouds, and that there is enough low-level shear to make organization likely. With these wind profiles in hand, where they occur can be worked out (Fig. 2 shows the distribution for one of these profiles). The distributions can be compared with distributions of MCSs from satellite observations, and the similarities between them build confidence that the method is finding wind profiles that are associated with the organization of convection.

Figure 2: Geographical distribution of one of the 10 wind profiles that represents where organization is likely to occur over the tropics. The profile shows a high degree of activity in the north-west tropical Pacific, an area where organization of convection also occurs. This region can be matched to an area of high MCS activity from a satellite derived climatology produced by Mohr and Zipser, 1996.
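The grouping step described above can be sketched as a clustering problem. The snippet below is a minimal illustration rather than the actual method: the input arrays, the CAPE and shear thresholds, and the use of k-means are all assumptions made for the sake of the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_wind_profiles(u, v, cape, shear, cape_min=100.0, shear_min=4.0,
                        n_profiles=10, seed=0):
    """Cluster wind profiles from grid-columns likely to support organized
    convection into a small set of representative profiles.

    u, v  : (n_samples, n_levels) zonal and meridional wind profiles
    cape  : (n_samples,) convective available potential energy (J/kg)
    shear : (n_samples,) low-level shear magnitude (m/s)
    """
    # Keep only columns with enough instability for convection and enough
    # low-level shear for organization (thresholds are illustrative).
    keep = (cape > cape_min) & (shear > shear_min)
    # Stack u and v so each sample is one full wind profile.
    profiles = np.concatenate([u[keep], v[keep]], axis=1)
    km = KMeans(n_clusters=n_profiles, n_init=10, random_state=seed)
    labels = km.fit_predict(profiles)
    return km.cluster_centers_, labels, keep
```

Each cluster centre then plays the role of one of the 10 representative wind profiles whose geographical distribution can be mapped.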

Second, with these profiles, I can run a set of high-resolution idealized models. The purpose of these is to check that the wind profiles do indeed cause the organization of convection, and then to work out a set of relationships that can be used to parametrize the organization that occurs. Given the link between low-level shear and organization, a good place to start is to check that this link appears in my experiments. Fig. 3 shows the correlation between the low-level shear and a measure of organization. A clear relationship holds between these two variables, providing a simple means of parametrizing the degree of organization from the low-level shear in a grid-column.

Figure 3: Correlation of low-level shear (LLS) against a measure of organization (cluster_index). A high degree of correlation is seen, and r-squared values close to 1 indicate that a lot of the variance of cluster_index is explained by the LLS. A p-value of less than 0.001 indicates this is unlikely to have occurred by chance.
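The kind of relationship shown in Fig. 3 can be quantified with an ordinary linear regression. The snippet below does this on synthetic data – `lls` and `cluster_index` here are stand-ins for the real diagnostics, generated with an assumed linear relationship plus noise – and reports the same statistics (r-squared and p-value) discussed in the caption.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
lls = rng.uniform(0.0, 10.0, 50)                      # low-level shear (m/s)
cluster_index = 0.3 * lls + rng.normal(0.0, 0.2, 50)  # organization measure

# Ordinary least-squares fit of organization against low-level shear.
fit = linregress(lls, cluster_index)
r_squared = fit.rvalue ** 2
print(f"slope = {fit.slope:.2f}, r^2 = {r_squared:.2f}, p = {fit.pvalue:.1e}")
```

The fitted slope is exactly the quantity a parametrization would need: an estimate of the degree of organization given the grid-column's low-level shear.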

Finally, I will need to modify a convection parametrization scheme in light of the relationships that have been uncovered and quantified. To do this, the way that the parametrization scheme models the convective cloud field must be changed to reflect the degree of organization of the clouds. One way this could be done would be by changing the rate at which environmental air mixes into the clouds (the entrainment rate), based on the amount of organization predicted by the new parametrization. From the high-resolution experiments, the strength of the clouds was also seen to be related to the degree of organization, and this implies that a lower value for the entrainment rate should be used when the clouds are organized.
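As a purely illustrative sketch of this last idea – the linear functional form and the `alpha` parameter are assumptions, not the scheme's actual formulation – the entrainment rate could be reduced as the predicted organization increases:

```python
def organized_entrainment(eps_ref, organization, alpha=0.5):
    """Scale a reference entrainment rate down as organization increases.

    eps_ref      : entrainment rate for unorganized convection (per metre)
    organization : predicted degree of organization, 0 (none) to 1 (full)
    alpha        : strength of the reduction (illustrative value)
    """
    # Clamp organization to [0, 1] so the rate stays positive and bounded.
    org = min(max(organization, 0.0), 1.0)
    return eps_ref * (1.0 - alpha * org)
```

Organized clouds then entrain less environmental air, making them stronger, consistent with what the high-resolution experiments suggest.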

The proof of the pudding is, as they say, in the eating. To check that this change to a parametrization scheme produces sensible changes to the climate model, it will be necessary to make the changes and to run the model. Then the differences in, for example, the distribution of precipitation between the control and the changed climate model can be tested. The hope is then that the precipitation distributions in the changed model will agree more closely with observations of precipitation, and that this will lead to increased confidence that the model is representing more of the aspects of convection that are important for its behaviour.

  • Arakawa, A., & Schubert, W. H. (1974). Interaction of a cumulus cloud ensemble with the large-scale environment, Part I. Journal of the Atmospheric Sciences, 31(3), 674-701.
  • Mohr, K. I., & Zipser, E. J. (1996). Mesoscale convective systems defined by their 85-GHz ice scattering signature: Size and intensity comparison over tropical oceans and continents. Monthly Weather Review, 124(11), 2417-2437.

Workshop on Predictability, dynamics and applications research using the TIGGE and S2S ensembles

Email: s.h.lee@pgr.reading.ac.uk

From April 2nd-5th I attended the workshop on Predictability, dynamics and applications research using the TIGGE and S2S ensembles at ECMWF in Reading. TIGGE (The International Grand Global Ensemble, formerly THORPEX International Grand Global Ensemble) and S2S (Sub-seasonal-to-Seasonal) are datasets hosted primarily at ECMWF as part of initiatives by the World Weather Research Programme (WWRP) and the World Climate Research Programme (WCRP). TIGGE has been running since 2006 and stores operational medium-range forecasts (up to 16 days) from 10 global weather centres, whilst S2S has been operational since 2015 and houses extended-range (up to 60 days) forecasts from 11 different global weather centres (e.g. ECMWF, NCEP, UKMO, Météo-France, CMA). The benefit of these centralised datasets is their common format, which enables straightforward data requests and multi-model analysis with minimal data manipulation, allowing scientists to focus on doing science!

Attendees of the workshop came from around the world (not just Europe) although there was a particularly sizeable cohort from Reading Meteorology and NCAS.

Figure 1: Workshop group photo featuring the infamous ECMWF ducks!

In my PhD so far, I have been making extensive use of the S2S database – looking at both operational and re-forecast datasets to assess stratospheric predictability and biases – and it was rewarding to attend the workshop and see what a diverse range of applications the datasets have across the world. From the oceans to the stratosphere, tropics to poles, predictability mathematics to farmers and energy markets, it was immediately very clear that TIGGE and S2S are wonderfully useful tools for both the research and applications communities. A particular aim of the workshop was to discuss “user-oriented variables” – derived variables from model output which represent the meteorological conditions to which a user is sensitive (such as wind speed at a specific height for wind power applications).

The workshop mainly consisted of 15-minute conference-style talks in the main lecture theatre and poster sessions, but the final two days also featured parallel working group sessions of about 15 members each. The topics discussed in the working groups can be found here. I was part of working group 4, where we discussed dynamical processes and ensemble diagnostics. We reflected on some of the points raised by speakers over the preceding days – particular attention was given to the diagnostics needed to understand the dynamical effects of model biases (such as their influence on Rossby wave propagation and weather-regime transitions), alongside what other variables researchers needed to make full use of the potential S2S and TIGGE offer (I don’t think I could say “more levels in the stratosphere!” loudly enough – TIGGE does not go above 50 hPa, which is not useful when studying stratospheric warming events defined at 10 hPa).

Data analysis tools are also becoming increasingly important in atmospheric science. Several useful and perhaps less well-known tools were presented at the workshop – Mio Matsueda’s TIGGE and S2S museum websites provide a wide variety of pre-prepared plots of variables like the NAO and MJO, which are excellent for exploratory data analysis without needing many gigabytes of data downloads. Figure 2 shows an example of NAO forecasts from S2S data – the systematic negative NAO bias at longer lead-times was frequently discussed during the workshop, whilst the inability to capture the transition to a positive NAO regime beginning around February 10th is worth further analysis. In addition to these, IRI’s Data Library provides powerful tools to manipulate, analyse, plot, and download data from various sources including S2S, with server-side computation.


Figure 2: Courtesy of the S2S Museum, this figure shows S2S model forecasts of the NAO launched on January 31st 2019. The verifying scenario is shown in black, with ensemble means in grey. All models exhibited a negative ensemble-mean bias and did not capture the development of a positive NAO after February 10th.

It’s inspiring and motivating to be part of the sub-seasonal forecast research community and I’m excited to present some of my work in the near future!

TIGGE and S2S can be accessed via ECMWF’s Public Datasets web interface.

On relocating to the Met Office for five weeks of my PhD

Some PhD projects are co-organised by an industrial CASE partner which provides supervisory support and additional funding. As part of my CASE partnership with the UK Met Office, in January I had the opportunity to spend 5 weeks at the Exeter HQ, which proved to be a fruitful experience. As three out of my four supervisors are based there, it was certainly a convenient set-up to seek their expertise on certain aspects of my PhD project!

One part of my project aims to understand how certain neighbourhood-based verification methods affect the assessed accuracy of surface air quality forecasts. Routine verification of a forecast model against observations is necessary to provide the most accurate forecast possible. Ensuring that this happens is crucial, as a good forecast may help keep the public aware of potential adverse health risks resulting from elevated pollutant concentrations.

The project deals with two sides of one coin: evaluating forecasts of regional surface pollutant concentrations; and evaluating those of meteorological fields such as wind speed, precipitation, relative humidity or temperature. All of the above have unique characteristics: they vary in resolution, spatial scale, homogeneity, randomness… The behaviour of the weather and pollutant variables is also tricky to compare against one another because the locations of their numerous measurement sites nearly never coincide, whereas the forecast encompasses the entirety of the domain space. This is kind of the crux of this part of my PhD: how can we use these irregularly located measurements to our advantage in verifying the skill of the forecast in the most useful way? And – zooming out still – can we determine the extent to which the surface air pollution forecast is dependent on some of those aforementioned weather variables? And can this knowledge (once acquired!) be used to further improve the pollution forecast?

Side view of the UK Met Office on a cold day in February.

While at the Met Office, I began my research into methods which analyse forecast skill when a model “neighbourhood” of a particular size around a particular point-observation is evaluated. These methods are being developed as part of a toolkit for the evaluation of high-resolution forecasts, which can be (and usually are) more accurate than a lower-resolution equivalent, although traditional metrics (e.g. root mean square error (RMSE) or mean error (ME)) often fail to demonstrate the improvement (Mittermaier, 2014). They can also fall victim to various verification errors such as the double-penalty problem: an ‘event’ might be missed at a particular time at one grid-point because it was actually forecast in the neighbouring grid-point one time-step out, so the RMSE counts this error in both the spatial and temporal axes. Not fair, if you ask me. So as NWP continues to increase in resolution, there is a need for robust verification methods which somewhat relax the spatial (or temporal) restriction on precise forecast-to-observation matching (Ebert, 2008).

One way to proceed is via a ‘neighbourhood’ approach, which treats a deterministic forecast almost as an ensemble by considering each of the grid-points around an observation as an individual forecast and formulating a probabilistic score. Neighbourhoods are made up of a varying number of model grid-points, e.g. 3×3, 5×5 or even bigger. A skill score such as the ranked probability score (RPS) or Brier score is calculated using the cumulative probability distribution across the neighbourhood of the exceedance of a sensible pollutant concentration threshold. So, for example, we can ask: what proportion of a 5×5 neighbourhood around an observation has correctly forecast an observed exceedance (a ‘hit’)? What if an exceedance was forecast, but the observed quantity didn’t reach that magnitude (a ‘false alarm’)? And how do these scores change when larger (or smaller) neighbourhoods are considered? And, if these spatial verification methods prove informative, how could they be implemented in operational air quality forecast verification? All these questions will hopefully have some answers in the near future and form a part of my PhD thesis!
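A minimal sketch of the neighbourhood idea (this is not the HiRA/MET implementation; the data layout and threshold are illustrative) is to take, for each observation, the fraction of neighbourhood grid-points exceeding the threshold as a probability, and score it against the observed exceedance with a Brier score:

```python
import numpy as np

def neighbourhood_brier(forecast, obs_points, threshold, half_width=1):
    """Brier score for neighbourhood exceedance probabilities.

    forecast   : 2D array of forecast concentrations on the model grid
    obs_points : list of (i, j, observed_value) at station grid locations
    threshold  : exceedance threshold (e.g. a pollutant limit value)
    half_width : 1 -> 3x3 neighbourhood, 2 -> 5x5, and so on
    """
    scores = []
    for i, j, value in obs_points:
        # Clip the neighbourhood at the domain edges.
        window = forecast[max(i - half_width, 0): i + half_width + 1,
                          max(j - half_width, 0): j + half_width + 1]
        prob = np.mean(window > threshold)  # fraction of 'hits' in the window
        outcome = float(value > threshold)  # observed exceedance (0 or 1)
        scores.append((prob - outcome) ** 2)
    return float(np.mean(scores))
```

A score of 0 means every neighbourhood probability matched the observed outcome exactly; larger neighbourhoods trade sharpness for tolerance of small displacement errors, which is exactly how the double-penalty problem is relaxed.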

Although these kinds of methods have been used for meteorological variables, they haven’t yet been widely researched in the context of regional air quality forecasts. The verification framework for this is called HiRA (High Resolution Assessment), which is part of the wider verification toolkit Model Evaluation Tools (which, considering it is being developed as a means of uniformly assessing high-resolution meteorological forecasts, has the most unhelpful acronym: MET). It is quite an exciting opportunity to be involved in the testing and evaluation of this new set of verification tools for a surface pollution forecast at a regional scale, and I am very grateful for it. Also, having the opportunity to work at the Met Office and “pretend” to be a real research scientist for a while is awesome!

Email: k.m.milczewska@pgr.reading.ac.uk

Extending the predictability of flood hazard at the global scale

Email: rebecca.emerton@reading.ac.uk

When I started my PhD, there were no global scale operational seasonal forecasts of river flow or flood hazard. Global overviews of upcoming flood events are key for organisations working at the global scale, from water resources management to humanitarian aid, and for regions where no other local or national forecasts are available. While GloFAS (the Global Flood Awareness System, run by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the European Commission Joint Research Centre (JRC) as part of the Copernicus Emergency Management Services) was producing operational, openly-available flood forecasts out to 30 days ahead, there was a need for more extended-range forecast information. Often, due to a lack of hydrological forecasts, seasonal rainfall forecasts are used as a proxy for flood hazard – however, the link between precipitation and floodiness is nonlinear, and recent research has shown that seasonal rainfall forecasts are not necessarily the best indicator of potential flood hazard. The aim of my PhD research was to look into ways in which we could provide earlier warning information, several weeks to months ahead, using hydrological analysis in addition to the meteorology.

Flooding in Trujillo, Peru, March 2017 (Photo: Presidencia Perú on Twitter)

Broadly speaking, there are two key ways in which to provide early warning information on seasonal timescales: (1) through statistical analysis based on large-scale climate variability and teleconnections, and (2) by producing dynamical seasonal forecasts using coupled ocean-atmosphere GCMs. Over the past 4.5 years, I worked on providing hydrologically-relevant seasonal forecast products using these two approaches, at the global scale. This blog post will give a quick overview of the two new forecast products we produced as part of this research!

Can we use El Niño to predict flood hazard?

ENSO (the El Niño–Southern Oscillation) is known to influence river flow and flooding across much of the globe, and often, statistical historical probabilities of extreme precipitation during El Niño and La Niña (the two extremes of ENSO climate variability) are used to provide information on likely flood impacts. Due to its global influence on weather and climate, we decided to assess whether ENSO can be used as a predictor of flood hazard at the global scale, by assessing the links between ENSO and river flow globally, and estimating historical probabilities of high and low river flow equivalent to those already used for meteorological variables.

With a lack of sufficient river flow observations across much of the globe, we needed to use a reanalysis dataset – but global reanalysis datasets for river flow are few and far between, and none extended beyond ~40 years (a sample of ≤10 El Niños and ≤13 La Niñas). We ended up producing a 20th-century global river flow reconstruction, by forcing the CaMa-Flood hydrological model with ECMWF’s ERA-20CM atmospheric reconstruction, to produce a 10-member river flow dataset covering 1901–2010, which we called ERA-20CM-R.

Maps of the historical probabilities of high and low river flow during El Niño.

Using this dataset, we calculated the percentage of past El Niño and La Niña events, during which the monthly mean river flow exceeded a high flow threshold (the 75th percentile of the 110-year climatology) or fell below a low flow threshold (the 25th percentile), for each month of an El Niño / La Niña. This percentage is then taken as the probability that high or low flow will be observed in future El Niño/La Niña events. Maps of these probabilities are shown above, for El Niño, and all maps for both El Niño and La Niña can be found here. When comparing to the same historical probabilities calculated for precipitation, it is evident that additional information can be gained from considering the hydrology. For example, the River Nile in northern Africa is likely to see low river flow, even though the surrounding area is likely to see more precipitation – because it is influenced more by changes in precipitation upstream. In places that are likely to see more precipitation but in the form of snow, there would be no influence on river flow or flood hazard during the time when more precipitation is expected. However, several months later, there may be no additional precipitation expected, but there may be increased flood hazard due to the melting of more snow than normal – so we’re able to see a lagged influence of ENSO on river flow in some regions.
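The probability calculation described above can be sketched in a few lines; this is an illustrative reimplementation (the input arrays and names are assumed), applied to one calendar month at a time. The low-flow case is the same with the 25th percentile and a less-than comparison.

```python
import numpy as np

def conditional_flow_probability(flow, years, event_years, quantile=75):
    """Percentage of past ENSO events with monthly mean flow above the
    high-flow threshold (the given percentile of the full climatology).

    flow        : (n_years,) monthly mean river flow for one calendar month
    years       : (n_years,) year of each flow value
    event_years : years classed as El Nino (or La Nina) for this month
    """
    # Threshold from the full record, e.g. the 75th percentile of 110 years.
    threshold = np.percentile(flow, quantile)
    # Select only the years during ENSO events and count exceedances.
    event_flow = flow[np.isin(years, event_years)]
    return 100.0 * np.mean(event_flow > threshold)
```

The returned percentage is then read directly as the forecast probability of high flow in a future El Niño or La Niña.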

While there are locations where these probabilities are high and can provide a useful forecast of hydrological extremes, across much of the globe, the probabilities are lower and much more uncertain (see here for more info on uncertainty in these forecasts) than might be useful for decision-making purposes.

Providing openly-available seasonal river flow forecasts, globally

For the next ‘chapter’ of my PhD, we looked into the feasibility of providing seasonal forecasts of river flow at the global scale. Providing global-scale flood forecasts in the medium-range has only become possible in recent years, and extended-range flood forecasting was highlighted as a grand challenge and likely future development in hydro-meteorological forecasting.

To do this, I worked with Ervin Zsoter at ECMWF, to drive the GloFAS hydrological model (Lisflood) with reforecasts from ECMWF’s latest seasonal forecasting system, SEAS5, to produce seasonal forecasts of river flow. We also forced Lisflood with the new ERA5 reanalysis, to produce an ERA5-R river flow reanalysis with which to initialise Lisflood, and to provide a climatology. The system set-up is shown in the flowchart below.

Flowchart of the GloFAS-Seasonal system set-up.

I also worked with colleagues at ECMWF to design forecast products for a GloFAS seasonal outlook, based on a combination of features from the GloFAS flood forecasts, and the EFAS (the European Flood Awareness System) seasonal outlook, and incorporating feedback from users of EFAS.

After ~1 year of working on getting the system set up and finalising the forecast products, including a four-month research placement at ECMWF, the first GloFAS-Seasonal forecast was released in November 2017, with the release of SEAS5. GloFAS-Seasonal is now running operationally at ECMWF, providing forecasts of high and low weekly-averaged river flow for the global river network, up to 4 months ahead, with 3 new forecast layers available through the GloFAS interface. These provide a forecast overview for 307 major river basins, a map of the forecast for the entire river network at the sub-basin scale, and ensemble hydrographs at thousands of locations across the globe (which change with each forecast depending on forecast probabilities). New forecasts are produced once per month, and released on the 10th of each month. You can find more information on each of the different forecast layers and the system set-up here, and you can access the (openly available) forecasts here. ERA5-R, ERA-20CM-R and the GloFAS-Seasonal reforecasts are also all freely available – just get in touch! GloFAS-Seasonal will continue to be developed by ECMWF and the JRC, and has already been updated to v2.0, including a calibrated version of the hydrological model.

Screenshot of the GloFAS seasonal outlook at www.globalfloods.eu

So, over the course of my PhD, we developed two new seasonal forecasts for hydrological extremes, at the global scale. You may be wondering whether they’re skilful, or in fact, which one provides the most useful forecasts! For information on the skill or ‘potential usefulness’ of GloFAS-Seasonal, head to our paper, and stay tuned for a paper coming soon (hopefully! [update: this paper has just been accepted and can be accessed online here]) on the ‘most useful approach for forecasting hydrological extremes during El Niño’, in which we compare the skill of the two forecasts at predicting observed high and low flow events during El Niño.

 

With thanks to my PhD supervisors & co-authors:

Hannah Cloke1, Liz Stephens1, Florian Pappenberger2, Steve Woolnough1, Ervin Zsoter2, Peter Salamon3, Louise Arnal1,2, Christel Prudhomme2, Davide Muraro3

1University of Reading, 2ECMWF, 3European Commission Joint Research Centre

The Circumglobal Teleconnection and its Links to Seasonal Forecast Skill for the European Summer

Email: j.beverley@pgr.reading.ac.uk

Recent extreme weather events such as the central European heatwave in 2003, flooding in the UK in 2007, and even the recent dry summer in the UK in 2018, have highlighted the need for more accurate long-range forecasts for the European summer. Recent research has led to improvements in European winter seasonal forecasts, however summer forecast skill remains relatively low. One potential source of predictability for Europe is the Indian summer monsoon, which can affect European weather via a global wave train known as the “Circumglobal Teleconnection” (CGT).

Figure 1: One-point correlation between 200 hPa geopotential height at the base point (35°-40°N, 60°-70°E) and 200 hPa geopotential height elsewhere in the ERA-Interim (1981–2014) reanalysis dataset, for August. The boxes indicate the regions defined as the “centres of action” of the CGT – these are North Pacific (NPAC), North America (NAM), Northwest Europe (NWEUR), Ding and Wang (D&W) and East Asia (EASIA).

The CGT was first identified by Ding and Wang (2005) as having a major role in modulating observed weather patterns in the Northern Hemisphere summer. Using a 200 hPa geopotential height index centred in west-central Asia (35°-40°N, 60°-70°E), they constructed a one-point correlation map of geopotential height with reference to this index (reproduced in Figure 1). From this, they identified a wavenumber-5 structure in which the pressure variations over the Northeast Atlantic, East Asia, North Pacific and North America are all nearly in phase with the variations over west-central Asia (these are known as the “centres of action”). They also showed that the CGT is associated with significant temperature and precipitation anomalies in Europe, so accurate representation of this mechanism in seasonal forecast models could provide an important source of subseasonal-to-seasonal forecast skill.
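A one-point correlation map like Figure 1 can be computed with some basic array arithmetic. The sketch below is generic (the array shapes and slice arguments are assumptions): it builds the base-region index as an area mean, then correlates it with the height field at every grid point.

```python
import numpy as np

def one_point_correlation_map(z200, lat_slice, lon_slice):
    """Correlate an area-mean 200 hPa height index with height everywhere.

    z200 : (n_years, nlat, nlon) e.g. August-mean geopotential height
    lat_slice, lon_slice : index slices covering the base region
           (35-40N, 60-70E in Ding and Wang 2005)
    """
    # Base index: area mean over the base region for each year.
    index = z200[:, lat_slice, lon_slice].mean(axis=(1, 2))
    # Remove the time mean to work with anomalies.
    za = z200 - z200.mean(axis=0)
    ia = index - index.mean()
    # Pearson correlation of the index with every grid point.
    cov = np.einsum('t,tij->ij', ia, za) / len(ia)
    return cov / (za.std(axis=0) * ia.std())
```

Points correlating strongly (positively or negatively) with the base region trace out the wavenumber-5 structure of the CGT.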

The model used here is a version of the European Centre for Medium-Range Weather Forecasts (ECMWF)’s coupled seasonal forecast model. Reforecasts are initialised on 1st May and are run for four months, so cover May-August, with start dates from 1981-2014. The skill of the model 200 hPa geopotential height is shown in Figure 2, defined as the correlation between the model ensemble mean and ERA-Interim. The model has good skill in May (to be expected given that the reforecasts are initialised in May) but in June, July and August areas of zero or negative correlation develop across much of the northern hemisphere extratropics. The areas of reduced skill align closely with the location of the centres of action of the CGT shown in Figure 1, suggesting that there is a link between the model skill and the model representation of the CGT.

Figure 2: Model ensemble mean skill for 200 hPa geopotential height as defined as the correlation between ERA-Interim and model ensemble mean for (a) May (b) June (c) July and (d) August

To determine how well the model represents the CGT, Figure 3 shows the correlation between the D&W region and the other centres of action of the CGT, as defined in Figure 1. Focussing on August (as August has the strongest CGT pattern) it can be seen that the model correlations, indicated by the box and whisker plots, are weaker than in observations (red diamond) for the D&W vs. North Pacific (NPAC), North America (NAM) and Northwest Europe (NWEUR) regions. This indicates that the model has a weak representation of the wavetrain associated with the CGT.

Figure 3: Distribution of correlation coefficients for the D&W Index correlated against the other centres of action of the CGT. The box plots represent the upper and lower quartiles, and the whiskers extend to the 5th and 95th percentiles. The black horizontal line represents the median value and the red diamond the observed correlation coefficient from ERA-Interim.

There are likely to be several reasons for the weak representation of the CGT in the model. One important factor is the presence of a northerly jet bias in the model across much of the Northern Hemisphere. This can be seen in Figure 4, which shows the model jet biases relative to ERA-Interim in the coloured contours, and the observed zonal wind in the black contours. The dipole structure of the biases which exists across much of the hemisphere, particularly in June, July and August, indicates that the model jet stream is located too far to the north. This means that Rossby waves forced in this region will have different wave propagation characteristics to reality – they may propagate at the incorrect speed, in the wrong direction or may not propagate at all, and this is likely to be an important factor in the weak representation of the CGT in the model.

Figure 4: Model 200 hPa zonal wind bias (filled contours, m/s), defined as the model ensemble mean minus ERA-Interim zonal wind, and ERA-I 200 hPa zonal wind (black contours) for (a) May (b) June (c) July and (d) August. The location of the centres of action of the CGT are marked with white crosses.

Other potential factors involved are a poor representation of the link between monsoon precipitation and the geopotential height in west-central Asia (which was shown by Ding and Wang (2007) to be important in the maintenance of the CGT) and errors in the forcing of Rossby waves associated with the monsoon. For a more detailed explanation of these, see my paper in Climate Dynamics (Beverley et al. 2018). It seems likely that the pattern of reduced skill in Figure 2, with negative correlations located at the centres of action of the CGT, including over Europe, is related to the poor representation of the CGT in the model. This raises the question of whether an improvement in the model’s representation of the CGT would lead to an improvement in forecast skill for the European summer. To address this question, sensitivity experiments have been carried out, in which the observed circulation is imposed in several centres of action along the CGT pathway to explore the impact on forecast skill for European summer weather.

References

Beverley, J. D., S. J. Woolnough, L. H. Baker, S. J. Johnson and A. Weisheimer, 2018: The northern hemisphere circumglobal teleconnection in a seasonal forecast model and its relationship to European summer forecast skill. Clim. Dyn. https://doi.org/10.1007/s00382-018-4371-4

Ding, Q., and B. Wang, 2005: Circumglobal teleconnection in the northern hemisphere summer. J. Clim. 18, 3483–3505.  https://doi.org/10.1175/JCLI3473.1

Ding, Q., and B. Wang, 2007: Intraseasonal teleconnection between the summer Eurasian wave train and the Indian monsoon. J. Clim. 20, 3751-3767. https://doi.org/10.1175/JCLI4221.1

APPLICATE General Assembly and Early Career Science event


From 28th January to 1st February I attended the APPLICATE (Advanced Prediction in Polar regions and beyond: modelling, observing system design and LInkages associated with a Changing Arctic climaTE (bold choice)) General Assembly and Early Career Science event at ECMWF in Reading. APPLICATE is one of the EU Horizon 2020 projects, with the aim of improving weather and climate prediction in the polar regions. The Arctic is a region of rapid change, with decreases in sea ice extent (Stroeve et al., 2012) and changes to ecosystems (Post et al., 2009). These changes are leading to increased interest in the Arctic for business opportunities, such as the opening of shipping routes (Aksenov et al., 2017). There is also a lot of current work on the link between changes in the Arctic and mid-latitude weather (Cohen et al., 2014); however, much uncertainty remains. These changes could have large impacts on human life, so a concerted scientific effort is needed to develop our understanding of Arctic processes and their links to the mid-latitudes. This is the gap that APPLICATE aims to fill.

The overarching goal of APPLICATE is to develop enhanced predictive capacity for weather and climate in the Arctic and beyond, and to determine the influence of Arctic climate change on Northern Hemisphere mid-latitudes, for the benefit of policy makers, businesses and society.

APPLICATE Goals & Objectives

Attending the General Assembly was a great opportunity to get an insight into how large scientific projects work. The project is made up of different work packages, each with a different focus, and within each work package there is a set of specific tasks and deliverables spread throughout the project. At the General Assembly there were a number of breakout sessions where the progress of the working groups was discussed. It was interesting to see how these discussions worked and how issues, such as the delay in the CMIP6 experiments, are handled. The General Assembly also allows the different work packages to communicate with each other to plan ahead, and for results to be shared.

An overview of APPLICATE's management structure, taken from: https://applicate.eu/about-the-project/project-structure-and-governance

One of the big questions APPLICATE is trying to address is the link between Arctic sea ice and the Northern Hemisphere mid-latitudes. Many of the presentations covered different aspects of this, such as how including Arctic observations in forecasts affects their skill over Eurasia. There were also initial results from some of the Polar Amplification Model Intercomparison Project (PAMIP) experiments, a project that APPLICATE has helped coordinate.

Attendees of the Early Career Science event co-organised with APECS

At the end of the week there was the Early Career Science Event, which consisted of a number of talks on softer skills. One of the most interesting activities was based around engaging with stakeholders. To understand the different needs of a variety of stakeholders in the Arctic (from local communities to shipping companies), we had to lobby for different policies on their behalf. This was also a great chance to meet other early career scientists working in the field and get to know each other a bit more.

What a difference a day makes: heavy snow getting ECMWF's ducks in the polar spirit.

Email: sally.woodhouse@pgr.reading.ac.uk

References

Aksenov, Y. et al., 2017. On the future navigability of Arctic sea routes: High-resolution projections of the Arctic Ocean and sea ice. Marine Policy, 75, pp.300–317.

Cohen, J. et al., 2014. Recent Arctic amplification and extreme mid-latitude weather. Nature Geoscience, 7(9), pp.627–637.

Post, E. et al., 2009. Ecological Dynamics Across the Arctic Associated with Recent Climate Change. Science, 325, pp.1355–1358.

Stroeve, J.C. et al., 2012. Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations. Geophysical Research Letters, 39(16), pp.1–7.

Evaluating aerosol forecasts in London

Email: e.l.warren@pgr.reading.ac.uk

Aerosols in urban areas can greatly impact visibility, radiation budgets and our health (Chen et al., 2015). Aerosols are the liquid and solid particles suspended in the air that, alongside noxious gases like nitrogen dioxide, make up the pollution in cities that we often hear about on the news – breaking safety limits in cities across the globe, from London to Beijing. Air quality researchers try to monitor and predict aerosols to inform local councils, so they can plan and reduce local emissions.

Figure 1: Smog over London (Evening Standard, 2016).

Recently, large numbers of LiDARs (Light Detection and Ranging instruments) have been deployed across Europe and elsewhere – in part to observe aerosols. They effectively shoot beams of light into the atmosphere, which reflect off atmospheric constituents like aerosols. From each beam, many measurements of reflected light are taken very quickly over time – and as later returns come from further away, an entire vertical profile of reflectance can be constructed. Because the beam is attenuated as it travels through the atmosphere, the measured quantity is commonly called attenuated backscatter (β). In urban areas, measurements away from the surface like these are sorely needed (Barlow, 2014), so these instruments could be extremely useful. When it comes to predicting aerosols, numerical weather prediction (NWP) models are increasingly being considered as an option. However, the models themselves are very computationally expensive to run, so they tend to have only a simple representation of aerosol. For example, for explicitly resolved aerosol, the Met Office UKV model (1.5 km) carries just a dry mass of aerosol [kg kg⁻¹] (Clark et al., 2008). That's all. It gets transported around by the model dynamics, but any other aerosol characteristics, from size to number, need to be parameterised from the mass to limit computational cost. So how do we know whether the model's estimates of aerosol are actually correct? A direct comparison between NWP aerosol and β is not possible because, fundamentally, they are different variables – so to bridge the gap, a forward operator is needed.
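The time-to-range conversion behind these profiles can be sketched very simply: light travels out and back, so a return received at time t scattered from range c·t/2. This is a generic illustration, not the processing chain of any particular instrument, and the function names are invented:

```python
# Convert lidar return times to the heights they scattered from.
# Out-and-back path means range = c * t / 2.
C = 3.0e8  # speed of light [m/s]

def sample_times_to_ranges(sample_times_s):
    """Map each return-time sample [s] to its scattering range [m]."""
    return [C * t / 2.0 for t in sample_times_s]

# Sampling the return every 100 ns gives range gates every 15 m,
# so a microsecond-long record already spans a 150 m profile.
times = [i * 100e-9 for i in range(1, 5)]
print(sample_times_to_ranges(times))  # approximately [15.0, 30.0, 45.0, 60.0]
```

This is why a single pulse yields a whole vertical profile of β rather than a point measurement.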

In my PhD I helped develop such a forward operator (aerFO; Warren et al., 2018). It's a model that takes aerosol mass (and relative humidity) from NWP model output and estimates the attenuated backscatter that would result (βm). βm can then be compared directly to the observed attenuated backscatter (βo), and the NWP aerosol output evaluated (e.g. to see whether the aerosol is too high or too low). The aerFO was also made to be computationally cheap and flexible, so if you had more information than just the mass, the aerFO would be able to use it!
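To illustrate the idea only – this is not the aerFO, whose actual parametrisations are described in Warren et al. (2018), and every number and functional form below is invented – a toy forward operator might parameterise particle number from the mass, swell the particles with relative humidity, and return something proportional to backscatter:

```python
import math

def toy_forward_operator(mass_kg_per_kg, rh_frac,
                         dry_radius_m=0.1e-6, particle_density=1500.0):
    """Crude sketch of a forward operator: NWP aerosol mass ->
    backscatter-like quantity. Illustrative values throughout."""
    # 1. Parameterise number concentration from mass, assuming identical
    #    spherical particles of a fixed dry radius.
    dry_volume = (4.0 / 3.0) * math.pi * dry_radius_m ** 3
    number_per_kg_air = mass_kg_per_kg / (particle_density * dry_volume)

    # 2. Swell the particles with relative humidity (invented growth law:
    #    the growth factor rises steeply as RH approaches saturation).
    growth_factor = (1.0 - rh_frac) ** -0.25
    wet_radius = dry_radius_m * growth_factor

    # 3. Backscatter scales with the total scattering cross-section
    #    (number x radius squared) -- a stand-in for full Mie calculations.
    return number_per_kg_air * math.pi * wet_radius ** 2

# The same aerosol mass in a moist-biased column gives a larger modelled beta:
beta_control = toy_forward_operator(10e-9, 0.70)
beta_moist = toy_forward_operator(10e-9, 0.95)
print(beta_moist / beta_control)  # > 1: swollen particles scatter more
```

Even this toy version shows why the comparison with βo is informative: errors in the NWP humidity or mass feed directly through to the modelled backscatter, which is exactly the behaviour discussed below.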

Among the aerFO's several uses (Warren et al., 2018, n.d.) is the evaluation of NWP model output. Figure 2 shows the aerFO in action, with a comparison between βm and the observed attenuated backscatter (βo) measured at 905 nm by a ceilometer (a type of LiDAR) on 14th April 2015 at Marylebone Road in London. βm was far too high in the morning of this day. We found that the original scheme the UKV used to parameterise urban surface effects in London was producing a persistent cold bias in the morning. The cold bias led to a relative humidity that was too high, so the aerFO condensed too much water onto the aerosol particles, causing them to swell too much; bigger particles mean a bigger βm, hence the overestimation. Not only was the relative humidity too high, the boundary layer in the NWP model was also developing too late in the day. Normally, when the surface warms up enough, convection starts, which acts to mix aerosol up through the boundary layer and dilute it near the surface. The cold bias delayed this boundary layer development, so the aerosol concentration near the surface remained high for too long. More mass led the aerFO to parameterise larger sizes and total numbers of particles, and so to overestimate βm. This cold bias effect was seen across several cases using the old scheme but was notably smaller in cases using a newer urban surface scheme called MORUSES (Met Office – Reading Urban Surface Exchange Scheme). One of the main aims of MORUSES was to improve the representation of energy transfer in urban areas, and at least to us it seemed to be doing a better job!

Figure 2: Vertical profiles of attenuated backscatter [m−1 sr−1] (log scale) that are (a, g) observed (βo), with estimated mixing layer height (red crosses; Kotthaus and Grimmond, 2018), and (b, h) forward modelled (βm) using the aerFO. (c, i) Attenuated backscatter difference (βm – βo) calculated using the hourly βm vertical profile and the vertical profile of βo nearest in time; (d, j) aerosol mass mixing ratio (m) [μg kg−1]; (e, k) relative humidity (RH) [%]; and (f, l) air temperature (T) [°C] at Marylebone Road (MR) on 14th April 2015.

References

Barlow, J.F., 2014. Progress in observing and modelling the urban boundary layer. Urban Clim. 10, 216–240. https://doi.org/10.1016/j.uclim.2014.03.011

Chen, C.H., Chan, C.C., Chen, B.Y., Cheng, T.J., Leon Guo, Y., 2015. Effects of particulate air pollution and ozone on lung function in non-asthmatic children. Environ. Res. 137, 40–48. https://doi.org/10.1016/j.envres.2014.11.021

Clark, P.A., Harcourt, S.A., Macpherson, B., Mathison, C.T., Cusack, S., Naylor, M., 2008. Prediction of visibility and aerosol within the operational Met Office Unified Model. I: Model formulation and variational assimilation. Q. J. R. Meteorol. Soc. 134, 1801–1816. https://doi.org/10.1002/qj.318

Warren, E., Charlton-Perez, C., Kotthaus, S., Lean, H., Ballard, S., Hopkin, E., Grimmond, S., 2018. Evaluation of forward-modelled attenuated backscatter using an urban ceilometer network in London under clear-sky conditions. Atmos. Environ. 191, 532–547. https://doi.org/10.1016/j.atmosenv.2018.04.045

Warren, E., Charlton-Perez, C., Kotthaus, S., Marenco, F., Ryder, C., Johnson, B., Lean, H., Ballard, S., Grimmond, S., n.d. Observed aerosol characteristics to improve forward-modelled attenuated backscatter. Atmos. Environ. Submitted