## The (real) butterfly effect: the impact of resolving the mesoscale range

What exactly does the ‘butterfly effect’ mean? Many people attribute the butterfly effect to the famous 3-dimensional non-linear model of Lorenz (1963), whose attractor looks like a butterfly when viewed from a particular angle. While that model is an important foundation of chaos theory (establishing that three dimensions, which the Poincaré-Bendixson Theorem shows are necessary for chaos in continuous systems, are also sufficient), the term ‘butterfly effect’ was not coined until 1972 (Palmer et al. 2014), based on a scientific presentation that Lorenz gave on a more radical, more recent work (Lorenz 1969) on the predictability barrier in multi-scale fluid systems. In that work, Lorenz demonstrated that under certain conditions, small-scale errors grow faster than large-scale errors in such a way that the predictability horizon cannot be extended beyond an absolute limit by reducing the initial error (unless the initial error is exactly zero). Such limited predictability, the butterfly effect as understood in this context, has now become a ‘canon in dynamical meteorology’ (Rotunno and Snyder 2008). Recent studies with advanced numerical weather prediction (NWP) models estimate this predictability horizon to be on the order of 2 to 3 weeks (Buizza and Leutbecher 2015; Judt 2018), in agreement with Lorenz’s original result.

The predictability properties of a fluid system depend primarily on its energy spectrum; the nature of the dynamics per se plays only a secondary role (Rotunno and Snyder 2008). It is well known that a spectral slope shallower than -3 is associated with limited predictability, whereas a slope equal to or steeper than -3 is associated with unlimited predictability (Lorenz 1969; Rotunno and Snyder 2008). This can be understood by analysing the energy spectrum of the error field. As shown in Figure 1, the error grows roughly uniformly across scales when predictability is unlimited, and ‘cascades’ upscale when predictability is limited. In the latter case, the error spectra peak at the small scales, where the growth rate is fastest.
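One way to see why the spectral slope matters is a turnover-time argument: the time for an error to cascade up from a small scale can be estimated as a sum of eddy turnover times $\tau(k) \sim (k^3 E(k))^{-1/2}$ over octaves of wavenumber. The following sketch (idealised power-law spectra, not the Lorenz (1969) model itself) shows that this sum converges for a $-\frac{5}{3}$ slope but diverges for a $-3$ slope:

```python
def cascade_time(slope, n_octaves):
    """Sum eddy turnover times tau_k ~ (k^3 E(k))^(-1/2), with E(k) = k^slope,
    over octaves k = 2, 4, 8, ... down to ever smaller scales."""
    total = 0.0
    for n in range(1, n_octaves + 1):
        k = 2.0 ** n
        total += (k ** 3 * k ** slope) ** -0.5
    return total

# For a -5/3 spectrum the sum converges: pushing the initial error to ever
# smaller scales adds almost nothing to the total cascade time, so the
# predictability horizon is finite.  For a -3 spectrum every octave
# contributes the same turnover time, so the sum grows without bound.
for n in (10, 20, 40):
    print(n, cascade_time(-5/3, n), cascade_time(-3, n))
```

Doubling the number of resolved octaves barely changes the $-\frac{5}{3}$ total, while the $-3$ total doubles: precisely the distinction between limited and unlimited predictability.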

The Earth’s atmospheric energy spectrum consists of a -3 range in the synoptic scale and a $-\frac{5}{3}$ range in the mesoscale (Nastrom and Gage 1985). Since the limited predictability of the atmosphere arises from mesoscale physical processes, it is of interest to understand how errors grow under this hybrid spectrum, and to what extent global NWP models, which are just beginning to resolve the mesoscale $-\frac{5}{3}$ range, exhibit the fast error growth proper to the limited predictability associated with this range.

We use the Lorenz (1969) model at two different resolutions: $K_{max}=11$, corresponding to a maximal wavenumber of $2^{11}=2048$, and $K_{max}=21$. The former represents the approximate resolution of global NWP models (~ 20 km), and the latter represents a resolution about 1000 times finer so that the shallower mesoscale range is much better resolved. Figure 2 shows the growth of a small-scale, small-amplitude initial error under these model settings.

In the $K_{max}=11$ case, where the $-\frac{5}{3}$ range is only partially resolved, the error growth remains more or less up-magnitude, and the upscale cascade is not visible: the error is still strongly influenced by the synoptic-scale -3 range. This behaviour largely agrees with the results of a recent study using a full-physics global NWP model (Judt 2018). In contrast, at the higher resolution $K_{max}=21$, the upscale propagation of error through the mesoscale is clearly visible. As the error spreads to the synoptic scale, its growth becomes more up-magnitude.

To understand how the error growth rate depends on scale, we use the parametric model of Žagar et al. (2017), fitting the error-versus-time curve at every wavenumber / scale to the equation $E\left ( t \right )=A\tanh\left ( at+b\right )+B$, so that the parameters $A, B, a$ and $b$ are functions of the wavenumber / scale. Among the parameters, $a$ describes the rate of error growth: the larger it is, the quicker the error grows. A dimensional argument suggests that $a \sim (k^3 E(k))^{1/2}$, so that $a$ should be constant in a $-3$ range $(E(k) \sim k^{-3})$, and should increase by a factor of $10^{2/3} \approx 4.6$ for every decade of wavenumbers in a $-\frac{5}{3}$ range. These scalings are indeed observed in the model simulations, except that the sharp increase pertaining to the $-\frac{5}{3}$ range only kicks in at $K \sim 15$ (1 to 2 km), much smaller in scale than the transition between the $-3$ and $-\frac{5}{3}$ ranges at $K \sim 7$ (300 to 600 km). See Figure 3 for details.
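As a rough illustration of this fitting procedure (not the actual implementation of Žagar et al. 2017: for simplicity $A$ and $B$ are assumed known here, which makes the problem linear after an atanh transform, whereas in the study all four parameters are fitted):

```python
import math

def fit_growth_rate(times, errors, A, B):
    """Estimate a and b in E(t) = A*tanh(a*t + b) + B, with A and B given,
    by linearising: atanh((E - B)/A) = a*t + b, then ordinary least squares."""
    z = [math.atanh((E - B) / A) for E in errors]
    n = len(times)
    tbar = sum(times) / n
    zbar = sum(z) / n
    a = sum((t - tbar) * (zi - zbar) for t, zi in zip(times, z)) \
        / sum((t - tbar) ** 2 for t in times)
    b = zbar - a * tbar
    return a, b

# Synthetic error-growth curve with known parameters
A_true, B_true, a_true, b_true = 1.0, 1.0, 0.5, -2.0
times = [0.1 * i for i in range(60)]
errors = [A_true * math.tanh(a_true * t + b_true) + B_true for t in times]
a_est, b_est = fit_growth_rate(times, errors, A_true, B_true)
print(a_est, b_est)  # recovers 0.5 and -2.0
```

Repeating such a fit for each wavenumber gives $a(k)$, against which the dimensional scaling $a \sim (k^3 E(k))^{1/2}$ can be checked.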

This explains the absence of the upscale cascade in the $K_{max}=11$ simulation. As models move to very high resolution in the future, the strong predictability constraints proper to the mesoscale $-\frac{5}{3}$ range will emerge, but only once that range is sufficiently resolved. Our idealised study with the Lorenz model shows that this happens only if $K_{max}>15$. In other words, motions at 1 to 2 km have to be fully resolved for error growth in the small scales to be correctly represented. This would mean a grid resolution of ~ 250 m after accounting for the need for a dissipation range in a numerical model (Skamarock 2004).

While this may seem a pessimistic statement, we have observed that the sensitivity of the error growth behaviour to model resolution is itself sensitive to the initial error profile. The results presented above are for an initial error confined to a single small scale. When the initial error distribution is changed, the qualitative picture of error growth may not show such a contrast between the two resolutions. We therefore highlight the need for further research to assess the potential gains of resolving more of the mesoscale, especially for a realistic distribution of the error that initiates the integrations of operational NWP models.

A manuscript on this work has been submitted and is currently under review.

This work is supported by a PhD scholarship awarded by the EPSRC Centre for Doctoral Training in the Mathematics of Planet Earth, with additional funding support from the ERC Advanced Grant ‘Understanding the Atmospheric Circulation Response to Climate Change’ and the Deutsche Forschungsgemeinschaft (DFG) Grant ‘Scaling Cascades in Complex Systems’.

References

Buizza, R. and Leutbecher, M. (2015). The forecast skill horizon. Quart. J. Roy. Meteor. Soc. 141, 3366—3382. https://doi.org/10.1002/qj.2619

Judt, F. (2018). Insights into atmospheric predictability through global convection-permitting model simulations. J. Atmos. Sci. 75, 1477—1497. https://doi.org/10.1175/JAS-D-17-0343.1

Leung, T. Y., Leutbecher, M., Reich, S. and Shepherd, T. G. (2019). Impact of the mesoscale range on error growth and the limits to atmospheric predictability. Submitted.

Lorenz, E. N. (1963). Deterministic Nonperiodic Flow. J. Atmos. Sci. 20, 130—141. https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2

Lorenz, E. N. (1969). The predictability of a flow which possesses many scales of motion. Tellus 21, 289—307. https://doi.org/10.3402/tellusa.v21i3.10086

Nastrom, G. D. and Gage, K. S. (1985). A climatology of atmospheric wavenumber spectra of wind and temperature observed by commercial aircraft. J. Atmos. Sci. 42, 950—960. https://doi.org/10.1175/1520-0469(1985)042<0950:ACOAWS>2.0.CO;2

Palmer, T. N., Döring, A. and Seregin, G. (2014). The real butterfly effect. Nonlinearity 27, R123—R141. https://doi.org/10.1088/0951-7715/27/9/R123

Rotunno, R. and Snyder, C. (2008). A generalization of Lorenz’s model for the predictability of flows with many scales of motion. J. Atmos. Sci. 65, 1063—1076. https://doi.org/10.1175/2007JAS2449.1

Skamarock, W. C. (2004). Evaluating mesoscale NWP models using kinetic energy spectra. Mon. Wea. Rev. 132, 3019—3032. https://doi.org/10.1175/MWR2830.1

Žagar, N., Horvat, M., Zaplotnik, Ž. and Magnusson, L. (2017). Scale-dependent estimates of the growth of forecast uncertainties in a global prediction system. Tellus A 69:1, 1287492. https://doi.org/10.1080/16000870.2017.1287492

## Combining multiple streams of environmental data into a soil moisture dataset

An accurate estimate of soil moisture is vital to a number of research areas: day-to-day numerical weather prediction, forecasting of extreme events such as floods and droughts, and assessing crop suitability and yield for a particular region, to mention a few. However, in-situ measurements of soil moisture are generally expensive to obtain, labour intensive and sparse in spatial coverage. To address this, satellite measurements and models are used as a proxy for the ground measurements. Satellite missions such as SMAP (Soil Moisture Active Passive) observe the soil moisture content of the top few centimetres of the Earth’s surface. Soil moisture estimates from models, on the other hand, are prone to errors in the representation of the physics and in the parameter values used.

Data assimilation is a method of combining a numerical model with observed data and their error statistics. In principle, the state estimate after data assimilation is expected to be better than either the standalone model estimate of the state or the observations alone. There is a variety of data assimilation methods: variational, sequential, Monte Carlo, and combinations thereof. The Joint UK Land Environment Simulator (JULES) is a community land surface model which simulates land surface processes such as the surface energy balance and the carbon cycle, and is used by the Met Office, the UK’s national weather service.

My PhD aims to improve the soil moisture estimate from the JULES model using satellite data from SMAP and the Four-Dimensional Ensemble Variational (4DEnVar) data assimilation method, a combination of variational and ensemble approaches introduced by Liu et al. (2008) and implemented by Pinnington et al. (2019; under review). In addition to the satellite soil moisture data, ground measurements of soil moisture from the Oklahoma Mesonet are also assimilated.
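The ensemble-space idea behind 4DEnVar can be illustrated with a toy problem: the analysis is sought as the ensemble mean plus a linear combination of ensemble perturbations, with the weights minimising a cost function that penalises both departure from the background and misfit to observations along the whole trajectory. This is a sketch of the general method only; the scalar linear ‘model’, identity observation operator and all numbers are invented for illustration and bear no relation to JULES, SMAP or LaVEnDAR.

```python
import math

def gauss_solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (M[i][n] - sum(M[i][c] * w[c] for c in range(i + 1, n))) / M[i][i]
    return w

def run(x, model, nsteps):
    """Integrate the toy model, returning the trajectory [x_0, ..., x_{nsteps-1}]."""
    traj = [x]
    for _ in range(nsteps - 1):
        x = model(x)
        traj.append(x)
    return traj

def fourdenvar_analysis(members, obs, r, model):
    """Minimise J(w) = 0.5 w.w + 0.5 * sum_t (xbar_t + p_t.w - y_t)^2 / r over
    ensemble-space weights w; return the analysed initial state xbar_0 + p_0.w."""
    m, T = len(members), len(obs)
    trajs = [run(x, model, T) for x in members]
    xbar = [sum(tr[t] for tr in trajs) / m for t in range(T)]
    # perturbations scaled so that p_t.p_t is the ensemble variance at time t
    p = [[(tr[t] - xbar[t]) / math.sqrt(m - 1) for tr in trajs] for t in range(T)]
    A = [[(1.0 if i == j else 0.0) + sum(p[t][i] * p[t][j] for t in range(T)) / r
          for j in range(m)] for i in range(m)]
    rhs = [sum(p[t][i] * (obs[t] - xbar[t]) for t in range(T)) / r for i in range(m)]
    w = gauss_solve(A, rhs)
    return xbar[0] + sum(p[0][i] * w[i] for i in range(m))

# Toy 'land surface model' relaxing towards 1.0; truth starts at 0.8
model = lambda x: 0.9 * x + 0.1
truth = run(0.8, model, 4)
obs = [v + d for v, d in zip(truth, [0.01, -0.01, 0.005, 0.0])]  # fixed 'noise'
prior = [0.3, 0.5, 0.7]          # ensemble of initial states, mean 0.5
xa = fourdenvar_analysis(prior, obs, r=1e-4, model=model)
print(xa)  # pulled from the prior mean (0.5) towards the truth (0.8)
```

The analysed initial state ends up much closer to the truth than the prior mean, which is the behaviour discussed for the posterior estimates below.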

The time series of soil moisture from the JULES model (prior), soil moisture obtained after assimilation (posterior) and observed soil moisture for the Antlers station of the Mesonet are depicted in Figure 1. Figure 2 shows the distances of the prior and posterior soil moisture estimates from the assimilated observations. The smaller the distance, the better, since the primary objective of data assimilation is to optimally fit the model trajectory to the observations and the background. From Figures 1 and 2 we can conclude that the posterior soil moisture estimates are closer to the observations than the prior. Looking at particular months, the prior is closer to the observations than the posterior around January and October. This is because 4DEnVar considers all the observations in calculating an optimal trajectory that fits both observations and background, so it is not surprising to see the prior closer to the observations than the posterior in some places.

The data assimilation experiments are repeated for different Mesonet sites with varying soil type, topography and climate, and with different soil moisture datasets. In all the experiments, the posterior soil moisture estimates are closer to the observations than the prior estimates. As a verification, a soil moisture reanalysis is calculated for the year 2018 and compared to the observations. Figure 3 shows SMAP soil moisture data assimilated into the JULES model and hindcast for the following year.

References

Liu, C., Q. Xiao, and B. Wang, 2008: An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test. Mon. Weather Rev., 136 (9), 3363–3373., https://doi.org/10.1175/2008MWR2312.1

Pinnington, E., T. Quaife, A. Lawless, K. Williams, T. Arkebauer, and D. Scoby, 2019: The Land Variational Ensemble Data Assimilation fRamework: LaVEnDAR. Geosci. Model Dev. Discuss. https://doi.org/10.5194/gmd-2019-60

## APPLICATE General Assembly and Early Career Science event

From 28th January to 1st February I attended the APPLICATE (Advanced Prediction in Polar regions and beyond: modelling, observing system design and LInkages associated with a Changing Arctic climaTE (bold choice)) General Assembly and Early Career Science event at ECMWF in Reading. APPLICATE is one of the EU Horizon 2020 projects, with the aim of improving weather and climate prediction in the polar regions. The Arctic is a region of rapid change, with decreases in sea ice extent (Stroeve et al., 2012) and changes to ecosystems (Post et al., 2009). These changes are leading to increased interest in the Arctic for business opportunities, such as the opening of shipping routes (Aksenov et al., 2017). There is also much ongoing work on the link between changes in the Arctic and mid-latitude weather (Cohen et al., 2014), though much uncertainty remains. These changes could have large impacts on human life, so a concerted scientific effort is needed to develop our understanding of Arctic processes and how they link to the mid-latitudes. This is the gap that APPLICATE aims to fill.

The overarching goal of APPLICATE is to develop enhanced predictive capacity for weather and climate in the Arctic and beyond, and to determine the influence of Arctic climate change on Northern Hemisphere mid-latitudes, for the benefit of policy makers, businesses and society.

APPLICATE Goals & Objectives

Attending the General Assembly was a great opportunity to get an insight into how large scientific projects work. The project is made up of different work packages each with a different focus. Within these work packages there are then a set of specific tasks and deliverables spread out throughout the project. At the GA there were a number of breakout sessions where the progress of the working groups was discussed. It was interesting to see how these discussions worked and how issues, such as the delay in CMIP6 experiments, are handled. The General Assembly also allows the different work packages to communicate with each other to plan ahead, and for results to be shared.

One of the big questions APPLICATE is trying to address is the link between Arctic sea-ice and the Northern Hemisphere mid-latitudes. Many of the presentations covered different aspects of this, such as how including Arctic observations in forecasts affects their skill over Eurasia. There were also initial results from some of the Polar Amplification (PA)MIP experiments, a project that APPLICATE has helped coordinate.

At the end of the week there was the Early Career Science Event which consisted of a number of talks on more soft skills. One of the most interesting activities was based around engaging with stakeholders. To try and understand the different needs of a variety of stakeholders in the Arctic (from local communities to shipping companies) we had to try and lobby for different policies on their behalf. This was also a great chance to meet other early career scientists working in the field and get to know each other a bit more.

What a difference a day makes, heavy snow getting the ECMWF’s ducks in the polar spirit.

#### References

Aksenov, Y. et al., 2017. On the future navigability of Arctic sea routes: High-resolution projections of the Arctic Ocean and sea ice. Marine Policy, 75, pp.300–317.

Cohen, J. et al., 2014. Recent Arctic amplification and extreme mid-latitude weather. Nature Geoscience, 7(9), pp.627–637.

Post, E. et al., 2009. Ecological Dynamics Across the Arctic Associated with Recent Climate Change. Science, 325, pp.1355–1358.

Stroeve, J.C. et al., 2012. Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations. Geophysical Research Letters, 39(16), pp.1–7.

## Evaluating aerosol forecasts in London

Aerosols in urban areas can greatly impact visibility, radiation budgets and our health (Chen et al., 2015). Aerosols make up the liquid and solid particles in the air that, alongside noxious gases like nitrogen dioxide, are the pollution in cities that we often hear about on the news – breaking safety limits in cities across the globe from London to Beijing. Air quality researchers try to monitor and predict aerosols, to inform local councils so they can plan and reduce local emissions.

Recently, large numbers of LiDARs (Light Detection and Ranging) have been deployed across Europe and elsewhere, in part to observe aerosols. They effectively shoot beams of light into the atmosphere, which reflect off atmospheric constituents like aerosols. From each beam, many measurements of reflectance are taken very quickly over time, and as light travels further with more time, an entire profile of reflectance can be constructed. Because the light is attenuated as it penetrates deeper into the atmosphere, the reflected signal is commonly called attenuated backscatter (β). In urban areas, measurements away from the surface like these are sorely needed (Barlow, 2014), so these instruments could be extremely useful. When it comes to predicting aerosols, numerical weather prediction (NWP) models are increasingly being considered as an option. However, the models themselves are very computationally expensive to run, so they tend to have only a simple representation of aerosol. For example, for explicitly resolved aerosol, the Met Office UKV model (1.5 km) just has a dry mass of aerosol [kg kg⁻¹] (Clark et al., 2008). That’s all. It gets transported around by the model dynamics, but any other aerosol characteristics, from size to number, need to be parameterised from the mass to limit computational costs. But how do we know if the estimates of aerosol from the model are actually correct? A direct comparison between NWP aerosol and β is not possible because they are fundamentally different variables, so to bridge the gap, a forward operator is needed.

In my PhD I helped develop such a forward operator (aerFO, Warren et al., 2018). It’s a model that takes aerosol mass (and relative humidity) from NWP model output and estimates the resulting attenuated backscatter (βm). βm can then be directly compared to the observed attenuated backscatter (βo), and the NWP aerosol output evaluated (e.g. to see if the aerosol is too high or low). The aerFO was also made computationally cheap and flexible, so if you had more information than just the mass, the aerFO would be able to use it!
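The flavour of such a forward operator can be sketched in a few lines: parameterise a particle number and size from the dry mass, swell the particles with humidity, compute extinction, and convert to backscatter with an assumed lidar ratio. Every coefficient below is an illustrative placeholder, not a value from the aerFO or the UKV; see Warren et al. (2018) for the real scheme.

```python
import math

def forward_operator(mass, rh):
    """Toy forward operator: estimate aerosol backscatter [m-1 sr-1] from a
    dry aerosol mass mixing ratio [kg kg-1] and relative humidity [0-1].
    All coefficients are illustrative placeholders, not the aerFO values."""
    rho_air = 1.2        # air density [kg m-3]
    n = 2.0e9            # assumed fixed number concentration [m-3]
    rho_aer = 1.7e3      # assumed dry aerosol particle density [kg m-3]
    q_ext = 2.0          # extinction efficiency (large-particle limit)
    lidar_ratio = 40.0   # assumed extinction-to-backscatter ratio [sr]
    m_vol = mass * rho_air                       # aerosol mass per volume [kg m-3]
    # dry radius from mass, assuming n identical spherical particles
    r_dry = (3.0 * m_vol / (4.0 * math.pi * n * rho_aer)) ** (1.0 / 3.0)
    # crude hygroscopic swelling: wet radius grows as RH approaches 1
    r_wet = r_dry * (1.0 + 0.6 * rh / (1.0 - rh)) ** (1.0 / 3.0)
    sigma_ext = n * math.pi * r_wet ** 2 * q_ext  # extinction coefficient [m-1]
    return sigma_ext / lidar_ratio                # backscatter [m-1 sr-1]

# Higher humidity swells the particles and raises the modelled backscatter:
# the same mechanism by which a cold (moist) bias would inflate beta_m.
print(forward_operator(5e-9, 0.5), forward_operator(5e-9, 0.9))
```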

Among the aerFO’s several uses (Warren et al., 2018, n.d.) is the evaluation of NWP model output. Figure 2 shows the aerFO in action, comparing βm with observed attenuated backscatter (βo) measured at 905 nm by a ceilometer (a type of LiDAR) on 14th April 2015 at Marylebone Road in London. βm was far too high in the morning of this day. We found that the original scheme the UKV used to parameterise urban surface effects in London was producing a persistent cold bias in the morning. The cold bias led to a high relative humidity, so the aerFO condensed too much water onto the aerosol particles, causing them to swell too much; bigger particles mean a bigger βm, hence the overestimation. Not only was the relative humidity too high, the boundary layer in the NWP model was also developing too late in the day. Normally, when the surface warms up enough, convection starts, which mixes aerosol up through the boundary layer and dilutes it near the surface. The cold bias delayed this boundary layer development, so the aerosol concentration near the surface remained high for too long. More mass led the aerFO to parameterise larger sizes and total numbers of particles, again overestimating βm. This cold bias effect was seen across several cases using the old scheme, but was notably smaller in cases using a newer urban surface scheme called MORUSES (Met Office – Reading Urban Surface Exchange Scheme). One of the main aims of MORUSES was to improve the representation of energy transfer in urban areas, and at least to us it seemed to be doing a better job!

References

Barlow, J.F., 2014. Progress in observing and modelling the urban boundary layer. Urban Clim. 10, 216–240. https://doi.org/10.1016/j.uclim.2014.03.011

Chen, C.H., Chan, C.C., Chen, B.Y., Cheng, T.J., Leon Guo, Y., 2015. Effects of particulate air pollution and ozone on lung function in non-asthmatic children. Environ. Res. 137, 40–48. https://doi.org/10.1016/j.envres.2014.11.021

Clark, P.A., Harcourt, S.A., Macpherson, B., Mathison, C.T., Cusack, S., Naylor, M., 2008. Prediction of visibility and aerosol within the operational Met Office Unified Model. I: Model formulation and variational assimilation. Q. J. R. Meteorol. Soc. 134, 1801–1816. https://doi.org/10.1002/qj.318

Warren, E., Charlton-Perez, C., Kotthaus, S., Lean, H., Ballard, S., Hopkin, E., Grimmond, S., 2018. Evaluation of forward-modelled attenuated backscatter using an urban ceilometer network in London under clear-sky conditions. Atmos. Environ. 191, 532–547. https://doi.org/10.1016/j.atmosenv.2018.04.045

Warren, E., Charlton-Perez, C., Kotthaus, S., Marenco, F., Ryder, C., Johnson, B., Lean, H., Ballard, S., Grimmond, S., n.d. Observed aerosol characteristics to improve forward-modelled attenuated backscatter. Atmos. Environ. Submitted

## Quantifying the skill of convection-permitting ensemble forecasts for the sea-breeze occurrence

On the afternoon of 16th August 2004, the village of Boscastle on the north coast of Cornwall was severely damaged by flooding (Golding et al., 2005). This is one example of high-impact hazardous weather associated with small meso- and convective-scale weather phenomena, whose prediction can be uncertain even a few hours ahead (Lorenz, 1969; Hohenegger and Schar, 2007). Taking advantage of increased computer power (e.g. https://www.metoffice.gov.uk/research/technology/supercomputer), many operational and research forecasting centres have therefore introduced convection-permitting ensemble prediction systems (CP-EPSs), in order to give timely warnings of severe weather.

However, despite being an exciting new forecasting technology, CP-EPSs place a heavy burden on the computational resources of forecasting centres. They are usually run on limited areas with initial and boundary conditions provided by global lower resolution ensembles (LR-EPS). They also produce large amounts of data which needs to be rapidly digested and utilized by operational forecasters. Assessing whether the convective-scale ensemble is likely to provide useful additional information is key to successful real-time utilisation of this data. Similarly, knowing where equivalent information can be gained (even if partially) from LR-EPS using statistical/dynamical post-processing both extends lead time (due to faster production time) and also potentially provides information in regions where no convective-scale ensemble is available.

There have been many studies on the verification of CP-EPSs (Klasa et al., 2018, Hagelin et al., 2017, Barrett et al., 2016, Beck et al., 2016, amongst others), but none of them has quantified the skill gained by CP-EPSs, when fully exploited, in comparison with LR-EPSs for specific weather phenomena over a long enough evaluation period.

In my PhD, I have focused on the sea-breeze phenomenon for several reasons:

1. Sea breezes have an impact on air quality by advecting pollutants, on heat stress by providing relief on hot days, and on convection by providing a trigger, especially when interacting with other mesoscale flows (see for example figure 1, or figures 6 and 7 in Golding et al., 2005).
2. Sea breezes occur on small spatio-temporal scales which are properly resolved at convection-permitting resolutions, but their occurrence is still influenced by synoptic-scale conditions, which are resolved by the global LR-EPS.

Therefore this study aims to investigate whether the sea breeze is predictable from only a few predictors, or whether the better representation of fine-scale structures (e.g. orography, topography) by the CP-EPS implies a better sea-breeze prediction.

In order to estimate probabilistic forecasts from both models, two different methods have been applied. A novel tracking algorithm for the identification of the sea-breeze front, over the domain shown in figure 2, was applied to the CP-EPS data. For the LR-EPS, a Bayesian model was instead used to estimate the probability of a sea breeze conditioned on two LR-EPS predictors, trained on CP-EPS data. More details can be found in Cafaro et al. (2018).

The results of the probabilistic verification are shown in figure 3. Reliability (REL) and resolution (RES) terms have been computed by decomposing the Brier score (BS) and the Information gain (IGN) score. Finally, score differences (BSD and IG) have been computed to quantify any gain in skill by the CP-EPS. Figure 3 shows that the CP-EPS forecast is significantly more skilful than the Bayesian forecast. Nevertheless, the Bayesian forecast has more resolution than a climatological forecast (figure 3e,f), which has no resolution by construction.
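The Brier score decomposition behind these REL and RES terms is straightforward to compute. A minimal sketch on synthetic forecast-outcome pairs (not the study’s data), grouping forecasts by their distinct probability values so that the classic identity BS = REL - RES + UNC holds exactly:

```python
def brier_decomposition(probs, outcomes):
    """Decompose the Brier score as BS = REL - RES + UNC, grouping the
    forecasts by their distinct probability values."""
    n = len(probs)
    base = sum(outcomes) / n                  # climatological base rate
    groups = {}
    for p, o in zip(probs, outcomes):
        groups.setdefault(p, []).append(o)
    rel = res = 0.0
    for p, obs in groups.items():
        obar = sum(obs) / len(obs)
        rel += len(obs) * (p - obar) ** 2     # reliability: calibration error
        res += len(obs) * (obar - base) ** 2  # resolution: discrimination
    return rel / n, res / n, base * (1.0 - base)

# Synthetic forecast-outcome pairs (1 = event observed)
probs    = [0.8, 0.8, 0.8, 0.2, 0.2, 0.5, 0.5, 0.2]
outcomes = [1,   1,   0,   0,   0,   1,   0,   0]
rel, res, unc = brier_decomposition(probs, outcomes)
bs = sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
print(rel, res, unc, bs)  # rel - res + unc equals bs (up to rounding)
```

Lower REL and higher RES are better, which is why a forecast can beat climatology (zero resolution by construction) even when it is imperfectly calibrated.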

This study shows the additional skill provided by the Met Office convection-permitting ensemble forecast for sea-breeze prediction. The ability of CP-EPSs to resolve mesoscale dynamical features is thus proven to be important: two large-scale predictors alone, though relevant for the sea breeze, are not sufficient for a skilful prediction.

It is believed that both methodologies can, in principle, be applied to other parts of the world, and it is thus hoped that they could be used operationally.

References:

Barrett, A. I., Gray, S. L., Kirshbaum, D. J., Roberts, N. M., Schultz, D. M., and Fairman J. G. (2016). The utility of convection-permitting ensembles for the prediction of stationary convective bands. Monthly Weather Review, 144(3):1093–1114, doi: 10.1175/MWR-D-15-0148.1

Beck, J., Bouttier, F., Wiegand, L., Gebhardt, C., Eagle, C., and Roberts, N. (2016). Development and verification of two convection-allowing multi-model ensembles over western Europe. Quarterly Journal of the Royal Meteorological Society, 142(700):2808–2826, doi: 10.1002/qj.2870

Cafaro C., Frame T. H. A., Methven J., Roberts N. and Broecker J. (2018), The added value of convection-permitting ensemble forecasts of sea breeze compared to a Bayesian forecast driven by the global ensemble, Quarterly Journal of the Royal Meteorological Society., under review.

Golding, B. , Clark, P. and May, B. (2005), The Boscastle flood: Meteorological analysis of the conditions leading to flooding on 16 August 2004. Weather, 60: 230-235, doi: 10.1256/wea.71.05

Hagelin, S., Son, J., Swinbank, R., McCabe, A., Roberts, N., and Tennant, W. (2017). The Met Office convective-scale ensemble, MOGREPS-UK. Quarterly Journal of the Royal Meteorological Society, 143(708):2846–2861, doi: 10.1002/qj.3135

Hohenegger, C. and Schar, C. (2007). Atmospheric predictability at synoptic versus cloud-resolving scales. Bulletin of the American Meteorological Society, 88(11):1783–1794, doi: 10.1175/BAMS-88-11-1783

Klasa, C., Arpagaus, M., Walser, A., and Wernli, H. (2018). An evaluation of the convection-permitting ensemble cosmo-e for three contrasting precipitation events in Switzerland. Quarterly Journal of the Royal Meteorological Society, 144(712):744–764, doi: 10.1002/qj.3245

Lorenz, E. N. (1969). Predictability of a flow which possesses many scales of motion. Tellus, 21:289–307, doi: 10.1111/j.2153-3490.1969.tb00444.x

## Atmospheric blocking: why is it so hard to predict?

Atmospheric blocks are nearly stationary large-scale flow features that effectively block the prevailing westerly winds and redirect mobile cyclones. They are typically characterised by a synoptic-scale, quasi-stationary high pressure system in the midlatitudes that can remain over a region for several weeks. Blocking events can cause extreme weather: heat waves in summer and cold spells in winter, and the impacts associated with these events can escalate due to a block’s persistence. Because of this, it is important that we can forecast blocking accurately. However, atmospheric blocking has been shown to be the cause of some of the poorest forecasts in recent years. Looking at all occasions when the ECMWF model experienced a period of very low forecast skill, Rodwell et al. (2013) found that the average flow pattern for which these forecasts verified was an easily-distinguishable atmospheric blocking pattern (Figure 1). But why are blocks so hard to forecast?

There are several reasons why forecasting blocking is a challenge. Firstly, there is no universally accepted definition of what constitutes a block. Several different flow configurations that could be referred to as blocks are shown in Figure 2. The variety of flow patterns used to define blocking brings with it a variety of mechanisms that are dynamically important for blocks developing in a forecast (Woollings et al. 2018), so many phenomena must be well represented in a model for it to forecast all blocking events accurately. Secondly, there is no complete dynamical theory of block onset and maintenance: we do not know whether a process key to blocking dynamics is missing from the equation set solved by numerical weather prediction models and is contributing to the forecast error. Finally, many of the known mechanisms associated with block onset and maintenance are also known sources of model uncertainty. For example, diabatic processes within extratropical cyclones have been shown to contribute substantially to blocking events (Pfahl et al. 2015), and their parameterisation has been shown to affect medium-range forecasts of ridge-building events (Martínez-Alvarado et al. 2015).
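To make the definitional point concrete: one common family of objective indices (in the spirit of Tibaldi and Molteni 1990; the post does not specify which index underlies its figures) detects blocking as a reversal of the meridional 500 hPa geopotential height gradient. A minimal single-longitude sketch, with invented height values:

```python
def is_blocked(z500, lats, lat0=60.0, dlat=20.0):
    """Tibaldi-Molteni-style test at one longitude: blocking requires a
    reversed (poleward-increasing) height gradient south of lat0 and a
    strong westerly gradient to its north.  z500 [m] at latitudes lats [deg],
    with z500[i] the 500 hPa geopotential height at lats[i]."""
    z = dict(zip(lats, z500))
    ghgs = (z[lat0] - z[lat0 - dlat]) / dlat   # southern gradient [m/deg]
    ghgn = (z[lat0 + dlat] - z[lat0]) / dlat   # northern gradient [m/deg]
    return ghgs > 0.0 and ghgn < -10.0

lats = [40.0, 60.0, 80.0]
zonal   = [5700.0, 5500.0, 5300.0]  # height falling poleward: normal westerlies
blocked = [5500.0, 5650.0, 5300.0]  # ridge at 60N reversing the gradient
print(is_blocked(zonal, lats), is_blocked(blocked, lats))  # -> False True
```

Different thresholds, latitudes or variables give different blocking climatologies, which is exactly why no single definition has been agreed.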

We do, however, know some ways to improve the representation of blocking: increase the horizontal resolution of the model (Schiemann et al. 2017); improve the parameterisation of subgrid physical processes (Jung et al. 2010); remove underlying model biases (Scaife et al. 2010); and in my PhD we found that improvements to a model’s dynamical core (the part of the model used to solve the governing equations) can also improve the medium-range forecast of blocking. In Figure 3, the frequency of blocking that occurred during two northern hemisphere winters is shown for the ERA-Interim reanalysis and three operational weather forecast centres (the ECMWF, Met Office (UKMO) and the Korea Meteorological Administration (KMA)). Both KMA and UKMO use the Met Office Unified Model; however, before the winter of 2014/15 the UKMO updated the model to use a new dynamical core whilst KMA continued to use the original. This means that for the 2013/14 winter the UKMO and KMA forecasts are from the same model with the same dynamical core, whilst for the 2014/15 winter they are from the same model but with different dynamical cores. The clear improvement in the UKMO forecast in 2014/15 can hence be attributed to the new dynamical core. For a full analysis of this improvement see Martínez-Alvarado et al. (2018).

In the remainder of my PhD I aim to investigate the link between errors in forecasts of blocking with the representation of upstream cyclones. I am particularly interested to see if the parameterisation of diabatic processes (a known source of model uncertainty) could be causing the downstream error in Rossby wave amplification and blocking.

References:

Rodwell, M. J., and Coauthors, 2013: Characteristics of occasional poor medium-range weather forecasts for Europe. Bulletin of the American Meteorological Society, 94 (9), 1393–1405.

Woollings, T., and Coauthors, 2018: Blocking and its response to climate change. Current Climate Change Reports, 4 (3), 287–300.

Pfahl, S., C. Schwierz, M. Croci-Maspoli, C. Grams, and H. Wernli, 2015: Importance of latent heat release in ascending air streams for atmospheric blocking. Nature Geoscience, 8 (8), 610–614.

Martínez-Alvarado, O., E. Madonna, S. Gray, and H. Joos, 2015: A route to systematic error in forecasts of Rossby waves. Quart. J. Roy. Meteor. Soc., 142, 196–210.

Martínez-Alvarado, O., and R. Plant, 2014: Parametrized diabatic processes in numerical simulations of an extratropical cyclone. Quart. J. Roy. Meteor. Soc., 140 (682), 1742–1755.

Scaife, A. A., T. Woollings, J. Knight, G. Martin, and T. Hinton, 2010: Atmospheric blocking and mean biases in climate models. Journal of Climate, 23 (23), 6143–6152.

Schiemann, R., and Coauthors, 2017: The resolution sensitivity of northern hemisphere blocking in four 25-km atmospheric global circulation models. Journal of Climate, 30 (1), 337–358.

Jung, T., and Coauthors, 2010: The ECMWF model climate: Recent progress through improved physical parametrizations. Quart. J. Roy. Meteor. Soc., 136 (650), 1145–1160.

## Communicating uncertainties associated with anthropogenic climate change

This week Prof. Ed Hawkins from the Department of Meteorology and NCAS-Climate gave a University of Reading public lecture discussing the science of climate change. A plethora of research was presented, all highlighting that humans are changing our climate. As scientists we can study the greenhouse effect in the laboratory, observe increasing temperatures across the majority of the planet, or simulate the impact of human actions on the Earth's climate using climate models.

Fig. 1, presented in Ed Hawkins' lecture, shows the global-mean temperature rise associated with human activities. Two sets of climate simulations were performed to produce this plot. The first set, shown in blue, are simulations driven solely by natural forcings, i.e. variations in solar radiation and volcanic eruptions. The second, shown in red, are simulations which include both natural forcings and the forcing associated with greenhouse gas emissions from human activities. The shading indicates the spread amongst climate models, whilst the observed global-mean temperature is shown by the solid black line. From this plot it is evident that the climate models attribute the rising temperatures over the 20th and 21st centuries to human activity: simulations without anthropogenic greenhouse gas emissions show a much smaller rise, if any, in global-mean temperature.

#### However, whilst there is much agreement amongst climate scientists and climate models that our planet is warming due to human activity, understanding the local impacts of anthropogenic climate change involves substantial uncertainty.

For example, my PhD research aims to understand what controls the location and intensity of the Intertropical Convergence Zone (ITCZ). The ITCZ is a discontinuous, zonal precipitation band in the tropics that migrates meridionally over the seasonal cycle (see Fig. 2). It is associated with the wet and dry seasons over Africa, the development of the South Asian Monsoon and the life cycle of tropical cyclones. However, our climate models currently struggle to simulate characteristics of the ITCZ. This, alongside other issues, results in climate models differing in the response of tropical precipitation to anthropogenic climate change.

Figure 3 is a plot taken from a report written by the Intergovernmental Panel on Climate Change (Climate Change 2013: The Physical Science Basis). Both maps show the projected change in Northern Hemisphere winter precipitation from climate model simulations for the periods 2016–2035 (left) and 2081–2100 (right), relative to 1986–2005, under a scenario in which minimal action is taken to limit greenhouse gas emissions (RCP8.5). Whilst the projected changes in precipitation are an interesting topic in their own right, I'd like to draw your attention to the lines and dots annotated on each map. The lines indicate where the majority of climate models agree on a small change; the map on the left shows that most models agree on small precipitation changes over the majority of the globe over the next two decades. Dots, meanwhile, indicate where climate models agree on a substantial change. The map on the right shows that across large areas of the tropics, including the majority of Africa, South America and the Maritime Continent, the models disagree on how precipitation will respond to anthropogenic climate change.

#### How should scientists present these uncertainties?

I must confess that I am nowhere near an expert in communicating uncertainties, but I hope some of my thoughts will encourage a discussion amongst scientists and users of climate data. Here are some of the ideas I've picked up during my PhD, and my thoughts on them:

• Climate model average – Take the average across climate model simulations. This risks smoothing out the large positive and negative trends produced by individual models, and the multi-model average is not itself a "true" projection of changes due to anthropogenic climate change.
• Every climate model outcome – Show the full range of climate model projections to the user. Here you risk presenting the user with too much climate data, and the user may simply trust the model outputs that suit their own agenda.
• Storylines – This idea was first shown to me in a paper by Zappa and Shepherd (2017). You present a series of storylines, each highlighting the key processes associated with variability in the regional weather pattern of interest; each plausible change in that set of processes leads to a different climate projection. Once again, however, the user of the climate data has to reach their own conclusion on which projection to take action on.
• Probabilities with climate projections – In short- and medium-range weather forecasting, probabilities are routinely used to support the user. They are generated by re-running the simulations many times, each with different initial conditions or a slight change in model physics, and counting the percentage of simulations that agree on an outcome. With climate projections it is considerably more difficult to assign probabilities. How should they be generated? Climate models share many of the methods used to represent the physics of our atmosphere, so you do not want the probabilities to be inflated simply because of similarity amongst climate model set-ups. You could weight the probabilities by how well each climate model simulates the past, but a model that reproduces the past correctly will not necessarily respond correctly to forcing in the future.
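The tension between the first, second and fourth options above can be illustrated with a toy multi-model ensemble. The trend values below are invented purely for illustration: the ensemble mean can hide large but opposing trends that the model spread, and the fraction of models agreeing on the sign of the change, still reveal.

```python
# Hypothetical regional precipitation trends (mm per decade) from five
# climate models; the numbers are invented for illustration only.
trends = [12.0, 8.0, -10.0, -9.0, -1.0]

mean_trend = sum(trends) / len(trends)       # the "climate model average"
spread = max(trends) - min(trends)           # range across the ensemble
frac_wetter = sum(t > 0 for t in trends) / len(trends)  # crude "probability" of wetting

print(mean_trend)   # 0.0  -> the average suggests no change at all
print(spread)       # 22.0 -> yet the models differ strongly
print(frac_wetter)  # 0.4  -> 2 of 5 models project a wetter region
```

Note that the sign-agreement fraction here treats every model as independent, which, as discussed above, real climate models are not.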

There is much more that can be said about communicating uncertainty among climate model projections – a challenge which will continue for several decades. As climate scientists we can sometimes fall into the trap of concentrating on the uncertainties. We need to keep presenting the work that we are confident about, to ensure that the right action is taken to mitigate anthropogenic climate change.