NCAS Climate Modelling Summer School

Shammi Akhter – s.akhter@pgr.reading.ac.uk

The Virtual Climate Modelling Summer School covers the fundamental principles of climate modelling. The school runs for two weeks each September and is taught by leading researchers from the National Centre for Atmospheric Science (NCAS) and the Department of Meteorology at the University of Reading. I attended mainly because I have recently started using climate models in the second strand of my research, and also because one of my supervisors recommended it to me.

What happened during the first week?

In the first week, lecturers introduced us to the numerical methods used in climate models, and we had a practical assignment implementing a numerical method of our choice in Python. We mostly worked individually on our projects that week. There were also lectures on convection parameterisation and statistical analysis for climate models.
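
The exact schemes implemented varied between students, but as a minimal sketch of the kind of exercise involved, here is one classic choice: the first-order upwind (forward-in-time, backward-in-space) scheme for linear advection on a periodic domain. The initial condition and parameters are illustrative, not from the school's assignment.

```python
import numpy as np

def ftbs_advection(phi, c, nt):
    """Advect the profile phi for nt steps with Courant number c,
    using the first-order upwind (FTBS) scheme on a periodic domain."""
    phi = phi.copy()
    for _ in range(nt):
        # phi_new[i] = phi[i] - c * (phi[i] - phi[i-1]); np.roll handles periodicity
        phi = phi - c * (phi - np.roll(phi, 1))
    return phi

# Top-hat initial condition on a periodic domain of 100 points
x = np.linspace(0.0, 1.0, 100, endpoint=False)
phi0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)
phi = ftbs_advection(phi0, c=0.4, nt=100)
```

For 0 ≤ c ≤ 1 each update is a convex combination of neighbouring values, so the scheme is stable and conserves the total "mass" of the advected field, though it is diffusive, which is exactly the kind of trade-off the numerical-methods lectures explored.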

What happened during the second week?

Figure 1: Comparison of the Earth's energy budget between the control (red) and flat earth (green) experiments, produced by me in week 2.

In week two, with the assistance of NCAS and university scientists, we analysed climate model outputs. I was involved in the Flat Earth experiment, in which we tested the effect on the climate of changing the surface elevation of terrain such as mountains and high plateaus. In this experiment, the perturbation is imposed by reducing the elevation of mountains to sea level. There were eight people in our team. We PhD students only occasionally have the opportunity to do research collaboratively with other students in Reading Meteorology and to encourage one another, so it felt very rewarding to work as part of a research group. I was amazed that we were able to produce a good small piece of scientific work in a matter of days through our team effort. In Figure 1, I present a small part of our work: the global energy budget comparison between a control experiment and the flat earth experiment (where the elevation of the mountains has been reduced to sea level). Alongside our practical, we also attended lectures that week on ocean dynamics and physics, water in the climate system, land-atmosphere coupling and the surface energy balance.

What was it like to socialise with people virtually?

We used Gather.town during lunchtime and after work to socialise. I was a bit surprised that I was the only student joining Gather.town, and as a result I always hung out (virtually) with NCAS and university scientists. I consider it a blessing, though, as there was no competition to introduce myself to the professionals. I even received a kind offer from one of our professors to assist as a teaching assistant on his course in the department.

Concluding Remarks

I learnt some of the basic concepts of climate modelling and I hope to use them in my research someday. It was also very refreshing to talk to and work with other students as well as the scientists. While working in a group in week two, I once again realised how much we can accomplish if we work together and encourage each other.

Machine Learning: complement or replacement of Numerical Weather Prediction? 

Emanuele Silvio Gentile – e.gentile@pgr.reading.ac.uk

Figure 1 Replica of Torricelli's first barometer of 1643 [1]

Humans have tried, for millennia, to predict the weather by finding physical relationships between observed weather events, a notable example being a fall in barometric pressure used as an indicator of upcoming precipitation. It should come as no surprise that one of the first weather-measuring instruments to be invented was the barometer, by Torricelli (see in Fig. 1 a replica of the first Torricelli barometer), nearly concurrently with a reliable thermometer. Only two hundred years later, the development of the electric telegraph allowed a nearly instant exchange of weather data, leading to the creation of the first synoptic weather maps in the US, followed by Europe. Synoptic maps allowed amateur and professional meteorologists to look at patterns between weather data with an effectiveness unprecedented for the time, allowing the American meteorologists Redfield and Espy to resolve their dispute over which way the air flows in a hurricane (anticlockwise in the Northern Hemisphere).

Figure 2 High Resolution NWP – model concept [2]

By the beginning of the 20th century, many countries around the globe had started to exchange data daily (thanks to the recently laid telegraphic cables), leading to the creation of global synoptic maps, with information on the upper atmosphere provided by radiosondes, aeroplanes and, from the 1930s, radars. By then, weather forecasters had developed a large set of empirical and statistical rules for computing the changes to daily synoptic weather maps by looking at patterns between historical sets of synoptic maps and recorded meteorological events, but predicting events days in advance remained challenging.

In 1954, a powerful tool became available for objectively computing changes on the synoptic map over time: Numerical Weather Prediction (NWP) models. NWP models solve numerically the primitive equations, a set of nonlinear partial differential equations that approximate the global atmospheric flow, using as initial conditions a snapshot of the state of the atmosphere, termed the analysis, provided by a variety of weather observations. The 1960s, marked by the launch of the first satellites, enabled 5-7 day global NWP forecasts to be performed. Thanks to the work of countless scientists over the past 40 years, global NWP models, running at a scale of about 10 km, can now simulate skilfully and reliably synoptic-scale and mesoscale weather patterns, such as high-pressure systems and midlatitude cyclones, with up to 10 days of lead time [3].

The relatively recent adoption of limited-area convection-permitting models (Fig. 2) has made possible even the forecast of local details of weather events. For example, convection-permitting forecasts of midlatitude cyclones can accurately predict small-scale multiple slantwise circulations, the 3-D structure of convection lines, and the peak cyclone surface wind speed [4].

However, physical processes below convection-permitting resolution, such as wind gusts, which present a risk to lives and livelihoods, cannot be explicitly resolved; they can only be derived from prognostic fields such as wind speed and pressure. Alternative techniques, such as statistical modelling (e.g. the Malone model), have come nowhere near matching the power of numerical solvers of the physical equations in simulating the spatio-temporal dynamics of the atmosphere.

Figure 3 Error growth over time [5]

NWP models are not without flaws, as they are affected by numerical drawbacks: errors in the prognostic atmospheric fields build up over time, as shown in Fig. 3, eventually reaching a forecast error comparable to that of a persistence forecast (i.e. one that holds the initial state constant at each time step) or of a climatology-based forecast (i.e. the mean of a historical series of observations or model outputs). Errors build up because NWP models iteratively solve the primitive equations approximating the atmospheric flow (by either finite differences or spectral methods). Sources of these errors include: model resolution that is too coarse (which leads to an incorrect representation of topography), long integration time steps, and small-scale, sub-grid processes left unresolved by the model physics approximations. Errors in the parametrisations of small-scale physical processes grow over time, leading to a significant deterioration in forecast quality after 48 h. Therefore, high-fidelity parametrisations of unresolved physical processes are critical for an accurate simulation of all types of weather events.
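
The two baseline forecasts can be made concrete with a small sketch. Here a synthetic red-noise (AR(1)) series stands in for an atmospheric variable (this is an illustration, not real NWP output): a persistence forecast beats climatology at short lead times but loses to it at long ones, mirroring the error growth in Fig. 3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'atmosphere': a red-noise (AR(1)) process with memory r
r, n = 0.9, 100_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = r * x[t - 1] + rng.standard_normal()

def persistence_rmse(series, lead):
    """RMSE of a persistence forecast (future = present) at a given lead time."""
    return np.sqrt(np.mean((series[lead:] - series[:-lead]) ** 2))

climatology_rmse = x.std()  # climatology forecast: always predict the mean

lead1 = persistence_rmse(x, 1)    # short lead: small error, beats climatology
lead50 = persistence_rmse(x, 50)  # long lead: error saturates above climatology
```

For an AR(1) process the persistence error at lead k is 2σ²(1 − r^k), so it starts near zero and saturates at √2 times the climatological spread, which is why a skilful forecast must beat both baselines.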

Figure 4 Met Office HPC [7]

Another limitation of NWP is the difficulty of simulating the chaotic nature of weather, which causes errors in the model initial conditions and physics approximations to grow exponentially over time. All these limitations, combined with the instability of the atmosphere at its lower and upper boundaries, make rapidly developing events such as flash floods particularly challenging to predict. A further weakness of NWP forecasts is that they rely on expensive High Performance Computing (HPC) facilities (Fig. 4), owned by a handful of industrialised nations, which run coarse-scale global models and high-resolution convection-permitting forecasts on domains covering areas of national interest. As a result, high-resolution prediction of weather hazards and climatological analysis remain off-limits for the vast majority of developing countries, with detrimental effects not just on the first-line response to weather hazards, but also on the development of economic activities such as agriculture, fishing and renewable energy in a warming climate. In the last decade, the cloud computing revolution has led to a tremendous increase in the availability and shareability of weather data sets, which have transitioned from local storage and processing to network-based services managed by large cloud computing companies, such as Amazon, Microsoft or Google, through their distributed infrastructure.

Combined with the wide availability of their cloud computing facilities, access to weather data has become ever more democratic and ubiquitous, and consistently less dependent on HPC facilities owned by national agencies. This transformation is not without drawbacks should these tech giants decide to turn off the flow of data. During a row with the Australian government in February 2021, Facebook banned access to Australian news content. Although by accident, government agencies such as the Bureau of Meteorology were also blocked, leaving citizens with restricted access to important weather information until the pages were restored. It is hoped that, with more companies providing distributed infrastructure, access to data vital for citizens' safety will become more resilient.

This rapidly expanding accessibility of weather data sets has stimulated the development and application of novel machine learning algorithms. As a result, weather scientists worldwide can crunch multi-dimensional weather data ever more effectively, ultimately providing a powerful new paradigm for understanding and predicting the atmospheric flow, based on finding relationships within the available large-scale weather datasets.

Machine learning (ML) finds meaningful representations of the patterns in data through a series of nonlinear transformations of the input. ML pattern recognition is divided into two types: supervised and unsupervised learning.

Figure 5 Feed-forward neural network [6]

Supervised learning is concerned with predicting an output for a given input. It is based on learning the relationship between inputs and outputs from training data consisting of example input/output pairs, and is divided into regression and classification, depending on whether the output variable to be predicted is continuous or discrete. Support Vector Machines (SVM) and Support Vector Regression (SVR), Artificial Neural Networks (ANN, with the feed-forward step shown in Fig. 5), and Convolutional Neural Networks (CNN) are examples of supervised learning.

Unsupervised learning is the task of finding patterns within data without any ground truth or labelling. A common unsupervised learning task is clustering: grouping data points that are close to one another relative to data points outside the cluster. An example of unsupervised learning is the K-means clustering algorithm [6] (the similarly named K-Nearest Neighbour (KNN) algorithm, by contrast, is usually applied as a supervised method).
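
As a sketch of the idea (a textbook implementation on synthetic data, not a production clustering tool), K-means can be written in a few lines: alternate assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal K-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # distance of every point to every centroid, then nearest assignment
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic 'weather regimes' in a 2-D feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(5.0, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

With well-separated groups the algorithm recovers the two "regimes" exactly; on real ensemble or circulation data the clusters are of course fuzzier.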

So far, ML algorithms have been applied to four key problems in weather prediction:  

  1. Correction of systematic error in NWP outputs, which involves post-processing data to remove biases [8]
  1. Assessment of the predictability of NWP outputs, evaluating the uncertainty and confidence scores of ensemble forecasting [9]
  1. Extreme detection, involving prediction of severe weather such as hail, gusts or cyclones [10]
  1. NWP parametrizations, replacing empirical models for radiative transfer or boundary-layer turbulence with ML techniques [11]

The first key problem, the correction of systematic errors in NWP output, is the most popular area of application of ML methods in meteorology. In this field, wind speed and precipitation observations are often used to fit a linear regression to NWP data, with the goal of enhancing its accuracy and resolving local details of the weather unresolved by the NWP forecast. Although attractive for its simplicity and robustness, linear regression presents two problems: (1) the least-squares methods used to solve the regression do not scale well with the size of the dataset (the matrix inversion they require becomes increasingly expensive as the dataset grows); (2) many relationships between the variables of interest are nonlinear. Classification-tree-based methods, by contrast, have proven very useful for modelling nonlinear weather events, from thunderstorm and turbulence detection to extreme precipitation, as well as for representing the circular nature of the wind. Compared with linear regression, tree-based methods scale easily to large datasets with several input variables. Besides preserving this scalability, ML methods such as ANN and SVM/SVR provide a more generic and more powerful alternative for modelling nonlinear processes. These improvements come at the cost of difficulty in interpreting the underlying physical relationships the model identifies, which matters because scientists need to couple these ML models with NWP models based on the physical equations, where the variables are interdependent. Indeed, it has proven challenging to interpret the physical meaning of the weights and nonlinear activation functions with which an ANN encodes the data patterns and relationships it has found [12].
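
A hedged sketch of the linear-regression case: here synthetic data stand in for station wind-speed observations and biased NWP output (the bias parameters are invented for illustration), and an ordinary least-squares fit removes most of the systematic error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: 'obs' plays the role of station wind-speed observations,
# 'nwp' the corresponding model output with a systematic bias plus random error
obs = rng.gamma(shape=4.0, scale=2.0, size=500)
nwp = 0.8 * obs + 1.5 + rng.normal(0.0, 0.5, size=500)

# Fit obs ≈ a * nwp + b by ordinary least squares
A = np.column_stack([nwp, np.ones_like(nwp)])
(a, b), *_ = np.linalg.lstsq(A, obs, rcond=None)
corrected = a * nwp + b

raw_rmse = np.sqrt(np.mean((nwp - obs) ** 2))
corrected_rmse = np.sqrt(np.mean((corrected - obs) ** 2))
```

The corrected forecast has a markedly lower RMSE than the raw output, but the fit is linear by construction, which is exactly the limitation that motivates the tree-based and neural-network methods discussed above.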

The second key problem, the interpretation of ensemble forecasts, is being addressed by unsupervised ML methods such as clustering, which aggregates ensemble members by similarity to represent the likelihood of a forecast. Examples include grouping daily weather phenomena into synoptic types, defining weather regimes from upper-air flow patterns, and grouping members of forecast ensembles [13].

The third key problem concerns the prediction of weather extremes, the phenomena that pose a hazard to lives and economic activities; here, ML-based methods tend to underestimate the events of interest. The problem lies with imbalanced datasets, since extreme events represent only a very small fraction of the total events observed [14].
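
A tiny numerical illustration of why imbalance is so dangerous (synthetic labels, with an invented 1% event rate): a trivial model that never forecasts the extreme event scores almost perfect accuracy while catching none of the events, so accuracy alone rewards underestimation.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1% of cases are 'extreme' events, mirroring real weather-hazard datasets
y_true = (rng.random(10_000) < 0.01).astype(int)

# A trivial model that never forecasts the extreme event
y_pred = np.zeros_like(y_true)

accuracy = np.mean(y_pred == y_true)         # looks excellent (~0.99)
hits = int(np.sum((y_pred == 1) & (y_true == 1)))
hit_rate = hits / max(int(y_true.sum()), 1)  # fraction of observed events caught: 0
```

This is why extreme-detection studies use event-aware scores (hit rate, false-alarm ratio, skill scores) and rebalancing techniques rather than plain accuracy.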

The fourth key problem to which ML is currently being applied is parametrisation. Completely new stochastic ML approaches have been developed, and their effectiveness, along with their simplicity compared with traditional empirical models, points to promising future applications in (moist) convection [15].

Further applications of ML methods are currently limited by intrinsic problems the methods face with weather data sets. While dimensionality reduction has proven highly beneficial for image pattern recognition, in the context of weather data it leads to a marked simplification of the input, since it constrains the input space to individual grid cells in space or time [16]. The recent expansion of ANNs into deep learning has provided new methodologies that address these issues and has pushed the capability of ML models further into the weather forecasting domain, with CNNs offering a way to extract complex patterns from large, structured datasets. An example is the CNN model developed by Yunjie Liu in 2016 [17] to classify atmospheric rivers in climate datasets (atmospheric rivers are an important physical process for the prediction of extreme rainfall events).

Figure 7 Sample images of atmospheric rivers correctly classified (true positives) by the deep CNN model in [17]

At the same time, Recurrent Neural Networks (RNN), developed for natural language processing, are improving nowcasting techniques thanks to their excellent ability to work with the temporal dimension of data frames. CNNs and RNNs have now been combined, as illustrated in Fig. 6, providing the first such nowcasting method for precipitation, using radar data frames as input [18].

Figure 6 Encoding-forecasting ConvLSTM network for precipitation nowcasting [18]

While these results show a promising application of ML models to a variety of weather prediction tasks which extend beyond the area of competence of traditional NWPs, such as analysis of ensemble clustering, bias correction, analysis of climate data sets and nowcasting, they also show that ML models are not ready to replace NWP to forecast synoptic-scale and mesoscale weather patterns.

As a matter of fact, NWP models have been developed and improved for over 60 years with the very purpose of simulating very accurately and reliably the wind, pressure, temperature and other relevant prognostic fields, so it would be unreasonable to expect ML models to outperform them on such tasks.

It is also true that, as noted earlier, the amount of available data will only grow in the coming decades, so it will be both critical and strategic to develop ML models capable of extracting patterns and interpreting the relationships within such data sets, complementing NWP capabilities. But how long before an ML model is capable of replacing an NWP model by crunching the entire set of historical observations of the atmosphere, extracting the patterns and spatio-temporal relationships between the variables, and then performing weather forecasts?

Acknowledgement: I would like to thank my colleagues and friends Brian Lo, James Fallon, and Gabriel M. P. Perez, for reading and providing feedback on this article.

References

  1. https://collection.sciencemuseumgroup.org.uk/objects/co54518/replica-of-torricellis-first-barometer-1643-barometer-replica 
  1. https://www.semanticscholar.org/paper/High-resolution-numerical-weather-prediction-(NWP)-Allan-Bryan/a40e0ebd388b915bdd357f398baa813b55cef727/figure/6 
  1. Buizza, R., Houtekamer, P., Pellerin, G., Toth, Z., Zhu, Y. and Wei, M. (2005). A comparison of the ECMWF, MSC, and NCEP global ensemble prediction systems. Monthly Weather Review, 133, 1076–1097. 
  1. Lean, H. and Clark, P. (2003). The effects of changing resolution on mesoscale modelling of line convection and slantwise circulations in FASTEX IOP16. Quarterly Journal of the Royal Meteorological Society, 129, 2255–2278. 
  1. http://www.chanthaburi.buu.ac.th/~wirote/met/tropical/textbook_2nd_edition/navmenu.php_tab_10_page_4.3.5.htm 
  1. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer. 
  1. https://www.arup.com/projects/met-office-high-performance-computer 
  1. Aznarte, J. L. and Siebert, N. (2017). Dynamic line rating using numerical weather predictions and machine learning: a case study. IEEE Transactions on Power Delivery, 32(1), 335–343. doi:10.1109/TPWRD.2016.2543818. 
  1. Foley, A. M. et al. (2012). Current methods and advances in forecasting of wind power generation. Renewable Energy, 37(1), 1–8. 
  1. McGovern, A. et al. (2017). Using artificial intelligence to improve real-time decision making for high-impact weather. Bulletin of the American Meteorological Society, 98(10), 2073–2090. 
  1. O'Gorman, P. A. and Dwyer, J. G. (2018). Using machine learning to parameterize moist convection: potential for modeling of climate, climate change and extreme events. arXiv preprint arXiv:1806.11037. 
  1. Moghim, S. and Bras, R. L. (2017). Bias correction of climate modeled temperature and precipitation using artificial neural networks. Journal of Hydrometeorology, 18(7), 1867–1884. 
  1. Camargo, S. J., Robertson, A. W., Gaffney, S. J., Smyth, P. and Ghil, M. (2007). Cluster analysis of typhoon tracks. Part I: general properties. Journal of Climate, 20(14), 3635–3653. 
  1. Ahijevych, D. et al. (2009). Application of spatial verification methods to idealized and NWP-gridded precipitation forecasts. Weather and Forecasting, 24(6), 1485–1497. 
  1. Berner, J. et al. (2017). Stochastic parameterization: toward a new view of weather and climate models. Bulletin of the American Meteorological Society, 98(3), 565–588. 
  1. Fan, W. and Bifet, A. (2013). Mining big data: current status, and forecast to the future. ACM SIGKDD Explorations Newsletter, 14(2), 1–5. 
  1. Liu, Y. et al. (2016). Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv preprint arXiv:1605.01156. 
  1. Shi, X. et al. (2015). Convolutional LSTM network: a machine learning approach for precipitation nowcasting. In: Advances in Neural Information Processing Systems, pp. 802–810. 

Ensemble Data Assimilation with auto-correlated model error

Haonan Ren – h.ren@pgr.reading.ac.uk

Data assimilation is a mathematical method for combining forecasts with observations in order to improve the accuracy of the original forecast. Traditionally, data assimilation methods have been performed under a perfect-model assumption (the strong-constraint setting). However, various sources can produce model error, such as missing descriptions of the dynamical system and numerical discretisation. Therefore, in recent years, model error has increasingly been accounted for in the data assimilation process (the weak-constraint setting). 
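
In its simplest scalar form, this combination is just a variance-weighted average of forecast and observation. Here is a minimal sketch of that scalar update (the building block behind the Kalman-filter methods below, not the full ensemble machinery of my project; the numbers are illustrative):

```python
def analysis_update(xb, b_var, y, r_var):
    """Optimally combine a background forecast xb (error variance b_var)
    with an observation y (error variance r_var): the scalar Kalman update."""
    k = b_var / (b_var + r_var)   # Kalman gain: weight given to the observation
    xa = xb + k * (y - xb)        # analysis state
    a_var = (1.0 - k) * b_var     # analysis error variance (always <= b_var)
    return xa, a_var

# Forecast of 10.0 (variance 4.0) combined with an observation of 12.0 (variance 1.0):
xa, a_var = analysis_update(xb=10.0, b_var=4.0, y=12.0, r_var=1.0)
```

The analysis is pulled towards whichever source is more certain, and its error variance is smaller than that of either input, which is the whole point of assimilation.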

There are several data assimilation methods applied in various fields. My PhD project focuses mainly on the ensemble (Monte Carlo) formulation of Kalman filter-based methods, more specifically the ensemble Kalman smoother (EnKS). Unlike a filter, a smoother updates the state of the system using observations from the past, the present and possibly the future. The smoother does not only improve the forecast at the observation time; instead, it updates the whole simulation window. 

The main purpose of my research is to investigate the performance of data assimilation methods with auto-correlated model error. We want to know what happens if we specify the auto-correlation of the model error incorrectly, for both state update and parameter estimation. We start with a very simple linear auto-regressive model, and for the auto-correlation of the model error we propose an exponentially decaying decorrelation. The true system thus has an exponential decay parameter ωr, while the parameter used in the forecast and data assimilation, ωg, can differ from the true one. 
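
An exponentially decorrelated model error can be sketched as an AR(1) construction whose step-to-step correlation is exp(−Δt/ω) (the function and parameter values here are illustrative, not the exact configuration of my experiments). Small ω recovers white-in-time noise; large ω approaches a constant bias, which are exactly the two limiting cases compared in Figure 1.

```python
import numpy as np

def correlated_model_error(omega, dt, n, seed=0):
    """Model-error series with autocorrelation exp(-dt/omega) between steps.
    omega -> 0 recovers white-in-time noise; omega -> inf approaches a constant bias."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / omega)
    eta = np.empty(n)
    eta[0] = rng.standard_normal()
    for t in range(1, n):
        # scale the innovation so the stationary variance stays at 1
        eta[t] = a * eta[t - 1] + np.sqrt(1.0 - a**2) * rng.standard_normal()
    return eta

eta_white = correlated_model_error(omega=1e-6, dt=1.0, n=5000)   # ~white noise
eta_memory = correlated_model_error(omega=10.0, dt=1.0, n=5000)  # long memory

def lag1_corr(e):
    return np.corrcoef(e[:-1], e[1:])[0, 1]
```

Checking the lag-1 correlation of the two series confirms the limits: near zero for the short-memory case, near exp(−0.1) ≈ 0.9 for the long-memory one.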

A simple example can illuminate the decorrelation issue. In Figure 1, we show results of a smoothing process for a simple one-dimensional system over a time window of 20 nature time steps, using an ensemble Kalman smoother with two different observation densities in time. The memories of the nature model and the forecast model do not coincide. We can see that with ωr = 0.0, when the actual model error is white in time, the evolution of the true state of the system behaves rather erratically with the present model settings. If we do not know the memory and assume in the data assimilation process that the model error is a bias (ωg → ∞), the estimate made by the data assimilation method is not even close to the truth, even with very dense observations in the simulation period, as shown in the two left subplots of Figure 1. On the other hand, if the model error in the true model evolution behaves like a bias and we assume in the data assimilation process that it is white in time, the results differ with observation frequency. As shown in the two right subplots of Figure 1, with very frequent observations the data assimilation performs fairly well, but with a single observation the estimate is still inaccurate. 

Figure 1: Plots of the trajectories of the true state of the system (black), three posterior ensemble members (pink, randomly chosen from 200 members), and the posterior ensemble mean (red). The left subplots show results for a true white noise model error and an assumed bias model error for two observation densities. Note that the posterior estimates are poor in both cases. The right subplots depict a bias true model error and an assumed white noise model error. The result with one observation is poor, while if many observations are present the assimilation result is consistent within the ensemble spread.

In order to evaluate the performance of the EnKS, we compare the root-mean-square error (RMSE) with the ensemble spread of the posterior. The EnKS performs best when the ratio of RMSE to spread is equal to 1.0. The results are shown in Figure 2. As we can see, the Kalman smoother works well when ωg = ωr in all cases, with the ratio of RMSE to spread equal to 1.0. With a relatively high observation frequency, five observations or more in the simulation window, the RMSE is larger than the spread when ωg > ωr, and vice versa. On further investigation, the mismatch between the two timescales ωr and ωg barely has any impact on the RMSE; the ratio is dominated by the ensemble spread.
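
The RMSE-over-spread diagnostic itself is easy to sketch on synthetic ensembles (the ensembles below are invented for illustration, not output from my EnKS experiments): a statistically reliable ensemble gives a ratio near 1, while an overconfident one, whose spread is too small for its actual error, gives a ratio well above 1.

```python
import numpy as np

def rmse_spread_ratio(truth, ensemble):
    """Ratio of ensemble-mean RMSE to mean ensemble spread.
    A reliable ensemble gives a ratio close to 1."""
    err = ensemble.mean(axis=0) - truth
    rmse = np.sqrt(np.mean(err**2))
    spread = np.sqrt(np.mean(ensemble.var(axis=0, ddof=1)))
    return rmse / spread

rng = np.random.default_rng(3)
signal = rng.standard_normal(1000)                    # forecast 'signal' at 1000 times
truth = signal + rng.standard_normal(1000)            # truth drawn from the forecast pdf
reliable = signal + rng.standard_normal((200, 1000))  # 200 members from the same pdf
overconfident = signal + 0.3 * rng.standard_normal((200, 1000))  # spread too small

ratio_reliable = rmse_spread_ratio(truth, reliable)          # ≈ 1
ratio_overconfident = rmse_spread_ratio(truth, overconfident)  # >> 1
```

This is the sense in which the ratio of 1.0 in Figure 2 marks the best-performing configurations: the ensemble's stated uncertainty matches its actual error.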

Figure 2: Ratio of MSE over the posterior variance for the 1-dimensional system, calculated using numerical evaluation of the exact analytical expressions. The different panels show results for different numbers of observations.

Then we move to estimating the parameter encoded in the auto-correlation of the model error. We estimate the exponential decay parameter by state augmentation using the EnKS, and the results are shown in Figure 3. Instead of the parameter ωg itself, we use the logarithm of the memory timescale to avoid negative memory estimates. The initial log-timescale values are drawn from a normal distribution, ln ωgi ~ N(ln ωg, 1.0); hence we assume that the prior distribution of the memory timescale is lognormal. According to Figure 3, we obtain better estimates with an increasing number of windows, and convergence is faster with more observations. In some cases the solution did not converge to the correct value, which is not surprising given the highly nonlinear character of the parameter estimation problem, especially with only one observation per window. When we observe every time step, convergence is much faster and the variance of the estimate decreases, as shown in the lower two subplots. In this case we always found fast convergence for different combinations of first-guess and true timescales, demonstrating that more observations bring us closer to the truth and hence make the parameter estimation problem more linear.

Figure 3: PDFs of the prior (blue) and posterior (reddish colours) estimated ωg, using an increasing number of assimilation windows. The different panels show results for different observation densities and prior mean larger (a, c) or smaller (b, d) than ωr. The vertical black line denotes the true value ωr.

As the results of the experiments show, the influence of an incorrect decorrelation timescale in the model error can be significant. We found that when the observation density is high, state augmentation is sufficient to obtain converging results. The new element is that online estimation is possible beyond a relatively simple bias estimate of the model error.

As a next step we will explore the influence of incorrectly specified model errors in nonlinear systems, and a more complex auto-correlation in the model error.

Fluid Dynamics Summer School 

Charlie Suitters – c.c.suitters@pgr.reading.ac.uk 

Every year, Cambridge and École Polytechnique in Paris take turns hosting the Fluid Dynamics of Sustainability and the Environment (FDSE) summer school. This year's school ran for two weeks earlier in September and, like many other things, took place online. After talking to previous attendees, I went into the fortnight with excitement but also trepidation, as I had heard that it has an intense programme! Here is my experience of a thoroughly enjoyable couple of weeks. 

Structure 

The summer school brought together around 50 PhD students and a few postdocs from all over the world, from Japan to Europe to Arizona, and I have to admire the determination of those students who attended the school at unsociable times of the day! We all came from different backgrounds – some had a meteorological background like myself, but there were also oceanographers, fluid dynamicists, engineers and geographers to name but a few. It was great to hear from so many students who are passionate about their work in two brief ice-breaker sessions where we introduced ourselves to the group and I got to appreciate how wide-reaching the FDSE community is. 

Each day consisted of four one-hour lectures – normally three 'core' subjects (fluid dynamics basics, atmospheric dynamics, climate, oceanography, etc.) and one guest lecture per day (including one from our very own Sue Gray, who gave us a whistle-stop tour of the mesoscale and extratropical cyclones). After this, there was the opportunity to split into breakout groups and speak to the day's lecturers, to ask them questions and spark discussions in small groups. On the final day, we also had a virtual tour of the various fluid dynamics labs that Cambridge has (there are a lot!) and a few of the students in the labs spoke about their work. 

Core Lectures 

Figure 1. Demonstration of a density current (blue) of salty water in a tank of less dense tap water. Taken from Jean-Marc Chomaz’s lecture

These lectures were given by very engaging specialists, including Colm-Cille Caulfield, John Taylor, Alison Ming, Jerome Neufeld and Jean-Marc Chomaz, and provided the perfect opportunity to see lots of pretty videos of fluid flows (Fig. 1). Having done an undergraduate degree in Meteorology, I found that many of these lectures were a refresher of things I should already know, but it was interesting to see how other lecturers approach the same material. 

The most interesting core lectures to me were those on renewable energy, given by Riwal Plougonven and Alex Stegner. Plougonven lectured us on wind turbines, telling us how they work and why they are designed the way they are – did you know that the most efficient wind turbines actually have two blades, but the vast majority have three for better structural stability? Stegner, on the other hand, spoke to us about hydroelectricity, and I learned that Norway produces nearly all of its electricity through hydropower. Other highlights from the core lectures include watching a video of a research hut being swamped by an avalanche (Nathalie Vriend, see the video at the link here), and seeing Jerome Neufeld demonstrate ice flows using golden syrup (he likes his food!). 

Guest Lectures 

Figure 2. Flow patterns around a sash window with both slots open – the blue arrows showing incoming cold air and the red arrows showing warm flow to the outside. Taken from Megan Davies Wykes’ lecture.

For me, the guest lectures were the highlight of my time at the summer school. These lectures often explored things beyond my area of expertise, and demonstrated just how applicable the fluid mechanics we had learned is to many different areas of life. We had a lecture about building ventilation from Megan Davies Wykes, which made me realise that adequately ventilating a room involves more than simply cracking open a window – you have to consider everything from the size of the room, the outside wind speed and the number of windows, to the body heat of the people inside. Davies Wykes's passion for people using their sash windows correctly will always stick with me – it turns out you need to open both the top and bottom panes for the best ventilation (something she emphasised more than once!), see Fig. 2.  

Figure 3. Demonstration of how droplets and plumes of air from the mouth are kept closer to the body when wearing a mask (Bhagat et al. 2020).

Fittingly, we also had a lecture from Paul Linden about the transmission of Covid, and he demonstrated how effective masks are at preventing transmission using a great visualisation (Fig. 3). It was great to have topics such as these that are relevant in today’s world, and provided yet another real-world application of the fluid dynamics we had learned. 

Breakout Discussion Sessions 

Every afternoon, the day’s lecturers returned and invited us to ask them questions about their lectures, or just have an intelligent discussion about their area of expertise. Admittedly these sessions could get a little awkward when everyone was too tired to ask anything towards the end of the long two weeks, but these sessions were still incredibly useful. They provided us the means to speak to a professional in their field about their research, and allowed us time to network and ask them some challenging questions. 

Concluding Remarks 

Of course, over the course of the two weeks we learned so much more than what I have described above, which yet again demonstrates the versatility of the field! The summer school as a whole was organised really well, and the lecturers were engaging and genuinely interested in hearing about us and our projects. I would highly recommend attending this summer school next year to any PhD student – the scope of the school was so broad that I am sure there will be something for everyone in the programme, and fingers crossed it goes ahead in Paris next year! 

References 

Bhagat, R., Davies Wykes, M., Dalziel, S., & Linden, P. (2020). Effects of ventilation on the indoor spread of COVID-19. Journal of Fluid Mechanics, 903, F1. doi:10.1017/jfm.2020.720 

Diagnosing solar wind forecast errors

Harriet Turner – h.turner3@pgr.reading.ac.uk

The solar wind is a continual outflow of charged particles from the Sun, ranging in speed from 250 to 800 km s⁻¹. During the first six months of my PhD, I have been investigating the errors in a type of solar wind forecast that uses spacecraft observations, known as corotation forecasts. This was the topic of my first paper, where I focussed on extracting the forecast error that occurs due to a separation in spacecraft latitude. I found that up to a latitudinal separation of 6 degrees, the error contribution was approximately constant; above 6 degrees, the error contribution increases with the latitudinal separation. In this blog post I will explain the importance of forecasting the solar wind and the principle behind corotation forecasts. I will also explain how this work has wider implications for future space missions and solar wind forecasting.

The term “space weather” refers to the changing conditions in near-Earth space. Extreme space weather events can cause several effects on Earth, such as damaging power grids, disrupting communications, knocking out satellites and harming the health of humans in space or on high-altitude flights (Cannon, 2013). These effects are summarised in Figure 1. It is therefore important to forecast space weather accurately to help mitigate these effects. Knowledge of the background solar wind is an important aspect of space weather forecasting, as it modulates the severity of extreme events. This can be achieved through three-dimensional computer simulations or through simpler methods, such as the corotation forecasts discussed below.

Figure 1. Cosmic rays, solar energetic particles, solar flare radiation, coronal mass ejections and energetic radiation belt particles cause space weather. Subsequently, this produces a number of effects on Earth. Source: ESA.

Solar wind flow is mostly radial away from the Sun; however, the fast/slow structure of the solar wind rotates around with the Sun. If you were looking down on the ecliptic plane (where the planets lie, at roughly the Sun’s equator), you would see a spiral pattern of fast and slow solar wind, as in Figure 2, which makes a full rotation in approximately 27 days. As this structure rotates around, it allows us to use observations on this plane as a forecast for a point further on in the rotation, assuming a steady-state solar wind (i.e., the solar wind does not evolve in time). For example, in Figure 2, an observation from the spacecraft represented by the red square could be used as a forecast at Earth (blue circle) some time later. This time depends on the longitudinal separation between the two points, as this determines how long it takes the Sun to rotate through that angle.
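As a back-of-the-envelope illustration of this principle (this is not code from the paper; the function name and the rounded rotation period are my own), the lead time of a corotation forecast is simply the fraction of a solar rotation corresponding to the longitudinal separation:

```python
# Sketch of the corotation lead-time calculation, assuming a steady-state
# solar wind and a ~27-day synodic rotation period as seen from Earth.

SYNODIC_PERIOD_DAYS = 27.27  # approximate solar rotation period (assumed value)

def corotation_lead_time(longitude_separation_deg: float) -> float:
    """Days for the Sun to rotate through the given angle, i.e. the lead
    time when an observation corotates round to the forecast point."""
    return SYNODIC_PERIOD_DAYS * longitude_separation_deg / 360.0

# A spacecraft 60 degrees in longitude behind Earth (as planned for the
# Lagrange mission discussed later) gives a lead time of roughly 4.5 days:
print(round(corotation_lead_time(60.0), 1))
```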

Figure 2. The spiral structure of the solar wind, which rotates anticlockwise. Here, STA and STB are the STEREO-A and STEREO-B spacecraft respectively. The solar wind shown here is the radial component. Source: HUXt model (Owens et al, 2020).

In my recent paper I investigated how the corotation forecast error varies with the latitudinal separation between the observation and forecast points. Latitudinal separation varies throughout the year, and it was theorised that it should have a significant impact on the accuracy of corotation forecasts. I used the two spacecraft from the STEREO mission, which orbit in the same plane as Earth, along with a near-Earth dataset. This allowed six different configurations of corotation forecast to be computed, with a maximum latitudinal separation of 14 degrees. I analysed the 18-month period from August 2009 to February 2011 to help eliminate other confounding variables. Figure 3 shows the relationship between forecast error and latitudinal separation. Up to approximately 6 degrees, there is no significant relationship between error and latitudinal separation; above this, the error increases approximately linearly with latitudinal separation.

Figure 3. Variation of forecast error with the latitudinal separation between the spacecraft making the observation and the forecast location. Error bars span one standard error on the mean.

This work has implications for the future Lagrange space weather monitoring mission, due for launch in 2027. The Lagrange spacecraft will be stationed at a gravitational null, 60 degrees in longitude behind Earth on the ecliptic plane. Gravitational nulls occur where the gravitational fields of two or more massive bodies balance out. There are five of these nulls, called the Lagrange points, and locating a spacecraft at one reduces the amount of fuel needed to stay in position. The goal of the Lagrange mission is to provide a side-on view of the Sun-Earth line, but it also presents an opportunity for consistent corotation forecasts to be generated at Earth. However, the Lagrange spacecraft will oscillate in latitude relative to Earth, up to a maximum of about 5 degrees. My results indicate that the error contribution from this latitudinal separation would be approximately constant.

The next steps are to use this information to help improve the performance of solar wind data assimilation. Data assimilation (DA) has led to large improvements in terrestrial weather forecasting and is beginning to be used in space weather forecasting. DA combines observations and model output to find an optimum estimation of reality. The latitudinal information found here can be used to inform the DA scheme how to better handle the observations and to, hopefully, produce an improved solar wind representation.
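To illustrate the principle of DA in its very simplest form (a generic textbook example, not the actual solar wind scheme; the names are my own), a scalar analysis combines a model background value with an observation, weighted by their error variances:

```python
# Minimal scalar data assimilation update: the analysis sits between the
# model background and the observation, weighted by their error variances.

def analysis(background: float, obs: float,
             background_var: float, obs_var: float) -> float:
    """Variance-weighted optimal estimate combining background and observation."""
    gain = background_var / (background_var + obs_var)  # Kalman-like gain
    return background + gain * (obs - background)

# A relatively trusted observation (small error variance) pulls the analysis
# most of the way from the background towards it:
print(analysis(400.0, 500.0, background_var=100.0, obs_var=25.0))  # 480.0 (km/s)
```

Real DA schemes generalise this update to many variables at once, with spatially correlated error covariances, but the weighting principle is the same.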

The work I have discussed here has been accepted into the AGU Space Weather journal and is available at https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2021SW002802.

References

Cannon, P.S., 2013. Extreme space weather – A report published by the UK royal academy of engineering. Space Weather, 11(4), 138-139.  https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/swe.20032

ESA, 2018. https://www.esa.int/ESA_Multimedia/Images/2018/01/Space_weather_effects 

Owens, M.J., Lang, M.S., Barnard, L., Riley, P., Ben-Nun, M., Scott, C.J., Lockwood, M., Reiss, M.A., Arge, C.N. & Gonzi, S., 2020. A Computationally Efficient, Time-Dependent Model of the Solar Wind for use as a Surrogate to Three-Dimensional Numerical Magnetohydrodynamic Simulations. Solar Physics, 295(3), https://doi.org/10.1007/s11207-020-01605-3

Connecting Global to Local Hydrological Modelling Forecasting – Virtual Workshop

Gwyneth Matthews g.r.matthews@pgr.reading.ac.uk
Helen Hooker h.hooker@pgr.reading.ac.uk 

ECMWF – CEMS – C3S – HEPEX – GFP 

What was it? 

The workshop was organised under the umbrella of ECMWF, the Copernicus services CEMS and C3S, the Hydrological Ensemble Prediction EXperiment (HEPEX) and the Global Flood Partnership (GFP). The workshop lasted three days, with each of the six sessions opening with a keynote talk followed by a Q&A. Each keynote focused on a different part of the forecast chain, from hybrid hydrological forecasting to the use of forecasts for anticipatory humanitarian action, and on how the global and local hydrological scales could be linked. These were followed by speedy poster pitches from around the world, then poster presentations and discussion in the virtual ECMWF (Gather.town).  

Figure 1: Gather.town was used for the poster sessions and was set up to look like the ECMWF site in Reading, complete with a Weather Room and rubber ducks. 

What was your poster about? 

Gwyneth – I presented Evaluating the post-processing of the European Flood Awareness System’s medium-range streamflow forecasts in Session 2 – Catchment-scale hydrometeorological forecasting: from short-range to medium-range. My poster showed the results of the recent evaluation of the post-processing method used in the European Flood Awareness System. Post-processing is used to correct errors and account for uncertainties in the forecasts and is a vital component of a flood forecasting system. By comparing the post-processed forecasts with observations, I was able to identify where the forecasts were most improved.  

Helen – I presented An evaluation of ensemble forecast flood map spatial skill in Session 3 – Monitoring, modelling and forecasting for flood risk, flash floods, inundation and impact assessments. An ensemble approach to forecasting flooding extent and depth is ideal given the highly uncertain nature of extreme flooding events. The flood maps are linked directly to probabilistic population impacts to enable the timely, targeted release of funding. The Flood Foresight System forecast flood inundation maps are evaluated against satellite-based synthetic aperture radar (SAR) derived flood maps, so that the spatial skill of the ensemble can be determined.  

Figure 2: Gwyneth (left) and Helen (right) presenting their posters shown below in the 2-minute pitches. 

What did you find most interesting at the workshop? 

Gwyneth – All the posters! Every session had a wide range of topics being presented and I really enjoyed talking to people about their work. The keynote talks at the beginning of each session were really interesting and thought-provoking. I especially liked the talk by Dr Wendy Parker about a fitness-for-purpose approach to evaluation, which incorporates into the evaluation how the forecasts are used and who is using them.  

Helen – Lots! All of the keynote talks were excellent and inspiring. The latest developments in detecting flooding from satellites include processing the data using machine learning algorithms directly onboard, before beaming the flood map back to Earth! If the maps are made openly available and accessible (a point that came up quite a bit), this could dramatically reduce the time it takes for them to reach the flood risk managers dealing with an incident, and for them to be used in improving flood forecasting models. 

How was your virtual poster presentation/discussion session? 

Gwyneth – It was nerve-racking to give the mini-pitch to 200+ people, but the poster session in Gather.town was great! The questions and comments I got were helpful, but it was also nice to have conversations on non-research topics and to meet some of the EC-HEPEXers (early career members of the Hydrological Ensemble Prediction EXperiment). The sessions felt more natural than a lot of the virtual conferences I have been to.  

Helen – I really enjoyed choosing my hairdo and outfit for my mini self. I’ve not actually experienced a ‘real’ conference/workshop, but compared to other virtual events this felt quite realistic. I really enjoyed the Gather.town setting, especially the duck pond (although the ducks couldn’t swim or quack!). It was great to have the chance to talk about my work and meet a few people, and some thought-provoking questions are always useful.  

Helicopter Underwater Escape Training for Arctic Field Campaign

Hannah Croad h.croad@pgr.reading.ac.uk

The focus of my PhD project is investigating the physical mechanisms behind the growth and evolution of summer-time Arctic cyclones, including the interaction between cyclones and sea ice. The rapid decline of Arctic sea ice extent is allowing human activity (e.g. shipping) to expand into the summer-time Arctic, where it will be exposed to the risks of Arctic weather. Arctic cyclones produce some of the most impactful Arctic weather, with strong winds and atmospheric forcing that have large effects on the sea ice. Hence, there is a demand for improved forecasts, which can be achieved through a better understanding of Arctic cyclone mechanisms. 

My PhD project is closely linked with a NERC project (Arctic Summer-time Cyclones: Dynamics and Sea-ice Interaction), with an associated field campaign. Whereas my PhD project is focused on Arctic cyclone mechanisms, the primary aims of the NERC project are to understand the influence of sea ice conditions on summer-time Arctic cyclone development, and the interaction of cyclones with the summer-time Arctic environment. The field campaign, originally planned for August 2021 based in Svalbard in the Norwegian Arctic, has now been postponed to August 2022 (due to ongoing restrictions on international travel and associated risks for research operations due to the evolving Covid pandemic). The field campaign will use the British Antarctic Survey’s low-flying Twin Otter aircraft, equipped with infrared and lidar instruments, to take measurements of near-surface fluxes of momentum, heat and moisture associated with cyclones over sea ice and the neighbouring ocean. These simultaneous observations of turbulent fluxes in the atmospheric boundary layer and sea ice characteristics, in the vicinity of Arctic cyclones, are needed to improve the representation of turbulent exchange over sea ice in numerical weather prediction models. 

Those wishing to fly onboard the Twin Otter research aircraft are required to do Helicopter Underwater Escape Training (HUET). Most of the participants on the course travel to and from offshore facilities, as the course is compulsory for all passengers on helicopters to rigs. In the unlikely event that a helicopter must ditch in the ocean, capsize is likely even though the aircraft has buoyancy aids, because the engine and rotors make it top heavy. I was apprehensive about doing the training, as having to escape from a submerged aircraft is not exactly my idea of fun. However, being able to fly on the research aircraft in the Arctic is a unique opportunity, so I was willing to take on the challenge! 

The HUET course is provided by the Petans training facility in Norwich. John Methven, Ben Harvey, and I drove to Norwich the night before, in preparation for an early start the next day. We spent the morning in the classroom, covering helicopter escape procedures and what we should expect for the practical session in the afternoon. We would have to escape from a simulator recreating a crash landing on water. The simulator replicates a helicopter fuselage, with seats and windows, attached to the end of a mechanical arm for controlled submersion and rotation. The procedure is (i) prepare for emergency landing: check seatbelt is pulled tight, headgear is on, and that all loose objects are tucked away, (ii) assume the brace position on impact, and (iii) keep one hand on the window exit and the other on your seatbelt buckle. Once submerged, undo your seatbelt and escape through the window. After a nervy lunch, it was time to put this into practice. 

The aircraft simulator being submerged in the pool (Source: Petans promotional video)

The practical part of the course took place in a pool (the temperature resembled lukewarm bath water, much warmer than the North Atlantic!). We were kitted up with two sets of overalls over our swimming costumes, shoes, helmets, and jackets containing a buoyancy aid. We then began the training in the aircraft simulator. Climb into the aircraft and strap yourself into a seat. The seatbelt had to be pulled tight, and was released by rotating the central buckle. On the pilot’s command, prepare for emergency landing. Assume the brace position, and the aircraft drops into the water. Hold on to the window and your seatbelt buckle, and as the water reaches your chest, take a deep breath. Wait for the cabin to completely fill with water and stop moving – only then undo your seatbelt and get out! 

The practical session consisted of three parts. In the first exercise, the aircraft was submerged, and you had to escape through the window. The second exercise was similar, except that panes were fitted on the windows, which you had to push out before escaping. In the final exercise, the aircraft was submerged and rotated 180 degrees, so you ended up upside down (and with plenty of water up your nose), which was very disorientating! Each exercise required you to hold your breath for roughly 10 seconds at a time. Once we had escaped and reached the surface, we deployed our buoyancy aids, and climbed to safety onto the life raft. 

Going for a spin! The aircraft simulator being rotated with me strapped in
Ben and I happy to have survived the training!

The experience was nerve-wracking, and really forced me out of my comfort zone. I didn’t need to be too worried though: even after struggling to undo the seatbelt a couple of times, I was assisted by the diving team and encouraged to go again. I was glad to get through the exercises and pass the course along with the others. This was an amazing experience (definitely not something I expected to do when applying for a PhD!), and I’m now looking forward to the field campaign next year. 

Forecast Verification Summer School

Lily Greig – l.greig@pgr.reading.ac.uk

A week-long summer school on forecast verification was held at the end of June, jointly organised by the MPECDT (Mathematics of Planet Earth Centre for Doctoral Training) and the JWGFVR (Joint Working Group on Forecast Verification Research). The school featured lectures from scientists and academics from countries around the world, including Brazil, the USA and Canada, each specialising in a different topic within forecast verification. Participants gained a broad overview of the field and of how its subfields interact.

Structure of school

The virtual school consisted of lectures from individual members of the JWGFVR on their own subjects, along with drop-in sessions for asking questions and dedicated time to work on group projects. Four groups of 4–5 students were each given their own forecast verification challenge. The themes of the projects were precipitation forecasts, comparing high-resolution global and limited-area model wind speed forecasts, and ensemble seasonal forecasts. The latter was the topic of our project.

Content

The first lecture was given by Barbara Brown, who provided a broad summary of verification and gave examples of questions that verifiers may ask themselves as they attempt to assess the “goodness” of a forecast. The next day, a lecture by Barbara Casati covered continuous scores (verification of continuous variables, e.g. temperature), such as linear bias, mean-squared error (MSE) and the Pearson correlation coefficient. She also outlined the shortcomings of individual scores and why it is best to use a variety of them when assessing the quality of a forecast. Marion Mittermaier then spoke about categorical scores (for yes/no events, or multi-category events such as precipitation type). She gave examples such as contingency tables, which portray how well a model predicts a given event, based on hit rates (how often the model predicted an event when the event happened) and false alarm rates (how often the model predicted the event when it didn’t happen). Further lectures were given by Ian Jolliffe on methods of determining the significance of your forecast scores, Nachiketa Acharya on probabilistic scores and ensembles, Caio Coelho on sub-seasonal to seasonal timescales, and then Raghavendra Ashrit, Eric Gilleland and Caren Marzban on severe weather, spatial verification and experimental design. The lectures have been made available online and you can find them here.
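The contingency-table scores described above are simple to compute; here is a minimal sketch (the function and variable names are my own, not from the lectures):

```python
# Hit rate and false alarm rate from a 2x2 contingency table of yes/no
# forecasts against yes/no observations.

def contingency_scores(hits: int, misses: int,
                       false_alarms: int, correct_negatives: int):
    """Return (hit rate, false alarm rate)."""
    hit_rate = hits / (hits + misses)  # fraction of observed events forecast
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate, false_alarm_rate

# e.g. 30 hits, 10 missed events, 20 false alarms, 940 correct negatives:
pod, pofd = contingency_scores(30, 10, 20, 940)
print(round(pod, 3), round(pofd, 3))  # 0.75 0.021
```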

Forecast Verification

So, forecast verification is just as it sounds: part of assessing the ‘goodness’ of a forecast, as opposed to its value. Verification is helpful for economic purposes (e.g. decision making), as well as administrative and scientific ones (e.g. identifying model flaws). The other aspect of measuring how well a forecast performs is knowing the user’s needs, and therefore how the forecast will be applied. It is important to consider the goal of your verification process beforehand, as it will guide your choice of metrics and your assessment of them. An example of how forecast goodness hinges on the user was given by Barbara in her talk: a precipitation forecast may have a spatial offset in where a rain patch falls, but if both observation and forecast fall along the flight path, this may be all an aviation traffic strategic planner needs to know. For a watershed manager on the ground, however, this would not be a helpful forecast. The lecturers also emphasised the importance of applying many different measures to a forecast, and then understanding the significance of those measures, in order to judge its overall goodness. Identifying standards of comparison for your forecast, such as persistence or climatology, is also important. Then there are further challenges such as spatial verification, which requires methods of ‘matching’ the location of your observations with the model predictions on the model grid.
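Comparison against a reference forecast such as climatology or persistence is usually expressed as a skill score; a generic sketch (the function name and numbers are mine, used only for illustration):

```python
# Generic skill score relative to a reference forecast (e.g. climatology or
# persistence), using any error measure such as MSE.

def skill_score(error_forecast: float, error_reference: float) -> float:
    """1 is a perfect forecast, 0 matches the reference, negative is worse."""
    return 1.0 - error_forecast / error_reference

# A forecast with MSE 2.0 against a climatological forecast with MSE 8.0:
print(skill_score(2.0, 8.0))  # 0.75
```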

Figure 1: Problem statement for group presentation on 2m temperature ensemble seasonal forecasts, presented by Ryo Kurashina

Group Project

Our project was on the verification of 2-metre temperature ensemble seasonal forecasts (see Figure 1). We were looking at seasonal forecast data with a 1-month lead time for the summer months from three different models, investigating ways of validating the forecasts and finally deciding which model was best. We decided to focus on the models’ ability to predict hot and cold events as a simple metric for El Niño. We looked at scatter plots and rank histograms to investigate the biases in our data, Brier scores to assess model accuracy (the level of agreement between forecast and truth), and Receiver Operating Characteristic (ROC) curves to look at model skill (the relative accuracy of the forecast over some reference forecast). The ROC curve (see Fig. 2) is formed by plotting hit rates against false alarm rates over a range of probability thresholds; the further above the diagonal line your curve lies, the better your forecast is at discriminating events compared to a random coin toss. The combination of these verification methods was used to assess which model we thought was best.
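A rough sketch of these two probabilistic measures (this is not our group’s actual code; the toy data and names are mine):

```python
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    p = np.asarray(prob_forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))  # 0 is perfect, larger is worse

def roc_points(prob_forecasts, outcomes, thresholds=None):
    """(false alarm rate, hit rate) pairs as the probability threshold varies."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 11)
    p = np.asarray(prob_forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=bool)
    points = []
    for t in thresholds:
        yes = p >= t  # forecast "event" when probability exceeds the threshold
        hits, misses = np.sum(yes & o), np.sum(~yes & o)
        fas, cns = np.sum(yes & ~o), np.sum(~yes & ~o)
        points.append((fas / max(fas + cns, 1), hits / max(hits + misses, 1)))
    return points

# Toy example: five probability forecasts of a hot event, and what happened.
probs, obs = [0.9, 0.8, 0.3, 0.1, 0.6], [1, 1, 0, 0, 1]
print(round(brier_score(probs, obs), 3))  # 0.062
```

Plotting the `roc_points` pairs against the diagonal gives the ROC curve described above.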

Of course, virtual summer schools are less than ideal compared to the real (in-person) deal, but with Teams meetings, shared code and a chat channel we made the most of it. It was fun to work with everyone, even (or especially?) when the topic was new to all of us.

Figure 2: Presenting our project during group project presentations on Friday

Conclusions

The summer school was incredibly smoothly run, very engaging for people both new to and experienced in the topic, and provided plenty of opportunity to ask questions of the enthusiastic lecturers. Would recommend to any PhD student working with forecasts and wanting to assess them!