The workshop was organised under the umbrella of ECMWF, the Copernicus services CEMS and C3S, the Hydrological Ensemble Prediction EXperiment (HEPEX) and the Global Flood Partnership (GFP). It lasted three days, with each of the six sessions opening with a keynote talk followed by Q&A. Each keynote focused on a different part of the forecast chain, from hybrid hydrological forecasting to the use of forecasts for anticipatory humanitarian action, and how the global and local hydrological scales could be linked. These were followed by speedy poster pitches from around the world, then poster presentations and discussion in the virtual ECMWF (Gather.town).
What was your poster about?
Gwyneth – I presented Evaluating the post-processing of the European Flood Awareness System’s medium-range streamflow forecasts in Session 2 – Catchment-scale hydrometeorological forecasting: from short-range to medium-range. My poster showed the results of the recent evaluation of the post-processing method used in the European Flood Awareness System. Post-processing is used to correct errors and account for uncertainties in the forecasts and is a vital component of a flood forecasting system. By comparing the post-processed forecasts with observations, I was able to identify where the forecasts were most improved.
Helen – I presented An evaluation of ensemble forecast flood map spatial skill in Session 3 – Monitoring, modelling and forecasting for flood risk, flash floods, inundation and impact assessments. The ensemble approach to forecasting flooding extent and depth is ideal due to the highly uncertain nature of extreme flooding events. The flood maps are linked directly to probabilistic population impacts to enable timely, targeted release of funding. The Flood Foresight System forecast flood inundation maps are evaluated by comparison with satellite-based SAR-derived flood maps so that the spatial skill of the ensemble can be determined.
What did you find most interesting at the workshop?
Gwyneth – All the posters! Every session had a wide range of topics being presented and I really enjoyed talking to people about their work. The keynote talks at the beginning of each session were really interesting and thought-provoking. I especially liked the talk by Dr Wendy Parker about a fitness-for-purpose approach to evaluation, which incorporates into the evaluation how the forecasts are used and who is using them.
Helen – Lots! All of the keynote talks were excellent and inspiring. The latest developments in detecting flooding from satellites include processing the data using machine learning algorithms directly onboard, before beaming the flood map back to Earth! If the maps are openly available and accessible (a point that came up quite a bit), this could dramatically reduce the time it takes for them to reach flood risk managers dealing with an incident, and to be fed back into improving flood forecasting models.
How was your virtual poster presentation/discussion session?
Gwyneth – It was nerve-racking to give the mini-pitch to 200+ people, but the poster session in Gather.town was great! The questions and comments I got were helpful, but it was nice to have conversations on non-research-based topics and to meet some of the EC-HEPEXers (early career members of the Hydrological Ensemble Prediction Experiment). The sessions felt more natural than a lot of the virtual conferences I have been to.
Helen – I really enjoyed choosing my hairdo and outfit for my mini self. I’ve not actually experienced a ‘real’ conference/workshop, but compared to other virtual events this felt quite realistic. I really enjoyed the Gather.town setting, especially the duck pond (although the ducks couldn’t swim or quack!). It was great to have the chance to talk about my work and meet a few people, and some thought-provoking questions are always useful.
The ECMWF/EUMETSAT NWP SAF Workshop (European Centre for Medium-Range Weather Forecasts/European Organisation for the Exploitation of Meteorological Satellites Numerical Weather Prediction Satellite Application Facilities Workshop) was originally to be held at the ECMWF centre in Reading but, as with everything else in 2020, was moved online. The workshop was designed to be a place to share new ideas and theories for dealing with errors in satellite data assimilation, encompassing the treatment of random errors, biases in observations and biases in the model.
Group photo of attendees of the virtual ECMWF/EUMETSAT NWP SAF Workshop on the treatment of random and systematic errors in satellite data assimilation for NWP.
It was held over four days, consisting of oral and poster presentations and panel discussions; on the final day, participants split into groups to discuss which methods are currently in use and what needs to be addressed in the future.
The oral presentations were split into four sessions: scene setting talks; estimating uncertainty; correction of model and observation biases; and observation errors. The talks were held over Zoom for the main presenters and shown via a live broadcast on the workshop website. This worked well as the audience could only see the individual presenter and their slides, without having the usual worry of checking that mics and videos were off for other people in the call!
Scene Setting Talks
I found the scene setting talks by Niels Bormann (ECMWF) and Dick Dee (Joint Center for Satellite Data Assimilation – JCSDA) very useful as they gave overviews of observation errors and biases respectively, both explaining the current methods as well as the evolution of different methods over the years. Niels and Dick are prominent names in the data assimilation literature, so it was interesting to hear explanations of the underlying theories from experts in the field before moving on to the more focused talks later in the day.
Correction of Model and Observation Biases
The session on the correction of model and observation biases was of particular interest to me, as it discussed many new theoretical methods for disentangling model and observation biases that are beginning to be used in operational NWP.
The first talk, by Patrick Laloyaux (ECMWF), was titled Estimation of Model Biases and Importance of Scale Separation and looked at weak-constraint 4D-Var: a variational technique that adds a model error term to the cost function, so that minimising it involves varying three quantities: the state, the observation bias correction coefficients and the model error. When the background and model errors have different spatial scales, and when there are sufficient reference observations, it has been shown in a simplified model that weak-constraint 4D-Var can accurately correct both model and initial state errors. They argue that the background error covariance matrix contains small spatial scales while the model error covariance matrix contains large spatial scales, which allows the two errors to be disentangled in the system. Without this scale difference, however, separating the errors would be much harder, so the method is only suitable when the spatial scales differ substantially.
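Schematically, and in notation of my own choosing (operational implementations differ in detail), the weak-constraint 4D-Var cost function with variational bias correction augments the usual background and observation terms with a bias prior term and a model error term:

```latex
J(\mathbf{x}_0, \boldsymbol{\beta}, \boldsymbol{\eta})
  = \tfrac{1}{2}\,(\mathbf{x}_0 - \mathbf{x}_b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}_0 - \mathbf{x}_b)
  + \tfrac{1}{2}\,(\boldsymbol{\beta} - \boldsymbol{\beta}_b)^{\mathrm{T}} \mathbf{B}_{\beta}^{-1} (\boldsymbol{\beta} - \boldsymbol{\beta}_b)
  + \tfrac{1}{2}\sum_{i} \mathbf{d}_i^{\mathrm{T}} \mathbf{R}_i^{-1} \mathbf{d}_i
  + \tfrac{1}{2}\sum_{i} \boldsymbol{\eta}_i^{\mathrm{T}} \mathbf{Q}^{-1} \boldsymbol{\eta}_i,
```

where the departures are $\mathbf{d}_i = \mathbf{y}_i - H_i(\mathbf{x}_i) - \mathbf{c}_i(\boldsymbol{\beta})$, with $\mathbf{c}_i(\boldsymbol{\beta})$ the bias correction applied to the observations, and the model error $\boldsymbol{\eta}_i$ enters through the forecast step $\mathbf{x}_i = M_i(\mathbf{x}_{i-1}) + \boldsymbol{\eta}_i$. The scale-separation argument is then that $\mathbf{B}$ and $\mathbf{Q}$ act on different spatial scales, so the minimisation can attribute errors to the initial state or to the model respectively.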
On the other hand, the talk by Mark Buehner (Environment and Climate Change Canada) discussed an offline technique that performs a 3D-Var analysis every six hours using only unbiased, also known as “anchor”, observations to reduce the effects of model bias. These analyses can then be used as reference states in the main 4D-EnVar assimilation cycle to estimate the bias in the radiance observations. This method was much discussed over the course of the workshop, as it is yet to be used operationally, and it was very interesting to see a completely different approach to the problem of disentangling model and observation biases.
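As I understood it, the idea can be summarised schematically (my notation, not the operational formulation): the bias in a set of radiance observations is estimated from their mean departure with respect to the anchor-only reference analyses,

```latex
\hat{\mathbf{b}} \;\approx\; \frac{1}{N}\sum_{k=1}^{N}
  \left[\, \mathbf{y}_k - H\!\left(\mathbf{x}^{\mathrm{ref}}_k\right) \right],
```

where $\mathbf{x}^{\mathrm{ref}}_k$ are the 3D-Var analyses produced using anchor observations only. Because the reference states are largely unaffected by model bias, the mean departure can be attributed to the observations rather than the model.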
Poster presentations were shown via individual pages on the workshop website, with a comments section for quick questions and virtual rooms where presenters were available for a set two hours during the week. There were 12 poster presentations, ranging from the theoretical statistics behind errors to operational techniques for tackling them.
My poster focused on Figures 1 and 2, which show the scalar state analysis error variances when (1) the accuracy of the state background error variance is varied while (a) underestimating and (b) overestimating the bias background error variance; and (2) the accuracy of the bias background error variance is varied while (a) underestimating and (b) overestimating the state background error variance.
I presented a poster on work that I had been focussing on for the past few months, titled Sensitivity of VarBC to the misspecification of background error covariances. My work focused on the effects of wrongly specifying the state and bias background error covariances on the analysis error covariances for the state and the bias. This was the first poster I had ever presented, so it was a steep learning curve in how to present detailed work clearly and attractively. It was a useful experience as it gave me a hard deadline to conclude my current work, and I had to really think about my next steps as well as why my work was important. Presenting online was a very different experience to presenting in person, as it involved a lot of waiting around in a virtual room by myself; but when people did come, I was able to have some useful conversations, with the added bonus of being able to share my screen to show relevant papers.
On the final day we split ourselves into four working groups to discuss two topics: the treatment of biases and the treatment of observation errors. The goal was to discuss current methods, as well as what we thought needed to be researched in the future and the potential challenges we would come across. This was hosted via the BlueJeans app, which provided a good space to talk and share screens, and had the useful option of choosing the ratio of participants’ videos to the presenter’s screen. Although I wasn’t able to contribute much, this was a really interesting day, as I was able to listen to the views of experts in the field as they discussed what they believed to be the most important current issues, such as increasing dialogue between the data centres receiving the data and the numerical weather prediction centres assimilating it, and disentangling biases from different sources. Unfortunately for me, some of them felt that we were focussing too much on the theoretical statistics behind NWP and not enough on the operational testing, but I guess that’s experimentalists for you!
Although I was exhausted by the end of the week, the ECMWF/EUMETSAT NWP SAF Workshop was a great experience and I would love to attend next time, whether it is virtual or in person. As much as I missed the opportunity to talk to people face to face, the organisers did a wonderful job of presenting the workshop online and there were many opportunities to talk to the presenters. The virtual format also had some benefits: people from across the globe could easily join; the presentations were recorded, so they can easily be re-watched (all oral and poster presentations can be found via this link – https://events.ecmwf.int/event/170/overview); and resource sharing was easy via screen sharing. I wonder whether future workshops and conferences could mix online and in-person attendance, to get the best of both worlds? I would absolutely recommend this workshop, both for people who are just starting out in data assimilation and for researchers with years of experience, as it encompassed presentations from big names who have worked on error estimation for many years as well as new presenters and new ideas from worldwide speakers.
From April 2nd-5th I attended the workshop on Predictability, dynamics and applications research using the TIGGE and S2S ensembles at ECMWF in Reading. TIGGE (The International Grand Global Ensemble, formerly the THORPEX Interactive Grand Global Ensemble) and S2S (Sub-seasonal-to-Seasonal) are datasets hosted primarily at ECMWF as part of initiatives by the World Weather Research Programme (WWRP) and the World Climate Research Programme (WCRP). TIGGE has been running since 2006 and stores operational medium-range forecasts (up to 16 days) from 10 global weather centres, whilst S2S has been operational since 2015 and houses extended-range (up to 60 days) forecasts from 11 different global weather centres (e.g. ECMWF, NCEP, UKMO, Météo-France, CMA, etc.). The benefit of these centralised datasets is their common format, which enables straightforward data requests and multi-model analysis with minimal data manipulation, allowing scientists to focus on doing science!
Attendees of the workshop came from around the world (not just Europe) although there was a particularly sizeable cohort from Reading Meteorology and NCAS.
In my PhD so far, I have been making extensive use of the S2S database – looking at both operational and re-forecast datasets to assess stratospheric predictability and biases – and it was rewarding to attend the workshop and see what a diverse range of applications the datasets have across the world. From the oceans to the stratosphere, tropics to poles, predictability mathematics to farmers and energy markets, it was immediately very clear that TIGGE and S2S are wonderfully useful tools for both the research and applications communities. A particular aim of the workshop was to discuss “user-oriented variables” – derived variables from model output which represent the meteorological conditions to which a user is sensitive (such as wind speed at a specific height for wind power applications).
The workshop mainly consisted of 15-minute conference-style talks in the main lecture theatre and poster sessions, but the final two days also featured parallel working group sessions of about 15 members each. The topics discussed in the working groups can be found here. I was part of working group 4, and we discussed dynamical processes and ensemble diagnostics. We reflected on some of the points raised by speakers over the preceding days – particular attention was given to the diagnostics needed to understand the dynamical effects of model biases (such as their influence on Rossby wave propagation and weather-regime transition), alongside what other variables researchers needed to make full use of the potential S2S and TIGGE offer (I don’t think I could say “more levels in the stratosphere!” loudly enough – TIGGE does not go above 50 hPa, which is not useful when studying stratospheric warming events defined at 10 hPa).
Data analysis tools are also becoming increasingly important in atmospheric science, and several useful, perhaps less well-known tools were presented at the workshop. Mio Matsueda’s TIGGE and S2S museum websites provide a wide variety of pre-prepared plots of variables like the NAO and MJO, which are excellent for exploratory data analysis without needing many gigabytes of data downloads. Figure 2 shows an example of NAO forecasts from S2S data – the systematic negative NAO bias at longer lead times was frequently discussed during the workshop, whilst the inability to capture the transition to a positive NAO regime beginning around February 10th is worth further analysis. In addition, IRI’s Data Library can manipulate, analyse, plot, and download data from various sources including S2S, with server-side computation.
It’s inspiring and motivating to be part of the sub-seasonal forecast research community and I’m excited to present some of my work in the near future!