Devon Francis – email@example.com
The ECMWF/EUMETSAT NWP SAF Workshop (European Centre for Medium-Range Weather Forecasts/European Organisation for the Exploitation of Meteorological Satellites Numerical Weather Prediction Satellite Application Facilities Workshop) was originally to be held at the ECMWF centre in Reading, but, as with everything else in 2020, was moved online. The workshop was designed as a place to share new ideas and theories for dealing with errors in satellite data assimilation, encompassing the treatment of random errors; biases in observations; and biases in the model.
Group photo of attendees of the ECMWF/EUMETSAT NWP SAF Workshop – Virtual Event: ECMWF/EUMETSAT NWP SAF Workshop on the treatment of random and systematic errors in satellite data assimilation for NWP.
It was held over four days, consisting of oral and poster presentations and panel discussions, and concluded on the final day with the participants split into groups to discuss which methods are currently in use and what needs to be addressed in the future.
The oral presentations were split into four sessions: scene setting talks; estimating uncertainty; correction of model and observation biases; and observation errors. The talks were held over Zoom for the main presenters and shown via a live broadcast on the workshop website. This worked well as the audience could only see the individual presenter and their slides, without having the usual worry of checking that mics and videos were off for other people in the call!
Scene Setting Talks
I found the scene setting talks by Niels Bormann (ECMWF) and Dick Dee (Joint Center for Satellite Data Assimilation – JCSDA) very useful, as they gave overviews of observation errors and biases respectively: both explained the current methods as well as how different methods have evolved over the years. Both Niels and Dick are prominent names in the data assimilation literature, so it was interesting to hear explanations of the underlying theories from experts in the field before moving on to the more focused talks later in the day.
Correction of Model and Observation Biases
The session on the correction of model and observation biases was of particular interest to me, as it discussed many new theoretical methods for disentangling model and observation biases which are beginning to be used in operational NWP.
The first talk, by Patrick Laloyaux (ECMWF), was titled Estimation of Model Biases and Importance of Scale Separation and looked at weak-constraint 4D-Var: a variational bias correction technique that includes an error term in the model, such that minimising the cost function involves varying three quantities: the state; the observation bias correction coefficients; and the model error. When the background and model errors have different spatial scales and when there are sufficient reference observations, it has been shown in a simplified model that weak-constraint 4D-Var can accurately correct model and initial state errors. Laloyaux argued that the background error covariance matrix contains small spatial scales and the model error covariance matrix contains large spatial scales, which means that the errors can be disentangled in the system. Without this scale difference, however, separating the errors would be much harder, so the method should only be considered when there is a clear separation between the spatial scales.
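Schematically (and hedging on the exact formulation used at ECMWF), a weak-constraint 4D-Var cost function with variational bias correction can be written with the three control variables mentioned above: the initial state x₀, the bias coefficients β, and the model errors η:

```latex
\begin{aligned}
J(\mathbf{x}_0, \boldsymbol{\beta}, \boldsymbol{\eta})
  ={}& \tfrac{1}{2}\,(\mathbf{x}_0 - \mathbf{x}_b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}_0 - \mathbf{x}_b)
     + \tfrac{1}{2}\,(\boldsymbol{\beta} - \boldsymbol{\beta}_b)^{\mathrm{T}} \mathbf{B}_\beta^{-1} (\boldsymbol{\beta} - \boldsymbol{\beta}_b) \\
   &+ \tfrac{1}{2}\sum_k \big(\mathbf{y}_k - H_k(\mathbf{x}_k) - \mathbf{c}(\boldsymbol{\beta})\big)^{\mathrm{T}}
       \mathbf{R}_k^{-1} \big(\mathbf{y}_k - H_k(\mathbf{x}_k) - \mathbf{c}(\boldsymbol{\beta})\big)
     + \tfrac{1}{2}\sum_k \boldsymbol{\eta}_k^{\mathrm{T}} \mathbf{Q}^{-1} \boldsymbol{\eta}_k,
\end{aligned}
```

where the model error enters through the forecast step x_k = M_k(x_{k-1}) + η_k, B and B_β are the state and bias background error covariances, R_k the observation error covariance, Q the model error covariance, and c(β) the bias correction applied to the observations. The scale-separation argument is then a statement about B and Q containing predominantly small and large spatial scales respectively.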
On the other hand, the talk by Mark Buehner (Environment and Climate Change Canada) discussed an offline technique that performs a 3D-Var analysis every six hours using only unbiased, also known as “anchor”, observations to reduce the effects of model bias. These analyses can then be used as reference states in the main 4D-EnVar assimilation cycle to estimate the bias in the radiance observations. This method was much discussed over the course of the workshop, as it is yet to be used operationally, and it was very interesting to see a completely different bias correction technique for tackling the problem of disentangling model and observation biases.
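The core idea can be sketched in a few lines: because the reference analysis is built only from anchor observations, it is (nearly) free of the radiance bias, so the mean departure of the biased radiances from that reference is an estimate of the bias. This is a minimal toy sketch under simplifying assumptions (scalar brightness temperatures, identity observation operator, constant bias); the variable names are illustrative, not ECCC's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# "True" atmospheric state, e.g. brightness temperatures in kelvin
true_state = np.full(n, 280.0)

# Anchor-only 3D-Var reference analysis: small random error, no systematic bias
reference_analysis = true_state + rng.normal(0.0, 0.2, n)

# Radiance observations carry an unknown systematic bias plus random noise
true_bias = 1.5  # kelvin, unknown to the assimilation system
radiances = true_state + true_bias + rng.normal(0.0, 0.5, n)

# With an identity observation operator, the bias estimate is simply the
# mean observation-minus-reference departure
bias_estimate = np.mean(radiances - reference_analysis)

# The corrected radiances can then feed the main 4D-EnVar cycle
corrected = radiances - bias_estimate
print(f"estimated bias: {bias_estimate:.2f} K")  # close to the true 1.5 K
```

In the real system the reference state must be mapped to observation space with the full radiative transfer operator, and the bias model is a function of predictors rather than a constant, but the principle of anchoring the estimate to unbiased observations is the same.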
Poster presentations were shown via individual pages on the workshop website, with a comments section for small questions and virtual rooms where presenters were available for a set two hours over the week. There were 12 poster presentations, ranging from the theoretical statistics behind errors to operational techniques for tackling these errors.
My poster focused on Figures 1 and 2, which show the scalar state analysis error variances when (1) the state background error variance accuracy is varied while (a) underestimating and (b) overestimating the bias background error variance; and (2) the bias background error variance accuracy is varied while (a) underestimating and (b) overestimating the state background error variance.
I presented a poster on work that I had been focussing on for the past few months, titled Sensitivity of VarBC to the misspecification of background error covariances. My work examined the effects of wrongly specifying the state and bias background error covariances on the analysis error covariances for the state and the bias. This was the first poster that I had ever presented, so it was a steep learning curve in how to present detailed work clearly and attractively. It was a useful experience, as it gave me a hard deadline to conclude my current work, and I had to really think about my next steps as well as why my work was important. Presenting online was a very different experience to presenting in person, as it involved a lot of waiting around in a virtual room by myself, but when people did come I was able to have some useful conversations, with the added bonus of being able to share my screen to share relevant papers.
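The flavour of this sensitivity question can be illustrated with the simplest possible case: a scalar state, a gain built from an assumed background error variance, and a true background error variance that may differ from it. This is a minimal sketch of the general idea, not my actual poster calculations, and it ignores the bias component entirely; the function name is illustrative:

```python
def true_analysis_error_variance(b_assumed, b_true, r):
    """Scalar analysis error variance when the gain is computed from an
    assumed background error variance b_assumed, while the true background
    error variance is b_true (observation error variance r taken as correct)."""
    k = b_assumed / (b_assumed + r)          # (sub-)optimal gain from assumed variances
    return (1.0 - k) ** 2 * b_true + k ** 2 * r

r = 1.0        # observation error variance
b_true = 1.0   # true state background error variance

optimal = true_analysis_error_variance(b_true, b_true, r)  # correctly specified
under = true_analysis_error_variance(0.2, b_true, r)       # B underestimated
over = true_analysis_error_variance(5.0, b_true, r)        # B overestimated

# Misspecifying the background error variance in either direction inflates
# the true analysis error variance relative to the optimal gain
print(optimal, under, over)
```

With correct specification the analysis error variance reaches its minimum (0.5 here); both under- and over-estimating the background error variance degrade it. My poster extended this kind of question to the coupled state-and-bias system, where misspecifying one covariance also affects the other variable's analysis error.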
On the final day we split ourselves into four working groups to discuss two different topics: the treatment of biases and the treatment of observation errors. The goal was to discuss current methods, as well as what we thought needed to be researched in the future and potential challenges that we would come across. This was hosted via the BlueJeans app, which provided a good space to talk as well as share screens, and had the useful option of choosing the ratio of viewing people’s videos to viewing the presenter’s screen. Although I wasn’t able to contribute much, this was a really interesting day, as I was able to listen to the views of experts in the field and hear their discussions on what they believed to be the most important current issues, such as increasing discussion between the data centres receiving the data and the numerical weather prediction centres assimilating it, and disentangling biases from different sources. Unfortunately for me, some of them felt that we were focussing too much on the theoretical statistics behind NWP and not enough on the operational testing, but I guess that’s experimentalists for you!
Although I was exhausted by the end of the week, the ECMWF/EUMETSAT NWP SAF Workshop was a great experience and I would love to attend next time, regardless of whether it is virtual or in person. As much as I missed the opportunity to talk to people face to face, the organisers did a wonderful job of presenting the workshop online and there were many opportunities to talk to the presenters. There were also some benefits of the virtual workshop: people from across the globe could easily join; the presentations were recorded, so can easily be re-watched (all oral and poster presentations can be found via this link – https://events.ecmwf.int/event/170/overview); and resource sharing was easy via screen sharing. I wonder whether future workshops and conferences could be a mixture of online as well as in person, in order to get the best of both worlds? I would absolutely recommend this workshop, both for people who are just starting out in DA as well as for researchers with years of experience, as it encompassed presentations from big names who have been working in error estimation for many years as well as new presenters and new ideas from worldwide speakers.